source_id: int64 (1 to 4.64M)
question: string (lengths 0 to 28.4k)
response: string (lengths 0 to 28.8k)
metadata: dict
76,993
Donald Trump has called for the US constitution to be terminated. Apart from something akin to a civil war or dramatic societal upheaval, is there a mechanism which could enable the termination of the US constitution?
The US constitution — like all national constitutions, to my knowledge — is considered a foundational document. It may be amended through proposals by Congress or by a new constitutional convention (subject to ratification by the states), but it cannot be terminated in part or whole except by ending the US system of government and replacing it with something else. It's worth noting that Trump did not precisely call for the termination of the US constitution itself, but merely of those rules and regulations that forced him out of office and prevent his (immediate) reinstatement. And I'm not entirely convinced that he has any interest in or understanding of the constitutional issues in play here. Like most things Trump, this comment suggests narcissism more than nefarious intent. For Trump, the Constitution is much like the Bible: something to be held up for photo-ops and used to browbeat opponents, but otherwise of no significance whatsoever.
{ "source": [ "https://politics.stackexchange.com/questions/76993", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/42857/" ] }
77,006
A notable majority of the US is supportive of legalizing marijuana, with Democrats having a much higher level of support, and Biden's own platform including legalizing it. Given the overwhelming support for it among Democrats and that the Democrats had full control of the government, I'm kind of surprised they didn't force through a bill legalizing marijuana. It doesn't even seem something Republicans would have fought as hard over given almost half of Republicans want it legalized too, seems they would have bigger fish to fry. I know I've heard that they're trying to rush something through before they lose the house, but why was it put off so long? Is there a reason this wasn't an easy no-brainer legal change for Democrats when they took control?
It would have required a filibuster-proof majority in the U.S. Senate (i.e. 60 votes) in addition to a House majority, since it was not a fiscal bill or a Presidential nomination to ratify. They also might not have gotten every single Democratic vote in the Senate. So, they didn't have the votes to pass it. Even Republicans who wouldn't really have seriously opposed the bill didn't want to give Biden any legislative accomplishments before the election, which is why the Respect for Marriage Act only passed after the election. Also, while Biden eventually came around to making some bold moves on marijuana with his pardon power and prosecutorial discretion, he is himself a one-time stalwart of the drug war and probably doesn't see it as a personal priority upon which he wants to spend scarce political capital.
{ "source": [ "https://politics.stackexchange.com/questions/77006", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/6105/" ] }
77,048
Just hypothetically, what if an election involved the voting agency thoroughly compiling a registry of every citizen in the country and somehow trying to regularly confirm that they existed and were alive, and then having people publicly declare well ahead of time who they would be voting for. The choices would be available to publicly view and you could change your choice as you wanted, but as the election date drew nearer, maybe they began phasing out certain people who initially seemed final in their choice. Gradually, as the date drew nearer, they would continue to finalize cohorts of votes until the last stragglers or indecisive people made up their minds or otherwise abstained from voting. The point is to spare elections from inconveniences such as traveling to polling stations, accusations of fraud, indecisiveness, low voter participation, and uncertain vote counting on a huge time-crunch. The downside would be that everybody knows who you are voting for, but a lot of people are already really open about that, and you could keep your cover until at least the last moment if you wanted to switch right before the end. Is voter anonymity that important? I wonder if there are situations where lawmaking bodies function just fine given that people’s stances are widely known.
The main reason for the use of a Secret Ballot is to prevent bullying, blackmail, or bribery from influencing a person's vote. This could come from an abusive partner who wants to make sure their spouse votes the way they do, a candidate or their supporters trying to buy votes, or simple peer pressure ("My friends all support Alice, so I don't want them to know I'm voting for Bob"). Anonymity makes it much harder to control another person's vote - if you can't confirm how a person voted, you can't punish them for voting against your wishes or reward them for voting how you want. Your system of "finalising" certain votes early also seems rather strange. It's not clear how they could identify which voters were really confident in their choices, and I'm not sure what the benefit is supposed to be anyway - if they really weren't going to change their minds then nothing is gained by preventing the option, and if they would have changed their vote then you've just forced them to vote for someone they don't want to.
{ "source": [ "https://politics.stackexchange.com/questions/77048", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/42684/" ] }
77,064
What are the criteria for a protest to be a strong incentivizing factor for policy change in China? https://www.cnn.com/2022/12/08/china/china-zero-covid-relaxation-reaction-intl-hnk/index.html Workers across China have dismantled some of the physical signs of the country’s zero-Covid controls, peeling health code scanning signs off metro station walls and closing some checkpoints after the government unveiled an overhaul of its pandemic policy. It seems China decided to change its COVID policy after multiple protests across the country. I thought China rarely listened to the demands of the protestors, but in this case they did. I am wondering in what situation the Chinese leadership seriously considers policy change due to protests. There seem to be other historical precedents I am not aware of and I would like to know how useful protests are in China to change the course of government policies.
Chinese government repression is not inscrutably arbitrary, but instead primarily focused against threats to the regime. That also means that there is room for compromise and limited tolerance of protests that do not fundamentally threaten the regime. Even with the incident on Tiananmen Square on June 4th 1989, typically held up as the prime example of Chinese government repression, we can see that in the lead-up the government was willing to make some compromises and opened dialogue with the protestors. The point where the government turned to repression was the point when senior officials started feeling that the protests had begun to threaten the fundamental political order. The internal division among the top officials before the fateful vote for martial law was between those who felt the protests threatened to overthrow the regime itself, and those that felt the movement only wanted reforms. In other words, the difference was between those who saw the movement as fundamentally anti-Party and those who saw it as demanding reform within the system. This distinction continues into the modern day in the policy of the government. Environmental protestors are substantially less repressed than other types of protestors and the (central) government does often accede to localized protests with localized demands. Protestors in Henan protesting mismanagement by local rural banks succeeded in getting the central government to announce compensation for their deposits. Protests about delays in construction of pre-sold homes related to the recent real estate crisis succeeded in causing the government to place pressure on developers to accelerate construction and the government pressed banks to give developers the liquidity they need. The cases of the chained trafficked woman and of the beating in a restaurant of a woman who rejected a man's advances both led to arrests after national outrage. The common thread running through all of this is that none of these protests have demands that fundamentally threaten the political order of the regime. Contrast the attitude of the government in those cases to the ruthless attitude towards Uyghur and Tibetan separatism, both of which make demands that necessarily threaten the stability of the current regime. Even with the Hong Kong protests, whose demands for political reforms did fundamentally threaten the political order of the regime, the government did give concessions to begin with, like withdrawing the extradition bill. But note that withdrawing the bill did not relate to any of the demands for political reforms the protestors put forth. On a side note, it's true that many of these protests are still repressed to some degree. But it's important here to distinguish between the central government and local governments, the latter of which often takes a more heavy-handed approach to protestors in general because local protests aren't good for the political careers of local political leaders, and oftentimes, the central government is somewhat more conciliatory than local governments. In sum, the Chinese government does not repress for the sake of repression. It is pragmatic, and selectively represses protests and social movements that threaten the fundamental nature of the political order, while it is more open to compromise when demands do not threaten the fundamental nature of the political order.
In the case of the COVID protests, the protestors' demand that the government relax pandemic restrictions clearly is not a fundamental threat to the political order of the regime, so it's not surprising that the government is willing to compromise.
{ "source": [ "https://politics.stackexchange.com/questions/77064", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/38301/" ] }
77,096
On 8 December 2022, the European Council adopted a resolution on not accepting Russian travel documents issued in Ukraine and Georgia. I'm wondering if Frontex (European Border and Coast Guard Agency) or other EU security services can differentiate these "passports" from legitimate ones issued on the territories which are currently recognized as belonging to the Russian Federation. As is commonly known, travel documents have an Issuer (Passport Authority) field. However, nothing prevents a perpetrator from issuing a travel document elsewhere deep in Russian territory. Or even from using a specially crafted Passport Authority #777001, like they did for the Salisbury Novichok terrorists in the past. Correct me if I'm wrong, but according to what I read in Ukrainian news, there is no evidence to assume that the Russians have established Passport Authorities in Ukraine or Georgia, at least publicly — for obvious reasons. Note: There are two kinds of documents both called "passport" in Russia — one as a proof of citizenship and another one, the "international passport", which is intended for traveling abroad. They widely spread the "citizenship passports", while I'm talking about the "travel passports", of course.
It does not matter. While I am not privy to the technical details of passport processing, my guess is that the answer to your stated question is 'NO'. After all, we are not talking about counterfeits where some missing security features could be used to tell them apart; they are identical in every aspect to 'valid' Russian passports. So unless Russia decides to publicly inform border agents of where the passport was issued, they won't be able to tell. Which is the whole point, since the objective of this measure is for the EU to avoid recognizing, even in the slightest, Russian sovereignty over occupied Ukrainian territories. So, if you come with some paper from the Russian department of Kherson, the EU answer is "this document is not valid because of a factual error, Kherson belongs to Ukraine." Russia uses the issuing of passports as a sign of sovereignty; the EU confronts that. So if Russia hides the origin of the passport it is no longer making a claim of sovereignty, and the EU has nothing to fight against.
{ "source": [ "https://politics.stackexchange.com/questions/77096", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/2984/" ] }
77,105
The 2011 parliamentary election clearly demonstrated that Singapore has transformed into a competitive authoritarian regime. Not only did the ruling People’s Action Party’s share of the popular vote decline and the opposition win the most seats ever, there was meaningful contestation for ruling power for the first time. As a result of the government’s liberalization of the Internet, opposition parties were able to grow in strength by attracting more qualified candidates and an unprecedented number of volunteers. Besides signifying political change in Singapore, the election also worried Chinese leaders, who are trying to copy Singapore’s authoritarian state-capitalism. https://www.journalofdemocracy.org/articles/singapore-authoritarian-but-newly-competitive/ On Wikipedia, it says it's a democracy, but in some news articles or editorials it's considered to be a dictatorship. How close is Singapore to China? Do they have a similar system, or is the claim that Singapore is not a democracy a lie?
Why is Singapore considered to be a dictatorial regime and a multi-party democracy at the same time? Because it isn't fully one or the other. "Competitive authoritarian" is an intermediate classification between democracy and dictatorship, rather akin to Iran. It means that there is a dominant political group or party, but that elections (which are neither free and fair, nor entirely meaningless) are held. Often competitive authoritarian elections have tight government control of who can run for office and procedures that favor the ruling party disproportionately. The classification defies the view of dictatorship and democracy as a purely binary one. These systems are also sometimes called "partial" or "hybrid" democracies or "flawed" democracies. A subtly different classification is a dominant party system, in which free and fair elections are held, but nonetheless one political party ends up in control of everything after every election. Dominant party systems often ease into true multiparty democratic systems, as political parties change their positions and public opinion changes over time.
{ "source": [ "https://politics.stackexchange.com/questions/77105", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/38301/" ] }
77,138
Elon Musk tweeted that Dr. Anthony Fauci should be “prosecuted”. I have been aware of some public outcry against Fauci, but I do not know what the specific accusation against him is. Is it that they believe lockdowns are unethical and illegal, or do they believe Fauci is corrupt in some way and acting in his own interest?
Tucker Carlson was more specific about what supposed crime Fauci should be prosecuted for (in August 2022). The main point seems to be helping create COVID and then covering up his involvement. Both would be crimes if true (which they are not). Carlson asserted Fauci had committed “very serious crimes” and said he “apparently engineered the single most devastating event in modern American history.” Carlson falsely claimed that Fauci knowingly lied, resulting in people being hurt: In just the last 2 years, Fauci’s recommended treatments and preventative measures for COVID that not only didn’t work, but that he knew didn’t work. He admitted to The New York Times [NYT] that he lied about herd immunity in order to sell more vaccines He also falsely claimed that Fauci helped create COVID, covered up the evidence, and was conspiring with a foreign government to cover up the origins of the pandemic and to "suffocate" kids: Oh, so he knew. As your kids were suffocating during gym wearing a mask, Tony Fauci knew they didn’t work and then there’s this, maybe his most notable crime. He didn’t simply downplay and obfuscate the origins of the pandemic, apparently in conjunction with the Chinese government. No. Tony Fauci covered up evidence that he, Tony Fauci, helped create that virus in the first place. Rand Paul would be another example of someone alleging criminal behavior. Back in July 2021, Paul referred Fauci to the DoJ for allegedly lying to Congress: Sen. Rand Paul (R-Ky.) made good on his threat to refer Anthony Fauci [...] to the Justice Department for allegedly lying to Congress about funding gain-of-function research at the Wuhan Institute of Virology. Chip Roy accused Fauci in March 2022 of crimes against humanity, mainly for his support of the COVID vaccine. Regarding Musk, he specified later: As for Fauci, he lied to Congress and funded gain-of-function research that killed millions of people. The claim that Fauci lied about gain-of-function research has been rated false by The Washington Post fact checkers.
{ "source": [ "https://politics.stackexchange.com/questions/77138", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/42684/" ] }
77,245
My understanding is that lobbyists meet with politicians, ideally face-to-face, and try to persuade them to vote this way or that way. Lobbying is regulated to prevent outright bribery, so a lobbyist cannot offer a politician a million bucks to vote favorably. So what are some representative examples of what a lobbyist does offer?
Expertise, information, data. Politicians have to make complex decisions on things they are not experts in, where they cannot decide by themselves what the effects of a decision will be and which decision is therefore correct, and they don't have the time to do intensive research. So one thing they can do is have staff do research, reach out to experts, and provide summaries. But it also makes sense to listen to the people who will be affected and who often have the expertise on the topic - and that is who lobbyists represent. There are many kinds of lobby groups, and many of them are not industry-sponsored. Trade unions are lobby groups representing workers. There are environmentalist lobby groups as well. And yes, of course all of this is distorted by money in various ways.
{ "source": [ "https://politics.stackexchange.com/questions/77245", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45152/" ] }
77,277
Russia and Ukraine have been fighting over the city of Bakhmut for over 4 months. Is there a reason why the city seems to be considered important for the war? I heard analysts say that the city only has a symbolic importance and they've been attacking it because of the sunk-cost fallacy. Is this true, or is there any strategic value in obtaining the city?
It seems to be agreed by all that the city's location and the city itself have no real strategic military importance. But even if it has just symbolic importance, that does not mean that a decision to continue attacking or defending it is merely an example of the sunk cost fallacy. The Russian Side In this case, from the Russian side, the importance seems to be related to prestige, both of the Russian army in general and of certain individual units within it: [T]he battle for Bakhmut is widely seen as an opportunity for Moscow to regain lost prestige after months of military setbacks.... The battle for Bakhmut is also a key test for the Wagner head, Yevgeny Prigozhin, who is believed to have recruited thousands of Russian convicts to help with the storming of the city. Prigozhin has previously fiercely criticised the Russian defence ministry for its performance in Ukraine and has lauded Wagner as the country’s most capable fighting force. The city’s capture by Russia would increase Prigozhin’s political standing as he seeks a more prominent position in the country’s decision-making process. [g1220] Prigozhin has claimed that he's also benefiting from a war of attrition here (for more on this see below), though that may be merely cover to try to provide a plausible military excuse for the above attempt to regain prestige: Yevgeny Prigozhin, the founder of Russia’s Wagner group, has said his troops have primarily centred their efforts on demolishing the Ukrainian army there. “Our task is not Bakhmut itself, but the destruction of the Ukrainian army and the reduction of its combat potential, which has an extremely positive effect on other areas, which is why this operation was dubbed the ‘Bakhmut meat grinder’.” [o1210] The Ukrainian Side For Ukraine also, there is a component of morale and public opinion involved: If Bakhmut were to fall, military observers have said Ukraine could pull back to the west without suffering heavy strategic defeats. But a retreat from the city might suggest Kyiv’s military efforts were running out of steam after months of continuous gains.... “Militarily, Bakhmut has no strategic importance,” Col Gen Oleksandr Syrskyi, the commander of Ukraine’s ground forces, said earlier this month. “But it has psychological significance.” [g1220] However, this battle can also be seen as a military opportunity for Ukraine. The battle is currently seen by both the Russians (as mentioned above) and the Ukrainians as a war of attrition: [T]he embattled city of Bakhmut, which has largely been ravaged after nearly five months of fighting [has] been referred to by both sides as the “Bakhmut meat-grinder”. [g1220] Normally in a war of attrition the side incurring higher costs would be wise to back away and choose another avenue of attack.¹ But Russia's apparent need to take Bakhmut for non-military reasons means that, if Ukraine incurs relatively lower costs defending it, the battle is an opportunity to wear down the Russian army at lower cost than Ukraine would have to spend otherwise. It does appear that Ukraine's costs are indeed lower than the Russians': “They are still using old Soviet tactics,” explains Mikola, the fire coordinator [for a unit of the Ukrainian 24th mechanised brigade], who supplies the target grids to the gun crews from the drone operators and forward fire controllers with the infantry.
“We have more modern technology than the Russians so we can be more accurate and sparing with our ammunition.” “We only shoot when we have coordinates,” explains Vasily Pavlokavic, aged 42, a short and stocky officer who commands the crew of the howitzer.... “They just send one group after another against our positions,” Sasha, a member of Ukraine’s 24th mechanised brigade fighting in the area, told the Observer. “If the attack doesn’t succeed they’ll just try again in exactly the same way. The only strategy I can see at this point is that they want to take the city so they can claim some kind of victory after a year that has seen so many losses. “We’ve noticed in the past two weeks an increase in shelling and infantry attacks as if they are in a rush to take Bakhmut. That also means that they are suffering ever greater losses. They are just throwing in meat.” [o1210] And it also appears that the Russians may indeed be suffering severe attritional costs: [Andrii, a crew member of the Ukrainian 24th mechanised brigade,] recalls when his brigade was last in this area, during the summer, when any Ukrainian fire was met multiple times over from Russian guns. “They would fire at everything. Now they have become more sparing,” he adds, suggesting shortages of Russian ammunition.... A recent assessment for the Institute for the Study of War [stated], “The costs associated with six months of brutal, grinding, and attrition-based combat around Bakhmut far outweigh any operational advantage that the Russians can obtain from taking Bakhmut.” [o1210] Reports of Russian attrition on a more general level are widespread; see, for example, "‘The army has nothing’: new Russian conscripts bemoan lack of supplies" (The Guardian, 2022-10-20). Summary On the Russian side, the importance of taking Bakhmut appears to be related to prestige, particularly of the Wagner unit and to some degree of the Russian side overall. On the Ukrainian side, public perception is also a factor, but it may also be an opportunity for the Ukrainians to wear down the Russian army at lower cost than they could do elsewhere. References [g1128] "Fighting in east Ukraine descends into trench warfare as Russia seeks breakthrough" (The Guardian, 2022-11-28) [g1209] "‘Only 100 metres apart’: Ukrainians and Russians face off in Donetsk" (The Guardian, 2022-12-09) [o1210] "In the ‘Bakhmut meat grinder’, deadlocked enemy forces slog it out" (The Observer, 2022-12-10) [g1220] "Putin admits to ‘complicated’ situation in Russian-occupied Ukraine" (The Guardian, 2022-12-20) ¹ The side incurring higher costs in a war of attrition might choose to continue the attack if they have vastly larger resources available that cannot be more profitably used elsewhere. However, Russia in this case does not; they've already had to conscript hundreds of thousands more troops, many of whom are being sent to the front with minimal training.
{ "source": [ "https://politics.stackexchange.com/questions/77277", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/38301/" ] }
77,298
Are unsuccessful presidential candidates expected to retire? I ask this because it seems that the candidates work their whole political lives to make a presidential nomination, and if they are then not elected they seem to vanish from all popular discourse. I wonder why that is?
There are at least three examples from (relatively) recent history where the politician not only did not "vanish from all popular discourse", they actually managed to become President after failing in an earlier election cycle: Richard Nixon lost the Presidential election to John F. Kennedy in 1960; Ronald Reagan lost the Republican nomination to Gerald Ford in 1976; Joe Biden dropped out of the 1988 Democratic field after multiple allegations of plagiarism and inflating his academic record, and in 2008 lost the Democratic nomination to Barack Obama. Those are three politicians who went on to become President. There are multiple others who failed who remained visibly active in politics. Hillary Clinton, Mitt Romney, and Pete Buttigieg are just some recent examples.
{ "source": [ "https://politics.stackexchange.com/questions/77298", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/13345/" ] }
77,399
Mirror question to Why would Russia care about NATO troops on its borders if it has nuclear weapons? Naively one would think that Russia should not worry about NATO because they have nuclear weapons. Similarly, one would think that European countries that are members of NATO should not worry about Russia, because they also have nuclear weapons. Neither of the two can win a war against the other. Given that, why are countries like Germany and Poland increasing their defense budgets? Especially since 1) if Russia is unable to win a conventional war against Ukraine they are surely also unable to win a conventional war against NATO, and 2) NATO intelligence assessments are that Russia will take 1-3 years to replenish their forces .
Nuclear deterrence doesn't avoid war. India and Pakistan are both nuclear powers, yet they fought a war in 1999. Countries would always want to avoid the nuclear option - they will try to achieve their goals with conventional means and only in extreme cases (e.g. if Moscow or St. Petersburg is captured) will they go the nuclear way. While the nuclear option is there, it will almost never be exercised. Even Putin has very recently said: "We have not gone mad, we are aware of what nuclear weapons are. We aren't about to run around the world brandishing this weapon like a razor." Every country would like to be independent in some sense - i.e. independently able to counter a Russian attack, without external help. Plus, while NATO might help with weapons, they may not want to get involved in small-scale wars to avoid escalation into a World War. No matter how wealthy and prosperous a nation is, if it is deprived of its independence, it no longer deserves to be regarded otherwise than as a slave in the eyes of civilized world. To accept the protectorate of a foreign power is to admit to a lack of all human qualities, to weakness and incapacity. - Ataturk on Independence Military experts point out that Germany had not a single combat-ready brigade to defend the country's territory. It does not suit Europe's economic superpower that the Bundeswehr is even lacking equipment. Hence they made big promises about military spending. As of early Jan 2023, it doesn't look like they're following through: “It’s still open whether that [military spending goal] will be achieved [in 2023]", Hebestreit (Germany's Federal Government Spokesperson) said, adding that his “cautious expectation” was that Germany would still meet the target within this legislative period, which ends in 2025. Also, if someone like Trump comes along again they may refuse to adhere to NATO norms unless European countries fulfill the 2% GDP criteria. Trump questions, "what good is NATO?" The US is involved in many conflicts around the world and they might expect more from Europe. U.S. attention has increasingly been pulled toward Asia. Despite Russia’s invasion of Ukraine, the U.S. Department of Defense has continued to prioritize China. The need for the United States to provide precious military assets to defend Europe against Russia, support U.S. allies in Asia, and maintain other global commitments, such as in the Middle East, may put tremendous strain on the United States. Washington will therefore need more from Europe. The conflict in Ukraine has raised the importance of military power in European minds, and they now want to contribute more to it.
{ "source": [ "https://politics.stackexchange.com/questions/77399", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/22967/" ] }
77,438
Context Due to multiple causes such as the Russian invasion of Ukraine, most European countries have faced soaring prices for essential products. In some cases, governments decided to impose price caps. The last such case that caught my attention was Austria, which is considering price capping (among other measures): Intervention to achieve price reductions on basic foodstuffs is gaining political momentum as calls for uniform national measures to counter increasing inflation gain steam. This could be realised either with a reduction of the value-added tax (VAT) or a price cap, according to the social democratic Mayor and Governor of Vienna Michael Ludwig. Austria seems to come late to this party, as other European countries tried price capping, and it did not work too well: Hungarian government scraps price cap on fuels as shortage worsens Illegal logging in Romania exploded after government capped the price of firewood (as a bonus to shortages) Question Why do governments still try price caps, instead of other solutions such as subsidies, to prevent shortages in the context of a rather long crisis? It seems to be known that shortages are quite certain when a price cap is issued: Price ceilings are enacted in an attempt to keep prices low for those who demand the product—be it housing, prescription drugs, or auto insurance. But when the market price is not allowed to rise to the equilibrium level, quantity demanded exceeds quantity supplied, and thus a shortage occurs.
First of all, this is one of the many situations where we have to remember that the Econ-101 explanations are based on assumptions that are never going to be satisfied in reality. There rarely is an "obvious truth" in economics which shouldn't be further questioned. If an increase in prices is due to a supply shortage which should be overcome with private investments to increase production, then a price cap is going to make things worse. But that is a big "if". For example, the record profits currently obtained by many oil and gas companies strongly indicate that the current retail prices for those products are not just due to supply issues. Moreover, the long-term solution here is to reduce demand rather than increase supply. I am no expert on Austrian food economics, but I wouldn't be that surprised if the price issues aren't about supply at all. An overly extreme price cap would certainly cause problems, but it doesn't seem obvious at all that there couldn't be a sweet spot. And then there is a secondary issue: There is a strong incentive for politicians to be seen doing something about a crisis. So in the absence of good options, trying less-good options can be favoured over doing nothing.
{ "source": [ "https://politics.stackexchange.com/questions/77438", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/11278/" ] }
77,459
In a recent article, Anger in Russia as scores of troops killed in one of Ukraine war's deadliest strikes (by Pavel Polityuk), there is a reference to “Russian nationalist military bloggers”. One such blogger, “Archangel Spetznaz Z”—presumably a pseudonym—is mentioned in the article. However, I couldn’t find any reference to this person, except in Cyrillic articles which I could not translate. If someone could please explain: What are military bloggers in this context? Do they have specific military access or positions? Are they militarily or politically directed, or are they unaffiliated individuals? Are the identities of these bloggers publicly known, or do they exist only behind the pseudonym? Who is Archangel Spetznaz Z?
You can use this rating of influential military bloggers by Readovka, which is itself in the rating of most influential mass media this year. Some comments: Almost exclusively, these bloggers use Telegram as their main channel. Sometimes also VKontakte, YouTube, or LiveJournal, but primarily Telegram. They don't doubt that Russia has to win the war, but tend towards a realistic rather than propagandistic angle. They mostly do what may be called open-source intelligence (OSINT). They are usually anonymous or do not disclose their identity. Some of them may disclose their identity, but in this case, the actual copywriting may be done by other people (staff). They aggregate information about the course of military action, often having sources in the military or among civilians near the front line. They also repost each other, so you don't have to follow every single one of them. The quality of their content varies greatly. There is also another phenomenon, whereby some people with a known name and some position in the military (or known military reporters) also have Telegram channels and participate in the same cloud of reposts. As a result of this network, most important events are known to most onlookers of the Ukraine war within hours. With regard to information policy, there's a lot of variability, but the general approach is that important information should go first, and then some morale-boosting war trivia: you will not see random Russian dead bodies on these channels, but occasional Ukrainian dead bodies and a lot of footage of destroyed vehicles. However, they are going to cover important bad news as well in a lot of detail.
{ "source": [ "https://politics.stackexchange.com/questions/77459", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/14543/" ] }
77,497
As Kevin McCarthy is approaching his fifth defeat for a role as Speaker of the House of Representatives, I am interested to know whether there is a time limit (or special measures in place) until somebody must assume the role, in the event that the vote is unsuccessful each time.
There is no official time limit, just a point where the question becomes irrelevant. It is theoretically possible for the entire two-year term of this Congress to pass without ever electing a Speaker, at which point the next set of Members-elect can vote on a Speaker for that new session of Congress. The problem with letting the entire term pass is that the House can't do anything else (even swear in the rest of its members) until the Speaker is elected, so going two years without one means two years of no legislation passing, even the uncontroversial things that pass by unanimous consent. So the practical limit is the first "must-pass" bill's deadline - the first bill which will shut down the government or otherwise cause mass disruption if it hasn't passed. This page lists several upcoming fiscal bills - whether they're "must-pass" really depends on what Congress considers to be such. For example, bad things will likely happen if the debt ceiling isn't raised by the time the government hits the current one (probably this summer) and the US defaults on its debt, but there's no requirement that it be passed.
{ "source": [ "https://politics.stackexchange.com/questions/77497", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/27416/" ] }
77,509
I've now read that McCarthy has failed 6 times in his bid to be speaker , many in the past few days. Why is it only him that I'm reading about? Can the Democrats nominate someone too, even if they fail to secure the role of speaker? Who gets to decide that McCarthy gets a 7th bid, or when someone else can be voted on?
Before each roll-call vote, members-elect are given the opportunity to nominate candidates by the Clerk of the House. They do so simply by stating that they are doing so; take for example the proceedings recorded in the Congressional Record before the first roll-call vote on January 3rd 2023: The CLERK. Pursuant to law and precedent, the next order of business is the election of the Speaker of the House of Representatives for the 118th Congress. Nominations are now in order. The Clerk recognizes the gentlewoman from New York (Ms. STEFANIK). Ms. STEFANIK. Madam Clerk, on behalf of the House Republican Conference, I rise today to nominate the gentleman from California, KEVIN MCCARTHY, as Speaker of the House to lead America’s new Republican majority. Stefanik then continues on in her speech, concluding with: Madam Clerk, as the chair of the Republican Conference, it is my high honor to present our Conference’s nominee for election to the office of the Speaker of the people’s House, the Honorable KEVIN MCCARTHY from the State of California. The Clerk then continues, allowing Mr. Aguilar of California to nominate Hakeem Jeffries, and Mr. Gosar of Arizona to nominate Andy Biggs before voting commences. However, although these three candidates were the only ones formally nominated, members-elect are free to vote for any individual they like, without needing to nominate them to be a formal candidate beforehand. You can see this from the results of the first ballot. The individual doesn't even have to be a member of the House; according to the CRS report Speaker of the House Elections 1913-2021, votes were cast for candidates who were not representatives or members-elect in "1997, 2013, 2015 (both instances), 2019, and 2021".
{ "source": [ "https://politics.stackexchange.com/questions/77509", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/1659/" ] }
77,528
It is my understanding that McCarthy is struggling to get elected, because some more extreme Republicans regard him as 'too moderate'. Doesn't that mean that the choice is basically McCarthy or someone more extreme? In that case wouldn't it be sensible for the House Democrats to vote for McCarthy since, even if not really, he's still more in line with Democratic policies than the others?
"Never interrupt your enemy when he is making a mistake." (Attributed to Napoleon Bonaparte) The Democrats, by refusing to assist the Republicans in electing a speaker of the house, highlight the disunity of the Republican Party. While the Republican Party is in disunity, they cannot enact any legislative efforts that the Democratic minority might object to, so this also achieves the goal of stopping Republican-supported bills from passing. Additionally, there are no rules stating that the Speaker of the House has to be from the majority party. The Speaker traditionally is from the majority party because the majority party typically has enough votes to install them. So there is also a counter-question of whether it would be sensible for House Republicans to vote for Jeffries as a way to end the standoff, as he is currently the one closest to the required 218 votes. That is the outcome the Democrats want the most, and they are willing to wait and see if it happens.
{ "source": [ "https://politics.stackexchange.com/questions/77528", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45349/" ] }
77,580
On the seventh and eighth ballots, Matt Gaetz cast a vote for Donald Trump. What would have happened if Donald Trump (or any other person) had won a majority and had wanted to take that function? In an answer to another question , I found the following: In theory, the House "sets its own rules". If it decides that Kim Kardashian is to be made speaker, whether she likes it or not, then Kim is speaker (whether she likes it or not). But as the speaker is (AFAICS) a member of the House, wouldn't that mean that Congress would have to increase the number of seats? And could Congress do so, because the House cannot do much without a speaker?
This has never happened, and probably never will. If it does happen, what it would mean is that the non-member Speaker would be a non-voting member of the House of Representatives. In a sense, there's already a precedent for this, as there are five non-voting delegates and one non-voting resident commissioner in the House that represent American Samoa, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, and the US Virgin Islands. The non-voting person who represents Puerto Rico is titled as a resident commissioner. The others are titled as delegates. The non-voting Speaker would have the thankless job of herding the 435 voting members, who are much harder to herd than cats. The non-voting Speaker would however have considerably more influence than do the six existing non-voting members.
{ "source": [ "https://politics.stackexchange.com/questions/77580", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/23512/" ] }
77,592
In the following CNBC article the author uses the term “woke” which Merriam Webster defines as a US slang adjective meaning “to be socially and politically aware”. U.S. House Speaker Kevin McCarthy pledges to tackle immigration, ‘woke’ education policies and IRS funding. … he said he wants to address “America-Last” energy policies and “woke indoctrination” in schools, noting that children come first and should be taught to “dream big.” So what do the expressions “woke education/indoctrination” mean in this context? Is this usage of woke common in AmE (American English)?
The term as a whole is a pejorative, generally used by those who align with more right-wing politics where there is a belief that public education and universities have been co-opted by those with different beliefs than those using the term. Usage of "woke" gained traction as used by African Americans in the 1930s , and at least initially was used to describe a state of mind where one is aware of their surroundings in areas where they might not be welcome. Since then people began using the term "woke" to apply to other areas which amongst proponents of expanded civil rights had (generally) broad consensus, and was meant to indicate that someone was aware of issues in the world which needed fixing rather than just being content with whatever the current status quo happened to be. Due to its origins, it was generally only ever really popular for those that leaned more to the political left, leading to the present-day backlash against the term, which argues that those who are "woke" are in fact "brainwashed," and rather than being more aware of their surroundings are instead being used as political pawns to effect social change rather than actually try and solve any real problems. How common a term "woke indoctrination" is generally completely depends on your location and where you receive your news; right-leaning news sources are much more likely to use it. It also depends on how many school board meetings you're currently going to, since there is a concerted effort in America today by conservatives to prevent certain topics from being taught in public schools. Anecdotally I feel as if left-leaning publications do not use the term "woke" very often, except in explainer pieces about the term itself. It has, at least in my opinion, always been used as slang and never really entered into formal language as such.
{ "source": [ "https://politics.stackexchange.com/questions/77592", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/27357/" ] }
77,612
Apparently, federal politicians in the United States of America have said something about what type of education they want to see or not (see also What does “woke indoctrination” mean? and references therein), with a senior member of the Republican Party commenting on how he wishes certain topics to be less prominent in education. I'm puzzled. The U.S. is a highly federal country. According to Wikipedia, the federal government and Department of Education are not involved in determining curricula or educational standards or establishing schools or colleges , and where they are (schools on foreign military bases), the responsibility is with the Department of Defense rather than Education. Similarly, Canada apparently doesn't even have a federal department of education, and the one in Germany does not do all that much either beyond coordinating and distributing research funds. What power does the federal government have over the content of education? Why is this even a topic on a federal level? Does the Republican Party want to increase the power of the federal government in education matters, reducing the power of the states? That would be quite a change, because according to Wikipedia , they have previously sought to abolish the department entirely. Or do they mean something else?
The term as a whole is a pejorative, generally used by those who align with more right-wing politics where there is a belief that public education and universities have been co-opted by those with different beliefs than those using the term. Usage of "woke" gained traction as used by African Americans in the 1930s , and at least initially was used to describe a state of mind where one is aware of their surroundings in areas where they might not be welcome. Since then people began using the term "woke" to apply to other areas which amongst proponents of expanded civil rights had (generally) broad consensus, and was meant to indicate that someone was aware of issues in the world which needed fixing rather than just being content with whatever the current status quo happened to be. Due to its origins, it was generally only ever really popular for those that leaned more to the political left, leading to the present-day backlash against the term, which argues that those who are "woke" are in fact "brainwashed," and rather than being more aware of their surroundings are instead being used as political pawns to effect social change rather than actually try and solve any real problems. How common a term "woke indoctrination" is generally completely depends on your location and where you receive your news; right-leaning news sources are much more likely to use it. It also depends on how many school board meetings you're currently going to, since there is a concerted effort in America today by conservatives to prevent certain topics from being taught in public schools. Anecdotally I feel as if left-leaning publications do not use the term "woke" very often, except in explainer pieces about the term itself. It has, at least in my opinion, always been used as slang and never really entered into formal language as such.
{ "source": [ "https://politics.stackexchange.com/questions/77612", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/130/" ] }
77,623
Any good student of Schoolhouse Rock knows how a bill becomes a law, but how do laws get removed, like obsolete laws?
"The hand that giveth, may taketh away." Should a legislative body wish to, essentially, delete a law in its entirety, they simply pass another bill to that effect in the same manner as the first - that parity of method is important, anything less is not the hand that gaveth. . These can vary from as little as a single sentence, "Law X is hearby repealed in its entirety," to more comprehensive language , to far more complex versions that only remove part, usually by language such as "Title Whatever, Chapter Whatever, Section Whatever, Paragraph Whatever, of the Such and Such Code is changed to read as follows..." where what follows basically looks like someone edited the existing law with Track Changes turned on . But in most cases they're not removed, per se. They're frequently just left on the books and go unenforced . Laws that end up in this category are often the subject of entertaining internet articles, but a lack of enforcement is recognized by scholars as having sufficiently similar effect to the law having been repealed. Laws may also be removed by Judicial action in many jurisdictions, e.g. by being declared unconstitutional or the like, this usually follows the pattern discussed above, creating a condition wherein any attempt to enforce the law will be nullified until the courts (either the same level or higher) decide to nullify the finding of unconstitutionality. In many cases, legislatures or executives will amend such policies, rather than try and wait the courts out. Both of these scenarios, in dizzying array, are found in the history of the U.S.' brief period of banning the death penalty from 1972 to 1976 . An interesting case which was raised elsewhere is known as a de facto repeal, where instead of repealing the specific law in question, some other part of the law is amended or repealed upon which the target law necessarily depended. There are numerous laws on the books that have no effect because the specific cases they name as triggers are now legally impossible to achieve.
{ "source": [ "https://politics.stackexchange.com/questions/77623", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45409/" ] }
77,664
Prince William is the crown prince of the United Kingdom. Next in line to the throne are his young children. If Prince William predeceased his father King Charles, and then King Charles died while Prince William's children were still young, who would serve as Regent for Prince William's child who is next in line to the throne?
Since 1937, this has been governed by the Regency Act 1937 , which sets out the following steps to determine who would serve as Regent: If a Regency becomes necessary under this Act, the Regent shall be that person who, excluding any persons disqualified under this section, is next in the line of succession to the Crown. A person shall be disqualified from becoming or being Regent, if he is not a British subject of full age and domiciled in some part of the United Kingdom, or is a person who would, under section two of the Act of Settlement, be incapable of inheriting, possessing, and enjoying the Crown, or is a person disqualified from succeeding to the Crown by virtue of section 3(3) of the Succession to the Crown Act 2013; and section three of the Act of Settlement shall apply in the case of a Regent as it applies in the case of a Sovereign. Prince William's other children are, of course, also too young to serve as Regent, so we pass down the line of succession to Prince Harry. Harry is not currently domiciled in the UK, so is not eligible. His children are also disqualified by both residency and age, so the Regency would pass to Prince Andrew. Parliament could, of course, decide otherwise by passing primary legislation. This was the common practice prior to 1937; see, for example, the Regency Act of 1811. For completeness, the other disqualifications listed in the Regency Act refer to someone within the first six people in line to the throne having married without the consent of the Monarch (Succession to the Crown Act) or are Catholic (Act of Settlement).
{ "source": [ "https://politics.stackexchange.com/questions/77664", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/9801/" ] }
77,722
It is reported that thousands of tanks are currently being used in the Ukraine War . To help Ukraine, Britain is sending 14 Challenger 2 tanks and America is sending a few (50) Bradley infantry fighting vehicles, which seems like a small number compared to the total number of armored vehicles in the current war. Meanwhile, according to some estimates , Russia may have around 17,300 tanks produced between the late 1950s and now. Why is this considered to be a significant event, given that the number of armored vehicles being sent is in the double digits?
Well, first, there are only 200 or so Challenger 2s in the UK army, so 14 isn't a lot, but it also isn't "14 out of thousands". In the same way, Canada donating 4 measly M777 howitzers seems the height of stinginess. But we only had 37 of them to start with. (Keep in mind some of those are training vehicles/guns as well, further reducing the combat-available count.) (NATO countries typically don't have 100s of obsolete tanks sitting around waiting to be loaded with untrained ATM fodder. And NATO countries, except for the US, all individually have small militaries compared to Russia's. Any time you see donations being made you need to take that into account.) Second, the Challenger 2 isn't the goal here. The goal is to break the taboo. Ukraine is confident Britain will announce it plans to send about 10 Challenger 2 tanks to Kyiv shortly, a move it hopes will help Germany finally allow its Leopard 2s to be re-exported to the embattled country. A handful of Challenger 2s, taken from the UK’s existing fleet of 227, would not in itself make much difference on the battlefield, but it would be the first time any western country has agreed to send its own heavy armour to Ukraine. Once a modern Main Battle Tank has been donated, the hope is that Germany will lift its veto on Leopard 2s being donated by countries other than Germany that also operate them. There are lots of L2s in Europe to go around, but someone has to be the first to donate a modern MBT. Throughout this war the West has gradually ramped up the weapon systems it provides Ukraine to help it rein in its obnoxious invaders. Each time it's stated as a mini red line for Russia. But at the end of the day, while MAD means NATO can't have NATO-on-RU combat, the same works in the other direction. Next? Gripen jets? ATACMS - 300 km range, HIMARS-launched? What's the big problem, provided they don't attack Russian territory? (Besides escalation risks, large-scale Ukrainian attacks on Russian territories with NATO gear run the very real risk of facilitating the sale of Putin's big lie: "NATO is attacking Russia!". Russian troops with a real, not made up, reason to fight would probably fight better). Last, what does a Challenger bring to the table? L2s, M1s and Challengers are Gen 3 tanks, 60-70 ton beasts. They were designed after advanced composite armor was found to be able to stop shaped-charge shells/missiles entirely, so Gen 3 got beefed up to take advantage of that. T72, T80, and T90 (a spruced-up 72) are Gen 2 tanks, from before the advent of composite armor. If you can't stop shaped charges from piercing your armor, might as well privilege speed and small size instead to avoid getting hit. Gen 2 tanks (and that includes Western Leopard 1s and AMX-32s) are all in the 40t weight class. Crucially, they don't stand a chance against Gen 3s because they can't really penetrate their armor. About that armor with regard to HEAT: Due to the extreme hardness of the ceramics used, they offer superior resistance against a shaped charge jet and they shatter kinetic energy penetrators (KE-penetrators). The (pulverised) ceramic also strongly abrades any penetrator. And its resistance to kinetic hits: several M1A1 crews reported receiving direct frontal hits from Iraqi T-72s with minimal damage. Is it going to be a walk in the park then? Not necessarily. That 60t weight is a liability in an area where bridges are built to support up to 40t tanks. And are Gen 3s sufficiently protected against topside armor attacks like what Javelins do?
Israel lost some Gen 3 Merkavas in Lebanon in 2006, to Russian Kornet ATGMs . Then again, Ukraine still has its old T-64s and T-72s in the fight, and UA's leadership is generally well-regarded in how well they are able to use the vehicles they do have. We'll see. p.s. That's not to say the particular amounts don't matter. 14 tanks is a really small amount, about a US Army company's worth (which, with regard to the comments, gives you a baseline of sorts). But it will still require setting up the training of the combat and maintenance personnel. This is a sunk cost, whether 14 tanks get donated or 140. Then you have to provide the logistical tail in the field. If you have multiple weapon types being delivered at low volumes: duplication of effort! Chasing after multiple low-volume production runs of AFVs is one factor that lost WW2 for the Nazis. That's what makes unblocking the much more numerous Leopards attractive.
{ "source": [ "https://politics.stackexchange.com/questions/77722", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/7483/" ] }
77,728
I am struggling to map the concepts of "proletariat" and "bourgeoisie" to modern life. They appear to be equivalent to "lower class" and "middle class", but today those terms are really used to distinguish "blue collar" and "white collar" jobs, and map onto family income. My hunch is that anyone working for a full-time salary, a short-term contract or zero-hour contract is "proletariat", even if it's a really nice job. Especially since the business usually owns the product of your work (physical or intellectual). Only someone who owns a business, earns dividends and hires others to do work is bourgeoisie. I'm not sure if self-employed consultants and investors fit neatly into these buckets. And there aren't many landed gentry left in the world. So I appreciate Marx might not map neatly onto all modern roles. Am I correct that a salaried, white-collar desk worker with a full-time contract (and no stock options) would fit the Marxist conception of "proletariat" even though we wouldn't call them "working class" today?
In Marxist Class theory: The first criterion divides a society into the owners and non-owners of means of production. In capitalism, these are capitalist (bourgeoisie) and proletariat. Finer divisions can be made, however: the most important subgroup in capitalism being petite bourgeoisie (small bourgeoisie), people who possess their own means of production but utilize it primarily by working on it themselves rather than hiring others to work on it. They include self-employed artisans, small shopkeepers, and many professionals. Jon Elster has found mention in Marx of 15 classes from various historical periods. People who receive most of their income from property ownership rather than labor are the bourgeoisie, and for the most part, salaried workers would not fit in this category. They would also, for the most part, not be part of the petite bourgeoisie, because they don't own their own means of production, even though they don't hire others to work for them. The petite bourgeoisie in modern society would be people like taxi cab owner-operators, many truckers, most family farmers, and many skilled tradesmen like plumbers. Upper-middle-class professional and managerial salaried office workers who have not yet acquired significant income-producing property of their own (as opposed to those who are self-employed or have gained, for example, wealth from stock options in the companies where they work) are still members of the proletariat in a Marxist analysis. From the same link, in the Marxist analysis: Class is thus determined by property relations, not by income or status. One would have to make a leap that Marx himself did not, to see human capital as a form of capital (contrary to the fundamental divide Marxism creates between labor and property based income), in which case one could see them as members of the petite bourgeoisie. They are similarly not members of the Lumpenproletariat made up of people such as criminals, vagabonds, and prostitutes. Marx largely predated the historical period in which the "upper middle class" emerged and gained economic importance. Union-management relations law, in contrast, largely puts the upper middle class among the members and allies of "management" who are therefore forbidden from unionizing. On the other hand, one could legitimately look at the migration of managerial and professional class salaried employees with college educations to the Democratic party in the U.S., and of self-employed working-class and middle-class farmers and small business owners to the Republican party in the U.S., as a validation of Marxist class theory. Also, notably, many such people including civil servants, teachers, professors, and graduate students, are unionized. I'm not sure if self-employed consultants and investors fit neatly into these buckets. Investors, even if they don't make huge money from doing so, are clearly members of the bourgeoisie in a Marxist analysis. Self-employed consultants would generally be members of the proletariat except to the extent that the consulting was capital intensive, in which case they might be members of the petite bourgeoisie.
{ "source": [ "https://politics.stackexchange.com/questions/77728", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45152/" ] }
77,768
France strikes bid to halt Macron's rise in retirement age : President Macron's reform programme faces a make-or-break moment, as French unions stage a day of mass strikes and protests on Thursday against his plans to push back the age of retirement. A new bill due to go through parliament will raise the official age at which people can stop work from 62 to 64. Under the proposals outlined earlier this month by Prime Minister Élisabeth Borne, from 2027 people will have to work 43 years to qualify for a full pension, as opposed to 42 years now. Having "retirement age" and "full pension" thresholds, and trying to change them, is bound to be unwelcome. I am wondering why we have them at all. Why not have a more dynamic system where you can virtually retire at any age? (A minimum might apply for better predictability, but a maximum retirement age restriction in some areas might also apply.) Based on your contributions (period worked and amounts paid in) and other factors such as your field of work, you would get a pension amount. Not happy? Work for a couple of years more. I will tag this with France to narrow down the scope, but I think this question applies to all pension systems that work similarly to the French one.
The French system is a pay as you go system . It's not money you saved that you retire on (unlike, say, Malaysia ), it's the money current workers are still putting in that you get. That, together with the proven actuarial fragility of the system (i.e. whether the future French state will be able to make good on its current commitments), makes this quite risky to carry out and makes it difficult to divide the kitty equitably: why should current workers pay for you if you bail out early? Note that, in France, the level of mandatory contributions is such that you really do expect to retire on state pensions, as you don't have that much disposable income left over to make major investments. Contrast that with the US or Canada, where there is a bare-minimum state safety net, but individual investments, in your name , are where the bulk of your retirement is expected to come from. There is a bit of verbiage worrying about the issue here , from the society of specialized actuaries in France. In addition, logically, the increase in life expectancy would lead to the retirement age being pushed back. "In all the countries where the system has been put in place, we have seen the retirement age pushed back", notes Brigitte Écary. The figures confirm this: thus, in Sweden, workers can leave as early as 61, but the average age at which they retire was over 65 in 2014, compared to under 60 in France, according to the OECD. In Italy, the phenomenon is even more obvious: the retirement age, which was 60 for men in the mid-1980s (55 for women), has exceeded 66 and is heading towards 67 , compared to 62.9 years on average in OECD countries! And it was set to reach 67 in 2019, in line with the increase in life expectancy. The system itself would encourage individuals to work longer, since the calculation of the conversion coefficient takes into account the retirement age: the later a person decides to retire, the higher their pension. Conversely, people who would like to retire earlier will be able to do so, but in return for a lower pension. The retirement age would become only a slider to adjust, and no longer a cleaver over which politicians can tear each other apart... Honestly, the system was already a house of cards when I left France in 1997. Nothing's changed. Macron is only doing this because he has to . Economist in 2019 : This puts strain on the public purse, all the more severe because the French system relies on taxing today’s workers to pay the pensions of their elders. In June the official pensions advisory council warned that by 2022 the public-pension deficit would rise to €10bn ($11bn), up from its previous forecast of half that figure. Overall, France spends nearly 14% of gdp on pensions, a bit less than massively indebted Italy (16%), but more than Germany (10%) and way above the 8% oecd average. Economist, this week : This lifestyle comes at a cost. France spends 14% of gdp on public pensions, nearly double the oecd average. By 2030, according to Bruno Le Maire, the finance minister, the deficit in the French pensions system will reach €14bn. The new measures should comfortably close that gap. “Given the current environment of upward pressure on interest rates, this pension reform is an important message to investors,” says Ludovic Subran, chief economist at Allianz, an insurer. What to understand here is that the system is already under strain, even with the current age demographics. As France gets older... 
As to the room for tax-based solutions: 8.55% Old Age Insurance (ceiling of 3,428 EUR) (Employer) 6.90% Old Age Insurance (ceiling of 3,428 EUR) (Employee) So that's roughly 15.5% of gross payroll already, just for this. To which you add health insurance deductions. Out of $100 being deposited to your bank account, there is typically another $40-45 pre-deducted by your employer and you. So yeah, "tax-based solutions" aren't going to fix this. More on that, from OECD : France ranked 2nd out of 38 OECD countries in terms of the tax-to-GDP ratio in 2021. In 2021, France had a tax-to-GDP ratio of 45.1% compared with the OECD average of 34.1%. In 2020, France was also ranked 2nd out of the 38 OECD countries in terms of the tax-to-GDP ratio. Or, for another data point (see the Data Summary table), France has one of the highest payroll taxes in the developed world. That payroll tax is often especially problematic for employers and low-paid workers . I also want to quote Roland from the comments: I believe your proposal would create a feedback loop with an outcome that is difficult to predict and ultimately would reduce security. And Phillip's answer : The problem is that the average behavior of people in a macroeconomic system is much harder to predict than economists would like to admit. I think it would be very difficult to switch horses from a pay-as-you-go model to something else, whether a flexible system as suggested in this question or a system relying mostly on individual retirement accounts, due to the temporal coupling effect across generations. And even more so with a population with the entrenched attitudes of French people as per acquis sociaux ("earned social rights"). If that sounds snooty... veuillez m'excuser (please excuse me), and let's see how many strikes come up in the next few months. From some now-deleted comments: Don't think of this as a baseline, basic safety net. It is the whole pension plan for people, backed by the state. It is what they will have to live on and is intended that way. Immigration isn't something I am concerned with in this answer. I am generally neutral, even positive, towards immigration. But it's not a core part of, or solution to, this issue, not for France. When modern French pension plans started post-war, they used a retirement age of 65 as the basis. Since then life expectancy has gone up, a lot (France's is one of the highest in the world). Other countries have responded to this pressure by upping their retirement age, not France. France is special! French politicians have often campaigned on lowering the retirement age, because it wins elections . See François Hollande. In the past, a low retirement age (and measures to send people off even before that) was a deliberate mechanism to lower the apparent unemployment rate. Macron is not doing this for his own pleasure: this is what you lose elections on, in France. Once acquired, a benefit in France seems cemented in perpetuity. In the time of steam trains, the retirement age for train drivers and machinists was 50 years old. Shoveling coal into locomotives, and coal dust into their lungs, you can see why: they probably didn't live long past 50. It might have been the correct thing to do then, but it is still going on long past the disappearance of steam trains : Train drivers can retire from the age of 50, but the average retirement age is 54 years and 1 month according to the CPRPSNCF activity report. 
If this seems stridently ideological and right-wing: consider what a possible collapse of this system in the future would mean for people nearing retirement age at that time - they would get pretty much nothing, having contributed all their lives. Meanwhile, older people now, like in many countries, have more voting power than younger people and are not averse to using it to further their own interests. Their retirement is not at risk.
{ "source": [ "https://politics.stackexchange.com/questions/77768", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/11278/" ] }
77,784
Is five years the usual length of a UK Parliament? If so, why were some only four years long, such as from 1997 to 2001 and from 2001 to 2005? Did the Prime Minister call these early elections, and is there a reason why Prime Ministers often favour holding elections a year early?
The length of parliaments has varied over time based on the laws in force over the years; from 1716 until 1911 the maximum length of a parliament was set at seven years. From the passage of the Parliament Act 1911 to the passage of the Fixed Term Parliaments Act 2011, the maximum length was reduced to five years (the exception being during the World Wars). In practice, however, no parliament has reached the maximum duration under these pieces of legislation, but rather have been dissolved by the Monarch (acting in accordance with the instructions of the Prime Minister) via the Royal Prerogative. This power effectively allows the party of government to choose the timing of the General Election. Generally, the practice has been to dissolve parliament and hold an election in the Spring, so as to coincide with local elections. Taking your examples, the parliament which first sat on May 7th 1997 had to be dissolved by May 6th 2002. The Prime Minister decided, rather than running the Parliament down to the wire, to call an election in 2001. CNN gave the following reason for his decision: The prime minister does not need to call an election until mid-2002 but it has been an open secret for months that he wants an earlier vote to take advantage of a buoyant economy. The election was originally expected for the Spring, but was delayed to the Summer because of the foot-and-mouth outbreak. In 2005, the terminating parliament first sat in June 2001, so needed to be dissolved by June 2006. The PM decided instead to dissolve parliament over a year earlier, possibly again due to the good economic circumstances at the time. The Fixed Term Parliaments Act, in force from 2011 to 2022, complicates this somewhat, as it removed the ability for a Prime Minister to call an election unilaterally - instead setting the timing of General Elections to be held on the first Thursday in the May of the fifth calendar year after the last election. The only way around this was for parliament to pass a motion dissolving itself early, rather than the Prime Minister. Part of the intention of the Act was to bring some stability during the Cameron-Clegg coalition government, but during the premierships of Theresa May & Boris Johnson, the Act proved to be rather more of a hindrance than a help. Most recently, the Dissolution and Calling of Parliament Act 2022 repealed the Fixed Term Parliaments Act, restored the Royal Prerogative to dissolve parliament, and established the automatic dissolution of a parliament on the fifth anniversary of its first meeting. So yes, in law, parliaments may last for up to five years before being dissolved. In reality, the Government may decide - and nearly always does decide - to dissolve parliament early. This can be for a number of reasons such as seeking a renewed mandate for controversial issues à la 2019, during times of prosperity for the Government so as to increase or maintain their majority, as Blair's motive was suggested to be, or simply to avoid holding a winter election - we can't really know unless they tell us. On that last point, the current parliament is set to automatically dissolve in January 2025, but it is widely expected that the next election will be held in May 2024 so as to coincide with the 2024 local elections. Sunak could, however, decide to ask the King to dissolve parliament at any time of his choosing.
{ "source": [ "https://politics.stackexchange.com/questions/77784", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45494/" ] }
77,864
Searching on Google News for World Economic Forum produces: About 23,800,000 results (1.00 seconds) Sampling some of these produces quite a mixed array of opinions, from highly supportive, to bored, to mocking, to highly critical. What is the nature of this organization that seems to give them influence? They seem to be basically a club . They don't seem to have any governmental powers. They seem to produce opinions and speeches. Various politicians have joined, but politicians join lots of groups. Lots of politicians have declined to join. What is it about the WEF that causes people to pay attention to them?
The Forum itself doesn't have any significant power, per se, nor any real specific agenda. The people who attend the forum have power, and are induced to attend in order to be around other powerful people, for whatever agendas they may have with them. Since powerful people often have a desire to do business with other powerful people in an efficient way, in a pleasant atmosphere, the forum serves a need that many powerful people have, so many powerful people attend. Since it is mobbed with the press, it is also a convenient place to make announcements that the media will receive and cover for the benefit of other influential people worldwide. As long as the hosts don't screw it up, the virtuous cycle continues.
{ "source": [ "https://politics.stackexchange.com/questions/77864", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/44918/" ] }
77,878
Kaimo Kuusk, the Estonian Ambassador to Ukraine apparently announced that : We are giving all our 155mm howitzers to Ukraine. I can't see how an army would agree to give all their artillery in that caliber (which is fairly important) to another country. So is there some caveat to this announcement? Are they getting newer ones as replacements?
The ambassador's promise is referring to the donations set out in the recent Tallinn pledge. The UK government has summarised the contents of the donation package from each country involved here . For Estonia's part, this is as follows: The Estonian package consists of tens of 155mm FH-70 and 122mm D-30 howitzers, thousands of rounds of 155mm artillery ammunition, support vehicles for artillery units, hundreds of Carl-Gustaf M2 anti-tank grenade launchers with ammunition with the total replacement values of approx. 113 million euros. In addition, Estonia will continue to provide both basic and specialist training to hundreds of Ukrainian Armed Forces members in 2023. The press release on the package from the Estonian Ministry of Defence is fairly clear that this promise does not include the K9 155mm howitzers that they're transitioning to: Military aid to Ukraine does not reduce Estonia's defense capability, the lack of combat readiness of the donated artillery will be fully restored in the near future. A planned transition to K9 mobile artillery is underway, and in addition, Estonia's defense will be strengthened by the presence of allies, including the recently added US HIMARS.
{ "source": [ "https://politics.stackexchange.com/questions/77878", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/18373/" ] }
77,905
Why do some political figures, such as Trump, Biden, and Pence, store classified documents at their personal residences? I have seen many articles discussing the legality and contents of these documents, but I cannot seem to understand the purpose behind it. Is it to hide something? Do they simply take the documents out of their office in boxes concealed under their coat?
Because they have to work on them there. Government leaders don't have a hard and fast line between work time and personal time like the rest of us. They have huge amounts of reading to do, and a whole pile of it is done when the rest of us are relaxing, in the evenings and the early mornings (unless you are the kind of President who watches cable TV and eats cheeseburgers all evening). In the UK and Canadian systems every high-ranking politician gets a big stack of boxes with stuff to read at home every night. I'm sure the US is no different. And most of the things they read are classified at some level. Some of it stays at the residence (or the non-government office). Likewise they are going to be taking documents on trips with them, and most of them will be classified. None of this is a problem while they are in office. The problem is not that the documents go to residences, or even that they stay there while the politician is in office, but that some of them get misplaced and aren't removed at the end of the politician's term of office. This problem is worse because it's not only the officials handling these documents, it's also a good number of aides and assistants (again perfectly properly). In these quantities it only takes one mistake in a thousand for a document to accidentally stay in a place after the person supposed to be using it leaves office, and you can get incidents like the ones we've seen with Joe Biden and Mike Pence.
{ "source": [ "https://politics.stackexchange.com/questions/77905", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45555/" ] }
78,017
In the United States, there are 2 major political parties: the Democratic Party and the Republican Party. However, not every vote is cast for one of those parties when other candidates run. In presidential elections from 2004 to 2020 (with the notable exception of 2016), 3rd parties have made up from 0.5% to 2% of the vote combined . But if you break it down by which 3rd party is being voted for, you find something interesting. In every presidential election from 2008 to 2020, the Libertarian Party (the largest right-wing third party) got significantly more (around 3 times more) votes than the left-wing Green Party. You could even see that in Senate races in 2022. It seems that since 2008, with the possible exception of 2016 (even then, when a historic number of left-wing 3rd-party votes were cast, the right-wing Libertarian Party got half of the third-party votes, so it could have held true in 2016 too), third-party voters have had a conservative bent. Not all small-l libertarians are conservative, but the Libertarian Party leans right decisively.
Why do 3rd party voters seem to lean right politically? Given the growing gap between the two major parties, there's room in the middle for a third party. Third parties on the left tend to be even farther left than the Democratic Party. While some third parties that tend to associate with Republicans are even further to the right than the Republican Party as a whole, others lean slightly to the left of the right-leaning Republican Party. A graphic in a 2014 Pew Research article shows how the parties in the USA split apart between 2004 and 2014. Note that both parties moved slightly to the left between 1994 and 2004. After that, the Republicans moved markedly to the right. This split has continued, with the Republican Party moving even further to the right than it was in 2014. This leaves a gap in the middle that third parties can take advantage of. The third parties associated with the left in the USA such as the Green Party tend to be even further left than the Democratic Party as a whole. There's not much traction here (particularly in the USA, which is a right-leaning nation compared to other developed countries), and thus not many voters. The third parties typically associated with the right in the USA such as the Libertarian Party have at least to some extent taken advantage of that growing gap between the two major parties. There are voters there.
{ "source": [ "https://politics.stackexchange.com/questions/78017", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/29035/" ] }
78,041
As per Wiki , no countries have so far recognized the Taliban-led Islamic Emirate as the legitimate successor of the Islamic Republic of Afghanistan. But... what's the point of delaying the official establishment of diplomatic relations? The US failed to destroy the Taliban after a full occupation for 20 years, so they're clearly not going anywhere. The Taliban seems to have full control over the entire territory of Afghanistan, unlike (say) the government of Somalia which doesn't actually control most of the country. The Taliban seems pretty popular with the locals given how fast they've taken over the entire country. The Taliban is not a democratic regime and established Shariah law; however, the same is also true of Saudi Arabia and even they haven't recognized the new government yet. It's understandable that the West will remain resentful for a long time over the failed military operation in Afghanistan, however this shouldn't stop Russia or China from extending recognition. Unlike North Korea, the Taliban doesn't seem to be conducting any rocket tests or otherwise threatening global security in a major way. The security loopholes exploited during 9/11 have long been closed and it's unlikely anyone in Afghanistan could do major harm to the rest of the world. So... why not recognize the Taliban? What is everyone waiting for?
Here's Recognition and the Taliban @ Brookings; it may not be a final answer, but it does have info that provides some insights: During the Trump administration’s negotiations with the Taliban in Doha, Qatar prior to this collapse, numerous countries indicated that they would refuse to recognize any government coming to power through force. Whether the Taliban’s recent accomplishment of exactly this may complicate those promises, the presenters noted, remained to be seen. ... Traditionally the standard for governmental recognition has been “effective control,” meaning the regime is “sufficiently established to give reasonable assurance of its permanence, and of the acquiescence of those who constitute the state in its ability to maintain itself and discharge its internal duties and its external obligations.” Modern cases of recognition have often been conditioned on factors like human rights compliance or democratic governance ...States are also wary of over-hasty recognition. De-recognition of states that fail to meet the initial recognition standards is disfavored and much harder than recognition. Recognition comes with several implications. The recognized government may claim ownership and exercise control over the state’s foreign property, the government is likely to gain UN representation, and uses of military force may be authorized. ... Third, the United States could engage in extended negotiations with the Taliban over recognition, subject to the Taliban meeting certain conditions—the likeliest outcome in the near and medium term as the United States and its allies try to use recognition as a carrot to urge the Taliban to undertake certain fundamental reforms. Given the recent news about the Taliban barring women from schooling and from working for NGOs, it seems unproductive to give up the carrot while the Taliban have yet to offer anything in return. Even countries which could be expected to be pro-recognition are probably leery of swimming against the tide when the UN itself has such grave concerns . Large reputational cost, little benefit. GENEVA (29 December 2022) -- UN experts today denounced and called for an immediate reversal of the Taliban’s recent order barring women from working in international and national non-governmental organisations (NGOs) and supported a unified effort of the international community to take a stand against this latest human rights violation, further banishing women from the workplace, preventing delivery of life-saving aid and crippling the work of NGOs which will have a terrible impact on the entire country. Their statement is as follows: ... Having already denied women and girls their rights to education and limited their freedom of movement, expression, and dress as well as public participation, further denying women’s right to work in NGOs in the middle of winter when the country is grappling with a humanitarian emergency shows the Taliban have no regard for women’s rights or their wellbeing and will stop at nothing. In this case, they are instrumentalising and victimising women and the recipients of critical aid, apparently in a power struggle over control of this sector. This may well be a case of gender persecution, a crime against humanity, and those responsible should be held to account. We call on the de facto authorities to immediately lift the ban on women working with national and international NGOs. As @ohwilleke states below, it is unclear how much the Taliban are trying, i.e. how much effort they are putting in. 
However, some claim it is not for lack of wanting, at least from some bits of the Taliban apparatus: Taliban officials claim that just “engaging” with the Taliban or the leadership is not enough: the international community needs to offer them something they want. The main diplomatic objective of the Taliban leadership is to gain relief from sanctions and obtain official recognition, above all from the United States. The fact that no one recognizes the Taliban does indicate that maybe it's just not something that they've put much actual effort into. I mean there are plenty of countries that dislike the West, don't give a fig about UN disapproval and aren't exactly pro human/women's rights. Not going to chase after one more Afghanistan link, but I recall recently seeing that it is the "hard men" from Kandahar that are currently calling the shots, not any of the Taliban "moderates". So they may be unwilling to make any concessions since, after all, oppressing women is front and center of that faction's ideology. As to a country like China, which might benefit from Afghan minerals? It might, but there are enough risks with doing it that it may be in no great hurry. Overall, the Taliban don't seem to be offering much in return for recognition at this time. And it is early days. p.s. To be clear, I tend to agree with the OP that "not recognizing cuz we don't like them" is kinda peevish. And especially so if it is for domestic electoral considerations. Nationhood is kinda like UN membership - you're in the UN because you are a nation on this planet. Not because everyone likes you or you're a nice guy (we have had many questions about "why is X allowed in the UN?" or "why is Y in the Security Council?"). But not giving away a bargaining chip is a valid realpolitik reason to wait them out.
{ "source": [ "https://politics.stackexchange.com/questions/78041", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/7434/" ] }
78,130
According to a recent news article: Canadian dairy farmer is speaking out after being forced to dump thousands of liters of milk after exceeding the government’s production quota. In a video shared on TikTok by Travis Huigen, Ontario dairy farmer Jerry Huigen says he’s heartbroken to dump 30,000 liters of milk amid surging dairy prices. “Right now we are over our quotum, um, it’s regulated by the government and by the DFO (Dairy Farmers of Ontario),” says Huigen, as he stands beside a machine spewing fresh milk into a drain. “Look at this milk running away. Cause it’s the end of the month. I dump thirty thousand liters of milk, and it breaks my heart.” Why does this quota exist and does it benefit the consumers of Ontario?
Canada has an agricultural policy called the Market Sharing Quota, which sets a national yearly milk production target. It's part of their broader framework for controlling supplies of dairy, poultry, and eggs, and the intention is to keep prices of these products stable, avoiding random periods of overproduction (lower prices, bad for producers) and underproduction (higher prices, bad for consumers). The Canadian Dairy Commission sets national dairy policies such as the quota, and according to their Vision, Mandate and Values page the purpose of the Commission is: Fair compensation Provide efficient producers of milk and cream with the opportunity to obtain a fair return for their labour and investment. Efficient supply Provide consumers of dairy products with a continuous and adequate supply of dairy products of high quality. And from their What we do page: Since the introduction of supply management in the dairy sector, the Commission has administered support prices and the national marketing quota. Each year, the Commission sets the support price for butter and skim milk powder after consulting industry members. These prices provide a benchmark and are used by provincial marketing boards to set the price of milk in each province. The Commission also continuously monitors national production and demand and recommends necessary adjustments to the national milk production target. Although the Dairy Commission allots a portion of the Market Sharing Quota to each province, it's up to each province to decide how to handle their quota allotment. Through the Dairy Direct Payment Program, dairy farmers are paid automatically for their in-quota products, based on the target prices the government sets, but are not paid for excess products. For Ontario specifically, Dairy Farmers of Ontario is a regulatory body that, among other things, handles splitting Ontario's quota among farmers. DFO's Quota and Milk Transportation Policies document from February 1, 2023 details how they handle the quota, and what happens if it is exceeded. There's a lot of stuff in that policies document, but the gist is: Ontario farmers can only market milk through DFO; DFO determines how to split the quota, and may allow above-quota production depending on the market situation; farmers can exceed their quota, but not by much and only if DFO approves; and if a farmer exceeds their quota by a lot, they incur a fine based on the volume exceeded, as well as other potential penalties. Some of the relevant policies from the document: General Licensing and Quota Requirements (a) Quota is the property of Dairy Farmers of Ontario (DFO). It is fixed and allotted to producers on such basis as DFO considers proper and is subject to the terms and conditions of DFO’s quota policies. ... (g) No person to whom a quota has not been fixed and allotted for the marketing of milk or whose quota has been cancelled shall market any milk. The Right to Adjust Quota If necessary, DFO will adjust the quota held by all producers on an equal percentage basis to meet Ontario’s share of the national and/or P5 market requirements Maximum Quota Levels (a) Producers are required to obtain approval from DFO before they can exceed 150 kg of quota and again before they can exceed each subsequent 100 kg level (i.e. 250, 350 etc.). 
Overproduction Credits Overproduction credits allow producers to occasionally ship slightly above their monthly quota at domestic prices, with the intent that overproduction credits will be paid back by under-quota production in future months, subject to limitations the Board may place on credit use. Over-Quota Milk (b) For milk marketed above 100 per cent of a producer’s quota and available credit and incentive days, or above such level as determined appropriate by the Board, the producer will receive an over quota penalty of $20 per hectolitre (hl) and will be subject to the normal deductions.
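To put the over-quota penalty into perspective, here is a rough, purely illustrative calculation using the 30,000 litres mentioned in the question (this hypothetically assumes the full volume were marketed over quota rather than dumped, and it ignores the normal deductions): 30,000 L = 300 hl, and 300 hl × $20/hl = $6,000 in over-quota penalties. On top of that, over-quota milk does not earn the regular in-quota price, which helps explain why producers sometimes dump the excess instead of shipping it.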
{ "source": [ "https://politics.stackexchange.com/questions/78130", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/7434/" ] }
78,142
There's been a lot of discussion on if the US embargo on Cuba should be ended, but I would like to know: how could it be ended? What legislative process? Would any laws have to be removed to allow this to take place?
{ "source": [ "https://politics.stackexchange.com/questions/78142", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/45643/" ] }
78,161
The US Embassy recently announced the following: The State Department will start spelling Turkey as "Türkiye" in diplomatic and formal settings. The name change was approved by the U.S. Board on Geographic Names following a request from the Turkish embassy, State Department spokesperson Ned Price confirmed on Thursday. Logically speaking it seems like Turkey should be trying to get every country with a Latin alphabet to use "Türkiye" as well, but it seems like this is not happening. I.e. in German their official name is "Republik Türkei", not "Republik Türkiye". Why is this the case? Do they not care about languages other than English or is their primary complaint that "turkey" can also refer to a bird?
According to e.g. Neue Zürcher Zeitung the goal is actually to have it changed in all languages, starting with English because English is considered more important (due to its use as a worldwide lingua franca). From the article: Türkiye soll künftig in allen Fremdsprachen verwendet werden, auch im Deutschen. Nicht nur wegen ihrer internationalen Bedeutung steht die englische Sprache jedoch im Fokus. In the future, Türkiye is to be used in all foreign languages, including German. The focus is on the English language, not only because of its international importance. They link a video produced by the Government of Türkiye that has people in different languages using the name to promote the change. So it seems the premise of the question is wrong.
{ "source": [ "https://politics.stackexchange.com/questions/78161", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/7434/" ] }
78,164
Regarding Türkiye's campaign to have their name changed (from Turkey) in foreign languages, it struck me as a bit unusual to insist on the umlaut-ed ü, since English doesn't normally have that diacritic. So, is this an uncommon insistence, or are there other countries that insist on having some uncommon (in English) letters or diacritics, in English communications regarding the country's name? As someone asked for a quote, one of the ironies is that state broadcaster TRT spells it without umlaut sometimes [in English articles], but they also quote an official note saying that: The vast majority of people in Turkiye feel that calling the country by its local variation only makes sense and is in keeping with the country's aims of determining how others should identify it. In a nod to that, the recently published communique was clear that "within the scope of strengthening the 'Turkiye' brand, in all kinds of activities and correspondence, especially in official relations with other states and international institutions and organisations, necessary sensitivity will be shown on the use of the phrase 'Türkiye' instead of phrases such as 'Turkey,' 'Turkei,' 'Turquie' etc."
On the UN list of countries, Côte d’Ivoire is the only other country to use a “special” character. From Wikipedia: in April 1986, the government declared that Côte d'Ivoire (or, more fully, République de Côte d'Ivoire ) would be its formal name for the purposes of diplomatic protocol and has since officially refused to recognize any translations from French to other languages in its international dealings. Despite the Ivorian government's request, the English translation "Ivory Coast" (often "the Ivory Coast") is still frequently used in English by various media outlets and publications. Elsewhere [1, 2], São Tomé and Príncipe is often spelled with its diacritics; however, the UN spells it “Sao Tome and Principe.” Its name is Portuguese. Across the world, Åland (a Swedish-speaking dependency of Finland) and Curaçao (a constituent country of the Kingdom of the Netherlands) are commonly spelled with their special characters. I haven't yet looked for the UN official list of dependencies.
{ "source": [ "https://politics.stackexchange.com/questions/78164", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/18373/" ] }
78,180
In this article at CNN, Robert Hockett, a law professor at Cornell University, says that the debt ceiling was rooted in the 1917 Liberty Bond Act, which was rendered obsolete in 1974. A partial quote from the article, from under the question "Why presidents actually have the power to ignore Congress’ artificial debt ceiling": HOCKETT: Yes. It’s not only artificial, but I think it’s also legally invalid. The only reason we haven’t heard that definitively from the courts yet is that both parties have benefited, I think, from the grandstanding opportunity that it affords them. The bottom line here is the 1917 Liberty Bond Act, in which the debt ceiling is rooted, was rendered obsolete in 1974. It seems that Hockett is referring to the Congressional Budget and Impoundment Control Act of 1974. But if the above is correct, it is surprising that even this explainer page published by the White House does not mention it. Rather, the explainer page says Once the debt limit is hit, the Federal government cannot increase the amount of outstanding debt; therefore, it can only draw from any cash on hand and spend its incoming revenues. and When the U.S. Treasury exhausts its cash and extraordinary measures, the Federal government loses any means to pay its bills and fund its operations beyond its incoming revenues, which only cover part of what is required (about 80 percent in 2019). and Because the United States has never defaulted on its obligations, the scope of the negative repercussions of not satisfying all Federal obligations due to the debt limit are unknown; it is expected to be widespread and catastrophic for the U.S. (and global) economy. U.S. government officials recently made similar statements. What is the reason for all the warnings about the failure to raise the debt ceiling if it was invalidated in 1974? I mean, would the invalidation need to be legally validated somehow, and is there a risk that that might not happen, or are there other reasons?
Most people understand that the President can't "just ignore it". We would be in legally uncharted territory if the President ignored the debt ceiling. (Informal) precedent is against interpreting the 1974 act as meaning that the 1917 act is obsolete, as past Presidents have always waited for Congress to raise the ceiling. One legal expert doesn't create law - he may be right, but his opinion is not universal. As he says, we "haven't heard from the courts" yet.
{ "source": [ "https://politics.stackexchange.com/questions/78180", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/15687/" ] }
78,235
U.S. Chairman of the Joint Chiefs of Staff Mark Milley urged Ukraine and Russia to find a "political solution", saying that the war in Ukraine is unwinnable by purely military means. Does the U.S. Chairman of the Joint Chiefs of Staff have any means to force Ukraine into negotiating peace with Russia, or is this decision solely made by the Ukrainian President? I am wondering what is the process for making the decision of negotiating peace and who are the parties involved in such a decision, and if it involves foreign parties as well. https://en.wikipedia.org/wiki/2022_Russia%E2%80%93Ukraine_peace_negotiations
I think at least one answer should point out the elephant in the room: It's not really Ukraine that is blocking negotiations for peace, it's Russia . Russia stole Crimea from Ukraine, had a frozen conflict with the country for years on their soil, and then decided by themselves, without outside reason or provocation, to invade the country and make it a hot war. If there were any interest on the Russian side concerning peace, all they'd have to do is leave Ukraine alone and withdraw their troops. Seriously, Western countries have made it clear that they do not condone Ukraine attacking Russia, and Ukraine is not in a position to do that alone, so if Russian troops retreat you'd basically have an instant armistice and grounds for negotiation. And that is a move that solely depends on Russia and could be performed entirely without Ukrainian intervention. So if there were any serious aspirations to end this war, Russia has all the cards in their hand to do that. So "asking Ukraine to negotiate peace" is really just coded language for asking them to surrender to Russia. Because given their most recent history Russia would once again have stolen land from Ukraine and there would be no guarantee that this won't happen again anytime soon. It would not be peace, it would be a surrender and a ceasefire before the conflict arises anew in the near future. And unless Russia gives credible guarantees to reinstate and respect the territorial integrity of Ukraine, there is no reason and no real ability for Ukraine to meaningfully negotiate peace with Russia. And apart from Ukraine joining NATO so that there's seriously no further Russian aggression or WWIII, which is likely not an option that Russia would accept, there's really not much room for what Russia could offer up as guarantees, given how blatantly they broke all international laws and treaties by invading Ukraine. There's little to no trust left in their word with which they could broker. So no, in order to negotiate peace the U.S. Chairman would have to get Russia to accept that, and he has little to no influence there beyond making threats that no one really wants to make. In terms of his influence on Ukraine: well, you could speculate on what it would mean for the Ukrainian defense if the U.S. stopped supplying arms or even used their socio-political weight to impose sanctions on the EU for supplying arms. Though that would not be without heavy costs in terms of torn relations between the EU and the U.S., and after all it would not mean peace. It would mean a Russian victory over Ukraine with the very real threat of another invasion a few years down the line, or other invasions, as Russia has used similar rhetoric towards other neighbors already. So given the current situation, apart from (hopefully) unrealistic scenarios like WWIII, direct military intervention or forcing Ukraine to cease to exist, there's really not much the U.S. could do unless Russia is willing to start negotiating peace.
{ "source": [ "https://politics.stackexchange.com/questions/78235", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/38301/" ] }
78,268
The USA has recently created quite a sensation around the globe by shooting down four aerial objects, the most prominent one being the 'Chinese Balloon'. How does the USA even know that the balloon is Chinese without investigating it physically? (Image source: NBC News.) I mean, just look at the image: there is just some white fabric used to make the balloon and a solar panel. How can I say that it is Chinese or Russian or from North Korea?
China's Ministry of Foreign Affairs has released several statements saying that it's from China, they know its capabilities and purpose, and that they regret letting it enter US airspace. All in all, it's clear that China agrees that it was a Chinese balloon. On February 3rd, the day before it was shot down, a spokesperson for the Ministry of Foreign Affairs said during a Q&A : Q: According to media reports, a Chinese unmanned airship appeared in the US airspace recently. What is China's comment? A: The airship comes from China and is of a civilian nature, used for scientific research such as meteorology. Affected by the westerly wind and its own control ability is limited, the airship seriously deviated from the scheduled route. China regrets that the airship strayed into the United States due to force majeure. And on February 5th, the day after it was shot down, the Ministry of Foreign Affairs released a more official statement , part of which says: The Chinese side has repeatedly informed the US side after verification that the airship is for civilian use and entered the US due to force majeure, which was completely accidental. Both of those statements are on the Ministry of Foreign Affairs' official site, and I'm relying on an auto-translator to get the English version. Even if some nuance is lost in translation, China doesn't seem to be denying at all that it's a Chinese balloon, and since they claim to know what it was and it was doing it's safe to say the balloon was from China. How the US knew for sure it was a Chinese balloon likely isn't public information, however according to Pentagon Press Secretary Patrick Ryder and other Department of Defense officials this is not the first such balloon they've tracked so they have some familiarity. Some statements from Ryder and other officials from before the balloon was shot down can be found in this article on the Department of Defense's website: The United States government has detected and is tracking a high-altitude surveillance balloon that is over the continental United States right now...The U.S. government, to include NORAD, continues to track and monitor it closely. ... The official said this is not the first time such a balloon has been seen above the United States, but did say this time the balloon appears to be acting differently than what has been seen in the past. "It's happened a handful of other times over the past few years, to include before this administration," the official said. "It is appearing to hang out for a longer period of time, this time around...
{ "source": [ "https://politics.stackexchange.com/questions/78268", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/44494/" ] }
78,275
I've just watched this video that shows that the rise of boat immigrants to the UK is because the UK didn't want a return policy. It seems as though it would have been very important to the UK to have one in order to stop the illegal immigration, but why didn't the UK want to have one?
{ "source": [ "https://politics.stackexchange.com/questions/78275", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/25018/" ] }
78,307
Turkey objects to Sweden gaining NATO membership on the grounds that Sweden supports "terrorism," namely Kurdish separatist fighters. Why has Sweden of all countries taken such an interest in the cause of Kurdish nationalism? This has gotten a lot of downvotes. "Terrorism" is in quotes for a reason -- I don't mean to imply Sweden supports terrorism, just that this is the way Turkey construes their support for Kurdish separatists
There is a significant Kurdish minority in Sweden, about 100,000 people. In the 1960s Sweden was suffering a labour shortage and had an open immigration policy, which encouraged numbers of Near and Middle Eastern people to migrate there. There is a notable presence of Kurds in the Swedish Parliament too. Six members of parliament have Kurdish origins. Moreover, there is the case of the 1986 shooting of Prime Minister Olof Palme. Initially this was blamed on PKK terrorists, and Sweden introduced various measures against the PKK, related groups and Kurds in general. Naturally the Turkish government was happy to fuel these claims. As it became clearer that the PKK had nothing to do with the assassination, the pendulum swung the other way, and the Swedish government was keen to demonstrate that it supported the US-backed SDF and YPG (which Turkey considers to be terrorist groups, or merely a re-branding of the PKK). Sources: Why is Turkey really accusing Sweden of 'supporting terrorists'? , Assassination of Olof Palme , Kurds in Sweden
{ "source": [ "https://politics.stackexchange.com/questions/78307", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/10094/" ] }
78,393
From CNN's February 22, 2023 Pennsylvania attorney general’s office will investigate Norfolk Southern after ‘criminal referral’ from state officials “Our office has been monitoring the train derailment in East Palestine and we are outraged on behalf of the residents who have suffered the consequences of this catastrophe,” the office of acting Attorney General Michelle Henry wrote in a statement Tuesday. and a bit later they quote Michelle Henry : “Pennsylvanians have a constitutional right to clean air and pure water, and we will not hesitate to hold anyone or any company responsible for environmental crimes in our Commonwealth.” I'm not calling the statement into question, it's likely a state's attorney general won't be just making things up. But I wonder in exactly what way do Pennsylvanians have a constitutional right to clean air and pure water. Is it explicitly covered in the Commonwealth of Pennsylvania 's constitution, or the US constitution, or is the AG likely referring to some more fundamental constitutional right and proposing to argue that access to clean air and pure water is part of it?
Article I of the Pennsylvania Constitution is a declaration of rights, among which are the "right to clean air, pure water, and to the preservation of the natural, scenic, historic and esthetic values of the environment." The full text of Pennsylvania's constitution can be found here , and the relevant part is section 27: § 27. Natural resources and the public estate. The people have a right to clean air, pure water, and to the preservation of the natural, scenic, historic and esthetic values of the environment. Pennsylvania's public natural resources are the common property of all the people, including generations yet to come. As trustee of these resources, the Commonwealth shall conserve and maintain them for the benefit of all the people. According to the Pennsylvania Department of Conservation and Natural Resources in this article on their site , that section was drafted in 1967, and widely supported by Pennsylvanians during a referendum in 1971: The first Earth Day launched the movement to create laws and programs that make sure we have clean water to drink, and protections for our air, land, and wildlife. At that same time, Franklin Kury was working to address the impacts that industries like coal and steel had in Pennsylvania, especially on rivers and streams. As a member of the House of Representatives in 1967, Kury drafted and introduced the legislation that led to the establishment of Section 27 of the Declaration of the People’s Rights. Fifty years ago, on May 18, 1971, Pennsylvanians went to the polls and three out of four of them voted for the change, ratifying Article 1, Section 27. What followed were major statutes and regulations protecting the air, land, and water from degradation in Pennsylvania.
{ "source": [ "https://politics.stackexchange.com/questions/78393", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/16047/" ] }
78,463
The New York Times' February 26, 2023 Newspapers Drop ‘Dilbert’ After Creator’s Rant About Black ‘Hate Group’ includes the paragraph: In the video from Tuesday that led to backlash, Mr. Adams, who is white, said he had “started identifying as Black” years ago and then brought up a poll by Rasmussen Reports that found that 53 percent of Black Americans agreed with the statement “It’s okay to be white.” The Twitter link provides a video in which "Mark Mitchell, Head Pollster at Rasmussen Reports" explains the poll and a breakdown thereof by various demographics. I must be missing some context; the negative of the statement "It's not okay to be white." seems quite difficult to agree to, but what either of these statements mean seems to be open to interpretation and could mean different things in different contexts. I understand that the statement (slogan?) " All lives matter " while sounding innocuous ("who could disagree?") its usage is usually seen as a counter to " Black lives matter " which does not suggest other lives don't. Is that what's going on behind "It's ok to be white"? If so, then is the poll simply checking to see if people have been informed of the context or not? Question: Help me understand the context behind the "It's okay to be white" question in a recent Rasmussen Poll, and what if anything at all might these poll results show?
I am not sure how much "context" you want out of a poll question. This is the poll : 1* Do you agree or disagree with this statement: “It’s OK to be white.” 2* Do you agree or disagree with this statement: “Black people can be racist, too.” Leaving aside question #2, where answering in the negative would mean no black person, anywhere, is racist, let's go to #1. On the surface, this is a reasonable question as well. Being white and not particularly self-conscious about it, I would normally respond in the affirmative. Except... haven't we heard this before? Whose slogan is it? Ah, ADL to the rescue ... Hate on Display / It's Okay To Be White Whether the original trollers were white supremacist or not, actual white supremacists quickly began to promote the campaign—often adding Internet links to white supremacist websites to the fliers or combining the phrase with white supremacist language or imagery. This was not a surprise, as white supremacists had themselves used the phrase in the past—including on fliers—long before the 4chan campaign originated. So, basically you don't know if you are answering that well, yes, being white isn't the mark of the devil. Doh! Or if you are being manipulated into endorsing white supremacist slogans. I'd sit this one out myself, but it's easy to see where people might react in different ways depending on their knowledge of the meme's usage (or just thinking it sounds like a white supremacist slogan) and suspecting manipulation. Or just answering the question at face value. Now, Dilbert's creator - long a Trump supporter - decided that not agreeing with the question meant you hated white people ( out of character for him? ). Went on quite a rant . Scott Adams called Black Americans a "hate group" and suggested white Americans "get the hell away from Black people" in response to a conservative organization's poll purporting to show that many African Americans do not think it's OK to be white. "If nearly half of all Blacks are not OK with white people ... that's a hate group," Adams said on his YouTube channel on Wednesday. "And I don't want to have anything to do with them." Yahoo News “I would say, based on the current way things are going, the best advice I would give to white people is to get the hell away from Black people,” Adams said in the video. “Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed.” Adams' YouTube starting at 13:28ish . Grab yer popcorn, folks, there's plenty of "opinion" (skip if you are likely to be offended, it is offensive). While it is unclear what motivation to attribute to people answering in the negative, one would say Mr. Adams' statements are quite clear. And he's presumably politically savvy enough to know a question using this exact wording (rather than an alternative phrasing) might have baggage associated with it. FWIW, Rasmussen Reports is supposedly quite well-liked by US conservatives (hence Reuters' use of conservative organization ). While that might have motivated them to carry out this particular poll, the same questions asked by a different polling organization would presumably elicit roughly the same answers, assuming a best-practices statistical sample population. Unless the person answering knew, and reacted to, Rasmussen Reports being a "conservative organization" (which seems far-fetched). Here are some details about the poll results , but not broken down by ethnic groups. 
I was kinda wondering if the question about the impact of illegal immigration on schools might have muddied the waters some more about these two questions above. But it was apparently a different poll. Ah, yes, Blacks only : BLACK AMERICANS ONLY: "It's okay to be white." 53% agree, 26% disagree, 21% not sure So, the pure "disagree" are 26% if you're a glass half-full kinda person. And that's before you consider the possible taint from the question's background. p.s. There is some debate about how widely used and known that phrase is. While we can't know how it influenced what percentage of respondents, we can safely assume that both Rasmussen Reports and Scott Adams had plenty of opportunity to know about it. p.p.s. I would also be highly curious to know if, in the actual poll, Q2 (can Blacks be racist?) was known when answering Q1. That seems like it could make Black people more defensive via the priming effect . If the order was reversed then that would be a more definite possibility. Which makes it all the more interesting to know whether it was carried out by phone or online.
{ "source": [ "https://politics.stackexchange.com/questions/78463", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/16047/" ] }
78,480
There are two surveys that struck me recently: December 2021, KIIS , about then-current public opinion: February 2023, Gradus (note, the survey was taken in 2023, but it was talking about the public opinion on just before 24/February/2022): I find the decline 49% to 26% rather drastic. Almost half of those who in December 2021 believed in the possibility of a full-scale invasion changed their mind in about two months, by February 2022. What factors caused that?
As James K points out, comparing two surveys with two different methodologies needs to come with heavy caveats at the best of times, but when one poll asks respondents to recall their state of mind a year ago then this is highly unlikely to be comparable with a survey conducted at that time. KIIS produced a second poll with an identical question in late January 2022, which showed barely any difference in views among those surveyed.
{ "source": [ "https://politics.stackexchange.com/questions/78480", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/2984/" ] }
78,497
https://www.reuters.com/world/china/un-body-rejects-historic-debate-chinas-human-rights-record-2022-10-06/ The U.N. rights council on Thursday voted down a Western-led motion to hold a debate about alleged human rights abuses by China against Uyghurs and other Muslims in Xinjiang in a victory for Beijing as it seeks to avoid further scrutiny. The defeat - 19 against, 17 for, 11 abstentions - is only the second time in the council's 16-year history that a motion has been rejected and is seen by observers as a setback to both accountability efforts, the West's moral authority on human rights and the credibility of the United Nations itself. It seems ridiculous that China seems to have more sway on non-Western countries than the United States given that the United States is a lot more powerful than China. Is there a reason why non-Western countries seem to always vote in favour of China in the UN?
It shouldn't be that surprising - non-western states don't really have that high a regard for the west's " moral authority on human rights " given their own sordid history (e.g. slavery, racism, imperialism and colonial abuses etc.) against the non-western states. (Recent events like the treatment of illegal immigrants or the renditions and tortures of terrorist suspects by the US and EU, for example, also don't paint a pretty picture. Most people don't even know that Amnesty International once ranked the United States higher than China on a list of persistent violators of human rights). There is also the fact that many non-western states do still occasionally indulge in human rights abuses against political groups believed to be created (or overtly or covertly supported) by such western states - either remnants of the colonial practice of "divide and rule" that still survive and cause political problems in many countries or even recent ones during the cold war. Thus, unfortunately, some non-western states do tend to band together on such issues, because otherwise their own human rights abuses may also come under scrutiny. (For example, a common piece of propaganda on Chinese social media is that the "west" supports the Uyghurs because the west believes the Uyghurs are the "genetic descendants" of Europeans . Which is, of course, a ridiculous notion.) All said, as others have pointed out, political relationships do dictate a lot about how countries vote on these resolutions. (In fact, in 2001, the US was even kicked off the UN Human Rights Commission for the first time, while, ironically, countries with the worst human rights records were admitted to it). And China has also been using its growing economic clout to successfully increase its influence internationally - The lender of choice for many nations over the past decade, Beijing now has the power to cut them off, lend more or forgive some of their debts .
{ "source": [ "https://politics.stackexchange.com/questions/78497", "https://politics.stackexchange.com", "https://politics.stackexchange.com/users/38301/" ] }
25
According to the Wikipedia article , Contracts similar to options are believed to have been used since ancient times. In London, puts and "refusals" (calls) first became well-known trading instruments in the 1690s during the reign of William and Mary. Privileges were options sold over the counter in nineteenth century America, with both puts and calls on shares offered by specialized dealers. But how had traders actually priced the simplest options before the Black–Merton-Scholes model became common knowledge? What were the most popular methods to determine arbitrage-free prices before 1973?
You may want to look at Chapter 5 - "The Quest for the Option Formula" from the Derivatives book. The book is available online for free and it has a very decent review of approaches that were used 20-30 years before the Black-Scholes-Merton equation.
{ "source": [ "https://quant.stackexchange.com/questions/25", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/70/" ] }
43
Is there a standard model for market impact? I am interested in the case of high-volume equities sold in the US, during market hours, but would be interested in any pointers.
There is a family of models that is so commonly used among practitioners that it can be almost regarded as standard. For a survey, check out Rob Almgren's entry in the Encyclopedia of Quantitative Finance. Check out also Barra, Axioma and Northfield's handbooks. In general, the impact term per unit traded currency is of the form $$MI \propto \sigma_n \cdot \text{(participation rate)}^\beta$$ where the exponent is somewhere between 1/2 and 1, depending on the model being used, and the participation rate is the percentage of total volume of the trade, during the trading interval itself. When including the total MI in optimization, the models commonly used are the "3/2" model and the "5/3" model, in which the costs are proportional to (dollar value being traded for asset i)^{3/2, 5/3}. Since the term is not quadratic (and not solvable by a quadratic optimizer) some people approximate it by a linear term plus a quadratic one, or by a piece-wise linear convex function.
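As a rough illustration only, here is a minimal R sketch of an impact function in this family (the coefficient, volatility input and exponent below are made-up placeholders that would have to be calibrated, not values taken from any of the handbooks above):

# Expected impact per unit of traded currency, as a fraction of price:
#   MI ~ k * sigma_n * (participation rate)^beta
market_impact <- function(trade_value, interval_volume_value, sigma_n, k = 1, beta = 0.5) {
  # trade_value:           currency value traded during the interval
  # interval_volume_value: currency value of total market volume over the same interval
  # sigma_n:               volatility of the asset over the trading interval
  participation <- trade_value / interval_volume_value
  k * sigma_n * participation^beta
}

# Example: trading 5% of interval volume in a name with 2% interval volatility
market_impact(trade_value = 5e6, interval_volume_value = 1e8, sigma_n = 0.02)  # about 45 bps

The total currency cost is then roughly trade_value * market_impact(...), which with beta = 1/2 gives the "3/2" growth in trade size mentioned above.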
{ "source": [ "https://quant.stackexchange.com/questions/43", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/108/" ] }
82
I heard about MetaTrader from http://www.metaquotes.net . Is there any other framework or program available? Do you use different software for backtracking and running your trading algorithms? Thank you guys for your great answers. I will check out the posted applications.
I am a big believer in do-it-yourself (DIY) backtesting and data analysis, that is, obtaining your own data and writing your own code. I use my own simple Python scripts to process, test, analyze, and backtest, starting with text-input data files (either OHLC bars or tick data). The reason for DIY: in order to have an effective backtest, analysis, etc., you must completely understand all the assumptions, explicit and implicit, that go into the test or analysis. You must understand how that relates to the trading algorithm you implement. As a quick example, people commonly say you must take off a tick or two in backtest results to account for slippage. However, I have found that for several of my backtest methods, I can actually count on getting better entries, on average, than the backtest. Whatever the case, I can sleep at night without worrying about someone changing something in the way the software works, which would throw off my tests without me knowing about it. For algorithm execution, I also use a DIY Java API and Java applications built on the TWS API . However, the reason for that is just to save a few bucks. Edit: Not sure I got this point across, but there is an intimate connection between back-test code, historical data, execution code, and real-time data. The relationship is different depending on what you are doing and what you are using, but it is always important to understand the relationship.
{ "source": [ "https://quant.stackexchange.com/questions/82", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/26/" ] }
84
I know the derivation of the Black-Scholes differential equation and I understand (most of) the solution of the diffusion equation. What I am missing is the transformation from the Black-Scholes differential equation to the diffusion equation (with all the conditions) and back to the original problem. All the transformations I have seen so far are not very clear or technically demanding (at least by my standards). My question: Could you provide me references for a very easily understood, step-by-step solution?
One starts with the Black-Scholes equation $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2S^2\frac{\partial^2 C}{\partial S^2}+ rS\frac{\partial C}{\partial S}-rC=0,\qquad\qquad\qquad\qquad\qquad(1)$$ supplemented with the terminal and boundary conditions (in the case of a European call) $$C(S,T)=\max(S-K,0),\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(2)$$ $$C(0,t)=0,\qquad C(S,t)\sim S\ \mbox{ as } S\to\infty.\qquad\qquad\qquad\qquad\qquad\qquad$$ The option value $C(S,t)$ is defined over the domain $0< S < \infty$, $0\leq t\leq T$. Step 1. The equation can be rewritten in the equivalent form $$\frac{\partial C}{\partial t}+\frac{1}{2}\sigma^2\left(S\frac{\partial }{\partial S}\right)^2C+\left(r-\frac{1}{2}\sigma^2\right)S\frac{\partial C}{\partial S}-rC=0.$$ The change of independent variables $$S=e^y,\qquad t=T-\tau$$ results in $$S\frac{\partial }{\partial S}\to\frac{\partial}{\partial y},\qquad \frac{\partial}{\partial t}\to - \frac{\partial}{\partial \tau},$$ so one gets the constant coefficient equation $$\frac{\partial C}{\partial \tau}-\frac{1}{2}\sigma^2\frac{\partial^2 C}{\partial y^2}-\left(r-\frac{1}{2}\sigma^2\right)\frac{\partial C}{\partial y}+rC=0.\qquad\qquad\qquad(3)$$ Step 2. If we replace $C(y,\tau)$ in equation (3) with $u=e^{r\tau}C$, we will obtain that $$\frac{\partial u}{\partial \tau}-\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial y^2}-\left(r-\frac{1}{2}\sigma^2\right)\frac{\partial u}{\partial y}=0.$$ Step 3. Finally, the substitution $x=y+(r-\sigma^2/2)\tau$ allows us to eliminate the first order term and to reduce the preceding equation to the form $$\frac{\partial u}{\partial \tau}=\frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial x^2}$$ which is the standard heat equation . The function $u(x,\tau)$ is defined for $-\infty < x < \infty$, $0\leq\tau\leq T$. Since $u=C$ and $x=y=\ln S$ at $\tau=0$, the terminal condition (2) turns into the initial condition $$u(x,0)=u_0(x)=\max(e^{x}-K,0).$$ The solution of the heat equation is given by the well-known formula $$u(x,\tau)=\frac{1}{\sigma\sqrt{2\pi \tau}}\int_{-\infty}^{\infty} u_0(s)\exp\left(-\frac{(x-s)^2}{2\sigma^2 \tau}\right)ds.$$ Now, if we evaluate the integral with our specific function $u_0$ and return to the old variables $(x,\tau,u)\to(S,t,C)$, we will arrive at the usual Black–Merton-Scholes formula for the value of a European call. The details of the calculation can be found e.g. in The Mathematics of Financial Derivatives by Wilmott, Howison, and Dewynne (see Section 5.4).
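To make the last step concrete, here is a small numerical check in R (a sketch with arbitrary parameter values): it evaluates the heat-kernel integral above by quadrature and compares the result with the closed-form Black-Scholes price.

bs_call <- function(S, K, r, sigma, tau) {
  d1 <- (log(S / K) + (r + 0.5 * sigma^2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}

heat_call <- function(S, K, r, sigma, tau) {
  y <- log(S)
  x <- y + (r - 0.5 * sigma^2) * tau          # substitution from Step 3
  kernel <- function(s) exp(-(x - s)^2 / (2 * sigma^2 * tau)) / (sigma * sqrt(2 * pi * tau))
  integrand <- function(s) pmax(exp(s) - K, 0) * kernel(s)   # u_0(s) times the heat kernel
  u <- integrate(integrand, lower = log(K), upper = Inf)$value
  exp(-r * tau) * u                           # undo u = e^{r*tau} C from Step 2
}

bs_call(100, 95, r = 0.05, sigma = 0.2, tau = 0.5)
heat_call(100, 95, r = 0.05, sigma = 0.2, tau = 0.5)   # the two values agree closely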
{ "source": [ "https://quant.stackexchange.com/questions/84", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
85
Back in the mid 90's I used the Black-Scholes Model and the Cox-Ross-Rubenstein (Binomial) Model's to price Options. That was nearly 15 years ago and I was wondering if there are any new models being used to price Options?
Black-Scholes itself didn't change a lot but we can now adjust it to deal with a lot more complicated factors to price more complicated contracts: stochastic volatility (Heston, Gatheral), stochastic rates (Hull), credit risk, and dividends. Other methods (computationally intensive) have also evolved to deal with various types of contracts where BS is not a very appropriate choice (e.g. Monte Carlo simulation for path-dependent options).
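As a toy illustration of the Monte Carlo point (made-up parameters, risk-neutral GBM dynamics assumed): pricing an arithmetic-average Asian call, for which there is no simple Black-Scholes-style closed form because the payoff depends on the whole path.

set.seed(1)
S0 <- 100; K <- 100; r <- 0.05; sigma <- 0.2; Tmat <- 1
n_steps <- 252; n_paths <- 1e4; dt <- Tmat / n_steps

payoffs <- replicate(n_paths, {
  z <- rnorm(n_steps)
  path <- S0 * exp(cumsum((r - 0.5 * sigma^2) * dt + sigma * sqrt(dt) * z))
  max(mean(path) - K, 0)        # payoff on the average price, not just the final price
})
price <- exp(-r * Tmat) * mean(payoffs)
std_err <- exp(-r * Tmat) * sd(payoffs) / sqrt(n_paths)
c(price = price, std_error = std_err)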
{ "source": [ "https://quant.stackexchange.com/questions/85", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/169/" ] }
111
I am not very sure, if this question fits in here. I have recently begun, reading and learning about machine learning. Can someone throw some light onto how to go about it or rather can anyone share their experience and few basic pointers about how to go about it or atleast start applying it to see some results from data sets? How ambitious does this sound? Also, do mention about standard algorithms that should be tried or looked at while doing this.
There seems to be a basic fallacy that someone can come along and learn some machine learning or AI algorithms, set them up as a black box, hit go, and sit back while they retire. My advice to you: Learn statistics and machine learning first, then worry about how to apply them to a given problem. There is no free lunch here. Data analysis is hard work . Read "The Elements of Statistical Learning" (the pdf is available for free on the website), and don't start trying to build a model until you understand at least the first 8 chapters. Once you understand the statistics and machine learning, then you need to learn how to backtest and build a trading model, accounting for transaction costs, etc. which is a whole other area. After you have a handle on both the analysis and the finance, then it will be somewhat obvious how to apply it. The entire point of these algorithms is trying to find a way to fit a model to data and produce low bias and variance in prediction (i.e. that the training and test prediction error will be low and similar). Here is an example of a trading system using a support vector machine in R , but just keep in mind that you will be doing yourself a huge disservice if you don't spend the time to understand the basics before trying to apply something esoteric. [Edit:] Just to add an entertaining update: I recently came across this master's thesis: "A Novel Algorithmic Trading Framework Applying Evolution and Machine Learning for Portfolio Optimization" (2012). It's an extensive review of different machine learning approaches compared against buy-and-hold. After almost 200 pages, they reach the basic conclusion: "No trading system was able to outperform the benchmark when using transaction costs." Needless to say, this does not mean that it can't be done (I haven't spent any time reviewing their methods to see the validity of the approach), but it certainly provides some more evidence in favor of the no-free lunch theorem .
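A tiny base-R illustration of the bias/variance point (simulated data, nothing to do with any particular trading signal): an over-flexible model can look excellent in-sample while doing worse out-of-sample, which is exactly the trap a naive backtest rewards.

set.seed(42)
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.5)
d <- data.frame(x = x, y = y)
train <- sample(n, n / 2)

mse <- function(fit, idx) mean((d$y[idx] - predict(fit, newdata = d[idx, ]))^2)
for (degree in c(1, 3, 15)) {
  fit <- lm(y ~ poly(x, degree), data = d[train, ])
  cat(sprintf("degree %2d   train MSE %.3f   test MSE %.3f\n",
              degree, mse(fit, train), mse(fit, -train)))
}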
{ "source": [ "https://quant.stackexchange.com/questions/111", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/175/" ] }
115
Since Mandelbrot, Fama and others have performed seminal work on the topic, it has been suspected that stock price fluctuations can be more appropriately modeled using Lévy alpha-stable distrbutions other than the normal distribution law. Yet, the subject is somewhat controversial, there is a lot of literature in defense of the normal law and criticizing distributions without bounded variation. Moreover, precisely because of the the unbounded variation, the whole standard framework of quantitative analysis can not be simply copy/pasted to deal with these more "exotic" distributions. Yet, I think there should be something to say about how to value risk of fluctuations. After all, the approaches using the variance are just shortcuts, what one really has in mind is the probability of a fluctuation of a certain size. So I was wondering if there is any literature investigating that in particular. In other words: what is the current status of financial theories based on Lévy alpha-stable distributions? What are good review papers of the field?
I recently read "Modeling financial data with stable distributions" (Nolan 2005) which gives a survey of this area and might be of interest (I believe it was contained in "Handbook of Heavy Tailed Distributions in Finance" ). Another more recent reference is "Alpha-Stable Paradigm in Financial Markets" (2008). I'm not aware of anything covering "risk of fluctuations" and this is still certainly not at the center of the field (i.e. most theory still includes some version of Gaussian or mixture of Gaussians). Would also be interested in other references.
{ "source": [ "https://quant.stackexchange.com/questions/115", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/156/" ] }
140
Are there common procedures prior or posterior backtesting to ensure that a quantitative trading strategy has real predictive power and is not just one of the thing that has worked in the past by pure luck? Surely if we search long enough for working strategies we will end up finding one. Even in a walk forward approach that doesn't tell us anything about the strategy in itself. Some people talk about white's reality check but there are no consensus in that matter.
Strictly speaking, data snooping is not the same as in-sample vs out-of-sample model selection and testing, but has to deal with sequential or multiple tests of hypotheses based on the same data set. To quote Halbert White: Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. Let me provide an example. Suppose that you have a time series of returns for a single asset, and that you have a large number of candidate model families. You fit each of these models on a training data set, and then check the performance of the model prediction on a hold-out sample. If the number of models is high enough, there is a non-negligible probability that the predictions provided by one model will be considered good. This has nothing to do with bias-variance trade-offs. In fact, each model may have been fitted using cross-validation on the training set, or other in-sample criteria like AIC, BIC, Mallows etc. For examples of a typical protocol and criteria, check Ch.7 of Hastie-Friedman-Tibshirani's " The Elements of Statistical Learning ". Rather, the problem is that implicitly multiple tests of hypotheses are being run at the same time. Intuitively, the criterion to evaluate multiple models should be more stringent, and a naive approach would be to apply a Bonferroni correction . It turns out that this criterion is too stringent. That's where Benjamini-Hochberg , White , and Romano-Wolf kick in. They provide efficient criteria for model selection. The papers are too involved to describe here, but to get a sense of the problem, I recommend Benjamini-Hochberg first, which is both easier to read and truly seminal.
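A small simulated example of the problem and the corrections (toy numbers, strategies with zero true edge by construction; p.adjust is in base R):

set.seed(7)
n_models <- 200; n_obs <- 250
# daily returns of 200 candidate strategies, none of which has any real edge
returns <- matrix(rnorm(n_models * n_obs, mean = 0, sd = 0.01), ncol = n_models)
p_vals <- apply(returns, 2, function(r) t.test(r, mu = 0)$p.value)

sum(p_vals < 0.05)                                   # naive "discoveries" by pure luck (about 10)
sum(p.adjust(p_vals, method = "bonferroni") < 0.05)  # Bonferroni: very stringent, almost surely 0
sum(p.adjust(p_vals, method = "BH") < 0.05)          # Benjamini-Hochberg FDR control

Note that White's reality check and Romano-Wolf additionally account for the dependence between the strategies being compared, which this toy example ignores.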
{ "source": [ "https://quant.stackexchange.com/questions/140", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/155/" ] }
141
What sources of financial and economic data are available online? Which ones are free or cheap? What has your experience been like with these data sources?
This post is Quant Stack Exchange's master list of data sources. Please append your links to other data sources to the list below. Economic Data See What are the most useful sources of economics data? on Cross Validated SE. World https://macrovar.com/macrovar-database/ includes free data for 5,000+ Financial and Macroeconomic Indicators of the 35 largest economies of the world. It includes macroeconomic indicators and financial markets covering equity indices, fixed income, foreign exchange, credit default swaps, futures and commodities. It also provides free financial and economic research. OECD.StatExtracts includes data and metadata for OECD countries and selected non-member economies. http://www.assetmacro.com/ includes data for 20,000+ Macroeconomic and Financial Indicators of 150 countries https://db.nomics.world is an open platform with more than 16,000 datasets among 50+ providers. United Kingdom http://www.statistics.gov.uk/ United States Federal Reserve Economic Data - FRED (includes URL-based API) http://www.census.gov/ http://www.bls.gov/ http://www.ssa.gov/ http://www.treasury.gov/ http://www.sec.gov/ http://www.economagic.com/ http://www.forecasts.org/ Foreign Exchange 1Forge Realtime FX Quotes OANDA Historical Exchange Rates Dukascopy - Historical FX prices; XML and CSV . There is a non-affiliated downloader called tickstory . ForexForums Historical Data - Historical FX downloads via Amazon S3 FXCM provides an open repository of tick data starting from January 4th 2015, with a download script on github. GAIN Capital - Historical FX rates (in ZIP format) TrueFX - Historical FX rates (in ZIP/CSV format) . A download helper script is available on GitHub. TrueFX.com asks for free registration. Same files are linked from Pepperstone , no registration needed. TraderMade - Real Time Forex Data [RTFXD - Real Time FX Data] 9 : Delivered via ssh. Very low pricing. Olsen Data / Olsen Financial Technologies : Historical FX data can be ordered online in custom format. Download link sent in 2 business days. Real time data service. Expensive but very high quality. Zorro : 1Minute bars from 2010 in t6 format (OHLC and tick volume) http://polygon.io Norgate Data : Historical FX data covering 74 currency currency and 14 bullion crosses with daily updates. PortaraCQG - Historical Forex Data Supplies FX 1 min, tick and level 1 from 1987. Updates and data tools included. Databento Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity. Equity and Equity Indices http://finance.yahoo.com/ http://www.iasg.com/managed-futures/market-quotes http://kumo.swcp.com/stocks/ Kenneth French Data Library http://unicorn.us.com/advdec/ http://siblisresearch.com/ usfundamentals.com - Quarterly and annual financial data for US companies for the five years up until 2016 http://simfin.com/ Olsen Data / Olsen Financial Technologies https://www.tiingo.com/welcome - Equity, ETF, and Mutual Fund price and fundamental data http://polygon.io Norgate Data - Deep daily history of US, Australian and Canadian equities and indices, survivorship bias-free, and daily updates. PortaraCQG - Historical Intraday Data - Supplies global indices 1 min, tick and level 1 from 1987. Updates and data tools included. Databento - Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. 
Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity. EquityRT - Historic stock trading, index and detailed fundamental equity (historic and forecast) data and financial analysis along with industry-specific financial analysis and comparisons, institutional shareholding data and news reports provided through Excel APIs or a web browser. Service includes foreign exchange, commodity and cryptocurrency prices in addition to some fixed income, and macroeconomic data. Service was geared towards emerging countries' markets but recently expanded to include developed country markets. Not free but quite reasonably priced. Investing.com - Trading and some fundamental data for equities in addition to pricing data for commodities, futures, foreign exchange, fixed income and cryptocurrencies through a web browser. Only pricing data can be downloaded for free. Historic financials and forecasts and financial analysis available for companies with the pro subscription through a browser along with downloading and charting, no APIs. algoseek - Non-free provider of intraday and other data through various types of APIs and platforms for equities, ETFs, options, cash forex, futures, and cryptocurrencies mainly for US markets. Fixed Income FRB: H.15 Selected Interest Rates Barclays Capital Live (institutional clients only) CDS spreads PortaraCQG - Institutional Supplier - Supplies Tullet Prebon Sovereign Debt 1 min, tick and level 1 from 1987. Updates and data tools included. Credit Rating Agency Ratings History Data - Corporate rating histories from multiple agencies converted to CSV format. Data on historical, cross-country nominal yield curves - A Q&A on fixed income yield data providing links to official and commercial sources for many different countries. Options and Implied Volatility http://www.ivolatility.com/ http://www.optionmetrics.com/ http://www.livevol.com/ http://www.historicaloptiondata.com/ https://www.commodityvol.com/ Olsen Data / Olsen Financial Technologies Futures http://www.simiansavants.com/cmedata.shtml http://www.cmegroup.com/market-data/index.html http://www.quandl.com Olsen Data / Olsen Financial Technologies Norgate Data - Deep daily history of 100 futures markets from 11 worldwide exchanges, and daily updates. PortaraCQG - Historical Futures Data Supplies Global Futures. Daily from 1899, 1 min, tick and level 1 from 1987. Updates and data tools included. Databento . Real-time and historical data direct from colocation facilities. Integrates with Python, C++ and raw TCP. Includes order book, tick data, and subsampled OHLCV aggregates at 1s, 1min, 1h, daily granularity. Commodities LIFFE Commodity Derivatives - 15 min delay; free registration PortaraCQG - Historical Commodities Data Global Commodities including LME, Asian and Russian commodity exchanges. Multiple Asset Classes and Miscellaneous http://www.eoddata.com/ Robert Shiller Online Data S#.Data is a free application for downloading and storing market data from various sources Specific Exchanges Spanish Futures & Options (MEFF) CBOE Futures Exchange (CFE Vix Futures )
{ "source": [ "https://quant.stackexchange.com/questions/141", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/188/" ] }
156
There are a few things that form the common canon of education in (quantitative) finance, yet everybody knows they are not exactly true, useful, well-behaved, or empirically supported. So here is the question: which is the single worst idea still actively propagated? Please make it one suggestion per post.
CAPM as an allocation strategy. Market efficiency was predicated on several fallacious ideas, including: everyone can borrow (and lend) at the same rate, indefinitely (i.e. no matter their leverage); all information is known instantaneously by all market participants; there are no transaction costs; and rational behavior. One conclusion is that the higher the beta, the higher the return, but this has clearly been shown to be violated. While it is useful for segmenting $\alpha$ and $\beta$ (and for portfolio/strategy evaluation), it simply isn't entirely reliable as a portfolio allocation strategy. As Fama/French concluded in "The Capital Asset Pricing Model: Theory and Evidence" (2004): The CAPM, like Markowitz's (1952, 1959) portfolio model on which it is built, is nevertheless a theoretical tour de force. We continue to teach the CAPM as an introduction to the fundamental concepts of portfolio theory and asset pricing, to be built on by more complicated models like Merton's (1973) ICAPM. But we also warn students that despite its seductive simplicity, the CAPM's empirical problems probably invalidate its use in applications. Note that CAPM adds many assumptions to Markowitz's fundamental model to build itself. Therein lies its fallacy because, as said above, those are difficult assumptions. Markowitz' model itself is fairly general in that you can inject 'views' of higher returns or greater volatility etc into the basic framework (or not!) and still be quite rooted in reality for mid-long term horizons.
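For the alpha/beta segmentation use that does survive, the estimation is just a linear regression of excess returns on excess market returns; a minimal sketch in R on simulated data (the series below are placeholders for real excess-return data):

set.seed(1)
n <- 252
mkt_excess <- rnorm(n, mean = 0.0003, sd = 0.01)                  # market excess returns
fund_excess <- 0.0001 + 1.2 * mkt_excess + rnorm(n, sd = 0.005)   # a fund with true beta 1.2
fit <- lm(fund_excess ~ mkt_excess)
coef(summary(fit))   # intercept = alpha estimate, slope = beta estimate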
{ "source": [ "https://quant.stackexchange.com/questions/156", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/69/" ] }
183
How do you explain what a stationary process is? In the first place, what is meant by process , and then what does the process have to be like so it can be called stationary ?
A stationary process is one where the mean and variance don't change over time. This is technically "second order stationarity" or "weak stationarity", but it is also commonly the meaning when seen in literature. In first order stationarity, the distribution of $(X_{t+1}, ..., X_{t+k})$ is the same as $(X_{1}, ..., X_{k})$ for all values of $(t, k)$. You can see whether a series is stationary through its autocorrelation function (ACF): $\rho_k = Corr(X_t, X_{t-k})$. When the ACF of the time series is slowly decreasing, this is an indication that the mean is not stationary; conversely, a stationary series should converge on zero quickly. For instance, white noise is stationary, while a random walk is not. We can simulate these distributions easily in R ( from a prior answer of mine ):

op <- par(mfrow = c(2,2), mar = .5 + c(0,0,0,0))
N <- 500
# Simulate a Gaussian noise process
y1 <- rnorm(N)
# Turn it into integrated noise (a random walk)
y2 <- cumsum(y1)
plot(ts(y1), xlab="", ylab="", main="", axes=F); box()
plot(ts(y2), xlab="", ylab="", main="", axes=F); box()
acf(y1, xlab="", ylab="", main="", axes=F); box()
acf(y2, xlab="", ylab="", main="", axes=F); box()
par(op)

This produces a four-panel plot of the two series and their ACFs (plot omitted here): the white noise ACF cuts off immediately, while the random walk ACF decays very slowly. If a time series is not stationary, it is possible to make it stationary through a number of different techniques.
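Continuing the same example, the most common such technique is differencing: first differences of the random walk recover the stationary noise, and the ACF confirms it (again base R, reusing y2 from above):

y2_diff <- diff(y2)      # first difference of the random walk
op <- par(mfrow = c(1, 2))
plot(ts(y2_diff), main = "Differenced random walk")
acf(y2_diff, main = "ACF of the differences")
par(op)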
{ "source": [ "https://quant.stackexchange.com/questions/183", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/40/" ] }
194
I have several measures: 1. Profit and loss (PNL). 2. Win to loss ratio (W2L). 3. Avg gain to drawdown ratio (AG2AD). 4. Max gain to maximum drawdown ratio (MG2MD). 5. Number of consecutive gains to consecutive losses ratio (NCG2NCL). If there were only 3 measures (A, B, C), then I could represent the "total" measure as a magnitude of a 3D vector: R = SQRT(A^2 + B^2 + C^2) If I want to combine those 5 measures into a single value, would it make sense to represent them as the magnitude of a 5D vector? Is there a better way to combine them? Is there a way to put more "weight" on certain measures, such as the PNL?
{ "source": [ "https://quant.stackexchange.com/questions/194", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/78/" ] }
219
What is the intuition behind cointegration? What does the Dickey-Fuller test do to test for it? Ideally, a non-technical explanation would be appreciated. Say you need to explain it to an investor and justify why your pairs trading strategy should make him rich!
This one is quite easy: Think of a man walking his dog. He will go along and his dog will stroll along running back and forth. Man and dog are mathematically "cointegrated". As an investor you bet that the dog is coming back to his master or that the leash has only a certain length.
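If you want to see the leash in code, here is a rough Engle-Granger-style sketch in R (a toy example on simulated data; it assumes the tseries package is available for the augmented Dickey-Fuller test): each series has a unit root on its own, but the spread between them is stationary, which is the statistical content of the dog coming back to its master.

library(tseries)
set.seed(123)
master <- cumsum(rnorm(500))               # a random walk: the man
dog <- master + rnorm(500, sd = 0.5)       # wanders around him on a short leash

adf.test(master)                   # unit root typically not rejected: non-stationary on its own
adf.test(dog)                      # same
spread <- residuals(lm(dog ~ master))
adf.test(spread)                   # stationary spread, i.e. the pair is cointegrated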
{ "source": [ "https://quant.stackexchange.com/questions/219", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/40/" ] }
310
How many trades per second are we talking about? What kind of strategies are used in this time frame? Can the small guy play the game?
You could for example look at this research paper released by Deutsche Bank's Research group ( mirror ) just yesterday which defines both high-frequency and ultra-high-frequency trading. In the paper it says Typically, a high frequency trader would not hold a position open for more than a few seconds. Empirical evidence reveals that the average U.S. stock is held for 22 seconds. And in a footnote it says There even is a subcategory of high-frequency trading, Ultra-HFT , which is sensitive to a latency down to the microsecond. Here, co-location [of servers] is exceedingly significant, and shaving off further microseconds is of utmost importance. And no, the small guy can't play for reason well-put in the paper, co-location probably being the single most important one.
{ "source": [ "https://quant.stackexchange.com/questions/310", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/262/" ] }
325
By HFT here I mean anything with holding period less than 5 to 10 minutes. Any empirical/anecdotal evidence of using it successfully on even higher frequencies?
Holding period and trade frequency are two different things. If you have a high trade frequency, the name of the game is negotiating lower commissions. That being said, the TWS API gives you the same quality feed as you get using TWS itself. From Article on HFT Provided by Dirk Eddelbuettel in this question about HFT : High-frequency trading (HFT) is a subset of algorithmic trading where a large number of orders (which are usually fairly small in size) are sent into the market at high speed, with round-trip execution times measured in microseconds (Brogaard, 2010). Programs running on high-speed computers analyse massive amounts of market data, using sophisticated algorithms to exploit trading opportunities that may open up for milliseconds or seconds. Participants are constantly taking advantage of very small price imbalances; by doing that at a high rate of recurrence, they are able to generate sizeable profits. Typically, a high frequency trader would not hold a position open for more than a few seconds. Empirical evidence reveals that the average U.S. stock is held for 22 seconds. Updates and orders with the TWS API occur on the order of 10s to 100s of milliseconds, as far as I can tell, which would disqualify it for use in the regime described in the article. (This is just what I have measured on my own computer on my retail Internet connection.) Honestly I would be surprised if anyone could do HFT with any retail product. Sounds impossible.
{ "source": [ "https://quant.stackexchange.com/questions/325", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/40/" ] }
341
What is a martingale and how it compares with a random walk in the context of the Efficient Market Hypothesis?
Samuelson suggested in 1965 that the stock prices follow a martingale (see P. Samuelson “Proof That Properly Anticipated Prices Fluctuate Randomly” ). Assume there is a security with a random payoff $X_T$ at date $T$. Let $..., P_{t–1}, P_t, P_{t+1},...$ be the time series of prices of a security with this payoff. Finally, define the price change $\Delta P_{t+1}=P_{t+1} – P_{t}$ for any pair of successive dates $t$ and $t + 1$. Samuelson begins by defining “properly anticipated prices” as prices that are equal to the expected value of $X_T$ at every date $t \leq T$, based on the information $\Phi_t$ available at date t (which, in particular, includes the present and all past price realizations for that security, $...,P_{t–2}, P_{t–1}, P_t$). That is, for all $t \leq T$: $$P_t = \mathbb E(X_T|\Phi_t).$$ In particular, $P_T = X_T$. He then proves that the “prices fluctuate randomly” since it follows that for all $t \leq T$, $P_t = \mathbb E(P_{t+1}|\Phi_t)$ or alternatively that $\mathbb E(\Delta P_{t+1}|\Phi_t) = 0$, and $$\mathbb E(\Delta P_{t+1}\Delta P_{t+2}...\Delta P_T|\Phi_t) = \mathbb E(\Delta P_{t+1}|\Phi_t) \mathbb E(\Delta P_{t+2}|\Phi_t)...\mathbb E(\Delta P_T|\Phi_t)=0.$$ In words, prices follow a martingale, and successive price changes are mutually uncorrelated. This implies that if “prices are properly anticipated,” all the information in the past price series that is useful for forecasting next period’s expected price is contained in the current price. Note that this is a much weaker statement than to say that all information in the past price series that is useful for forecasting the probability distribution of next period’s price is contained in the current price (which is the random walk hypothesis suggested by Fama in his thesis ).
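A quick simulation (a toy sketch with an arbitrary payoff) makes the statement tangible: let $X_T$ be the sum of $T$ independent mean-zero shocks revealed one per period, set $P_t = \mathbb E(X_T|\Phi_t)$, which here is just the running sum of the shocks observed so far, and check that the price changes are mean zero and uncorrelated.

set.seed(2)
n_steps <- 100; n_paths <- 5000
shocks <- matrix(rnorm(n_paths * n_steps), nrow = n_paths)  # information revealed each period
P <- t(apply(shocks, 1, cumsum))      # P_t = E(X_T | Phi_t) since future shocks have mean zero
dP <- P[, -1] - P[, -n_steps]         # successive price changes
mean(dP)                              # close to 0
cor(dP[, 1], dP[, 2])                 # close to 0: successive changes uncorrelated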
{ "source": [ "https://quant.stackexchange.com/questions/341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/40/" ] }
529
I would like to find stock pairs that exhibit low correlation. If the correlation between A and B is 0.9 and the correlation between A and C is 0.9 is there a minimum possible correlation for B and C? I'd like to save on search time so if I know that it is mathematically impossible for B and C to have a correlation below some arbitrary level based on A to B and A to C's correlations I obviously wouldn't have to waste time calculating the correlation of B and C. Is there such a "law"? If not, what are other methods of decreasing the search time?
Yes, there is such a rule and it is not too hard to grasp. Consider the 3-element correlation matrix $$\left(\begin{matrix} 1 & r & \rho \\ r & 1 & c \\ \rho & c & 1 \end{matrix}\right)$$ which must be positive semidefinite . In simpler terms, that means all its eigenvalues must be nonnegative. Assuming that $\rho$ and $r$ are known positive values, we find that the eigenvalues of this matrix go negative when \begin{equation} c<\rho r-\sqrt{1-\rho ^2+\rho ^2 r^2-r^2}. \end{equation} Therefore the right hand side of this expression is the lower bound for the AC correlation $c$ that you seek, with $\rho$ being the AB correlation and $r$ being the BC correlation.
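Plugging in the numbers from the question (both known correlations equal to 0.9), the bound is 0.62; a quick numerical check in R of both the formula and the eigenvalue condition:

rho <- 0.9; r <- 0.9                              # the two known correlations (A-B and A-C)
rho * r - sqrt(1 - rho^2 + rho^2 * r^2 - r^2)     # lower bound for the third correlation: 0.62

psd <- function(c) {
  M <- matrix(c(1, rho, r,
                rho, 1, c,
                r, c, 1), nrow = 3, byrow = TRUE)
  min(eigen(M, symmetric = TRUE)$values) >= -1e-12  # positive semidefinite?
}
psd(0.62)   # TRUE, right at the boundary
psd(0.60)   # FALSE, impossible given the other two correlations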
{ "source": [ "https://quant.stackexchange.com/questions/529", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/352/" ] }
530
There is a concept of trading or observing the market with signal processing originally created by John Ehler . He wrote three books about it. Cybernetic Analysis for Stocks and Futures Rocket Science for Traders MESA and Trading Market Cycles There are number of indicators and mathematical models that are widely accepted and used by some trading software (even MetaStock), like MAMA, Hilbert Transform, Fisher Transform (as substitutes of FFT), Homodyne Discriminator, Hilbert Sine Wave, Instant Trendline etc. invented by John Ehler. But that is it. I have never heard of anybody other than John Ehler studying in this area. Do you think that it is worth learning digital signal processing? After all, each transaction is a signal and bar charts are somewhat filtered form of these signals. Does it make sense?
Wavelets are just one form of "basis decomposition". Wavelets in particular decompose in both frequency and time and thus are more useful than Fourier or other purely frequency-based decompositions. There are other time-freq decompositions (for instance the HHT) which should be explored as well. Decomposition of a price series is useful in understanding the primary movement within a series. In general with a decomposition, the original signal is the sum of its basis components (potentially with some scaling multiplier). The components range from the lowest frequency (a straight line through the sample) to the highest frequency, a curve that oscillates with a frequency maximum approaching N / 2. How this is useful: denoising a series, determining the principal component of movement in the series, and determining pivots. Denoising is accomplished by recomposing the series by summing up the components from the decomposition, less the last few highest frequency components. This denoised (or filtered) series, if chosen well, often gives a view on the core price process. Assuming continuation in the same direction, it can be used to extrapolate for a short period forward. As the timeseries ticks in real-time, one can look at how the denoised (or filtered) price process changes to determine whether a price movement in a different direction is significant or just noise. One of the keys, though, is determining how many levels of the decomposition to recompose in any given situation. Too few levels (low freq) will mean that the recomposed price series responds very slowly to events. Too many levels (high freq) will mean a fast response but, perhaps, too much noise in some price regimes. Given that the market shifts between sideways movements and momentum movements, a filtering process needs to adjust to regime, becoming more or less sensitive to movements in projecting a curve. There are many ways to evaluate this, such as looking at the power of the filtered series versus the power of the raw price series, targeting a certain % depending on regime. Assuming one has successfully employed wavelet or other decompositions to yield a smooth, appropriately reactive signal, one can take the derivative and use it to detect minima and maxima as the price series progresses. Problems: One needs a basis that has "good behavior" at the endpoint so that the slope of the curve at the endpoint projects in an appropriate direction. The basis needs to provide consistent results at the endpoint as the timeseries ticks and not be positionally biased. Unfortunately, I am not aware of any wavelet basis that avoids the above problems. There are some other bases that can be chosen that do better. Conclusion: If you want to pursue wavelets and build trading rules around them, expect to do a lot of research. You may also find that though the concept is good, you will need to explore other decomposition bases to get the desired behavior. I don't use decompositions for trade decisions, but I have found them useful in determining market regime and other backward-looking measures.
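For a feel of the recompose-minus-high-frequency idea, here is a deliberately crude base-R sketch using a Fourier basis instead of a wavelet basis (illustration only: a pure frequency decomposition has no time localisation and suffers from exactly the endpoint problems described above, so treat it as a stand-in for the concept, not a recommendation):

set.seed(10)
n <- 512
price <- cumsum(rnorm(n, sd = 0.5)) + 2 * sin(2 * pi * (1:n) / 128)   # fake price series

coefs <- fft(price)
keep <- 16                                  # number of low-frequency components to keep
coefs[(keep + 2):(n - keep)] <- 0           # zero the high frequencies (keeping conjugate pairs)
denoised <- Re(fft(coefs, inverse = TRUE)) / n

plot(price, type = "l", col = "grey")       # raw series
lines(denoised, lwd = 2)                    # low-frequency "core" process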
{ "source": [ "https://quant.stackexchange.com/questions/530", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/42/" ] }
545
There is a large body of literature on the "success" of the application of evolutionary algorithms in general, and the genetic algorithm in particular, to the financial markets. However, I feel uncomfortable whenever reading this literature. Genetic algorithms can over-fit the existing data. With so many combinations, it is easy to come up with a few rules that work. It may not be robust and it doesn't have a consistent explanation of why this rule works and those rules don't beyond the mere (circular) argument that "it works because the testing shows it works". What is the current consensus on the application of the genetic algorithm in finance?
I've worked at a hedge fund that allowed GA-derived strategies. For safety, it required that all models be submitted long before production to make sure that they still worked in the backtests. So there could be a delay of up to several months before a model would be allowed to run. It's also helpful to separate the sample universe; use a random half of the possible stocks for GA analysis and the other half for confirmation backtests.
{ "source": [ "https://quant.stackexchange.com/questions/545", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/129/" ] }
549
Finance is drowning in a deluge of data. Humans are not very good at comprehending large amounts of data. One way out may be visualization. Traditional ways of visualizing patterns, complexities and contexts are of course charts and for derivatives e.g. payoff diagrams, a more modern approach are heat maps . My question: Do you know of any innovative (or experimental) ways of visualizing financial and/or derivatives data?
Visualization should lead to truth and understanding. As such, I find that simple visualizations tend to be the best. My favorite visualization for showing relationships is the scatterplot . Once you start to even introduce a line plot, you are implying continuities between data that may not exist. And trying to introduce more advanced visualizations like network diagrams ( ex ) or complicated pie charts ( ex ) can lead to more confusion than understanding if misapplied. A few thoughts: I think that you have already mentioned a few good ones. Heatmaps are good because they allow you to show three (or more) dimensional data without the added issues that arise when trying to create a 3D visualization. Payoff diagrams are simple but they accomplish their goal efficiently as a result. The FinViz website has a few nice examples of visualizations, including a simple bar chart , candlesticks , and heatmap . People often don't consider that it is possible to include more dimensions in a typical plot by changing the width, size, color, or intensity of a shape. This is a much better idea than trying to plot more than 2 dimensions spatially. The fourth real dimension is time, and time plays a very important role in financial data. One popular way to incorporate this as another dimension in a visualization is through video . A great example is gapminder , the software created by Hans Rosling, which made for some very compelling TED talks about global poverty. This was acquired by Google and is now available as part of their web toolkit (also mentioned by Ben Hoffstein ). Visualization techniques from other fields are still very appropriate in finance, and the best starting point is Edward Tufte , especially "The Visual Display of Quantitative Information" and "Envisioning Information" . You also can get a benefit from learning a visualization language. I recommend any of these three (in order of complexity): R with ggplot2 ( plotly now provides an easy way to make ggplot graphs interactive), Protovis, Processing. These each have a learning curve, but once you learn how to use them they all allow for exploratory data analysis in a way that can't be achieved with other tools. There are also many great and innovative commercial tools. To mention a few that are all used by banks and hedge funds: Panopticon does an amazing job with real-time visualization. Tableau , Spotfire , and Qlikview all allow for interactive visualization of data using in-memory databases.
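On the point about packing extra dimensions into a scatterplot via size and colour rather than a 3D projection, a minimal ggplot2 sketch (it assumes ggplot2 is installed; the columns are made up):

library(ggplot2)
set.seed(3)
d <- data.frame(ret = rnorm(100),
                vol = runif(100, 0.1, 0.4),
                mktcap = runif(100, 1, 100),
                sector = sample(LETTERS[1:4], 100, replace = TRUE))
ggplot(d, aes(x = vol, y = ret, colour = sector, size = mktcap)) +
  geom_point(alpha = 0.7) +
  labs(x = "Volatility", y = "Return", size = "Market cap")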
{ "source": [ "https://quant.stackexchange.com/questions/549", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
557
In my firm we are beginning a new OMS (Order Management System) project, and there is a debate over whether we should use QuickFIX or go for a commercial FIX engine. There is a common concern that QuickFIX is not fast enough, and obviously we will not get any technical support. I heard that it has been used at BOVESPA for a while; they are replacing it with a paid one now. Well, that is enough for me: if they can use it, I can use it. Should I choose a commercial engine over QuickFIX? Is QuickFIX not good enough?
In order to answer your question (for you), you would need something to compare to. You would need numbers to know if it is slower or faster, by how much, and whether it will impact your system overall. Also, knowing your performance goals could narrow down the options. My advice is to take a look at the overall architecture of the system you have or intend to build. Looking at QuickFIX in isolation is rather meaningless without the whole chain involved in processing information and reacting to it. As an example, say QuickFIX is 100 times faster than some part (in the chain of processing) you have or will build. Now, replacing QuickFIX with something 100 times faster than QuickFIX would not change anything, because you're still held back by the slowest point. And remember that network hops are usually very expensive compared to in-memory processing of data. If you for some reason cannot compare different candidates against each other, why not start with e.g. QuickFIX, but build the system in such a way that it can be replaced with something faster later on. Generally speaking, QuickFIX is not the fastest option, but the key point is that it might not have to be. If performance is very critical and one has resources, you usually end up buying something or building something yourself. The drawbacks here are the resources required: time, money and skilled people. To answer your question better, one would need to know other aspects as well, like available resources (money, time, skill), an overall system overview, performance expectations and other factors that limit decisions. E.g. if money is not a limiting factor, just find the fastest option and buy it.
{ "source": [ "https://quant.stackexchange.com/questions/557", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/42/" ] }
613
I was wondering what is best practice for representing elements in a time series, especially with large amounts of data. The focus/context is a backtesting engine and comparing multiple series. It seems there are two options: 1) using an integer index, or 2) using a date-based index. At the moment I am using dates, but this impacts performance and memory usage in that I am using a hash table rather than an array, and it requires some overhead in iteration (either forwards or backwards) as I have to determine the next/previous valid date before I can access it. However, it does let me aggregate data on the fly (e.g. building the OHLC for the previous week when looking at daily bars) and, most importantly for me, allows me to compare different series with certainty that I am looking at the same date/time. If I am looking at an equity issue relative to a broader index, and say the broader index is missing a few bars for whatever reason, using an integer-indexed array would mean I'm looking at future data for the broad index vs present data for the given security. I don't see how you could handle these situations unless you're using dates/times. Using integer indexes would be a lot easier code-wise, so I was just wondering what others are doing or if there is a best practice for this.
Representing time series (esp. tick data) using elaborate data structures may be not the best idea. You may want to try to use two arrays of the same length to store your time series. The first array stores values (e.g. price) and the second array stores time. Note that the second series is monotonically increasing (or at least non-decreasing), i.e. it's sorted. This property enables you to search it using the binary search algorithm. Once you get an index of a time of interest in the second array you also have the index of the relevant entry in the first array. If you wrap the two arrays and the search algorithm e.g. in a class you will have the whole implementation complexity hidden behind a simple interface.
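A bare-bones sketch of that idea in R, with made-up timestamps and prices (findInterval performs the binary search over the sorted time array):

times  <- c(1.0, 1.5, 2.2, 3.7, 4.1)          # hypothetical timestamps, sorted ascending
prices <- c(100.10, 100.20, 100.15, 100.30, 100.25)
price_asof <- function(t) {
  i <- findInterval(t, times)                  # index of the last timestamp <= t (binary search)
  if (i == 0) NA else prices[i]
}
price_asof(3.0)    # -> 100.15, the last observed price at or before t = 3.0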
{ "source": [ "https://quant.stackexchange.com/questions/613", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/489/" ] }
692
I'm looking at doing some research drawing comparisons between various methods of approaching option pricing. I'm aware of the Monte Carlo simulation for option pricing, Black-Scholes, and that dynamic programming has been used too. Are there any key (or classical) methods that I'm missing? Any new innovations? Recommended reading or names would be very much appreciated. Edit: Additionally, what are the standard methods and approaches for assessing/analysing a model or pricing approach?
There are a wide variety of models (by which I mean the theoretical / mathematical formulation of how the underlying financial variable(s) of interest behave). The most popular ones differ depending on the asset class under consideration (though some are mathematically the same and named differently). Some examples are:

- Black-Scholes / Black / Garman-Kohlhagen
- Local-volatility [aka Dupire model]
- Stochastic-volatility - a generic term for extensions of Black-Scholes where there is a second stochastic factor driving the volatility of the spot; examples are Heston, SABR
- Levy processes (usually actually log-Levy): a wide class of models with some features that make them theoretically / technically nice; examples are VG, CGMY
- jumps (often compound Poisson) of various kinds can be added to the above models; for example the Merton model is Black-Scholes with jumps
- CIR, OU processes show up in fixed-income
- There are multi-factor (ie multiple driving Brownian motions) versions of the above; e.g. Libor market model, correlated log-normal models
- Models for pricing, say, credit-default swaps are often Poisson processes with random hazard rates
- Commodities such as electricity can require specialized models to handle the particular features of that market
- etc.

Implementation methodologies can include:

- analytic formulae (usually involving special functions): examples are the classic case of European-style vanilla options in Black-Scholes / CEV / VG; many exotics in Black-Scholes can be "solved" in this way
- approximate analytic - for example, one might price average-rate options in Black-Scholes by approximating the final distribution (by moment-matching) with a shifted-lognormal and using the closed-form for the shifted-lognormal
- Binomial / trinomial trees can be viewed as a discretization technique for approximating, say, Black-Scholes. (Note that some people might view the approximation as a model in its own right --- a conflation of model & implementation and more of a philosophical stance than a practical consideration.)
- Numerical methods for solving or approximating the PDE governing the option price; this could be solved by finite-difference methods, finite-element methods, etc.
- Monte-carlo is a nice brute-force way to handle almost any kind of model and most options (though there are complications with early-exercise style features of options), but it typically takes a lot of computing power to get any accuracy in the price
- Interpolation could be viewed as a technique --- if you know the price of a collection of options (varying in some parameters) you can price a new option by interpolating based on the parameters (volatility surfaces implemented by interpolating a grid of given options are examples of this)
- etc.
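As a toy illustration of the Monte-Carlo route on the simplest possible case (all parameters arbitrary), a simulated European call price under Black-Scholes dynamics can be checked against the closed form in a few lines of R:

set.seed(1)
S0 <- 100; K <- 100; r <- 0.02; sigma <- 0.25; tau <- 1; n <- 1e5
ST <- S0 * exp((r - 0.5*sigma^2)*tau + sigma*sqrt(tau)*rnorm(n))   # terminal prices
mc_price <- exp(-r*tau) * mean(pmax(ST - K, 0))                    # discounted average payoff
d1 <- (log(S0/K) + (r + 0.5*sigma^2)*tau) / (sigma*sqrt(tau)); d2 <- d1 - sigma*sqrt(tau)
bs_price <- S0*pnorm(d1) - K*exp(-r*tau)*pnorm(d2)                 # closed form for comparison
c(monte_carlo = mc_price, closed_form = bs_price)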
{ "source": [ "https://quant.stackexchange.com/questions/692", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/473/" ] }
728
I've only recently begun exploring and learning R (especially since Dirk recommended RStudio and a lot of people here speak highly of R). I'm rather C(++)-oriented, so it got me thinking: what are the limitations of R, in particular in terms of performance? I'm trying to weigh the C++/Python/R alternatives for research, and I'm considering whether getting to know R well enough is worth the time investment. Available packages look quite promising, but there are some issues in my mind that keep me at bay for the time being:

- How efficient is R when it comes to importing big datasets? And first of all, what's big in terms of R development? I used to process a couple hundred CSV files in C++ (around 0.5M values I suppose) and I remember it being merely acceptable. What can I expect from R here? Judging by Jeff's spectacular results, I assume that with a proper long-term solution (not CSV) I should even be able to switch to tick processing without hindrance. But what about ad-hoc data mangling? Is the difference in performance (compared to lower-level implementations) that visible? Or is it just an urban legend?
- What are the options for GUI development? Let's say I would like to go further than research-oriented analysis, like developing full-blown UIs for investment analytics/trading etc. From what I found mentioned here and on StackOverflow, with proper bindings I am free to use Python's frameworks here and even further chain into Qt if such a need arises. But deploying such a beast must be a real nuisance. How do you cope with it?
- In general, I see that R's flexibility allows me to mix and match it with a plethora of other languages (either way round: using low-level additions in R, or embedding/invoking R in projects written in another language). That seems nice, but does it make sense (I mean thinking about it from the start/concept phase, not extending preexisting solutions)? Or is it better to stick with one-and-only language (insert whatever you like/have experience with)?

So to sum up: in what quant finance applications is R a (really) bad choice (or at least can be)?
R can be pretty slow, and it's very memory-hungry. My data set is only 8 GB or so, and I have a machine with 96 GB of RAM, and I'm always wrestling with R's memory management. Many of the model estimation functions capture a link to their environment, which means you can be keeping a pointer to each subset of the data that you're dealing with. SAS was much better at dealing with large-ish data sets, but R is much nicer to deal with. (This is in the context of mortgage prepayment and default modeling.) Importing the data sets is pretty easy and fast enough, in my experience. It's the ballooning memory requirements for actually processing that data that's the problem. Anything that isn't easily vectorizable seems like it would be a problem. P&L backtesting for a strategy that depends on the current portfolio state seems hard. If you're looking at the residual P&L from hedging a fixed-income portfolio, with full risk metrics, that's going to be hard. I doubt many people would want to write a term structure model in R or a monte-carlo engine. Even with all that, though, R is a very useful tool to have in your toolbox. But it's not exactly a computational powerhouse. I don't know anything about the GUI options.
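To illustrate the vectorization point with synthetic data (the sizes and the price path are arbitrary), the gap between the vectorised path and an explicit loop is easy to see:

p <- 100 * exp(cumsum(rnorm(1e6, 0, 0.001)))    # a synthetic price path
system.time(r_vec <- diff(log(p)))              # vectorised log returns: effectively instant
loop_returns <- function(p) {
  out <- numeric(length(p) - 1)
  for (i in 2:length(p)) out[i - 1] <- log(p[i]) - log(p[i - 1])
  out
}
system.time(r_loop <- loop_returns(p))          # same numbers, typically far slower
all.equal(r_vec, r_loop)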
{ "source": [ "https://quant.stackexchange.com/questions/728", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/38/" ] }
946
As far as I understand, most investors are willing to buy options (puts and calls) in order to limit their exposure to the market in case it moves against them. This is due to the fact that they are long gamma. Being short gamma would mean that the exposure to the underlying becomes more long as the underlying price drops and more short as the underlying price rises. Thus exposure gets higher with a P&L downturn and lower with a P&L upturn. Hence I wonder who is willing to be short gamma? Is it a bet on a low volatility? Also, for a market maker in the option market, writing (selling) an option means being short gamma, so if there is no counterparty willing to be short gamma, how are they going to hedge their gamma?
Being short gamma simply means that you are short options, regardless of whether they are puts or calls. The most common type of investor that is willing to be short gamma is someone who sells options, also known as a premium collector. These investors commonly use strategies such as short puts, covered calls, iron condors, vertical credit spreads, and a few others. These strategies are typically referred to as income generation strategies. They offer the investor a return known in advance, in exchange for the risk of being short options. Frequently these types of income trades have a probability of success over 80%. Clearly there is significant risk associated with a probability of success that high (small, frequent gains against occasional large losses), so approach with caution.
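As a toy illustration of the trade-off (the strike and premium are arbitrary), the expiry P&L of a short put shows the capped gain against the large downside:

K <- 100; premium <- 3                    # hypothetical strike and credit received
S_T <- seq(60, 140, by = 1)
pnl <- premium - pmax(K - S_T, 0)         # keep the credit; pay intrinsic value if assigned
plot(S_T, pnl, type = "l", xlab = "Underlying price at expiry", ylab = "P&L per share")
abline(h = 0, lty = 2)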
{ "source": [ "https://quant.stackexchange.com/questions/946", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/553/" ] }
948
What would be the best approach to handle real-time intraday data storage? For personal research I've always imported from flat files only into memory (historical EOD), so I don't have much experience with this. I'm currently working on a side project, which would require daily stock quotes updated every minute from an external feed. For the time being, I suppose any popular database solution should handle it without sweating too much in this scenario. But I would like the adopted solution to scale easily when real-time ticks become necessary. A similar problem has been mentioned by Marko, though it was mostly specific to R. I'm looking for a universal data storage accessible both for lightweight web front-ends (PHP/Ruby/Flex) and an analytical back-end (C++, R or Python, don't know yet). From what chrisaycock mentioned, column-oriented databases should be the most viable solution, and it seems to be the case. But I'm not sure I understand all the intricacies of column-oriented storage in some exemplary usage scenarios:

- Fetching all or a subset of price data for a specific ticker for front-end charting. Compared to row-based solutions, fetching price data should be faster because it's a sequential read. But how does storing multiple tickers in one place influence this? For example a statement like "select all timestamps and price data where ticker is equal to something". Don't I have to compare the ticker on every row I fetch? And in the situation where I have to provide complete data for some front-end application, wouldn't serving a raw flat file for the instrument requested be more efficient?
- Analytics performed in the back-end. Things like computing single values for a stock (e.g. variance, return for last x days) and dependent time-series (daily returns, technical indicators etc.). Fetching input data for computations should be more efficient as in the preceding case, but what about writing? The gain I see is bulk-writing the final result (like the value of a computed indicator for every timestamp), but still I don't know how the database handles my mashup of different tickers in one table. Does horizontal partitioning/sharding handle it for me automatically, or am I better off splitting manually into a table-per-instrument structure (which seems unnecessarily cumbersome)?
- Updating the database with new incoming ticks. Using row-based orientation would be more efficient here, wouldn't it? And the same goes for updating aggregated data (for example daily OHLC tables). Won't it be a possible bottleneck?

All this is in the context of available open-source solutions. I thought initially about InfiniDB or HBase, but I've seen MonetDB and InfoBright being mentioned around here too. I don't really need "production quality" (at least not yet) as mentioned by chrisaycock in the referenced question, so would any of these be a better choice than the others? And the last issue: from approximately which load point are specialized time-series databases necessary? Unfortunately, things like kdb+ or FAME are out of scope in this case, so I'm contemplating how much can be done on commodity hardware with standard relational databases (MySQL/PostgreSQL) or key-value stores (like Tokyo/Kyoto Cabinet's B+ tree). Is that really a dead end? Should I just stick with some of the aforementioned column-oriented solutions owing to the fact that my application is not mission critical, or is even that an unnecessary precaution? Thanks in advance for your input on this.
If some part is too convoluted, let me know in a comment. I will try to amend accordingly. EDIT: It seems that strictly speaking HBase is not a column oriented store but rather a sparse, distributed, persistent multidimensional sorted map , so I've crossed it out from the original question. After some research I'm mostly inclined towards InfiniDB . It has all the features I need, supports SQL (standard MySQL connectors/wrappers can be used for access) and full DML subset. The only thing missing in the open source edition is on the fly compression and scaling out to clusters. But I guess it's still a good bang for the buck, considering it's free.
Column-oriented storage is faster for reading because of the cache efficiency. Looking at your sample query: select price, time from data where symbol = `AAPL Here I'm concerned with three columns: price , time , and symbol . If all ticks were stored by row, the database would have to read through all rows just to search for the symbols. It would look like this on disk: IBM | 09:30:01 | 164.05; IBM | 09:30:02 | 164.02; AAPL | 09:30:02 | 336.85 So the software must skip over the price and time entries just to read the symbols. That would cause a cache miss for every tick! Now let's look at the column-oriented storage: IBM | IBM | AAPL; 09:30:01 | 09:30:02 | 09:30:02; 164.05 | 164.02 | 336.85 Here the database can sequentially scan the symbol list. This is cache efficient. Once the software has the array indices that represent the symbol locations of interest, the database can jump to the specific time and price entries via random access. (You may notice that the columns are actually associative arrays; the first element in each column refers to the first row in aggregate, so jumping to the N th row means simply accessing the N th element in each array.) As you can imagine, column-oriented storage really shines during analytics. To compute the moving average of the prices per symbol, the database will index-sort the symbol column to determine the proper ordering of the price entries, and then begin the calculation with the prices in contiguous (sequential) layout. Again, cache efficient. Beyond the column-oriented layout, many of these really new databases also store everything in memory when performing calculations. That is, if the data set is small enough, the software will read the entire tick history into memory, which will eliminate page faults when running queries. Thus, it will never access the disk! A second optimization that kdb+ does is that it will automatically enumerate text . (This feature is inspired by Lisp symbols ). So searching for a particular stock does not involve typical string searching; it's simply an integer search after the initial enumeration look-up. With the sequential storage, in-memory allocation, and the automatic text enumeration, searching for a symbol is really just scanning for an integer in an array. That's why a database like kdb+ is a few orders of magnitude faster than common relational databases for reading and analytics. As you've pointed-out in your question, writing is a weakness of column-oriented storage. Because each column is an array (in-memory) or file (on-disk), changing a single row means updating each array or file individually as opposed to simply streaming the entire row at once. Furthermore, appending data in-memory or on-disk is pretty straightforward, as is updating/inserting data in-memory, but updating/inserting data on-disk is practically impossible . That is, the user can't change historical data without some massive hack. For this reason, historical data (stored on-disk) is often considered append-only . In practice, column-oriented databases require the user to adopt a bitemporal or point-in-time schema . (I advise this schema for financial applications anyway for both better time-series analysis and proper compliance reporting.) I don't know enough about your application to determine performance or production-level requirements. I just hope the above guide will help you make an informed decision with regard to why column-oriented storage is often your best bet for analytics.
{ "source": [ "https://quant.stackexchange.com/questions/948", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/38/" ] }
955
Fund managers are acting in a highly stochastic environment. What methods do you know to systematically separate skillful fund managers from those that were just lucky? Every idea, reference, paper is welcome! Thank you!
Larry Harris has a chapter on performance evaluation in Trading and Exchanges. He states that over a long period of time, a skilled asset manager will consistently have excess returns, whereas a lucky one will be expected to have random and unpredictable returns. Thus, we start with the portfolio's market-adjusted return standard deviation: \begin{equation} \sigma_{adj} = \sqrt{\sigma^2_{port} + \sigma^2_{mk} - 2\rho\sigma_{port}\sigma_{mk}} \end{equation} where $\rho$ is the correlation between the market and portfolio returns. For a sample size $n$ (generally the number of years), the average excess returns, and the adjusted standard deviation from above, we have a t-statistic: \begin{equation} t = \frac{\overline{R_{port}} - \overline{R_{mk}}}{\frac{\sigma_{adj}}{\sqrt{n}}} \end{equation} Now we can determine the probability that the manager's excess returns were luck by comparing this t-statistic to the t-distribution with $n - 1$ degrees of freedom (i.e. computing a one-sided p-value from its CDF). The lower the probability, the more we can believe the manager's excess returns were from skill.
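In R, with made-up numbers (here the excess returns are already the portfolio-minus-market series, so their sample standard deviation stands in for $\sigma_{adj}$):

set.seed(7)
n <- 10                                      # e.g. ten annual observations (hypothetical)
excess <- rnorm(n, mean = 0.02, sd = 0.08)   # made-up market-adjusted excess returns
t_stat <- mean(excess) / (sd(excess) / sqrt(n))
p_luck <- pt(t_stat, df = n - 1, lower.tail = FALSE)   # one-sided p-value
c(t = t_stat, p = p_luck)                    # small p => harder to attribute the record to luck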
{ "source": [ "https://quant.stackexchange.com/questions/955", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
1,004
One of the answers to my previous question regarding the strategy of Renaissance Technologies , there was a reference to The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It . After doing some browsing in the book I found that it states that Renaissance Technologies obviously very successfully employs cryptography and speech recognition technology for forecasting financial time series. Do you know of any good papers (or other references) where the use of either of these technologies in connection with finance is shown?
By "cryptography" you mean information theory. Information theory is useful for portfolio optimization and for optimally allocating capital between trading strategies (a problem which is not well addressed by other theoretical frameworks.) See: --- J. L. Kelly, Jr., "A New Interpretation of Information Rate," Bell System Technical Journal, Vol. 35, July 1956, pp. 917-26 --- E. T. Jaynes, Probability Theory: The Logic of Science http://amzn.to/dtcySD --- http://en.wikipedia.org/wiki/Gambling_and_information_theory http://en.wikipedia.org/wiki/Kelly_criterion In the simple case, you would use "The Kelly Rule". More complicated information theory based strategies for allocating capital between trading strategies take into account correlations between the performance of trading strategies and the relationship between market conditions and strategy performance. As for Natural Language Processing and speech recognition; when you examine the founders of Renaissance Technology, you will notice that many of the early employees had backgrounds in natural language processing. Naively, you might assume that RT is using NLP based strategies. However, you will find that all of RT's NLP related hires have backgrounds (published research, Phd thesis's) in speech recognition and specifically in Hidden Markov Models and Kalman filters. The academic background and published research of RT employees gives you a good idea of the algorithms they are using. The information that has leaked out of RT suggests that RT heavily uses "hierarchical hidden markov models" for latent variable extraction from market time series. It is also believed that RT has developed a proprietary algorithm for "layering" multiple trading strategies for trade signal generation. RT does not have a single secret trading strategy that magically generates billions of dollars a year. Renaissance Technology's trading strategies are based upon the integration of information from multiple mathematical models.
{ "source": [ "https://quant.stackexchange.com/questions/1004", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
1,027
In what ways (and under what circumstances) are correlation and cointegration related, if at all? One difference is that one usually thinks of correlation in terms of returns and cointegration in terms of price. Another issue is the different measures of correlation (Pearson, Spearman, distance/Brownian) and cointegration (Engle/Granger and Phillips/Ouliaris).
This isn't really an answer, but it's too long to add as a comment. I've always had a real problem with the correlation/covariance of price. To me, it means nothing. I realize that it gets used (abused) in many contexts, but I just don't get anything out of it (over time, price has to generally go up, go down, or go sideways, so aren't all prices "correlated"?). On the flip side, correlation/covariance of returns makes sense. You're dealing with random series, not integrated random series. For example, below is the code required to generate two price series that have correlated returns. A typical plot is shown below. In general, when the red series goes up, the blue series is likely to go up. If you run this code over and over, you'll get a feel for "correlated returns".

library(MASS)

#The input data
numpoi <- 1000    #Number of points to generate
meax   <- 0.0002  #Mean for x
stax   <- 0.010   #Standard deviation for x
meay   <- 0.0002  #Mean for y
stay   <- 0.005   #Standard deviation for y
corxy  <- 0.8     #Correlation coefficient for xy

#Build the covariance matrix and generate the correlated random results
(covmat <- matrix(c(stax^2, corxy*stax*stay, corxy*stax*stay, stay^2), nrow=2))
res <- mvrnorm(numpoi, c(meax, meay), covmat)
plot(res[,1], res[,2])

#Calculate the stats of res[] so they can be checked with the input data
mean(res[,1])
sd(res[,1])
mean(res[,2])
sd(res[,2])
cor(res[,1], res[,2])

#Plot the two price series that have correlated returns
plot(exp(cumsum(res[,1])), main="Two Price Series with Correlated Returns", ylab="Price", type="l", col="red")
lines(exp(cumsum(res[,2])), col="blue")

If I try to generate correlated prices (not returns), I'm stumped. The only techniques that I am aware of deal with random normally distributed inputs, not integrated inputs. So, my question is, does anyone know how to generate correlated prices? I'm out of time, so I'll have to add my cointegration comments later.

Edit 1 (04/24/2011)
================================================

The above deals with the correlation of returns, but as implied in the original question, in the real world it looks like correlation of prices is a more important issue. After all, even if the returns are correlated, if the two price series drift apart over time, my pairs trade is going to screw me. That's where co-integration comes in. When I look up "co-integration": http://en.wikipedia.org/wiki/Cointegration I get something like: "....If two or more series are individually integrated (in the time series sense) but some linear combination of them has a lower order of integration, then the series are said to be cointegrated...." What does that mean? I need some code so I can screw around with things to make that definition meaningful. Here's my stab at a very simple version of co-integration. I'll use the same input data as in the code above.

#The input data
numpoi <- 1000    #Number of data points
meax   <- 0.0002  #Mean for x
stax   <- 0.0100  #Standard deviation for x
meay   <- 0.0002  #Mean for y
stay   <- 0.0050  #Standard deviation for y
coex   <- 0.0200  #Co-integration coefficient for x
coey   <- 0.0200  #Co-integration coefficient for y

#Generate the noise terms for x and y
ranx <- rnorm(numpoi, mean=meax, sd=stax)  #White noise for x
rany <- rnorm(numpoi, mean=meay, sd=stay)  #White noise for y

#Generate the co-integrated series x and y
x <- numeric(numpoi)
y <- numeric(numpoi)
x[1] <- 0
y[1] <- 0
for (i in 2:numpoi) {
   x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + ranx[i-1]
   y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + rany[i-1]
}

#Plot x and y as prices
ylim <- range(exp(x), exp(y))
plot(exp(x), ylim=ylim, type="l", main=paste("Co-integrated Pair (coex=",coex,", coey=",coey,")", sep=""), ylab="Price", col="red")
lines(exp(y), col="blue")
legend("bottomleft", c("exp(x)", "exp(y)"), lty=c(1, 1), col=c("red", "blue"), bg="white")

#Calculate the correlation of the returns.
#Notice that for reasonable coex and coey values,
#the correlation of dx and dy is dominated by
#the spurious correlation of ranx and rany
dx <- diff(x)
dy <- diff(y)
plot(dx, dy)
cor(dx, dy)
cor(ranx, rany)

Notice above that the "co-integration term" for x and y shows up inside the "for loop":

x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + ranx[i-1]
y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + rany[i-1]

A positive coex determines how fast x will try to reduce the spread with y. Likewise, a positive coey determines how fast y will try to reduce the spread with x. You can tweak these values to generate all sorts of plots to see how those co-integration terms (y[i-1] - x[i-1]) and (x[i-1] - y[i-1]) work. After you've played with this a while, notice that it doesn't really answer the correlation of prices issue. It replaces it. So, am I now off-the-hook for the correlation of prices issue?

=========================================================

Obviously, now it's time to put the two concepts together to get a model that is in the ballpark with pairs trading. Below is the code:

library(MASS)

#The input data
numpoi <- 1000    #Number of data points
meax   <- 0.0002  #Mean for x
stax   <- 0.0100  #Standard deviation for x
meay   <- 0.0002  #Mean for y
stay   <- 0.0050  #Standard deviation for y
coex   <- 0.0200  #Co-integration coefficient for x
coey   <- 0.0200  #Co-integration coefficient for y
corxy  <- 0.800   #Correlation coefficient for xy

#Build the covariance matrix and generate the correlated random results
(covmat <- matrix(c(stax^2, corxy*stax*stay, corxy*stax*stay, stay^2), nrow=2))
res <- mvrnorm(numpoi, c(meax, meay), covmat)

#Generate the co-integrated series x and y
x <- numeric(numpoi)
y <- numeric(numpoi)
x[1] <- 0
y[1] <- 0
for (i in 2:numpoi) {
   x[i] <- x[i-1] + (coex * (y[i-1] - x[i-1])) + res[i-1, 1]
   y[i] <- y[i-1] + (coey * (x[i-1] - y[i-1])) + res[i-1, 2]
}

#Plot x and y as prices
ylim <- range(exp(x), exp(y))
plot(exp(x), ylim=ylim, type="l", main=paste("Co-integrated Pair with Correlated Returns (coex=",coex,", coey=",coey,")", sep=""), ylab="Price", col="red")
lines(exp(y), col="blue")
legend("bottomleft", c("exp(x)", "exp(y)"), lty=c(1, 1), col=c("red", "blue"), bg="white")

#Calculate the correlation of the returns.
#Notice that for reasonable coex and coey values,
#the correlation of dx and dy is dominated by
#the correlation of res[,1] and res[,2]
dx <- diff(x)
dy <- diff(y)
plot(dx, dy)
cor(dx, dy)
cor(res[, 1], res[, 2])

You can play around with the parameters and generate all sorts of combinations. Notice that even though these series consistently reduce the spread, you can't predict how or when the spread will be reduced. That's just one reason why pairs-trading is so much fun. The bottom line is: to get in the ballpark with modeling pairs-trading, you need both correlated returns and co-integration. A typical example is Exxon (XOM) versus Chevron (CVX), where the above model applies if some additional terms are added: http://finance.yahoo.com/q/bc?s=XOM&t=5y&l=on&z=l&q=l&c=cvx So, to answer your question (as just my opinion), price correlation is typically used/abused as an attempt to deal with the longer-term divergence/closeness of the paths of the series, when co-integration is what should be used. It is the co-integration terms that limit the drift between the series. Price correlation has no real meaning. Correlation of the returns of the series determines the short-term similarity of the series. I did this in a hurry, so if anyone sees an error, don't be afraid to point it out.
{ "source": [ "https://quant.stackexchange.com/questions/1027", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/56/" ] }
1,150
Let the Black-Scholes formula be defined as the function $f(S, X, T, r, v)$. I'm curious about functions that are computationally simpler than Black-Scholes and yield results that approximate $f$ for a given set of inputs $S, X, T, r, v$. I understand that "computationally simpler" is not well-defined. But I mean simpler in terms of the number of terms used in the function, or, even more specifically, the number of distinct computational steps that need to be completed to arrive at the Black-Scholes output. Obviously Black-Scholes is computationally simple as it is, but I'm ready to trade some accuracy for an even simpler function that would give results that approximate B&S. Do any such simpler approximations exist?
This is just to expand a bit on vonjd's answer . The approximate formula mentioned by vonjd is due to Brenner and Subrahmanyam ("A simple solution to compute the Implied Standard Deviation", Financial Analysts Journal (1988), pp. 80-83). I do not have a free link to the paper so let me just give a quick and dirty derivation here. For the at-the-money call option, we have $S=Ke^{-r(T-t)}$. Plugging this into the standard Black-Scholes formula $$C(S,t)=N(d_1)S-N(d_2)Ke^{-r(T-t)},$$ we get that $$C(S,t)=\left[N\left(\frac{1}{2}\sigma\sqrt{T-t}\right)-N\left(-\frac{1}{2}\sigma\sqrt{T-t}\right)\right]S.\qquad\qquad(1)$$ Now, Taylor's formula implies for small $x$ that $$N(x)=N(0)+N'(0)x+N''(0)\frac{x^2}{2}+O(x^3).\qquad\qquad\qquad\qquad(2)$$ Combining (1) and (2), we will get with some obvious cancellations that $$C(S,t)=S\left(N'(0)\sigma\sqrt{T-t}+O(\sigma^3\sqrt{(T-t)^3})\right).$$ But $$N'(0)=\frac{1}{\sqrt{2\pi}}=0.39894228...$$ so finally we have, for small $\sigma\sqrt{T-t}$, that $$C(S,t)\approx 0.4S\sigma\sqrt{T-t}.$$ The modified formula $$C(S,t)\approx 0.4Se^{-r(T-t)}\sigma\sqrt{T-t}$$ gives a slightly better approximation.
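As a quick sanity check in R, with arbitrary inputs and the strike set at the forward so that $S=Ke^{-r(T-t)}$ holds:

S <- 100; r <- 0.03; tau <- 0.5; sigma <- 0.20
K <- S * exp(r * tau)                               # ATM-forward strike, so S = K*exp(-r*tau)
d1 <- (log(S/K) + (r + 0.5*sigma^2)*tau) / (sigma*sqrt(tau))
d2 <- d1 - sigma*sqrt(tau)
exact  <- S*pnorm(d1) - K*exp(-r*tau)*pnorm(d2)     # Black-Scholes call
approx <- 0.4 * S * sigma * sqrt(tau)               # Brenner-Subrahmanyam rule of thumb
c(exact = exact, approx = approx)                   # roughly 5.64 vs 5.66 here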
{ "source": [ "https://quant.stackexchange.com/questions/1150", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/526/" ] }
1,177
We all know it does mean revert. The question is why. What's making volatility mean-revert? Is it some sort of cyclical behaviour of option traders? The way it's calculated? Why?
Volatility is mean reverting if the underlying security doesn't drop to zero. If the security has some underlying "value" then its price is co-integrated with that "value". The volatility is the uncertainty of that price as it tracks the security's "value". Edit 12/03/2011 ================================================= @pteetor, I may have missed something, but the question was " Why is volatility mean-reverting?". I realize that the standard answer is that the VIX (I'm assuming he's asking about the VIX) is related to the historical volatility of the S&P. A simple version of that relationship provides a reasonable R^2 (see Fig. 1). It relates the VIX to the S&P "wiggliness" (30-day standard deviation of the daily log differences of the S&P), but it doesn't explain why more or less "wiggliness" takes place. To explain that, I have to look at the underlying fundamentals. Figure 2 below, shows the S&P Price (in gray) and what I think is the underlying S&P VALUE (in red). Both lines refer to the left hand scale. This VALUE is calculated from my estimate of sustainable earnings and the appropriate P/E ratio. It is what I think an Investor would set for the "value" (money generating value) of the S&P. The blue line is the VIX, and is read on the right hand scale. On the right hand side of the graph, I have divided the VIX range into three regions. From past experience, a VIX of 20 or less seems to be a time of "Don't Worry, Be Happy". Personally, that's when I worry the most, but the market seems to be in a care-free state, so I tagged it as such. Next, for a VIX from about 20 to 40, the market seems to be in an "I'm Nervous" state (not care-free, but also not panicky). For a VIX above 40, a "Panic" state seems to show up. Our current VIX of 28 puts us in the "I'm Nervous" state. Now to the issue of WHY . Don't the Happy/Nervous/Panicky states of the VIX have to be consistent with the level of the S&P Price (not just its "wiggliness")? If I'm "Happy" then I'm happy with the level of the Price. If I'm "Nervous" aren't I nervous about the level of the Price? If I'm "Panicky" then isn't the level of the Price nose-diving? As an Investor , the only way I can be "Happy" with the level of the Price is for the Price to be somewhere near or below my estimate of the VALUE of the S&P. That happened from 1991 to 1997, 2003 to 2007, and part of 2010 and 2011. As an Investor , I will be "Nervous" with the level of the Price when the Price is too high compared to the VALUE or when some external "thing" is going on (for example, the Euro-mess). That happened in 1990, 1997 to 2003, part of 2007, 2008, 2009, 2010, and 2011. And, as stated above, when "Panic" takes place, the market sells everything and the Price level nose-dives (1998 LTCM/Russian thing, 9/11/2001, July and Sept 2002, Sept 2008, the Debt-mess in 2010 and 2011). So, if all of the above fits the hypothesis, then " Why is volatility mean-reverting?" can be answered as.....the market has, and probably will continue to spend most of its time in the "Don't Worry, Be Happy" state or the "I'm Nervous" state (i.e. reverting to a state that is not extreme). I agree that my three-sentence answer at the top of this post left out a lot of details, but it is the short version of the same answer. Edit 12/09/2011 ======================================================= @pteetor, I must be getting old and forgetful. In your comment below, you asked for references (which I forgot to include above). 
Here are a few:

- http://faculty.fuqua.duke.edu/~charvey/Research/Working_Papers/W104_The_equity_risk.pdf
- http://research.stlouisfed.org/wp/2006/2006-007.pdf
- http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1268567

With a little Googling, you'll find more. In my answer above, I purposely didn't get into my technique for setting the "value" of the S&P. It always starts an argument about which is best: Modified Gordon Models, Modified Miller-Modigliani Models, Modified XXXX Models, or whatever. The bottom line is, no matter what valuation technique you use, you'll always find some form of "equity risk premium" that is related to volatility. It's just common sense......there has to be a "price" for being "Nervous" or "Happy". Another issue that usually comes up has to do with using Historical Volatility versus Implied Volatility. All I can say is, the volatility part of the equity risk premium existed long before options were traded.
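If you want to reproduce the "wiggliness" series from Fig. 1, a minimal R sketch follows; it assumes you already have a numeric vector spx of daily S&P 500 closes, and the 30-day window and annualisation convention are the only choices made here:

ret <- diff(log(spx))                       # daily log differences
k <- 30
rv <- sapply(k:length(ret), function(i) sd(ret[(i - k + 1):i]))
rv_annualised <- rv * sqrt(252) * 100       # percentage points, same scale as the VIX
plot(rv_annualised, type = "l", ylab = "30-day realised volatility (%)")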
{ "source": [ "https://quant.stackexchange.com/questions/1177", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/40/" ] }
1,274
Consider a market participant $A$ who is mechanically following an automated liquidity providing algorithm (HFT) in a number of large cap stocks on a specific exchange. Assume furthermore that we are able to observe all orders placed by $A$ and that we know that the algorithm used by $A$ takes only public market data as input. $A$ starts and ends all trading days with zero inventory. We want to reverse engineer the algorithm used by $A$. Let's call this algorithm $f(...)$. The first step in reverse engineering the algorithm $f(...)$ would be to collect potential input variables to the algorithm that can later be used to infer the exact form of $f(...)$. The first problem we face is which input variables we should collect in order to be able to reverse engineer $f(...)$. To have a starting point we can consider the input variables used in Avellaneda & Stoikov (2008). In Avellaneda & Stoikov (2008) the authors derive how a rational market maker (non-specialist) should set his bid and ask quotes in a limit order book market. The results are obviously contingent on the assumptions and model choices made in the paper. The optimal bid (or ask) in Avellaneda & Stoikov (2008) is a function of the following inputs:

- The trader's reservation price, which is a function of the security price ($S$), the market maker's current inventory ($q$) and time left until terminal holding time ($T-t$)
- The relative risk aversion of the trader ($\gamma$) (obviously hard to observe!)
- The frequency of new bid and ask quotes ($\lambda_{bid}$ and $\lambda_{ask}$)
- The latest change in frequency of new bid and ask quotes ($\delta\lambda_{bid}$ and $\delta\lambda_{ask}$)

What potential input variables should we collect in order to be able to reverse engineer $f(...)$?
I'll take a stab at it, but this is a really broad question. A direct answer: Bayesian models often use the "probability that the counter-party is informed." Indirect answers: I think your assumption is that the algorithm operates on each stock individually, and has no knowledge of what it's doing in any other stock. But it is likely that the algorithm is doing some hedging that you don't see yet. You should look at similar products (or build synthetic baskets) and see if your algorithm is changing its quote sizes/prices when other products' quote sizes/prices change. (It is also possible that the algorithm is aware of all orders/positions it has in all stocks and is leaning more heavily on some bids/offers than others as it tries to flatten out the delta (beta) of the entire portfolio.) If you're certain that the algorithm is working on an individual name with only that stock's order book as input, then I would study the active orders to see when and why it gets out. (Maybe the only active orders are at the end-of-day flattening, in which case I would study the canceled orders.) It's easy to make a little money working bids and offers most of the time, but at some point you will get run over unless you know when to get out of the way (and/or hedge). I'd also look for times when the algo pauses. Are there times when it doesn't have both a bid and an offer in the book? If so, these are times that it is unsure, or it is at its position limit (it should be pretty easy to distinguish which is which). It might be helpful to see what is going on when it is 'unsure' that wasn't going on when it was 'sure.' This will help you eliminate possible parameters. Long, long ago, we used a micro-pricer to come up with an estimate of where the real price was, using things like qty traded on the ask vs. qty traded on the bid, and sums of bid and ask sizes. Initially, the micro-pricer worked as a stand-alone high-frequency liquidity-taker. Then it became a piece in a market-making algo that was just a way to know when to cancel orders.
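For reference, one common form of such a micro-price is a size-weighted mid (not necessarily the exact quantity we used back then):

micro_price <- function(bid, ask, bid_size, ask_size) {
  (bid * ask_size + ask * bid_size) / (bid_size + ask_size)   # leans toward the side with less size
}
micro_price(bid = 99.98, ask = 100.00, bid_size = 500, ask_size = 100)  # about 99.997: a heavy bid pushes it toward the ask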
{ "source": [ "https://quant.stackexchange.com/questions/1274", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/526/" ] }
1,391
I am looking for all kinds of research concerning option trading strategies. With that I mean papers that publish results on different option trading strategies properly backtested with real-world data.
I did some digging and found the following papers - most of them offering quite a distinct perspective compared to classical option pricing theory!

- Stock Options as Lotteries by Brian H. Boyer et al. (2011)
- The Efficiency of the Buy-Write Strategy: Evidence from Australia by Tafadzwa Mugwagwa et al. (2010)
- The following is my favorite - you could do some backtests on your own with freely available data (using the VXO as volatility information) and with any spreadsheet, easy and elegant: How Students Can Backtest Madoff's Claims by Michael J. Stutzer (2009)
- Loosening Your Collar: Alternative Implementations of QQQ Collars by Edward Szado et al. (2009)
- A Study of Optimal Stock & Options Strategies by Mihir Dash et al. (2008)
- Is There Money to Be Made Investing in Options? A Historical Perspective by James S. Doran et al. (2008)

EDIT: I will update this answer from time to time when new interesting papers arrive:

- 15 Years of the Russell 2000 Buy-Write by N. Kapadia and E. Szado (2011)

EDIT 2: I just published a blog post where I replicate the abovementioned paper by Stutzer (2009): Backtesting Options Strategies with R. In the post, I provide the fully documented R code for your own experiments. For details please consult the post.
{ "source": [ "https://quant.stackexchange.com/questions/1391", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
1,489
There are many models available for calculating the implied volatility of an American option. The most popular method, employed by OptionMetrics and others, is probably the Cox-Ross-Rubinstein model. However, since this method is numerical, it yields a computationally intensive algorithm which may not be feasible (at least for my level of hardware) for repeated re-calculation of implied volatility on a hundreds of option contracts and underlying instruments with ever-changing prices. I am looking for an efficient and accurate closed form algorithm for calculating implied volatility. Does anyone have any experience with this problem? The most popular closed-form approximation appears to be Bjerksund and Stensland (2002), which is recommended by Matlab as the top choice for American options, although I've also seen Ju and Zhong (1999) mentioned on Wilmott . I am interested in knowing which of these (or other) methods gives the most reasonable and accurate approximations in a real-world setting.
I have worked on this topic extensively (pricing and calculating IV in production) and believe I can offer an informed opinion. First of all, MathWorks - the company that creates Matlab - is not a trading firm, so you should probably not rely on their advice so much. There are few closed-form option pricing models, and all have practical shortcomings. The Barone-Adesi and Whaley (please correct my spelling of last names as I'm typing from memory) model is a simple approximation for American options but is unfortunately not very accurate, and does not deal with dividends. Roll-Geske-Whaley deals with dividends, but not very well - there are arbitrage situations that are possible in the model. Ju and Zhong have another approximation, but again not very accurate. Finally, Bjerksund and Stensland seem to have the best approximation (2002 version, not 1993), but that still does not solve the discrete dividend problem. In my experience the tree is the way to go. CRR trees are slow, but Leisen and Reimer (1995) came up with a scheme that converges much faster. Also, Mark Joshi created his own binomial scheme that converges slightly faster. Instead of a discrete dividend you can use a discrete proportional dividend, so you don't end up with a bushy tree. Alternatively you can try a trinomial tree, and the extra degree of freedom will give you better resolution on the dividend, but I did not find that big of an improvement in production. That, in my opinion, is the best combination of speed and accuracy. If you're looking for alternative opinions check out these two articles - http://www.nccr-finrisk.uzh.ch/media/pdf/ODD.pdf about the discrete dividend problem, and http://ssrn.com/abstract=1567218 on pricing American options. Still, the most important speed improvements will come from your code: one technique that is extensively used in quoting servers is pre-computation. Basically you continuously compute prices for the stock price $\pm 0.1$ and volatility $\pm 0.01$, so when spot or vol moves you use a pre-computed cached value (or interpolate from the closest values).
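As a concrete baseline to compare any closed-form approximation against, here is a deliberately plain sketch in R: a CRR tree (not the faster Leisen-Reimer scheme recommended above, and ignoring dividends) plus a root-finder for the implied volatility. All numbers are illustrative.

crr_american <- function(S, K, tau, r, sigma, n = 200, type = c("put", "call")) {
  type <- match.arg(type)
  dt <- tau / n
  u <- exp(sigma * sqrt(dt)); d <- 1 / u
  p <- (exp(r * dt) - d) / (u - d)
  disc <- exp(-r * dt)
  payoff_fun <- function(s) if (type == "put") pmax(K - s, 0) else pmax(s - K, 0)
  v <- payoff_fun(S * u^(0:n) * d^(n:0))           # terminal payoffs
  for (i in n:1) {                                  # roll back, checking early exercise
    s <- S * u^(0:(i - 1)) * d^((i - 1):0)
    v <- pmax(disc * (p * v[2:(i + 1)] + (1 - p) * v[1:i]), payoff_fun(s))
  }
  v[1]
}

implied_vol <- function(price, S, K, tau, r, type = "put") {
  uniroot(function(sig) crr_american(S, K, tau, r, sig, type = type) - price,
          interval = c(0.01, 3))$root
}

px <- crr_american(S = 100, K = 100, tau = 0.5, r = 0.02, sigma = 0.30, type = "put")
implied_vol(px, S = 100, K = 100, tau = 0.5, r = 0.02, type = "put")   # recovers ~0.30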
{ "source": [ "https://quant.stackexchange.com/questions/1489", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1106/" ] }
1,501
According to my current understanding, there is a clear difference between data mining and mathematical modeling . Data mining methods treat systems (e.g., financial markets) as a "black box". The focus is on the observed variables (e.g., stock prices). The methods do not try to explain the observed phenomena by proposing underlying mechanisms that cause the phenomena (i.e., what happens in the black box). Instead, the methods try to find some features, patterns, or regularities in the data in order to predict future behavior. Mathematical modeling , in contrast, tries to propose a model for what happens inside the black box. Which approach dominates in quantitative finance? Do people try to use more and more fancy data mining techniques or do people try to construct better and better mathematical models?
I would offer that the distinctions are i) a pure statistical approach, ii) an equilibrium-based approach, and iii) an empirical approach.

The statistical approach includes data mining. Its techniques originate in statistics and machine learning. In its extreme there is no a priori theoretical structure imposed on asset returns. Factor structure might be identified through Principal Components, for example. The goal here is to maximize predictive accuracy at the expense of intuition and explanatory power. This approach increasingly dominates at very short frequencies in modeling market microstructure, market making algorithms, volatility modeling, etc. However, even in high-frequency trading one can impose a factor model based on depth of order book, liquidity, factor characteristics (momentum, correlation with S&P), etc. Therefore my guess is that hybrid models (factor model + a statistical model to pick up signal in residuals) are dominant in the HFT space.

The equilibrium approach is best characterized by CAPM or Fama-French models that originate in academic finance. Here you have a theory (such as Arbitrage Pricing Theory, consumption-based theories, Black-Scholes, etc.) that imposes structure on the returns you are modeling. Many well-regarded quant shops use extensions of an equilibrium model (adding momentum, liquidity, own-volatility, etc.) or other factors to generate expected returns. In the mid-late 90's many academics identified "anomalies" and left to start their own funds, and these are the biggest players. Cliff Asness et al played a big role in developing this approach at Goldman Sachs Asset Management and later at Applied Quantitative Research (AQR). Lakonishok et al started LSV. Andrew Lo has been involved with Simplex. Also, there are funds that use BARRA or Axioma models to make explicit factor bets. Many of the funds in this camp may agree that factor structure exists in the market but disagree on its source. Some would argue that the premiums exist because of behavioral bias as opposed to compensation for systematic risk. Nonetheless I would group these sub-camps under this banner since their approach to estimating risk is very similar (although they disagree on how to interpret and whether one can exploit factor premia). If you measure quant funds by AUM, I would argue that this school has the highest share. Indeed the so-called quant meltdown of Aug '07 supports this view, since it implies that many firms were trading on the same factors (value and momentum in particular).

The third approach is the empirical approach. Members of this group apply a framework to analyze returns and usually hail from a statistical physics, computer science, or perhaps bioinformatics background. This is where you have a hypothesis (based on corporate finance theory, or on observation of market history) and you may test the hypothesis out-of-sample or in another market. I would place Capital Fund Management, Nassim Taleb's Empirica, and Victor Niederhoffer as exemplars of this category. I would argue that you can include fundamental analysts and managers such as Warren Buffett, Ken Fisher, and Peter Lynch in this category as well. They have a model for evaluating the returns of stocks (i.e. margin of safety, strong positioning, etc.) grounded in history, although they do not formalize the model in the language of statistics.

UPDATE: Here is a link to "Challenges in Quantitative Equity Management" by Fabozzi. It does a great job discussing what quant methods are used, and covers market share and growth rates. This may answer your question at a granular level (see page 60, for example).
{ "source": [ "https://quant.stackexchange.com/questions/1501", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/981/" ] }
1,565
I am working with a set of covariance matrices evaluated at various points in time over some history. Each covariance matrix is $N\times N$ for $N$ financial time-series over $T$ periods. I would like to explore some of the properties of this matrix's evolution over time, particularly whether correlation as a whole is increasing or decreasing, and whether certain series become more or less correlated with the whole. I am looking for suggestions as to the kinds of analysis to perform on this data-set, and particularly graphical/pictorial analysis. Ideally, I would like to avoid having to look in depth into each series as $N$ is rather large. Update The following graphs were generated based on the accepted answer from @Quant-Guy. PC = principal component = eigenvector. The analysis was done on correlations rather than covariances in order to account for vastly different variances of the $N$ series.
I would consider a motion chart that plots the eigenvalues of the covariance matrix over time. For a static view you can create a table: rows represent dates, and columns represent eigenvectors. The entries of the table represent changes in the angle of the eigenvector from the previous row. This will show how stable your covariance structure is. You can also create a second table, this time with eigenvalues as the columns, sorted from high to low (and the corresponding values below for each date). This shows the variance described by each eigenvector, so you can see whether correlation as a whole is increasing or decreasing. Update: You can also measure the distance between two covariance matrices via some distance metric such as Kullback-Leibler divergence, Euclidean distance, Mahalanobis distance, etc.
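A rough R sketch of those two summaries, assuming cor_list is a list of your N x N correlation matrices in date order (the names here are placeholders):

evs <- lapply(cor_list, eigen, symmetric = TRUE)
share1 <- sapply(evs, function(e) e$values[1] / sum(e$values))       # variance explained by PC1
angle <- sapply(2:length(evs), function(i) {
  v1 <- evs[[i - 1]]$vectors[, 1]; v2 <- evs[[i]]$vectors[, 1]
  acos(min(1, abs(sum(v1 * v2)))) * 180 / pi                          # degrees; sign-invariant
})
plot(share1, type = "l", ylab = "PC1 share")        # rising => correlation increasing overall
plot(angle,  type = "l", ylab = "PC1 rotation (deg)")  # large values => unstable structure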
{ "source": [ "https://quant.stackexchange.com/questions/1565", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1106/" ] }
1,621
The general problem I have is visualization of the implied distribution of returns of a currency pair. I usually use QQplots for historical returns, so for example versus the normal distribution. Now I would like to see the same QQplot, but for implied returns given a set of implied BS volatilities; for example, here are the surfaces:

USDZAR     1month   3month   6month   12month   2year
10dPut     15.82    14.59    14.51    14.50     15.25
25dPut     16.36    15.33    15.27    15.17     15.66
ATMoney    17.78    17.01    16.94    16.85     17.36
25dCall    20.34    20.06    20.24    20.38     20.88
10dCall    22.53    22.65    23.39    24.23     24.84

EURPLN     1month   3month   6month   12month   2year
10dPut      9.10     9.06     9.10     9.43      9.53
25dPut      9.74     9.54     9.51     9.68      9.77
ATMoney    10.89    10.75    10.78    10.92     11.09
25dCall    12.83    12.92    13.22    13.55     13.68
10dCall    14.44    15.08    15.57    16.16     16.34

EURUSD     1month   3month   6month   12month   2year
10dPut     19.13    19.43    19.61    19.59     18.90
25dPut     16.82    16.71    16.67    16.49     15.90
ATMoney    14.77    14.52    14.44    14.22     13.87
25dCall    13.56    13.30    13.23    13.04     12.85
10dCall    12.85    12.85    12.90    12.89     12.78

Anybody know how I could go about doing this? Any R packages or hints where to start on this? It doesn't necessarily have to be a qqplot, it could just be a plot of the density function; that would help me too. Thanks.
You can directly imply a probability distribution from a volatility skew. Note that, for any terminal probability distribution $p(S)$ at tenor $T$, we have the model-free formula for the call price $C(K)$ as a function of strike $K$ \begin{equation} C=e^{-rT} \int_0^\infty (S-K)^+ p(S) dS \end{equation} Therefore we can write \begin{equation} e^{rT} \frac{\partial C}{\partial K}=\int_K^\infty (-1) \cdot p(S) dS \end{equation} and by the fundamental theorem of calculus \begin{equation} e^{rT} \frac{\partial^2 C}{\partial K^2} = p(K) \end{equation} Therefore, all you need, in order to find the value of $p(x)$ for any $x$, is the second derivative of call prices at strike $x$. Usually, one uses a fitted skew (such as a polynomial fit) to the available volatility values at the given tenor. In your case, with just 5 points, I would recommend fitting a parabola in log strike space. Once you have a continuous skew $\sigma(K)$ then you just need to find \begin{equation} {\left. \frac{\partial^2 }{\partial x^2}\right|} BS_{\text{Call}}(S_0, x, \sigma(x), r, T, q) \end{equation} evaluated at $x=K$ which can be done either with a bunch of symbol-jiggling, or by simply finite differencing. In your case I recommend the latter. Once you have probability distribution values, of course, the process of generating the qq plots is one you have already mastered. Edit: Sign error correction, per @Robino
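A minimal R sketch of that finite-difference recipe; sigma_of_K stands for whatever smile fit you choose (e.g. the parabola in log-strike suggested above), and the bump size h is arbitrary:

bs_call <- function(S, K, sigma, r, tau, q = 0) {
  d1 <- (log(S/K) + (r - q + 0.5*sigma^2)*tau) / (sigma*sqrt(tau))
  d2 <- d1 - sigma*sqrt(tau)
  S*exp(-q*tau)*pnorm(d1) - K*exp(-r*tau)*pnorm(d2)
}

density_at <- function(K, S, r, tau, sigma_of_K, h = 1e-3 * K) {
  c_up  <- bs_call(S, K + h, sigma_of_K(K + h), r, tau)
  c_mid <- bs_call(S, K,     sigma_of_K(K),     r, tau)
  c_dn  <- bs_call(S, K - h, sigma_of_K(K - h), r, tau)
  exp(r*tau) * (c_up - 2*c_mid + c_dn) / h^2      # p(K) via the second strike derivative
}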
{ "source": [ "https://quant.stackexchange.com/questions/1621", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/954/" ] }
1,640
I have a very basic data question: how do I get a list of all common stocks traded on NYSE, NASDAQ and AMEX? I would need to be able to get the approximate list of common stocks that is available in Telechart... I cannot get this data at eoddata, for example... I would like to calculate market breadth indicators and would like to find out how many of the common stocks traded were up or down 4% (breakouts/breakdowns) (Cl-Lag(Cl) > 0.04 (4% breakout), Cl-Lag(Cl) < -0.04 (4% breakdown)), how many of the common stocks traded are down/up 25% in a quarter, etc. My first problem is how to get a list of symbols covering only common stocks (no ETFs).
NASDAQ makes this information available via FTP and they update it every night. Log into ftp.nasdaqtrader.com anonymously. Look in the directory SymbolDirectory . You'll notice two files: nasdaqlisted.txt and otherlisted.txt . These two files will give you the entire list of tradeable symbols, where they are listed, their name/description, and an indicator as to whether they are an ETF. Given this list, which you can pull each night, you can then query Yahoo to obtain the necessary data to calculate your statistics. UPDATE: More information about these files and their fields can be found here .
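For illustration, a small Python sketch (mine, not part of the original answer) that pulls both files and drops ETFs and test issues; the column names ("Symbol", "ACT Symbol", "ETF", "Test Issue") are those the files used at the time of writing and may change:

```python
from ftplib import FTP
from io import BytesIO
import pandas as pd

def fetch_symbol_file(name: str) -> pd.DataFrame:
    """Download one of NASDAQ Trader's pipe-delimited symbol directory files."""
    ftp = FTP("ftp.nasdaqtrader.com")
    ftp.login()                                   # anonymous login
    buf = BytesIO()
    ftp.retrbinary(f"RETR SymbolDirectory/{name}", buf.write)
    ftp.quit()
    buf.seek(0)
    df = pd.read_csv(buf, sep="|")
    return df[:-1]                                # last row is a file-creation timestamp

nasdaq = fetch_symbol_file("nasdaqlisted.txt")
other = fetch_symbol_file("otherlisted.txt")      # NYSE, NYSE MKT (AMEX), etc.

# keep non-ETF, non-test issues
nasdaq_common = nasdaq[(nasdaq["ETF"] == "N") & (nasdaq["Test Issue"] == "N")]["Symbol"]
other_common = other[(other["ETF"] == "N") & (other["Test Issue"] == "N")]["ACT Symbol"]
symbols = sorted(set(nasdaq_common) | set(other_common))
```

Note that this only removes ETFs and test issues; separating true common stock from preferred shares, ADRs, units, etc. still requires filtering on the security name or another reference source.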
{ "source": [ "https://quant.stackexchange.com/questions/1640", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1188/" ] }
1,658
Various studies have demonstrated the very large and growing influence of high frequency trading (HFT) on the markets. HFT firms are clearly making a great deal of money from somewhere, and it stands to reason that they are making this money at the expense of every other participant in the market. Defenders of HFT will argue that HFT firms provide an essential service to the economy in the form of greater liquidity. What research has been done on the benefits and costs of HFT? Has any study attempted to measure either the benefits or the costs? How would one attempt to measures these benefits and costs? What would be the effect of banning rapidly cancelled limit orders (see follow-up question ), e.g. via a minimum 1-second tick rule? Any references and professional opinions (backed by research) on this topic would be appreciated.
The lead paper in the January 2011 Journal of Finance ( Hendershott, Jones, and Menkveld ) addresses algorithmic trading (AT). In short, they find that AT improves liquidity as measured by bid-offer spreads. Taking the econometrics as correct (it is in the Journal of Finance) the next question is if bid-offer spreads are a sufficient statistic for measuring liquidity (or any other benefits). It is a difficult question to answer because, given current market structure, AT may improve liquidity (as measured by bid-offer spreads), but without data on other market structures, it is hard to say that we wouldn't be better off with something like on-demand call auctions. I think there's a consensus that opening and closing call auctions have improved market quality as measured by opening and closing volatility, but it is not clear that we'd be better/worse off with completely on-demand call auction exchanges (although I know of at least one call on-demand exchange in the works). I think at this point it's still a subjective question with smart people on both sides. I tend to think we'd be better with call auctions (in terms of the pure economics of matching supply and demand). Finally, you may find this Big Picture post interesting.
{ "source": [ "https://quant.stackexchange.com/questions/1658", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1106/" ] }
1,710
There is much speculation to what degree financial series are random (and what kind of randomness prevails). I want to turn the question on its head and ask: Is there a mathematical proof that whatever trading strategy you use you cannot beat a random walk (that is the expected value will always be 0 assuming no drift)? (I found this blog post where the author used the so called "75% rule" to purportedly beat a random walk but I think he got the distinction between prices and returns wrong. This method would only work if you had a range of allowed prices (e.g. a mean reverting series). See e.g. here for a discussion.)
I can help you beat the random walk 'in the way you want', i.e. the expected value $E[\$]$ will always be positive even assuming no drift. However, I have to warn people that $E[\$] > 0$ is NOT really an adequate condition for 'beating' in reality (at least to me). Let's define some mathematical notation for the derivation, and rephrase (simplify) vonjd's question without losing generality. Assume a trader plays a fair game, and his surplus $X(0), X(1), X(2), \dots, X(t)$ is a martingale.

Q: Can the trader find a stopping time $s$ such that $E[X(s)] > X(0)$?

A proof supporting Bootvis' answer: for comparison, consider a normal trading strategy that bets evenly. Then, $$\begin{align*}E[X(s)] &= E[ E[X(s)\mid X(s-1), X(s-2),\dots, X(0)] ] \\ &= E[X(s-1)] = E[X(s-2)] = \dots = E[X(0)] = X(0).\end{align*}$$

Now, consider a 'double-betting' strategy: we keep doubling the losing trade until the first win. Let's set the initial surplus $X(0) = 0$ for simplicity. Accordingly, $X(k) = X(k-1) + G(k)$, where $G(k)=\pm 2^{k-1}$, each with probability $1/2$. Note that we get the power of $2$ in $G(k)$ because of the 'double-betting'. Our market is still a random walk. This strategy is designed to stop at the time $s = \min\{k : G(k) > 0\}$ (note that $\Pr\{s=\infty\} = 0$). Compute $E[X(s)]$ by conditioning on $s$: $$\begin{align*} E[X(s)] &= E[E[X(s)\mid s]] = \sum_{k=1}^{\infty} E[X(s)\mid s=k] \cdot \Pr\{s=k\} \\ &= \sum_{k=1}^{\infty} \left(-1-2-4-\dots-2^{k-2} + 2^{k-1}\right) \cdot (1/2)^{k} \\ &= \sum_{k=1}^{\infty} 1 \cdot (1/2)^k = 1 > 0 = X(0)\end{align*}$$

Conclusion: A trader can make $E[X(s)]>0$ for a random walk using the double-betting strategy. We proved that you can beat the random walk in your definition of 'beating', i.e. expected value > 0. This is actually a simplified proof supporting Akshay's answer. Whatever it's called: volatility pumping, Kelly strategy, optimal growth portfolio, etc. These ideas simply ask one more question: why double? Is there an optimal betting ratio because of ... (various reasons and assumptions)?

WARNING: Yes, the expected value is indeed positive, and it might be an adequate proof for people who believe a winning strategy is all about searching for $E[X(s)]>0$. Unfortunately, this is NOT adequate in reality, at least to me. You have been warned. A $E[X(s)]>0$ strategy is guaranteed to make you a real fortune if and only if we have an 'unlimited amount of capital'. For details (long story), see wiki: Martingale betting system. You might ask what we should do if we only have limited capital. The Kelly criterion actually offers, in a sense, the effect of the double-betting strategy for limited capital. For example, if you have a very weak trading signal (close to a random walk, in which there is no signal at all), the Kelly criterion will recommend betting something like \$1 (initially) for \$1M of capital, and increasing/decreasing your position by a certain % when you lose/win. Yeah, \$1M indeed looks like unlimited capital relative to \$1.

(From comment) There is no contradiction to the common sense that 'pure independence = zero E[PnL]'. $E[\cdot] > 0$ in my example and vonjd's Parrondo's paradox are indeed exploited from a sort of dependency. While Parrondo's paradox exploits the dependency between two losing games, mine exploits the dependency across my losing trades (which is less obvious). But warning again: this is at the cost of ruin risk! Though Kelly and vol-pump strategies eliminate ruin risk, they still suffer from trending risk.
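A quick simulation (my own sketch, not part of the proof above) makes the warning concrete: with unlimited capital the doubling strategy has a positive expected profit, while with a finite bankroll the rare long losing streaks pull the expectation back to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def double_until_win(bankroll: float, max_rounds: int = 60) -> float:
    """Bet 1, 2, 4, ... on a fair coin until the first win or until the bankroll cannot cover the next bet."""
    wealth, bet = 0.0, 1.0
    for _ in range(max_rounds):
        if bankroll + wealth < bet:          # cannot fund the next doubled bet -> stop ruined
            return wealth
        if rng.random() < 0.5:               # win: recover all losses plus 1
            return wealth + bet
        wealth -= bet                        # lose: double and try again
        bet *= 2.0
    return wealth

unlimited = np.mean([double_until_win(bankroll=np.inf) for _ in range(100_000)])
limited = np.mean([double_until_win(bankroll=100.0) for _ in range(100_000)])
print(f"E[profit], unlimited capital: {unlimited:+.3f}")   # close to +1
print(f"E[profit], bankroll of 100 : {limited:+.3f}")      # close to 0: the edge disappears
```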
{ "source": [ "https://quant.stackexchange.com/questions/1710", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
1,764
I had to ask this question after reading the answers to What programming languages are most commonly used in quantitative finance? I understand that C++ programs can be optimized pretty well and are faster than anything else. But these days, the performance of a program written in a managed language such as C# or Java can be pretty close to that of C++, while the maintenance cost of the program would be lower than for the C++ one. So why is C++ still such a popular language in QF?
The other posters have already noted that the prevalent use of C++ appears to be due to historical reasons and unwillingness to change. Those reasons aren't the ones that people should be applying. If you want real reasons to use C++, how about the following:

- Powerful infrastructure. Take a look at Intel Parallel Studio for an example.
- Performance compared to .NET or Java (see my course on HPC). When each array element access checks the bounds and throws exceptions, you know you're leaking CPU cycles there.
- Parallelization. The C++ ecosystem has vastly superior parallelization in both 'blind' mode (OpenMP vs TPL's Parallel) and explicit mode (Intel TBB vs TPL).
- Lots of SDKs, most notably CUDA, base their development on C/C++.
- Possibility of invoking low-level CPU instructions (e.g., working with SSE intrinsics).

On the other hand, C++ is

- Extremely noisy. What with all the headers, include directives, friend class declarations, and myriad other redundant things.
- Saddled with hard-to-use libraries (STL, Boost) with very cryptic, global-level mechanisms. Think bind2nd :)
- Weak on editor support, which is vastly inferior compared to IDEA/ReSharper. Navigation, refactoring, analysis - all are weaker or non-existent. This is going to be improved in the near future for both VS and standalone editing.
- Prone to compiler errors that are beyond cryptic. Clang attempts to fix this to some extent, but things are still cryptic, just not as abysmally bad as they were previously.

And by the way, for the typical user, the performance difference between C++ and, say, C# won't be as pronounced.
{ "source": [ "https://quant.stackexchange.com/questions/1764", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1170/" ] }
1,891
Suppose one has an idea for a short-horizon trading strategy, which we will define as having an average holding period of under 1 week and a required latency between signal calculation and execution of under 1 minute. This category includes much more than just high-frequency market-making strategies. It also includes statistical arbitrage, news-based trading, trading earnings or economics releases, cross-market arbitrage, short-term reversal/momentum, etc. Before even thinking about trading such a strategy, one would obviously want to backtest it on a sufficiently long data sample. How much data does one need to acquire in order to be confident that the strategy "works" and is not a statistical fluke? I don't mean confident enough to bet the ranch, but confident enough to assign significant additional resources to forward testing or trading a relatively small amount of capital. Acquiring data (and not just market price data) could be very expensive or impossible for some signals, such as those based on newer economic or financial time-series. As such, this question is important both for deciding what strategies to investigate and how much to expect to invest on data acquisition. A complete answer should depend on the expected Information Ratio of the strategy, as a low IR strategy would take a much longer sample to distinguish from noise.
Consider the standard error , and in particular the distance between the upper and lower limits: \begin{equation} \Delta = (\bar{x} + SE \cdot \alpha) - (\bar{x} - SE \cdot \alpha) = 2 \cdot SE \cdot \alpha \end{equation} Using the formula for standard error, we can solve for sample size: \begin{equation} n = \left(\frac{2 \cdot s \cdot \alpha}{\Delta}\right)^{2} \end{equation} where $s$ is the measured standard deviation, which you already have from your IR calculation. High-frequency Example I was testing a market-making model recently that was expected to return a couple basis points for each trade and I wanted to be confident that my returns were really positive (ie, not a fluke). So, I chose a distance of 3 bps $(\Delta = .0003)$. My sample's measured standard deviation was 45 bps $(s = .0045)$. For a confidence interval of 95% $(\alpha = 1.96)$, my sample size needs to be $n = 3458$ trades . I would have picked a tighter distance if I had been simulating this model, but I was trading live and I couldn't be too choosy with money on the line. Low-frequency Example I imagine that for a low-frequency model that was expected to return 1.5% per month , I'd want maybe 1% as the distance $(\Delta = .01)$. If the hoped-for Sharpe ratio were 3, then the standard deviation would be 1.7% $(s = .017)$, which I came-up with by backing-out the monthly returns. So for a confidence interval of 95% $(\alpha = 1.96)$, I'd need 45 months of data.
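A tiny helper (mine, not the answerer's) that reproduces both examples from the formula:

```python
import math

def required_sample_size(s: float, delta: float, alpha: float = 1.96) -> int:
    """n = (2 * s * alpha / delta)^2, rounded up to the next whole observation."""
    return math.ceil((2.0 * s * alpha / delta) ** 2)

print(required_sample_size(s=0.0045, delta=0.0003))   # ~3458 trades (high-frequency example)
print(required_sample_size(s=0.017, delta=0.01))      # ~45 months (low-frequency example)
```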
{ "source": [ "https://quant.stackexchange.com/questions/1891", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1106/" ] }
1,937
I'm working on a small application that will provide some charts and graphs to be used for technical analysis. I'm new to TA but I'm wondering if there is a way to algorithmically identify the formation of certain patterns. In most of the TA literature I've read the authors explain how to identify these patterns visually. Is there a way to algorithmically determine these patterns so that I could, for example, examine the prices in code and identify a possible Head and Shoulders pattern?
As mentioned elsewhere on this site, Lo, Mamaysky, and Wang (2000) do exactly what you're talking about, namely algorithmic detection of head and shoulders patterns. Their definition: Head-and-shoulders (HS) and inverted head-and-shoulders (IHS) patterns are characterized by a sequence of five consecutive local extrema $E_1,...,E_5$ such that $$ HS \equiv \begin{cases} E_1 \text{ is a maximum} \\ E_3 > E_1, E_3 > E_5 \\ E_1\text{ and }E_5\text{ are within 1.5 percent of their average} \\ E_2\text{ and }E_4\text{ are within 1.5 percent of their average,} \end{cases} $$ $$ IHS \equiv \begin{cases} E_1\text{ is a minimum} \\ E_3<E_1, E_3 < E_5 \\ E_1\text{ and }E_5\text{ are within 1.5 percent of their average} \\ E_2\text{ and }E_4\text{ are within 1.5 percent of their average.} \end{cases} $$
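As a rough sketch of how the definition translates into code (my own illustration; the extrema here come from a simple rolling-window rule rather than the kernel regression smoothing that Lo, Mamaysky and Wang actually use):

```python
import numpy as np

def local_extrema(prices: np.ndarray, order: int = 5):
    """Return (index, is_max) pairs for points that are the max or min of a +/- `order` bar window."""
    out = []
    for i in range(order, len(prices) - order):
        window = prices[i - order:i + order + 1]
        if prices[i] == window.max():
            out.append((i, True))
        elif prices[i] == window.min():
            out.append((i, False))
    return out

def within_pct_of_avg(a: float, b: float, tol: float = 0.015) -> bool:
    avg = 0.5 * (a + b)
    return abs(a - avg) <= tol * avg and abs(b - avg) <= tol * avg

def find_hs_patterns(prices: np.ndarray):
    """Scan consecutive 5-tuples of extrema for the HS / IHS conditions quoted above."""
    ext = local_extrema(prices)
    hits = []
    for j in range(len(ext) - 4):
        (i1, e1_is_max), (i2, _), (i3, _), (i4, _), (i5, _) = ext[j:j + 5]
        e1, e2, e3, e4, e5 = prices[[i1, i2, i3, i4, i5]]
        head = (e3 > e1 and e3 > e5) if e1_is_max else (e3 < e1 and e3 < e5)
        if head and within_pct_of_avg(e1, e5) and within_pct_of_avg(e2, e4):
            hits.append(("HS" if e1_is_max else "IHS", i1, i5))
    return hits
```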
{ "source": [ "https://quant.stackexchange.com/questions/1937", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/914/" ] }
1,985
My question is pretty simple: what papers do you feel are foundational to quantitative finance? I'm compiling a personal reading list already, drawn from Wilmott forums, papers referenced in Derivatives, and other sources. However, the body of research is immense, especially in recent years, so I'm interested in what the professionals are reading/building their work off of. Any references the community could offer would be much appreciated. EDIT: As per the comments I'll define recent years as post 2000 with an emphasis on research after the crash of 2008. In particular, I'm seeking papers on quantitative management of portfolios and asset pricing.
- Ledoit and Wolf shrinkage methods ("Honey, I Shrunk the Sample Covariance Matrix")
- Ceria and Stubbs - Robust optimization literature (2006)
- Stock & Watson (2002ab) - papers on large N, small P estimation
- Rockafellar & Uryasev (2000) - "Optimization of CVaR and coherent risk measures"
- Sorensen, Qian, Hua - "Quantitative Portfolio Management"
- Ang and Bekaert - International Asset Allocation with Regime Shifts
- Cochrane, "Asset Pricing" (2005)
- Cochrane, "Discount Rates" (2011)
- Bernd Scherer, Portfolio Construction and Risk Budgeting, 4th Edition
- Robertson et al., "Forecasting Using Relative Entropy" (2002)

Here are recent picks that I believe will be looked on as major contributions:

- "Robust Bayesian Allocation", Attilio Meucci (2010)
- "Dynamic stock selection - A structured factor model framework", Lopes, Carvalho, Aguilar (2011)
- " A New Breed of Copulas for Risk and Portfolio Management ", Attilio Meucci (2011)
{ "source": [ "https://quant.stackexchange.com/questions/1985", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1354/" ] }
2,019
Can anyone recommend books that explain the math used in quantitative finance academic papers?
If you need a primer covering various domains of math then Dan Stefanica's text will do the job. The text covers multivariable calculus, Lagrange multipliers, the Black-Scholes PDE, greeks & hedging, Newton's method, bootstrapping, Taylor series, numerical integration, and risk-neutral valuation. It also includes a mathematical appendix. If you want an introduction to risk analysis complete with geometric interpretations, check out Attilio Meucci's Risk and Asset Allocation . Hull's Options, Futures, and Other Derivatives is a classic that includes stochastic calculus and the topics in the title. Here are the best applied statistics books: René Carmona's " Statistical Analysis of Financial Data in S-Plus " covers a lot of ground with examples compatible with R. He starts with foundations and builds towards more complex models. If you want ready-to-apply solutions, Eric Zivot's " Modeling Financial Time Series with S-Plus " is encyclopedic in the range of topics covered. Whereas Carmona will focus on various modeling techniques, Zivot will cover portfolio optimization, factor analysis, and many other topics. It makes for a great reference rather than a cover-to-cover read. If you want to focus on time series specifically with an applied bent, Shumway and Stoffer's Time Series Analysis and Its Applications is also great. The solutions are compatible with R. There are various theoretical statistics books (Hamilton, Ruey Tsay) but those will assume you understand the math.
{ "source": [ "https://quant.stackexchange.com/questions/2019", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/906/" ] }
2,074
I have a sample covariance matrix of S&P 500 security returns where the smallest k-th eigenvalues are negative and quite small (reflecting noise and some high correlations in the matrix). I am performing some operations on the covariance matrix and this matrix must be positive definite. What is the best way to "fix" the covariance matrix? (For what it's worth, I intend to take the inverse of the covariance matrix.) One approach proposed by Rebonato (1999) is to decompose the covariance matrix into its eigenvectors and eigenvalues, set the negative eigenvalues to 0 or (0+epsilon), and then rebuild the covariance matrix. The issue I have with this method is that: the trace of the original matrix is not preserved, and the method ignores the idea of level repulsion in random matrices (i.e. that eigenvalues are not close to each other). Higham (2001) uses an optimization procedure to find the nearest correlation matrix that is positive semi-definite. Grubisic and Pietersz (2003) have a geometric method they claim outperforms the Higham technique. Incidentally, some more recent twists on Rebonato's paper are Kercheval (2009) and Rapisardo (2006) who build off of Rebonato with a geometric approach. A critical point is that the resulting matrix may not be singular (which can be the case when using optimization methods). What is the best way to transform a covariance matrix into a positive definite covariance matrix? UPDATE: Perhaps another angle of attack is to test whether a security is linearly dependent on a combination of securities and removing the offender.
Nick Higham's specialty is algorithms to find the nearest correlation matrix. His older work involved increased performance (in order-of-convergence terms) of techniques that successively projected a nearly-positive-semi-definite matrix onto the positive semidefinite space. Perhaps even more interesting, from the practitioner point of view, is his extension to the case of correlation matrices with factor model structures. The best place to look for this work is probably the PhD thesis paper by his doctoral student Ruediger Borsdorf. Higham's blog entry covers his work up to 2013 pretty well.
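For reference, here is a bare-bones Python version of the simple spectral "clip and rebuild" repair discussed in the question (the Rebonato-style fix with an optional trace rescaling; this is not Higham's alternating-projections algorithm, for which see Borsdorf's thesis or Higham's published codes):

```python
import numpy as np

def clip_to_psd(cov: np.ndarray, eps: float = 1e-10, preserve_trace: bool = True) -> np.ndarray:
    """Replace negative eigenvalues by eps and rebuild; optionally rescale to keep the trace."""
    vals, vecs = np.linalg.eigh((cov + cov.T) / 2.0)   # symmetrise before decomposing
    vals_clipped = np.clip(vals, eps, None)
    fixed = vecs @ np.diag(vals_clipped) @ vecs.T
    if preserve_trace:
        fixed *= np.trace(cov) / np.trace(fixed)       # addresses the trace objection in the question
    return (fixed + fixed.T) / 2.0
```

Because every eigenvalue ends up at least `eps`, the repaired matrix is strictly positive definite and therefore invertible, which matters if you intend to take its inverse.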
{ "source": [ "https://quant.stackexchange.com/questions/2074", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1800/" ] }
2,260
If I know the daily returns of my portfolio, I need to multiply the Sharpe Ratio by $\sqrt{252}$ to have it annualized. I don't understand why that is.
Actually, that is not always the case. Here is a great paper by Andy Lo, "The Statistics of Sharpe Ratios". He shows how monthly Sharpe ratios "cannot be annualized by multiplying by $\sqrt{12}$ except under very special circumstances". I expect this will carry over to annualizing daily Sharpe Ratios.
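To make the caveat concrete, here is a small sketch (my reading of Lo's adjustment, not code from the paper) comparing the naive $\sqrt{252}$ scaling with the autocorrelation-adjusted factor $q/\sqrt{q + 2\sum_{k=1}^{q-1}(q-k)\rho_k}$; for i.i.d. returns the two agree, and they diverge as return autocorrelation appears:

```python
import numpy as np

def annualized_sharpe_lo(excess_returns, q: int = 252) -> float:
    """Lo (2002): SR(q) = SR * q / sqrt(q + 2 * sum_{k=1}^{q-1} (q - k) * rho_k),
    where rho_k is the lag-k autocorrelation of the (excess) returns."""
    r = np.asarray(excess_returns, dtype=float)
    sr = r.mean() / r.std(ddof=1)
    rho = [np.corrcoef(r[:-k], r[k:])[0, 1] for k in range(1, q)]
    scale = q / np.sqrt(q + 2.0 * sum((q - k) * rho[k - 1] for k in range(1, q)))
    return sr * scale

# with (nearly) i.i.d. daily returns the adjusted factor is close to the naive sqrt(252)
r = np.random.default_rng(1).normal(0.0005, 0.01, 5000)
naive = (r.mean() / r.std(ddof=1)) * np.sqrt(252)
print(naive, annualized_sharpe_lo(r))   # the estimated rho_k are noisy but roughly zero here
```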
{ "source": [ "https://quant.stackexchange.com/questions/2260", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1553/" ] }
2,707
Usually in asset allocation you have a quantitative approach (which can be, for example, mean-variance), but you (or you and your firm) also have a more qualitative approach given market conditions, economic outlooks, or tactical indicators. Hence, you will eventually come up with 2 allocations: the one strictly dictated by the numbers, $w^*$, which is the result of your quantitative algorithm, and the one you have in mind from your personal expectations, $\bar{w}$. What are the common ways $f$ to mix them together such that $w=f(w^*,\bar{w})$ is your "final" allocation?
There are some cases where you can blend your portfolios using weights directly. One case involves corner portfolios . In this case a linear combination of weights is also efficient. Another case is where you can treat the two separate sets of weights you have produced each as a distinct portfolio, under the assumption that the correlation between these portfolios is relatively stable. In this scenario, the problem reduces to a two-asset portfolio optimization problem (each asset is simply the linear combination of weights produced via your two methods).

The other class of methods involves blending via the expected returns. If you arrived at the weights via a mean-variance utility optimization you can back out the implied expected returns based on these weights and a risk aversion parameter. (Indeed, this is the approach Black-Litterman took to back out the implied expected returns from a set of benchmark weights, and Jay Walters shows the simple linear algebra for this in the paper I cite below.) The approaches below require that you blend views on expected returns rather than weights. This is more natural since weights are the product of some optimization (one might be short a security for hedging purposes despite having a positive expected return view for the security). Two sets of portfolio weights may each be on the efficient frontier, but a generic convex blend of these two sets may be inefficient.

To blend your qualitative scores with quantitative views in return space you can do one of the following:

1. Convert qualitative factors into quantitative scores. Grinold & Kahn discuss various techniques in Active Portfolio Management , 2nd ed. Check out the section "Information Processing". One straightforward technique: if you have a rating system such as "Sell, Hold, Buy, Strong Buy", then associate each rating with a dummy variable and build a linear (or non-linear) factor model including your quantitative forecasts as other factors. (Note: There is a more general question of "signal weighting - how do I blend quantitative information efficiently?" which might be worthy of another post.)

2. Express qualitative views in the form of confidences via Black-Litterman (i.e. MSFT will rise more than APPL with 20% confidence). A Black-Litterman model - specifically the Idzorek variation which uses % confidences - is a good way to do this. Jay Walters has a nice reference paper on Black-Litterman here . Also there is a package in R called BLCop that you can toy with. The Black-Litterman model has been refined over the last several years. Read the papers from Wing Cheung (Nomura) on the "Augmented Black-Litterman model" if you want to see another explanation. His implementation is quite flexible as it supports generalized factor-view blending as well as other features.

3. A yet more general technique is Entropy Pooling. Whereas Black-Litterman allows you to create views on expectations of asset performance (MSFT will return 8%), or relative views as in "MSFT will outperform APPL", you might have views on correlations, variances, views on the rankings of securities, or views on underlying risk factors that are statistically related to your securities of interest. These views cannot be satisfied by the "Pick Matrix"/Omega construction in Black-Litterman. In this case Attilio Meucci's implementation of Entropy Pooling is the way to go. He has MATLAB code demonstrating the approach here . The Entropy Pooling framework applies to parametric or non-parametric problems.
The non-parametric version of Entropy Pooling can handle scenarios which correspond to arbitrary probability distributions. Entropy pooling will process a view and update the probabilities for each scenario in a way that imposes the least amount of spurious structure on the original probabilities assigned to the scenarios. In this way entropy pooling is perfectly Bayesian. Essentially you have a prior -- a JxN panel of data furnished from historical data, a reference model, or a Monte Carlo simulation (J = number of scenarios; N = asset returns or risk factors -- anything you could take a view on). This JxN panel ties to a vector 'p' of probabilities where one probability corresponds to each scenario. (If you are using historical data, the vector of probabilities could simply be 1/length(data), or exponentially weighted.) Then you can create a view which contains your current qualitative scores. These views are expressed as constraints on probabilities. So you can set up a constraint which is interpreted as "Buy implies the security is in the top quantile of returns". Or, perhaps you aren't sure exactly what the labels imply about expected returns but you believe it will be consistent with the prior. In this case you can assign the qualitative scores from the past to the historical empirical data (even if you only have partial coverage of the investment universe), and then create views consisting of your qualitative categorical assessments. The entropy pooling procedure will generate a revised set of probabilities for each of the scenarios. You can then take expectations (probability-weighted averages) with the new probabilities for expected portfolio returns, expected security returns, correlations, etc. You would then proceed to optimization with your revised expectations on returns and risk.
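As a toy illustration of the non-parametric version (my own sketch, not Meucci's code), here is the exponential-tilting solution for the simplest possible case: a uniform prior over J scenarios and a single equality view on one asset's expected return:

```python
import numpy as np
from scipy.optimize import brentq

def entropy_pool_mean_view(panel: np.ndarray, prior: np.ndarray, asset: int, target_mean: float):
    """Non-parametric entropy pooling with one equality view: E_p[panel[:, asset]] = target_mean.
    Minimising relative entropy to the prior under a linear expectation constraint gives an
    exponentially tilted posterior p_j proportional to prior_j * exp(lam * x_j)."""
    x = panel[:, asset]
    z = x - x.mean()                                   # demean inside the exponent for stability
    def posterior(lam):
        w = prior * np.exp(lam * z)
        return w / w.sum()
    gap = lambda lam: posterior(lam) @ x - target_mean
    lam = brentq(gap, -1e3, 1e3)                       # view must lie inside the scenario range
    return posterior(lam)

# toy example: 5000 scenarios for 3 assets, uniform prior, view: asset 0 returns 1% on average
J, N = 5000, 3
panel = np.random.default_rng(2).normal(0.0, 0.02, size=(J, N))
prior = np.full(J, 1.0 / J)
post = entropy_pool_mean_view(panel, prior, asset=0, target_mean=0.01)
print(post @ panel)   # probability-weighted expectations under the revised probabilities
```

General inequality and ranking views require a proper constrained optimisation of the dual rather than this one-dimensional root-find; Meucci's reference code handles that case.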
{ "source": [ "https://quant.stackexchange.com/questions/2707", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/467/" ] }
2,870
I've been researching minimum variance portfolios (from this link ) and find that by building MVPs adding constraints on portfolio weights and a few other tweaks to the methods outlined I get generally positive returns over a six-month to one year time scale. I am looking to build some portfolios that are low risk, but have good long term (yearly) expected returns. MVP (as in minimum variance NOT mean variance) seems promising from backtests but I don't have a good intuition for why this works. I understand the optimization procedure is primarily looking to optimize for reducing variance, and I see that this works in the backtest (very low standard deviation of returns). What I don't have an intuitive feel for is why optimizing variance alone (with no regards to optimizing returns, i.e. no mean in the optimization as in traditional mean-variance optimization) gives generally positive returns. Any explanations?
The minimum variance solution loads up on securities that have low variances and co-variances. Theoretically you are correct that this should have a low expected return profile. However, it turns out - in contradiction to modern portfolio theory - that securities with low volatility or low beta experience higher returns than high-volatility or high-beta stocks. This is well-documented in the literature as the low-volatility anomaly . As a result, many funds and ETFs have been launched in recent years to exploit this phenomenon. There are a couple of arguments as to why the anomaly exists. The paper I cite above argues that institutional investor objectives and constraints create the anomaly: Over the past 41 years, high volatility and high beta stocks have substantially underperformed low volatility and low beta stocks in U.S. markets. We propose an explanation that combines the average investor's preference for risk and the typical institutional investor’s mandate to maximize the ratio of excess returns and tracking error relative to a fixed benchmark (the information ratio) without resorting to leverage. Models of delegated asset management show that such mandates discourage arbitrage activity in both high alpha, low beta stocks and low alpha, high beta stocks. This explanation is consistent with several aspects of the low volatility anomaly including why it has strengthened in recent years even as institutional investors have become more dominant.
{ "source": [ "https://quant.stackexchange.com/questions/2870", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2186/" ] }
2,906
I currently have a local volatility model that uses the standard Black Scholes assumptions. When calculating the volatility surface, what causes the difference between the call volatility surface, and the put surface?
The reason for put and call volatilities to appear different is that the implied vol has been calculated using different drift parameters than those implied by the market. Let's take everything in the model as given except the interest rate $r$ and the volatility $\sigma$. For European options we have the Black-Scholes formula for put and call values $V_{P,C}$ $$ V_{P,C}=BS_{P,C}(r,\sigma) $$ Now, although it is common practice to run this equation backwards to "imply" the volatility $\sigma$ $$ \sigma_{\text{Imp}} = BS^{-1}_{\sigma}(r,V) $$ we can see that from a mathematical point of view we could imply $r$ instead $$ r_{\text{Imp}} = BS^{-1}_{r}(\sigma,V). $$ Obviously, using a different $r$ affect options prices and therefore implied volatilities. Consider now the consequences of receiving prices from someone using the Black-Scholes model. For concreteness I will take $T=1, K=S=100$ and no carry cost. Let's say you think $r=1\%$. I give you put and call prices of $7.95$ and $11.80$. You will get a put vol of $21.3\%$ and a call vol of $28.6\%$. Seem familiar? That's because I actually generated those prices using $r=4\%$. If you had used the same drift parameter $r$ as I had employed, you would have computed both volatilities to be $25\%$. Generally, risk-free interest rates are not too hard to pin down, but we have other effects on drift where the parameters are not so obvious. This includes dividends, borrow costs and funding costs. Each of these terms is typically treated as a deterministic "carry cost" but even in the simple case of European options it is not necessarily clear what values should be used for them. So to your answer your question, the difference between put and call volatility surfaces is a symptom of your drift parameters failing to match those of the market.
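You can reproduce the numbers in this example with a few lines of Python (my sketch; it backs out the implied volatility with a simple root-finder):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_price(S, K, sigma, r, T, call=True):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if call:
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def implied_vol(price, S, K, r, T, call=True):
    return brentq(lambda s: bs_price(S, K, s, r, T, call) - price, 1e-4, 5.0)

S = K = 100.0; T = 1.0
call = bs_price(S, K, 0.25, r=0.04, T=T, call=True)      # ~11.8, generated with r = 4%
put = bs_price(S, K, 0.25, r=0.04, T=T, call=False)      # ~7.9
print(implied_vol(call, S, K, r=0.01, T=T, call=True))   # ~0.286: the "call vol" under r = 1%
print(implied_vol(put, S, K, r=0.01, T=T, call=False))   # ~0.213: the "put vol" under r = 1%
print(implied_vol(call, S, K, r=0.04, T=T, call=True),   # both ~0.25 once the drift matches
      implied_vol(put, S, K, r=0.04, T=T, call=False))
```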
{ "source": [ "https://quant.stackexchange.com/questions/2906", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2004/" ] }
3,783
What is an efficient data structure to model an order book of prices and quantities to ensure:

- constant-time lookup
- iteration in order of prices
- retrieving the best bid and ask in constant time
- fast quantity updates
The specifics depend on if you're implementing for equities (order-based) or futures (level-based). I recommend https://web.archive.org/web/20110219163448/http://howtohft.wordpress.com/2011/02/15/how-to-build-a-fast-limit-order-book/ for a general overview of a good architecture for the former. Building off of that, though, I have found that using array-based over pointer-based data structures provides faster performance. This is because for modern pipelining, caching CPUs, branching and dereferences (especially from uncached memory) are the biggest performance killers because they introduce data dependencies resulting in pipeline / memory stalls. If you can, you should also optimize for the particular exchange. For instance, it turns out that, last I checked, Nasdaq generates order IDs incrementally starting from a small number, so you can store all the orders in a giant array instead of a hashtable. This is really cache- and TLB-friendly compared to a hashtable because most updates tend to happen to recently-dereferenced orders. Speaking from experience with equities (order-based), my preferred architecture is: A large associative array from order ids to order metadata ( std::unordered_map or std::vector if you can swing it such as in the case of Nasdaq). The order metadata includes pointers to the order book (essentially consisting of the price-levels on both sides) and price-level it belongs to, so after looking up the order, the order book and price level data structures are a single dereference away. Having a pointer to the price allows for a O(1) decrement for an Order Execute or Order Reduce operation. If you want to keep track of time-priority as well, you can keep pointers to the next and previous orders in the queue. Since most updates happen near the inside of the book, using a vector for the price levels for each book will result in the fastest average price lookup. Searching linearly from the end of the vector is on average faster than a binary search, too, because most of the time the desired price is only a few levels at most from the inside, and a linear search is easier on the branch predictor, optimizer and cache. Of course, there can be pathological orders far away from the inside of the book, and an attacker could conceivably send a lot of updates at the end of the book in order to slow your implementation down. In practice, though, most of the time this results in a cache-friendly, nearly O(1) implementation for insert , lookup , update and delete (with a worst-case O(N) memcpy). Specifically for a Best Bid / Offer (BBO) update, you can get an implementation to calculate the update in just a few instructions (essentially appending or deleting an element from the end of the vector ), or about three dereferences. The behavior of this is O(1) best-case behavior for insertion, lookup, delete and update with very low constants (I've clocked it to be on average the cost of a single fetch from main memory - roughly 60ns). Unfortunately you can get O(N) worst case behavior, but with low probability and still with very good constants due to the cache-, TLB- and compiler-friendliness. It is also very fast, nearly optimally so, in the case of BBO updates which is what you are usually interested in anyways. I have a reference implementation here for Nasdaq's ITCH protocol which illustrates these techniques and more. 
It clocks in at a mean of just over 61ns / tick (about 16 million ticks / second) by only keeping track of the quantity at each price-level and not the order queue, and uses almost no allocation besides std::vector resizing.
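Purely to illustrate the structure described above (a real implementation would be in C/C++ with the cache and branch considerations discussed, and would keep a direct pointer from each order's metadata to its price level rather than re-scanning), here is a toy Python sketch of the id-to-metadata map plus per-side vectors of price levels with the inside of the book at the end:

```python
from dataclasses import dataclass

@dataclass
class Level:
    price: int      # price in ticks
    qty: int        # aggregate resting quantity at this price

class Book:
    """Level-based book: per-side vector of levels, orders in a flat id -> metadata map."""
    def __init__(self):
        self.orders = {}                    # order_id -> (side, price, qty)
        self.bids, self.asks = [], []       # sorted so the inside of the book is at the END

    def _levels(self, side):
        return self.bids if side == "B" else self.asks

    def _find(self, levels, price, side):
        # linear scan from the end: most activity is near the inside of the book
        for i in range(len(levels) - 1, -1, -1):
            if levels[i].price == price:
                return i
            passed = levels[i].price < price if side == "B" else levels[i].price > price
            if passed:
                return -(i + 2)             # not found; encode insertion point i + 1
        return -1                           # not found; insert at the far end (index 0)

    def add(self, oid, side, price, qty):
        self.orders[oid] = (side, price, qty)
        levels = self._levels(side)
        i = self._find(levels, price, side)
        if i >= 0:
            levels[i].qty += qty
        else:
            levels.insert(-(i + 1), Level(price, qty))

    def reduce(self, oid, qty):             # execution or partial cancel
        side, price, rest = self.orders[oid]
        self.orders[oid] = (side, price, rest - qty)
        levels = self._levels(side)
        i = self._find(levels, price, side)
        levels[i].qty -= qty
        if levels[i].qty == 0:
            levels.pop(i)

    def bbo(self):                          # best bid / best ask in O(1)
        return (self.bids[-1].price if self.bids else None,
                self.asks[-1].price if self.asks else None)
```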
{ "source": [ "https://quant.stackexchange.com/questions/3783", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2678/" ] }
4,240
I've been using Monte Carlo simulation (MC) for pricing vanilla options with non-lognormal underlyings returns. I'm tempted to start using MC as my primary option-valuating technique as I can get sound results without relying on the assumptions of the analytical methods (Black-Scholes, for example). Computational costs aside, when to use MC simulation over analytical methods for options pricing?
Monte Carlo is most useful when you lack analytic tractability or when you have a highly multidimensional problem. For example, even using simple lognormal and Poisson models, there exist path-dependent payoffs or multi-asset computations such that no analytic solution exists and such that any PDE finite difference solution would require 3 or more dimensions. Other times, you are employing a model where the SDE is not solvable, so an apparently one-dimensional problem still ends up forcing you to generate many incremental paths using Euler or Milstein integration.

Cases Where Monte Carlo Is A Poor Idea

- Weakly path-dependent options (e.g. lookbacks): use PDE or series solutions
- Single-dimensional cases: if your problem is just one dimensional, such as pricing a payoff along the terminal distribution, you should never use Monte Carlo, since numerical quadrature is far superior in this case, even if you just use Riemann sums.

Cases Where Monte Carlo Is A Good Idea

- Strongly path-dependent options such as ratchet range options
- Portfolio risks and exotic baskets where high dimensionality comes into play. CDO tranche protection is a classic example. So are tail risk computations, especially for multi-asset portfolios.
- Intractable models where the terminal distribution cannot be computed, such as some stochastic vol models

To the point about single-dimensional cases -- it sounds like this describes your usage, perhaps because you are using some kind of implied distributional fit to agree with volatility skew. In this case Monte Carlo seems easy, but using a trapezoid rule integrator (or similar) will be as easy and far higher quality by about any measure.

Now Monte Carlo does make it tricky to accurately compute greeks. As with any model, we can compute greeks by using a finite difference "parameter bump", computing our greek $$ g_\mu =\frac{ V(\dots,\mu+\Delta \mu,\dots) - V(\dots, \mu,\dots)}{\Delta \mu} $$ but if there is a lot of random noise in those two separate computations of $V()$ then our $g_\mu$ will be inaccurate. Instead it is important to bring the differencing inside the Monte Carlo formula. That is, we don't want to be doing $$ \hat{g}_\mu =\frac{ \frac1M \sum_{i=1}^M V(x_i,\dots,\mu+\Delta \mu,\dots) - \frac1M \sum_{i=1}^M V(y_i, \dots, \mu,\dots)}{\Delta \mu} $$ for two separate sample sets $x_i$ and $y_i$. Instead, we want to use the same $x_i$ for both sums, meaning we effectively compute $$ g_\mu =\frac1{M {\Delta \mu}} \sum_{i=1}^M \left[ V(x_i,\dots,\mu+\Delta \mu,\dots) -V(x_i, \dots, \mu,\dots) \right] $$ and end up with a far more accurate estimate, typically better than our estimate of option value.

I'll make one final note, which is that you feel you "get sound results without relying on the assumptions of the analytical methods". If your terminal distributions are empirically generated, then you are likely to misprice any options because you are not using anything close to a risk-neutral measure. For example, you almost certainly find yourself pricing a forward contract $F$ far higher than the true, arbitrage-free value range $F \in [S_0 e^{r_L T}, S_0 e^{r_b T}]$ where $r_b, r_L$ are standard borrow-lend rates.

(Vytautas beat me to some of these points)
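A tiny illustration of that last point about greeks (mine, not the answerer's), using a plain GBM call: bumping the parameter on the same random draws gives a far less noisy delta estimate than re-simulating with fresh draws:

```python
import numpy as np

def mc_call(S0, K, r, sigma, T, z):
    """Terminal-distribution Monte Carlo price of a European call from standard normal draws z."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

rng = np.random.default_rng(3)
S0, K, r, sigma, T, dS, M = 100.0, 100.0, 0.01, 0.2, 1.0, 0.5, 100_000

z = rng.standard_normal(M)
delta_same = (mc_call(S0 + dS, K, r, sigma, T, z) - mc_call(S0, K, r, sigma, T, z)) / dS

z2 = rng.standard_normal(M)   # independent draws: the common noise no longer cancels
delta_indep = (mc_call(S0 + dS, K, r, sigma, T, z2) - mc_call(S0, K, r, sigma, T, z)) / dS

print(delta_same, delta_indep)   # the first is close to the true BS delta (~0.56); the second is noisy
```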
{ "source": [ "https://quant.stackexchange.com/questions/4240", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2257/" ] }
4,589
I want to simulate stock price paths with different stochastic processes. I started with the famous geometric Brownian motion. I simulated the values with the following formula: $$R_i=\frac{S_{i+1}-S_i}{S_i}=\mu \Delta t + \sigma \varphi \sqrt{\Delta t}$$ with: $\mu=$ sample mean, $\sigma=$ sample volatility, $\Delta t = 1$ (1 day), $\varphi=$ normally distributed random number. I used a short way of simulating: simulate normally distributed random numbers with the sample mean and sample standard deviation, multiply these by the stock price to get the price increments, and add each increment to the stock price to get the simulated stock price value. (This methodology can be found here .) So I thought I understood this, but now I found the following formula, which is also the geometric Brownian motion: $$ S_t = S_0 \exp\left[\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma W_t \right] $$ I do not understand the difference. What does the second formula say in comparison to the first? Should I have taken the second one? How should I simulate with the second formula?
The way you do it in the first place is a discretization of the Geometric Brownian Motion (GBM) process. This method is most useful when you want to compute the path between $S_0$ and $S_t$, i.e. you want to know all the intermediary points $S_i$ for $0 \leq i \leq t$. The second equation is a closed form solution for the GBM given $S_0$. A simple mathematical proof showed that, if you know the initial point $S_0$ (which is $a$ in your equation), then the value of the process at time $t$ is given by your equation (which contains $W_t$, so $S_t$ is still random). However, this method will not tell you anything about the path. As mentioned in the comments below, you can also use the close form to simulate each step of the paths.
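A small sketch (my own, with arbitrary parameter values) showing both routes side by side: the step-by-step discretization of the return equation, and the exact closed-form solution driven by the same shocks:

```python
import numpy as np

rng = np.random.default_rng(4)
S0, mu, sigma = 100.0, 0.05, 0.2
days, dt = 252, 1.0 / 252

z = rng.standard_normal(days)

# Route 1: discretized returns, R_i = mu*dt + sigma*sqrt(dt)*phi, applied step by step
path_euler = np.empty(days + 1); path_euler[0] = S0
for i in range(days):
    path_euler[i + 1] = path_euler[i] * (1.0 + mu * dt + sigma * np.sqrt(dt) * z[i])

# Route 2: exact solution, S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t), using the same shocks
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * z)])
t = np.arange(days + 1) * dt
path_exact = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

# the two paths agree closely for small dt; the second is exact at any horizon,
# so a single draw of W_T is enough if you only need the terminal value
print(path_euler[-1], path_exact[-1])
```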
{ "source": [ "https://quant.stackexchange.com/questions/4589", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/3298/" ] }
4,954
I've read that the so-called "leverage effect" for stocks models the fact that if a company is leveraged, its volatility should increase as the stock price moves lower and closer to the level of debt. Can someone please explain this to me?
The key to this is to think about the enterprise value of a business separately from how it is financed. For simplicity sake, consider a business that comprises a sole gold bar (no workers, no extraction costs, etc). The value of the business is clearly just the value of the gold bar. If it were a listed company, with no debt, then the equity capitalization would be the value of the gold bar, and the volatility of the share price would be equal to the volatility of the gold price. Now consider the same company financed with $50\%$ debt (at zero interest) and $50\%$ equity. The enterprise value of the geared company remains the same as before, but the equity capitalization is half as much (since the debt holders are owed the other half). However, whereas the claims of debt holders is fixed in nominal dollars, the equity holders get the benefit/cost of a higher/lower gold price. E.g. If the gold bar is initially worth $\$100$ (financed with $\$50$ equity and $\$50$ debt), but then rises to $\$110$, then the value of equity becomes $\$60$, while the value of debt remains at $\$50$. Equity holders enjoy a $20\%$ increase ($=\frac{10}{50}$) in share value, against $10\%$ ($=\frac{10}{100}$) in the unlevered case. In moving from $0\%$ gearing to $50\%$ gearing, the volatility of equity value has doubled.
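The arithmetic generalises directly: with (riskless) debt $D$ and enterprise value $V$, equity is $E = V - D$, so a given percentage move in $V$ is magnified in the equity by the factor $V/E$, and equity volatility is roughly $\sigma_V \cdot V/E$. A tiny sketch (mine) of how equity volatility blows up as $V$ falls toward $D$:

```python
# equity volatility implied by a fixed enterprise-value volatility and a fixed debt level,
# treating the debt as riskless so all value changes accrue to the equity holders
asset_vol, debt = 0.10, 50.0                      # 10% "gold price" vol, $50 of zero-coupon debt
for ev in (100.0, 80.0, 60.0, 55.0):              # enterprise value drifting down toward the debt
    equity = ev - debt
    print(f"EV={ev:6.1f}  equity={equity:5.1f}  equity vol ~ {asset_vol * ev / equity:.1%}")
```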
{ "source": [ "https://quant.stackexchange.com/questions/4954", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/3184/" ] }
5,981
I would like to know if someone could provide a summarized view of the advantages and disadvantages of the approaches on the volatility surface issues, such as: Local vol Stochastic Vol (Heston/SVI) Parametrization (Carr and Wu approach)
The volatility surface is just a representation of European option prices as a function of strike and maturity in a different "unit" - namely implied volatility (where the term implied volatility has to be made precise by the model used to convert prices (quotes) into implied volatilities - for example, we may consider log-normal vols or normal vols). Volatility is often preferred over prices, e.g., when considering interpolations of European option prices (although this may introduce difficulties like arbitrage violations, see, e.g., http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964634 ).

A local volatility model can generate a perfect fit to the implied volatility surface via Dupire's formula as long as the given surface is arbitrage free. In other words: the model can calibrate to a surface of European option prices. Since this calibration is done by an analytic formula, the calibration is exact and fast. Parametric models, like stochastic volatility models, are usually more difficult to calibrate to European option prices. The formulas that are derived for calibration are usually more complex and often the model does not produce an exact fit.

Obviously, the reason to use a stochastic volatility model (or parametric model) is not the need to calibrate to European options. The reason is to capture other effects of the model. An important effect to be considered is the forward volatility. Let $t=0$ denote today and assume the model is calibrated to the implied volatility surface. What does the volatility surface generated by the model look like at $t=t_1$ in state $S(t_1) = S_1$? The forward volatility describes option prices conditional on a future point in time. It is important for "options on options" and "forward start options". In other words: more exotic products depend on this feature. While European options only depend on the terminal distributions conditional on today, such a feature depends on the dynamics (conditional transition probabilities). In a local volatility model the forward volatility shows a possibly unrealistic behavior: it flattens out; the smile vanishes. A stochastic volatility model can produce a more realistic forward volatility surface, where the smile is almost self-similar.

Another aspect is sensitivities (hedge ratios): using a local volatility model may imply a too rigid assumption on how the volatility surface depends on the spot. This then has implications for the calculation of sensitivities (greeks). Afaik, this was the main motivation to introduce the SABR model (which is a stochastic volatility model used to interpolate the implied volatility surface): to have a more realistic behavior w.r.t. Greeks.

To summarize:

Local Volatility Model: Advantage: fast and exact calibration to the volatility surface. Suitable for products which only depend on the terminal distribution of the underlying (no "conditional properties"). Disadvantage: not suitable for more complex products which depend heavily on "conditional properties".

Stochastic Volatility Model: Advantage: can produce more realistic dynamics, e.g. forward volatility, and more realistic hedge dynamics. Disadvantage: for products which depend only on terminal distributions the fit of the volatility surface may be too poor.
{ "source": [ "https://quant.stackexchange.com/questions/5981", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/3595/" ] }
6,988
In quantitative finance, we know we have a lot of option pricing models such as the geometric Brownian motion model (Black-Scholes), stochastic volatility models (Heston), jump-diffusion models and so on. My question is: how can we use these models to make money in practice? My comments: Because we can read option prices from the market, we can use these models (Black-Scholes) to get the implied volatility; we may then use this implied volatility to compute the prices of other, exotic options, and make money by selling/buying these exotic options as a market maker. Is this the only way to make money? For stocks, we know that if we have a better model to predict future stock prices, then we can make money; but for options, it seems that we don't use these models to predict future option prices. So how can we make money with these models?
In general there are two basic ways to make money out of your option pricing models:

1. Sell side (market maker, risk neutral): You use these models to calculate your greeks to hedge your portfolio, so that you live on the spread.
2. Buy side (market/risk taker): You use your model to find mispriced options in the market and buy/sell accordingly.

(A third possibility would be to write fancy books and papers about these models and get rich and/or tenure this way ;-)
{ "source": [ "https://quant.stackexchange.com/questions/6988", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/4601/" ] }
7,276
What makes GARCH(1,1) so prevalent in modeling volatility, especially in academia? What does this model offer that makes it significantly better than the others?
First, GARCH models stochastic volatility, thus its use should be limited to estimating the volatility component. The difference between some of the volatility models lies in the assumptions made about the random variance process components. I believe GARCH has been popular because it is an extension of the ARCH family of models and it is relatively easy to set up and calibrate because it relies on past observations. Think of it this way: if you had to pinpoint your PhD dissertation topic, would you take the risk of delving into deriving a new model, risking that you utterly fail and get nowhere over your x years of research, or are you more likely to work on extensions or improvements of what currently exists? The same applies here: GARCH is an extension of ARCH, and there are numerous extensions of GARCH as well, such as GARCH-M, IGARCH, NGARCH...

I disagree with cdcaveman that it is the best model out there, because it suffers from major deficiencies. Every model makes assumptions, but there are better models out there for sure, which is why I do not know of too many volatility traders that rely primarily on GARCH models in their quest to forecast volatility.

Deficiencies:

- It depends heavily on past variances.
- The definition of "long-term variance" is at best arbitrary, making the assumption that the randomness originates from a normal distribution.
- The weights are just a result of optimization (MLE or other optimizers) over past data and make up the bulk of the calibration process. Volatility dynamics are changing, in the same way most other inputs to asset prices are dynamic, thus the assumption that an optimization of past variances - which results in the weights that make up the bulk of the current variance estimate - will yield anything that produces excess returns is a horrible assumption, imho.
- Though most multivariate models can get complex quickly, multivariate GARCH can be tricky in regards to specifying the covariances (VECH or BEKK come to mind). (Credit to Bob Jansen for pointing out this aspect of GARCH.)

Volatility models that originate from trading desks, and that are rarely to be found in academic papers or the public domain, often:

- do not make a normal distribution assumption about the variance dynamics,
- heavily incorporate regime shifts,
- rarely rely on functions of a linear nature,
- incorporate correlation structures with other asset classes and even non-price-return-related inputs.

In summary, it's a neat model to output something to show off within minutes. Whether the results are usable is an entirely different question, and again I do not know of too many pure index vol traders who embrace GARCH.

Edit: A look at the SABR model (or dynamic SABR) might be beneficial when searching for better models, though the "backbone" dynamics of the SABR model are more applicable for some derivatives than others.
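For reference, the model under discussion is just the variance recursion $\sigma^2_t = \omega + \alpha \epsilon^2_{t-1} + \beta \sigma^2_{t-1}$. A bare-bones sketch (mine; a real calibration would maximise the likelihood over $(\omega,\alpha,\beta)$ rather than use the fixed, hypothetical values below):

```python
import numpy as np

def garch11_filter(returns, omega, alpha, beta):
    """Run the GARCH(1,1) recursion sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1}."""
    r = np.asarray(returns, dtype=float)
    eps = r - r.mean()
    sigma2 = np.empty(len(r))
    sigma2[0] = eps.var()                              # initialise at the unconditional variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# hypothetical parameters; the implied long-run variance is omega / (1 - alpha - beta)
r = np.random.default_rng(5).normal(0.0, 0.01, 1000)
vol = np.sqrt(garch11_filter(r, omega=1e-6, alpha=0.08, beta=0.90))
```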
{ "source": [ "https://quant.stackexchange.com/questions/7276", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/4781/" ] }