7,568
Is there any known solution (preferably open source) to map between ticker symbols, Reuters and Bloomberg symbols? For example: Ticker: AAPL Reuters: AAPL.O (may be prefixed with RSF.ANY. depending upon infrastructure) Bloomberg: AAPL US Equity Edit: by mapping I mean translating from one symbol naming convention to another. For example, let's say we have RSF.ANY.AAPL.O and want to get the Bloomberg equivalent, which is "AAPL US Equity". Edit2: Fixed the Bloomberg mapping, it should be "AAPL US Equity" not "AAPL:US"
Here are some pointers. First of all: What you list as a Reuters RIC, RSF.ANY.AAPL.OQ, is not really a RIC; only the AAPL.OQ is. The initial part is some stuff which is essentially site-specific and tells me that you are working on a site that has a legacy RTIC infrastructure (some Reuters/TIBCO technology which is quite old these days and for all practical purposes has been deprecated in favour of other distribution mechanisms, most notably the ADS). OK, the AAPL.OQ is the RIC, and only that. The initial part, the RSF.ANY, denotes the feed, and that is because the Reuters Market Data System (an in-house ticker plant) is vendor agnostic and can have any feed on it, for example Bloomberg. So the initial part might just as well be BB.ANY in order to denote the site's Bloomberg feed... and then the latter part would of course be a Bloomberg symbol, not a RIC. But we are getting ahead of ourselves here and mixing technology implementation with that of instrument naming schemes, most notably the RIC and the BSYM. With regard to how Reuters RICs are constructed you can read this guide. This document also exists on the Thomson Reuters web site but it doesn't seem to be available without registration. I just found the public link by Googling. There may be a newer version of this document but I doubt it makes much of a difference. RICs have been constructed the same way for ages. As for the Bloomberg Symbology (BSYM) you can find more information on this link. Bloomberg has multiple identifiers to identify the same thing. Only the BBGID (Bloomberg Global ID) doesn't change with name changes, i.e. it is constant over time. The downside is that it is totally meaningless. Another way to access a data item is to use a combination of the Ticker, Market and Pricing source with spaces between them. Both Reuters and Bloomberg try to use the exchange's ticker symbol as part of their naming standard whenever possible. Unfortunately some exchanges use incomprehensible ticker symbols but that is not really the fault of Reuters or Bloomberg. Bloomberg has made their symbology available under a very liberal "open source" license. Don't be fooled though. He who defines the universe owns it. Without an impartial body to allocate the symbols it is really not worth much, IMHO. The difficulty in naming financial instruments lies not so much with exchange-traded instruments like equities. That is rather simple: take the exchange's own ticker symbol and then add some self-invented identifier to denote the market place. That's how both Reuters and Bloomberg do it. Nope, my friend, the difficulty (and the real lock-in) is with all the OTC instruments.
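For the simple exchange-traded case described above (exchange ticker plus a market identifier), a translation layer can be little more than a lookup table. The following Python sketch is purely illustrative: the suffix table, function name and coverage are my own assumptions, not an existing open-source mapping, and real usage would need a far larger, carefully maintained table.

# Hypothetical, hand-maintained table: RIC exchange suffix -> Bloomberg market code
RIC_SUFFIX_TO_BBG_MARKET = {
    "O": "US",    # NASDAQ (older style)
    "OQ": "US",   # NASDAQ
    "N": "US",    # NYSE
    "L": "LN",    # London Stock Exchange
    "DE": "GY",   # Xetra
}

def ric_to_bloomberg(symbol):
    # Drop any feed prefix such as "RSF.ANY." and keep the last two components
    ticker, suffix = symbol.split(".")[-2:]
    return "{} {} Equity".format(ticker, RIC_SUFFIX_TO_BBG_MARKET[suffix])

print(ric_to_bloomberg("RSF.ANY.AAPL.OQ"))  # AAPL US Equity
print(ric_to_bloomberg("AAPL.O"))           # AAPL US Equity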
{ "source": [ "https://quant.stackexchange.com/questions/7568", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/3308/" ] }
7,604
Volatility skew tells us that options with the same maturity at different strikes can have different implied vol. However, can a corresponding call and put for the same strike and maturity have different implied vol?
Taking away all frictions and incompleteness of the market, the theory says that European calls and puts do have the same implied volatility, otherwise there is an arbitrage opportunity, by put-call parity $$ C(t,K) - P(t,K) = DF_t(F_t - K)\ . $$ If you plug the Black-Scholes formula here for the prices of the call and the put, you will see that the equality holds if and only if the volatilities are equal. $$ DF_t[F_t(\Phi(d_+^{Call}) + \Phi(-d_+^{Put})) - K(\Phi(d_-^{Call}) + \Phi(-d_-^{Put}))] = DF_t(F_t-K) $$ Since $\Phi(x)+\Phi(-x)=1$, put-call parity holds if and only if $d_\pm^{Call} = d_\pm^{Put}$, so if and only if $\sigma_{Call}(t,K) = \sigma_{Put}(t,K)$. In practice there are bid-ask spreads and liquidity issues which imply that observable prices of European options do not necessarily align with the theory. For American options (the standard options traded on equity stocks) we can still think in terms of implied volatility, but there is no such thing as put-call parity, so implied volatilities are not necessarily equal anymore. There are some put-call-parity-style inequalities but those are not strong enough to guarantee the equality of volatilities.
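To make the argument concrete, here is a small numerical check (a sketch under standard Black-Scholes assumptions; the parameter values and helper names are my own, not part of the answer): pricing a European call and put off the same volatility reproduces put-call parity, and backing implied volatilities out of those two prices returns the same number.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

S, K, T, r, sigma = 100.0, 110.0, 1.0, 0.02, 0.25
DF, F = np.exp(-r * T), S * np.exp(r * T)   # discount factor and forward

def bs_price(vol, cp):  # cp = +1 for a call, -1 for a put
    d1 = (np.log(F / K) + 0.5 * vol**2 * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return DF * cp * (F * norm.cdf(cp * d1) - K * norm.cdf(cp * d2))

C, P = bs_price(sigma, +1), bs_price(sigma, -1)
print(np.isclose(C - P, DF * (F - K)))         # parity holds with a common volatility

def implied_vol(price, cp):
    return brentq(lambda v: bs_price(v, cp) - price, 1e-6, 5.0)

print(implied_vol(C, +1), implied_vol(P, -1))  # identical implied volatilities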
{ "source": [ "https://quant.stackexchange.com/questions/7604", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/4729/" ] }
7,761
We all know that if you invert the Black-Scholes option pricing model you can derive what the option is "implying" about the underlying's future expected volatility. Is there a simple, closed-form formula for deriving Implied Volatility (IV)? If so, could you direct me to the equation? Or can IV only be solved for numerically?
Brenner and Subrahmanyam (1988) provided a closed-form estimate of IV for an at-the-money option; you can use it as the initial estimate: $$ \sigma \approx \sqrt{\frac{2\pi}{T}} \cdot \frac{C}{S} $$
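As a quick illustration of how good this starting guess is (a sketch with made-up parameters, not from the original answer), one can compare it with a numerically inverted implied volatility for an at-the-money call:

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

S = K = 100.0
T, r, true_sigma = 0.5, 0.0, 0.30

def bs_call(vol):
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - vol * np.sqrt(T))

C = bs_call(true_sigma)
approx = np.sqrt(2 * np.pi / T) * C / S               # Brenner-Subrahmanyam estimate
exact = brentq(lambda v: bs_call(v) - C, 1e-6, 5.0)   # numerical inversion
print(approx, exact)   # approx is very close to 0.30, hence a good initial guess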
{ "source": [ "https://quant.stackexchange.com/questions/7761", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/4729/" ] }
8,247
This question has puzzled me for a while. We all know geometric Brownian motions have drifts $\mu$: $dS / S = \mu dt + \sigma dW$ and different stocks have different drifts $\mu$. Why would the drifts go away in Black-Scholes? Intuitively, everything else being equal, if a stock has a higher drift, shouldn't it have a higher probability of finishing in-the-money (and a higher probability of having a higher payoff), and shouldn't the call option therefore be worth more? Is there an intuitive and easy-to-understand answer? Thanks.
Here are a couple of pointers that may make it clearer: The drift can be replaced by the risk-free rate through a mathematical construct called risk-neutral probability pricing. Why can we get away with that without introducing errors? The reason lies in the ability to set up a hedge portfolio, thus the market will not compensate us for the drift above and beyond the risk-free rate under risk-neutral probability pricing. As long as such a hedge exists and a couple of other conditions are met (please look up Girsanov's Theorem) we can introduce a risk-neutral measure so that, when applying it to the differential equation and through application of Ito calculus, the drift term vanishes, which greatly simplifies the underlying math. It is not easy to come up with a way to reliably measure drift, as it is not well known and is hard to estimate in reality. Similarly, real probabilities are different for every underlying asset and are difficult to estimate because investors require different risk premiums for each and every asset. Hence, being able to take the detour through risk-neutral pricing greatly simplifies the derivation not only mathematically but also intuitively. Please note that the volatility of the risk-neutral paths and of the paths under real-world probabilities are identical; what differs is the drift term vs the risk-free rate. Keep in mind that not all derivatives can be priced through risk-neutral pricing; in fact I venture to estimate that less than 20% of all derivatives traded by investment banks can be priced through risk-neutral pricing (in terms of number of different instruments, not trading volume). In order to understand the technical reason for the elimination of drift through risk-neutral pricing you need to walk through the mathematics, and one of the cleanest and most intuitive approaches I have seen is in Steven Shreve's book, Stochastic Calculus for Finance II, pages 218-220 (2004 edition)
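A quick Monte Carlo sketch of the point above (toy parameters and names are mine, not from the answer): the arbitrage-free option value is obtained by simulating with the risk-free rate as the drift, and naively plugging the real-world drift mu into the same calculation does not give the option's price.

import numpy as np
from scipy.stats import norm

S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.03, 0.20
Z = np.random.default_rng(0).standard_normal(1_000_000)

def discounted_mean_payoff(drift):
    ST = S0 * np.exp((drift - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0).mean()

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

print(discounted_mean_payoff(r), bs)   # drift = r reproduces the Black-Scholes price
print(discounted_mean_payoff(0.10))    # drift = mu gives a different, non-arbitrage-free number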
{ "source": [ "https://quant.stackexchange.com/questions/8247", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2576/" ] }
8,274
In the world of finance, risk-neutral pricing allows us to estimate the fair value of derivatives using the risk-free rate as the expected return of the underlyings. However, the behavior of financial assets in the real world might be substantially different from the evolution used in a risk-neutral context. For instance, if I want to estimate the real-world probability of an equity asset reaching certain thresholds, which models and calibration techniques could be used? In particular, some questions that may arise in the estimation of real-world probabilities are: Calibration: Should real-world probabilities be calibrated to current market prices or, alternatively, should historical data be used for this type of estimation? No-arbitrage conditions: Could they be relaxed, or do they still play a role in the assessment of real-world probabilities? Expected returns: Assuming that I have already estimated the expected return of an asset $\mu$, how accurate would a real-world estimation be that combines a widely used evolution model (e.g. geometric Brownian motion) with the use of $\mu$ instead of the risk-free rate $r$? Per comments, I understand that in order to estimate real-world probabilities: I should use expected returns instead of the risk-free rate. The asset evolution should still respect the no-arbitrage conditions (i.e. the real-world dynamics should still reproduce the current prices of vanilla options). However, if we just use $\mu$ instead of $r$, the underlying asset behavior might not be consistent with the observed option prices. For instance, if we just replace $r$ with $\mu$ (with $\mu>r$) the underlying asset dynamics will lead to call prices above their current market prices, and put prices below their market prices. Therefore, in addition to using expected returns, what other adjustments might be needed in order to estimate real-world probabilities? Any papers or references regarding real-world estimation will be greatly appreciated.
The risk-neutral measure $\mathbb{Q}$ is a mathematical construct which stems from the law of one price, also known as the principle of no riskless arbitrage and which you may already have heard of in the following terms: "there is no free lunch in financial markets". This law is at the heart of securities' relative valuation, see this very nice paper by Emmanuel Derman ("Metaphors, Models & Theories", 2011) and some part of this discussion. In what follows, assume for the sake of simplicity the existence of a risk-free asset; deterministic and constant rates, with risk-free rate $r$; no dividends and no additional equity funding costs. How to relate $\mathbb{Q}$ to $\mathbb{P}$: some useful concepts. The risk-neutral measure $\mathbb{Q}$ is a probability measure which is equivalent to $\mathbb{P}$ and under which the prices of assets (I should rather say the prices of self-financing portfolios composed of marketed securities, to be perfectly rigorous), discounted at the risk-free rate, turn out to be martingales. If one assumes there is no free lunch in the real world (hence under $\mathbb{P}$), then the above definition (more specifically the "equivalent" part) suggests that there will be no free lunch under $\mathbb{Q}$ either. To convince yourself have a look at the accepted answer to this SE question. This answers your question concerning no-arbitrage conditions. The martingale property is convenient since it allows us to represent asset prices as expectations conditional on the information we currently have, which seems intuitive and natural. Indeed, from the definition, if $X_t$ is a $\mathbb{Q}$-martingale then $$ X_0 = E^{\mathbb{Q}}[X_t \vert \mathcal{F}_0] $$ The adjective risk-neutral comes from the fact that, using a replication argument (static for linear contracts, dynamic for most of the others) and under the assumption of no free lunch (+ market completeness, continuous trading, no frictions), one can show that the true performance of the stock simply disappears from the option valuation problem. Risk aversion thus disappears and only the risk-free rate $r$ remains. This is exactly what Black-Scholes-Merton showed, and what earned them the Nobel prize in the first place, see below. A simple example: the Black-Scholes model. Assume that the stock price $S_t$ follows a GBM under $\mathbb{P}$ $$ \frac{dS_t}{S_t} = \mu dt + \sigma dW_t^{\mathbb{P}}\ \ \ (1) $$ where $\mu$ is the expected performance of the stock and $\sigma$ the annualised volatility of log-returns. This equation describes the dynamics of the stock in the real world. Consider the pricing (we are still under $\mathbb{P}$, in the real world) of a contingent claim $V_t = V(t,S_t)$ of which the only thing we know is that it pays out $\phi(S_T)$ to its holder when $t=T$ (generic European option). 
Now, consider the following self-financing portfolio: $$\Pi_t = V_t - \alpha S_t$$ Using Itô's lemma along with the self-financing property yields: \begin{align} d\Pi_t &= dV_t - \alpha dS_t \\ &= \left( \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S_t^2 \frac{\partial^2 V}{\partial S^2}\right) dt + \left( \frac{\partial V}{\partial S} - \alpha \right) dS_t\\ \end{align} The original argument of Black-Scholes-Merton is then that, if we can dynamically rebalance the portfolio $\Pi_t$ so that the number of shares held is continuously adjusted to be equal to $\alpha = \frac{\partial V}{\partial S}$, then the portfolio $\Pi_t$ would drift at a deterministic rate which, by absence of arbitrage opportunity, should match the risk-free rate. Writing this as $d\Pi_t = \Pi_t r dt$ and remembering that we've picked $\alpha = \frac{\partial V}{\partial S}$ to reach this conclusion, we have \begin{align} &d\Pi_t = \Pi_t r dt \\ \iff& \left( \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S_t^2 \frac{\partial^2 V}{\partial S^2} \right) dt = \left( V_t - \frac{\partial V}{\partial S} S_t \right) r dt \\ \iff& \frac{\partial V}{\partial t}(t,S) + r S \frac{\partial V}{\partial S}(t,S) + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}(t,S) - rV(t,S) = 0 \end{align} which is the famous Black-Scholes pricing equation. Now, the Feynman-Kac theorem tells us that the solution to the above PDE can be computed as: $$ V_0 = E^\mathbb{Q}[ e^{-rT} \phi(S_T) \vert \mathcal{F}_0 ] $$ where under a certain measure $\mathbb{Q}$ $$ \frac{dS_t}{S_t} = r dt + \sigma dW_t^{\mathbb{Q}} $$ which shows that $$\frac{V_t}{B_t} \text{ and } \frac{S_t}{B_t} \text{ are } \mathbb{Q}\text{-martingales}$$ with $B_t = e^{rt}$ representing the value of the risk-free asset we mentioned in the introduction. Notice how $\mu$ has completely disappeared from the pricing equation. Because this Feynman-Kac formula very much resembles a magical trick, let us zoom in on the change of measure from a more mathematical perspective (the above was indeed the financial argument... at least for deriving the pricing equation, not for expressing its solution in martingale form). Starting from $(1)$, let us define the quantity $\lambda$ as the excess return over the risk-free rate of our stock, expressed in volatility units (i.e. its Sharpe ratio): $$ \lambda = \frac{\mu - r}{\sigma} $$ Plugging this into $(1)$ gives: $$ \frac{dS_t}{S_t} = r dt + \sigma (dW_t^{\mathbb{P}} + \lambda dt) $$ Now Girsanov's theorem tells us that if we define the Radon-Nikodym derivative of the change of measure as $$ \left. \frac{d\mathbb{Q}}{d\mathbb{P}} \right\vert_{\mathcal{F}_t} = \mathcal{E}(-\lambda W_t^{\mathbb{P}}) $$ then the process $$ W_t^{\mathbb{Q}} := W_t^{\mathbb{P}} - \langle W^{\mathbb{P}}, -\lambda W^{\mathbb{P}} \rangle_t = W_t^{\mathbb{P}} + \lambda t $$ will emerge as a $\mathbb{Q}$-Brownian motion, hence we can write: $$ \frac{dS_t}{S_t} = r dt + \sigma dW_t^{\mathbb{Q}} $$ Okay, this might seem even more magic to you than earlier, but there is a rigorous mathematical treatment behind it, don't worry. Anyway, an interesting feature of writing and manipulating the Radon-Nikodym derivative is that one can eventually show that: $$V_0 = E^{\mathbb{Q}} \left[ \left. \frac{V_T}{B_T} \right\vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ \left.
\frac{V_T}{B_T} \mathcal{E}(-\lambda W_T^\mathbb{P}) \right\vert \mathcal{F}_0 \right]$$ where I have used Bayes' rule for conditional expectations, with $$ X := V_T/B_T,\ \ \ f := \frac{d\mathbb{Q}}{d\mathbb{P}} \vert \mathcal {F}_T = \mathcal{E}(-\lambda W_T^{\mathbb{P}}),\ \ \ E^\mathbb{P}[f \vert \mathcal{F}_0 ] = 1 $$ The above result is extremely interesting and can here be re-expressed as $$ V_0 = E^{\mathbb{Q}} \left[ e^{-rT} \phi(S_T) \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ e^{-\left(r+\frac{\lambda^2}{2}+\frac{\lambda}{T} W_T^{\mathbb{P}}\right)T} \phi(S_T) \vert \mathcal{F}_0 \right] $$ This shows that, under BS assumptions: The option price can be calculated as an expectation under $\mathbb{Q}$, in which case we discount cash flows at the risk-free rate. The option price can also be calculated as an expectation under $\mathbb{P}$, but this time we need to discount cash flows based on our risk aversion, which transpires through the market risk premium $\lambda$ (which depends on $\mu$). This answers your question: Therefore, in addition to using expected returns, what other adjustments might be needed in order to estimate real-world probabilities? You need to use a stochastic discount factor accounting for risk aversion, see above and further remarks below. Estimating real-world probabilities assuming BS: You have different possibilities here. The first idea which springs to mind is to calibrate your diffusion model to observed time series. When doing that, you hope to get an estimate for $\mu$ and $\sigma$ in the GBM case. Now given what we just said earlier, you must be very careful when pricing under $\mathbb{P}$: you cannot discount at the risk-free rate. Also, obtaining a statistically significant estimate for $\mu$ (and the latent equity risk premium) may not be as easy as it seems, see the discussion here. It's more complicated than that when you choose a model other than BS. The relationship: $$ V_0 = E^{\mathbb{Q}} \left[ \frac{V_T}{B_T} \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ \frac{V_T}{B_T} f \vert \mathcal{F}_0 \right] $$ with $$f = \left. \frac{d\mathbb{Q}}{d\mathbb{P}} \right\vert_{\mathcal{F}_T}$$ will hold (under mild technical conditions). Compared to the risk-free discount factor $$DF (0,T):=1/B_T $$ the quantity $$SDF (0,T):=f/B_T$$ is best known as a Stochastic Discount Factor (maybe you've already heard about SDF models, this is precisely that) and we can write, without loss of generality: $$ V_0 = E^{\mathbb{Q}} \left[ DF (0,T) V_T \vert \mathcal{F}_0 \right] = E^{\mathbb{P}} \left[ SDF (0,T) V_T \vert \mathcal{F}_0 \right] $$ The problem is that, depending on the model assumptions you use, you cannot always have a simple and/or unique form for $f$ (hence $SDF (0,T)$) as was the case in BS. This is notably the case for incomplete models (i.e. models that include jumps and/or stochastic volatility etc.). So now you understand why, when we need models to price options, we directly calibrate them under $\mathbb{Q}$ and not on time series observed under $\mathbb{P}$.
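A small Monte Carlo illustration of the last identity (a sketch under the Black-Scholes assumptions used above; the numbers and names are mine): discounting at the risk-free rate under Q and discounting with the stochastic discount factor under P give the same price up to simulation noise.

import numpy as np

S0, K, T, r, mu, sigma = 100.0, 100.0, 1.0, 0.02, 0.08, 0.25
lam = (mu - r) / sigma                                 # market price of risk
W = np.sqrt(T) * np.random.default_rng(1).standard_normal(1_000_000)

# Under Q: drift r, deterministic discount factor exp(-rT)
ST_Q = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W)
price_Q = np.exp(-r * T) * np.maximum(ST_Q - K, 0).mean()

# Under P: drift mu, stochastic discount factor f / B_T
ST_P = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W)
sdf = np.exp(-lam * W - 0.5 * lam**2 * T) * np.exp(-r * T)
price_P = (sdf * np.maximum(ST_P - K, 0)).mean()

print(price_Q, price_P)   # the two estimators agree up to Monte Carlo error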
{ "source": [ "https://quant.stackexchange.com/questions/8274", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/5208/" ] }
8,896
Apart from Zipline, are there any other Pythonic algorithmic trading libraries I can choose from? Especially for backtesting?
Aside from Zipline, there are a number of algorithmic trading libraries in various stages of development for Python. From the commercial side, RapidQuant looks very interesting though I haven't tried it yet. It's from some of the same developers that brought us the excellent Pandas data analysis library. I think Wes McKinney (Pandas's main author) is involved. From the open source side, you might check out ultra-finance. It aims to be a fully featured, event-driven backtesting system. Also check out PyAlgoTrade. It's coded to allow for distributed testing of strategies on Google's cloud infrastructure. It incorporates the open source TA-Lib technical analysis library. Finally, take a look at TradeProgrammer. It also uses the TA-Lib library. The package is free to use for backtesting, but its live trading version is commercial. Aside from that, I think that many proprietary traders build their own systems. There is definitely something to be said for using a tool you understand on that level.
{ "source": [ "https://quant.stackexchange.com/questions/8896", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/6054/" ] }
9,313
I come from a different field (machine learning/AI/data science), but aim to ask a philosophical question with the utmost respect: Why do quantitative financial analysts (analysts/traders/etc.) prefer (or at least seem to prefer) traditional statistical methods (traditional = frequentist/regression/normal correlation methods/time series analysis) over newer AI/machine learning methods? I've read a million models, but it seems biased? Background: I recently joined a 1B AUM (I know it's not a ton) asset management firm. I was asked to build a new model for a sector rotation strategy (basically predicting which S&P 500 sector would do the best over 6 months; I chose to use forward rolling 6-month returns) they employ, and my first inclination was to combine ARIMA (traditional) with random forest (feature selection) and a categorical (based on normal distribution standard deviation) gradient boosted classifier for ETFs in each sector. Not to be rude, but I beat the ValuLine timeliness for each sector. I used the above-mentioned returns as my indicator and pretty much threw everything at the wall for predictors initially (basically just combing FRED), then used randomForest to select features. I ended up combining EMA and percent change to create a pretty solid model that, like I said, beat ValuLine. I've read a lot of literature, and I haven't seen anyone do anything like this. Any help in terms of pointing me in the right direction for literature? Or any answers to the overarching idea of why there isn't more machine learning in equity markets (forgetting social/news analysis)? EDIT: For clarification, I'm really interested in long-term predictions (I think Shiller was right) based on macro predictors. Thanks. PS: I've been lurking for a while. Thanks for all the awesome questions, answers, and discussions.
Because of: (1) the (extreme) dominance of noise over signal; (2) the prevalence of non-repeating patterns (many of which we know are not going to repeat); (3) a pathetic sample size for cross-validation; (4) regime changes due to exogenous events, which are typically in the cross-validation window, making it even worse (GFC, financial integration, trade law changes, interest rate adjustments by central banks, some idiot in a bank hiding trades and losing 5 billion dollars, etc.); (5) it is well known that non-linear relationships are generally just artefacts of the in-sample dataset. There is also the following: many price changes are driven by news such as a plane crashing or a merger announcement. Are you trying to forecast news (!?) by getting your model to learn non-linear relationships on price data? It should be clear that, if American Airlines' price falls due to a terrorist hijacking, it is not going to be useful to have a random forest learn any patterns that result, since it will not repeat. Because of these factors many (econometricians and practitioners) will try to use a priori knowledge to select features and impose constraints on the model in an attempt to improve generalization. This is perceived as necessary by econometricians since the data is too thin, noisy and nonstationary (i.e., the above reasons). This is not to say that "machine learning" methods such as Lasso, NNG, Elastic Nets or Ridge can't be applied. They result in essentially linear models and you can impose whatever a priori constraints you want on them through the metaparameters in the loss function or by using a variant that preserves hierarchies when using indicator function interactions (Tibshirani 2013...). Edit: You will still need to select which features go into the algorithm (as a prior imposition) but you can use these to achieve slightly more sparsity than you would otherwise have and introduce some bias into your conditional expectation (or state probability if you're doing a multinomial categorical GLM) for an improvement in the variance of the sampling distribution. I am however open to random forests with the right a priori constraints in place. There are indeed hundreds of papers that use machine learning to forecast financial markets. Just google something silly like "fuzzy bayesian expert adaptive learners with PSO training S&P 500" and you will get a lesson in the file-drawer effect, publication bias and substandard research methodologies (e.g. selecting 3 of 50 algorithms and 2 of 50 indices and hoping it convinces people). However, the above is an optimist's view of the industry. From those I've spoken to at low-frequency funds, they are simply ignorant of machine learning and couldn't apply it because they lack the knowledge and skills. If they were actually interested in being true quants, who knows how much damage they could do with deep learning or something. If you want to do real machine learning in finance and actually do something that is meritocratic/skill-based/scientific instead of almost completely random and full of people who practice nonsense, go to an HFT firm (not that most people practice nonsense in low-frequency funds, just that many do, and this is something that is absolutely impossible to get away with in HFT). That said, I am continually and consistently underwhelmed when I hear of the research methods of low-frequency quant funds.
{ "source": [ "https://quant.stackexchange.com/questions/9313", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/6399/" ] }
10,401
It seems quants increasingly use econometric models at work. As someone who has sold his soul to probability theory and stochastic analysis I would like to catch up. What are the econometric tools a quant should be able to wield? As I see it, the answer will be highly dependent on where one works. Thus perhaps it would make sense to distinguish: buy side, sell side, fixed income, equity, risk management and model validation. Book suggestions that cover the necessary knowledge will be appreciated. Also, if someone feels like it, a list of topics (e.g. ARCH, GARCH etc.) would also be very helpful.
I can only talk about quantitative trading. As a rule of thumb, the lower frequency you work in, the more econometrics is important, whereas for a higher frequency, the more econometrics becomes useless . (I would still recommend a top econometrician for HFT since they have what it takes to succeed, it's just the models aren't out-of-the-box applicable.) But if I was interviewing someone who was educated in econometrics for a quantitative research position, I would hope for (given the relevance to financial time-series): I have tried to put in a legend, ^ is something you should learn later and ^^ is something you should learn after learning ^ . ^^ Kalman filters for dynamic linear models. GARCH (learn ARCH first). ARMA(p,q)/ARIMA(p,i,q)/AR(p)/MA(q). ACF/PACF. Econometric forecast evaluation (RMSE,MSE,MAE). Thorough OLS understanding. Assumptions and consequences of violation. ^ Regime switching and threshold models. Cointegration models such as VECM and Engle-Granger and basic $I(n)$ theory along with ADF/PP unit root testing. VARs. ^ Quantile regression. Basic knowledge of dimensionality reduction algorithms (the more the better, but I wouldn't have this as an expectation for an econometrics candidate). ^ Impulse response functions. ^ Monte carlo applications to construct sampling distributions and the idea of the bootstrap, along with general knowledge of at least one bootstrap estimator. A good knowledge of hypothesis testing, sampling distributions, population/sample concepts, lag length selection, consistency/power/bias, variance/bias tradeoff, maximum likelihood, PDF/CDF, qualitative knowledge of different distributions commonly used. A knowledge of why and how econometricians pre-process data, take differences, introduce variables and account for non-linearities with simple transforms on the individual features, interactions between features, ratios of features and indicator function breaks (either data determined or, usually more appropriately, determined a priori). Comovement not necessarily as a slope phenomenon; linear correlations (and its pitfalls), rank correlations, three-way relationship between correlation, linear regression slope and cointegrating vector, how to test for spillovers in a linear DGP, and more global and advanced dependence estimators (such as copula, wavelet, mutual information, IRFs through VECM/VAR, forecast error variance decompositions, among others). The difference between residual analysis and test set cross-validation , and how both relate to overfitting and model generalisation. I would not care about: Panel modelling. I would also like to see hopefully (most likely picked up from self-study): ^^ Wavelets (DWT/CWT/phase difference analysis/frequency-domain bivariate correlation) and STFT should be a part of an econometricians toolbox. ^^ Dynamic correlation estimators (DCC-GARCH, stochastic copulas) A knowledge of generalization theory picked up from machine learning lectures. ^^ Methods like NNG to get better OLS estimates. Boosting and bagging linear DGPs for better generalisation. ^^ Perpendicular regression and LAD estimators when least squares is not appropriate given some assumption violation, if the conditional expectation is not wanted (conditional median is theoretically desirable), or if you don't want to inadvertently do least-rectangles upon a misspecification of the causal relationship, or you want the loss to be less skewed by outliers. 
Here is some voluntary stuff that either I have seen some top guys working on in industry or in an econometrics paper, and I would be very impressed to see knowledge in these areas: Stochastic optimal control (a large quantitative global macro fund is doing work on this) Bayesian time-series (a reputable, large systematic fund had some research on this) I would like to see knowledge of how to come up with a DGP and figure out how to estimate it with numerical methods. As an example, how to embed exogenous variables in the forcing equation in Patton's symmetrized Joe-Clayton copula, then figure out how to optimize the density numerically and bootstrap unbiased and consistent standard errors. Another would be to derive a Kalman estimator to extract time-varying yield curve parameters (curvature, slope, etc). Everyone is estimating simple MGARCH and VECM models since you can just plug the data into R , so it is doubtful there is alpha here. Probably there is some alpha for the guys that can estimate parsimonious models that others simply can not because they are not in the top 1% of econometricians. Here is some stuff that's probably not needed in low frequency quantitative research: Advanced optimisation theory. GAs, stochastic gradient descient and Newton's are all you will be expected to know. Non-linear machine learning. Non-linear dimensionality reduction or manifold learning. All you are expected to know is PCA, ICA and the concept of the time-series factor model. Digital signal processing not related to comovement estimators. There is one thing from another field that may be required: Ornstein-Uhlenbeck SDEs for a pairs trading fund. You'll notice I've listed almost all the mainstream stuff that's applicable to time-series. So most of what you'll get in a financial time-series course is what would be the expectation I think. Note that I did not list high frequency econometrics models, since I think they are not useful in high frequency finance. If you are going for such a position you will be interviewed by computer scientists and electrical engineers who will more likely ask you a question about asymptotic time complexity than about econometrics.
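As a minimal illustration of two of the items above (ADF unit-root testing and Engle-Granger cointegration), here is a statsmodels sketch on simulated data; the data-generating process is made up purely for demonstration.

import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(42)
n = 1000
x = np.cumsum(rng.standard_normal(n))        # a random walk, i.e. I(1)
y = 0.8 * x + rng.standard_normal(n)         # cointegrated with x by construction

adf_stat, adf_pvalue, *_ = adfuller(x)
print("ADF p-value for x:", round(adf_pvalue, 3))        # large: cannot reject a unit root

coint_stat, coint_pvalue, _ = coint(y, x)
print("Engle-Granger p-value:", round(coint_pvalue, 3))  # small: evidence of cointegration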
{ "source": [ "https://quant.stackexchange.com/questions/10401", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/7279/" ] }
14,102
In most textbooks Ito's lemma is derived (on different levels of technicality depending on the intended audience) and then only the classic examples of geometric Brownian motion and the Black-Scholes equation are given. My question: I am looking for references where lots of worked examples of applying Ito's lemma are given in an easy-to-follow, step-by-step fashion. Also more advanced cases should be covered.
These are all examples of Itô's formula in its general form (with quadratic variations):
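One standard worked example of this kind (my own write-up, added for illustration, not part of the original list) is Itô's formula applied to $f(S_t) = \log S_t$, where $S_t$ is a geometric Brownian motion $dS_t = \mu S_t dt + \sigma S_t dW_t$. With $f'(x) = 1/x$ and $f''(x) = -1/x^2$, Itô's formula gives $$ d\log S_t = \frac{1}{S_t} dS_t - \frac{1}{2}\frac{1}{S_t^2} d\langle S \rangle_t = \left(\mu - \frac{1}{2}\sigma^2\right) dt + \sigma dW_t, $$ and integrating from $0$ to $T$ yields the familiar closed form $$ S_T = S_0 \exp\left(\left(\mu - \frac{1}{2}\sigma^2\right) T + \sigma W_T\right). $$ The same mechanics applied to $f(W_t) = W_t^2$ give $d(W_t^2) = 2 W_t dW_t + dt$, with the quadratic variation term appearing explicitly.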
{ "source": [ "https://quant.stackexchange.com/questions/14102", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
14,125
I'm working on a client memo explaining several approaches to equity hedging, and I'm looking for a not-too-technical term for a hedging strategy where I try to keep options near the money, as to have a quickly reacting hedge, expensive, but drastically reduced drawdown (hopefully). Of course, the opposite would simply be called tail-risk hedge, but what if I tighten the moneyness? I was thinking about core hedge, near-the-money hedge, continuously adjusted hedge, ... Obviously this is just a marketing buzzword, but is there some established word that will be understood by most? Also, clients are not too technical, so it may well be a fuzzy word, or not 100% accurate.
{ "source": [ "https://quant.stackexchange.com/questions/14125", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/1976/" ] }
14,567
What is the so-called Swap Curve, and how does it relate to the Zero Curve (or spot yield curve)? Does it only refer to a curve of swap rates versus maturities found in the market? Or is it a swap equivalent of a spot-yield curve constructed from bootstrapping a bond yield curve? The context of this question is set against a backdrop of a plethora of terminology (that seems to be used interchangeably). I am looking into how the so-called Zero Curve (or spot yield curve) is constructed in order to discount various IR derivatives (including swaps) when pricing them.
Garabedian, Typically, the "swap curve" refers to an x-y chart of par swap rates plotted against their time to maturity. This is typically called the "par swap curve." Your second question, "how it relates to the zero curve," is very complex in the post-crisis world. I think it's helpful to start the discussion with a government bond yield curve to clarify some concepts and terminologies. Consider the US Treasury market, using the outstanding Treasury notes and bonds (nearly 300 of them...), we can either use bootstrapping or more sophisticated spline models to construct a "fitted curve." Since this yield curve represents bonds of identical credit risks (basically risk-free), the zero coupon curve, the discount curve, the forward curve, and the par yield curve are just different representations of the same thing and can be translated very easily from each other. For simplicity, I'll assume annual compounding: If you know the zero coupon rate $r_t$ for time $t$, then the discount factor is $1 / (1 + r_t)^t$. If you know the 1-year zero coupon rate $r_1$ and 2-year zero coupon rate $r_2$, then you can compute the 1-year forward 1-year rate from $(1 + r_1)(1+f_{1,1})=(1+r_2)^2$. You can also compute the 2-year par rate, just solve for $c$ from $$ \frac{c}{(1 + r_1)} + \frac{100 + c}{(1+r_2)^2} = 100. $$ Now let's return to the swap market. To be concrete, let's consider a 2-year USD par swap. This instrument has four fixed leg payment, and eight floating payment. The par swap rate is the fixed-leg interest rate that sets the present value of all the cash flows to 0. In other words, we'd solve for the $c$ in: $$ \sum_{i=1}^4 c \Delta_i d(T_i) = \sum_{j=1}^8 l_j \delta_j d(t_j), $$ where $d(t)$ is the discount factor for time $t$, $\Delta_i$ and $\delta_i$ are year fractions, and $l_j$'s are the 3M Libor forward rates. Before the financial crisis, it is assumed that the discount curve and the forward curve are both based on Libor. This simplifies things a lot – just build a Libor forward curve so that it reproduces libors, futures rates, and par swap rates, and you're done. In this framework, all the translations (from zero curve to par curve to forward curve, etc.) above are still valid. Unfortunately, the idea that Libor was the appropriate funding rate was completely invalidated during the crisis. In recent years, a common practice is to use the "OIS discounting"-based "multi-curve" approach. In the equation above, the $l_i$'s are still based on the 3M Libor forward curve, but the $d(t)$'s should be discount factors fitted to overnight indexed swaps. Simply put, when you are building a swap curve, you now need to simultaneously calibrate both the OIS discount curve AND and Libor discount curve... Under this new paradigm, the simple translation that we used for government bonds above no longer works, since multiple curves are involved. But it gets worse... since 1M Libor and 3M Libor have different credit risks, you can't even do something like $(1 + \text{Libor}_{\rm 1M}/12)(1 + \text{Libor}_\text{1 month forward 2 month} / 2) = 1 + \text{Libor}_{\rm 3M} / 4$! Instead, you need to build separate 1M and 3M Libor forward curves to account for the tenor basis... As you can see, building a swap curve nowadays is a pretty involved task. What we now refer to as "a" swap curve is actually a collection of curves (OIS curve, 1m libor, 3m libor, 6m libor, etc.) bundled together... There are numerous literature you can find on this topic just by googling "multi-curve". 
For example, http://developers.opengamma.com/quantitative-research/Multiple-Curve-Construction-OpenGamma.pdf
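To connect the par-rate equations above to an actual bootstrap, here is a deliberately simplified single-curve sketch (annual fixed payments, no OIS/multi-curve or tenor-basis effects, and made-up par rates) showing how par swap rates pin down discount factors and zero rates:

# Toy single-curve bootstrap: par condition c * sum_{i<=T} d(i) + d(T) = 1
par_rates = {1: 0.020, 2: 0.023, 3: 0.025, 4: 0.027, 5: 0.028}  # maturity (years) -> par swap rate

discount = {}
annuity = 0.0
for T in sorted(par_rates):
    c = par_rates[T]
    discount[T] = (1.0 - c * annuity) / (1.0 + c)   # solve the par condition for d(T)
    annuity += discount[T]

zero_rates = {T: discount[T] ** (-1.0 / T) - 1.0 for T in discount}  # annually compounded
print(discount)
print(zero_rates)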
{ "source": [ "https://quant.stackexchange.com/questions/14567", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/6782/" ] }
17,125
Factor models such as Fama-French or the other ones that are partially summarized here work on the cross-section of asset returns. How are the factors built, how are sensitivities/coefficients estimated? In this context Fama-MacBeth regressions are usually mentioned. How does this method work intuitively? Could anyone give a step-by-step manual? EDIT: Links to papers and manuals have been posted in the two answers - this is great. But can someone provide more intuition in the answer? Say we have a universe of stocks (say MSCI Europe) and we group them by value and size. How can we proceed? How do we construct the factors and how do we construct the sensitivities? Could someone please give a more direct explanation, without a link? thanks!
1. Determine Factors Economically, the use of factor models can be either motivated using the ICAPM or the APT . Although there are some theoretical differences between the model, for empirical and practical work these differences are irrelevant. In the end, both models stipulate that returns and expected returns are linear functions of the factors: $$ r_{i,t} = \alpha_i + \sum_j \beta_{i,j} F_{j,t} + \epsilon_{i,t} \quad (1)$$ $$ \mathbb{E}[ r_{i,t}] = \lambda_o + \sum_j \beta_{i,j} \lambda_j \quad\quad\quad(2)$$ where $F_{j,t}$ is the factor surprise of factor $j$ at time $t$ and $\lambda_j $ is the factor risk premium of factor $j$. What the factors are is fundamentally undetermined. Following the ICAPM, the factors should be proxies for future marginal consumption growth (=state variables). Whatever factor you use, there should be an economic reason why returns should be related to the factor. For some of the steps later, it makes a difference whether the factors are traded returns or some other factor (such as macroeconomic variables). Factors based on returns are usually derived as the return on a particular portfolio or the difference between two portfolios. Best known examples for the first group are the macro factors used by Chen, Roll, and Ross ( 1986 ) and for the later group the Fama and French factors ( 1992 , 1993 , 1996 , 2014 ). It makes the statistical estimation somewhat easier when the factors are returns (I’ll explain this point later) 2. Collect Data The next step is always the data collection, both for the factors and the test assets. Sometimes, when the factors are macroeconomic time series (or something similar) their predictable component is removed so that the factors are only the factor surprises. In principle, only the unexpected component should explain the cross-sectional differences in returns. When factors are constructed as portfolio returns, a key question is the rebalancing frequency. Most papers that I am aware of follow the example of Fama and French and form the portfolios in the middle of the year (1st of July) and then keep the portfolio constituents the same for a year (a well-known counterexample is the momentum factor of Carhart ( 1997 ) who uses monthly rebalancing). When factors are constructed as the difference in returns between top and bottom portfolios according to some ranking the question arises at which quantiles to split the assets. Common are splits at the median, 30/70 quantiles, or 10/90 quantiles. 3. Estimate regressions The final step is to estimate the regressions to see if the factors are able to explain the cross-section of returns. There are two principle approaches to this, sometimes called time-series regression and cross-sectional regressions (I have also heard people refer to the first procedure as the Fama-French method and the second one as the Fama-MacBeth method). a) Time-Series Regression When all factors are returns, you can use time-series regressions for each test asset to estimate the regression slopes $\beta_{i,j}$. In this case, you estimate model (1). You will obtain a beta for each factor and test asset. The reason you can use time-series regressions in this case is that the factor premia $\lambda_j$ can simply be estimated as the time-series mean of the factor returns. If you use excess returns as dependent variables in the regression, the factor model has one implications: all $\alpha_i$ should be zero. 
Testing this depends a bit on your assumptions about the temporal and cross-sectional correlation in the error terms. In any case, you will have to resort to some form of F-Test (adjusted for autocorrelation, heteroscedasticity, general errors etc.) as you are testing multiple hypotheses. The book by Cochrane ( 2001 ) derives these in detail using a GMM approach (chapters 12 and 13). b) Cross-Sectional Regressions For general factors, you will need to run cross-sectional regressions by estimating equation (2). A key problem here is that both the $\beta$ coefficients and the prices of risk $\lambda$ are not directly observable. The usual way around is to follow the procedure laid out by Fama and MacBeth ( 1973 ): You first run time-series regressions separately for each test asset. This will give you estimates for each $\beta$ for each asset. These estimates are then used in the cross-sectional regression as independent variables using the average returns for each asset as dependent variable. The coefficients being estimated in this regression are the factor risk premia $\lambda$. Again, the prediction of a factor model is that the pricing errors $\lambda_0$ are zero for each asset. In the case of cross-sectional regressions this is a single parameter for which the nullhypothesis that it is zero in the population can be tested. This procedure is usually repeated using a rolling window; with monthly data usually 5 years of data. The real “meat” of the Fama-MacBeth method is the statistical theory of how to account in the standard errors of the cross-sectional regressions for the fact that the $\beta$’s are estimated coefficients from a time series regression and cross-sectional correlation. Again, I would refer to Cochrane’s ( 2001 ) book in Chapter 12 for details on the test statistics. 4. Evaluate results After evaluating whether the pricing errors are small (test that $\alpha_i=0$ for all i), the next question is to test whether the factors chosen in step 1 are “good factors”. This means that they should exhibit a strong relationship to expected returns. The cross-sectional and time-series approaches give slightly different methods to test if a factor is priced. For both methods (time-series and cross-sectional regressions) one should test if the factors are actually priced in the cross-section. For time-series regressions, the factor risk premia are estimated as the time-series average of the factor returns. Standard statistical tests can be used to test if these are positive. For cross-sectional regressions, the factor risk premia are the coefficients of the regressions which can also be tested. In both cases, one should be careful about the standard errors used (autocorrelation in the time series approach, cross-sectional dependencies). A question that often comes up is which approach is “better”. First, time-series regressions can only be used when the factors are returns. In case the factors are returns, the two approaches are not necessarily equivalent. The time-series regression estimates the factor premium as the average return. Therefore, any factor receives a zero pricing error in the sample. This is equivalent to forcing the intercept in the cross-sectional to zero. In order to make the two methods equivalent, you will have to include the factor as a test asset as well. If you do this, then using the correct standard errors will produce the same estimates for the prices of risk.
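For intuition, here is a bare-bones sketch of the two-pass procedure on simulated data (one factor, full-sample betas, no rolling windows and no Shanken or Newey-West corrections, which are exactly the statistical "meat" referred to above); all names and numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_months, n_assets = 240, 25
factor = rng.normal(0.005, 0.04, n_months)                   # factor realisations F_t
true_betas = rng.uniform(0.5, 1.5, n_assets)
returns = 0.002 + np.outer(factor, true_betas) + rng.normal(0, 0.02, (n_months, n_assets))

# Pass 1: time-series regression per asset to estimate each beta_i
X = np.column_stack([np.ones(n_months), factor])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]        # slope coefficient per asset

# Pass 2: cross-sectional regression of returns on betas, month by month,
# then average the estimated lambdas over time (the Fama-MacBeth estimator)
Z = np.column_stack([np.ones(n_assets), betas])
lambdas = np.linalg.lstsq(Z, returns.T, rcond=None)[0]       # shape (2, n_months)
lambda0, lambda1 = lambdas.mean(axis=1)
print("pricing error lambda_0:", round(lambda0, 4), "factor premium lambda_1:", round(lambda1, 4))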
{ "source": [ "https://quant.stackexchange.com/questions/17125", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2587/" ] }
17,129
I'd like to fit a non-stationary time series using a SARIMA + GARCH model. I have not found any package that allows me to fit this model. I'm using rugarch:

model = ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(2, 2), include.mean = T),
  distribution.model = "sstd")
modelfit = ugarchfit(spec = model, data = y)

but it only allows me to fit an ARMA + GARCH model. Can you help me?
{ "source": [ "https://quant.stackexchange.com/questions/17129", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15752/" ] }
17,859
In Dupire's local volatility model, the volatility is a deterministic function of the underlying price and time, chosen to match observed European option prices. To be more specific, given a smooth surface $(K,T)\mapsto C(K,T)$, where $K$ is the strike and $T$ is time to maturity, the Dupire equation implies that there exists a unique continuous function $\sigma_{loc}$ defined by $$\sigma_{loc}^{2}(K,T)=\frac{\partial_{T}C(K,T)+rK\partial_{K}C(K,T)}{\frac{1}{2}K^{2}\partial_{KK}C(K,T)}$$ for all $(K,T)\in(0,\infty)\times(0,\infty)$ such that the solution to the stochastic differential equation $dS_{t}/S_{t}=rdt+\sigma(t,S_{t})dW_{t}$ exactly generates the European call option prices. What do the dynamics of the local volatility mean? Are the dynamics equivalent to the volatility surface? Why are the dynamics of the local volatility model highly unrealistic?
A general model (with continuous paths) can be written $$ \frac{dS_t}{S_t} = r_t dt + \sigma_t dW_t^S $$ where the short rate $r_t$ and spot volatility $\sigma_t$ are stochastic processes. In the Black-Scholes model both $r$ and $\sigma$ are deterministic functions of time (even constant in the original model). This produces a flat smile for any expiry $T$, and we have a closed-form formula for option prices $$ C(t,S;T,K) = BS(S,T-t,K;\Sigma(T,K)) $$ where $BS$ is the BS formula and $\Sigma(T,K) = \sqrt{\frac{1}{T-t}\int_t^T \sigma(s)^2 ds}$. This is not consistent with the smile observed on the market. In order to match market prices, one needs to use a different volatility for each expiry and strike. This is the implied volatility surface $(T,K) \mapsto \Sigma(T,K)$. In the local volatility model, rates are deterministic, instant volatility is stochastic, but there is only one source of randomness $$ \frac{dS_t}{S_t} = r(t) dt + \sigma_{Dup}(t,S_t) dW_t^S $$ this is a special case of the general model with $$ d\sigma_t = \left(\partial_t \sigma_{Dup}(t,S_t) + r(t)S_t\partial_S\sigma_{Dup}(t,S_t) + \frac{1}{2}\sigma_{Dup}(t,S_t)^2 S_t^2\partial_S^2\sigma_{Dup}(t,S_t)\right) dt + \sigma_{Dup}(t,S_t)S_t\partial_S\sigma_{Dup}(t,S_t) dW_t^S $$ What is appealing about this model is that the function $\sigma_{Dup}$ can be perfectly calibrated to match all market vanilla prices (and quite easily too). The problem is that, while correlated to the spot, statistical studies show that the volatility also has its own source of randomness independent of that of the spot. Mathematically, this means the instant correlation between the spot and vol is not 1, contrary to what happens in the local volatility model. This can be seen in several ways: The forward smile. Forward implied volatility is implied from prices of forward start options: ignoring interest rates, $$ C(t,S;T\to T+\theta,K) := E^Q[(\frac{S_{T+\theta}}{S_{T}}-K)_+] =: C_{BS}(S=1,\theta,K;\Sigma(t,S;T\to T+\theta,K)) $$ Alternatively, it is sometimes defined as the expectation of implied volatility at a forward date. In an LV model, as the maturity $T$ increases but $\theta$ is kept constant, the forward smile gets flatter and higher. This is not what we observe in the markets, where the forward smile tends to be similar to the current smile. This is because the initial smile you calibrate the model to has decreasing skew: $$ \partial_K \Sigma(0,S;T,K) \xrightarrow[T\to +\infty]{} 0 $$ Smile rolling. In an LV model, the smile tends to move in the opposite direction of the spot and to get higher independently of the direction of the spot. This is not consistent with what is observed on markets. See Hagan et al., Managing Smile Risk, for the derivation. This means that $\partial_S \Sigma_{LV}(t,S;T,K)$ often has the wrong sign, so your delta will be wrong, which can lead to a higher hedging error than using BS. Barrier options. In FX markets, barrier options like Double No Touch are liquid, but an LV model calibrated to vanilla prices does not reproduce these prices. This is a consequence of the previous point. The LV model is a static model. Its whole dynamic comes from the volatility surface at time 0. But the vol surface has a dynamic that is richer than that. There are alternatives using multiple factors like SV models, LSV models (parametric local vol like SABR or fully non-parametric local vol), models of the joint dynamic of the spot and vol surface etc... 
but the LV model remains the default model in many cases due to its simplicity, its ability to calibrate the initial smile perfectly and its numerical efficiency.
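(Added illustration, not part of the original answer.) For readers who want to see the Dupire formula from the question at work numerically, here is a minimal Python/NumPy sketch; the flat-vol Black-Scholes test surface and all parameter values are purely illustrative assumptions, chosen only because the resulting local volatility should come back flat:

import numpy as np
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    # plain Black-Scholes call, used only to manufacture a smooth test surface
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def dupire_local_vol(C, K, T, r=0.0):
    # C: call prices on a (maturity x strike) grid, K and T increasing
    dC_dT = np.gradient(C, T, axis=0)              # partial_T C
    dC_dK = np.gradient(C, K, axis=1)              # partial_K C
    d2C_dK2 = np.gradient(dC_dK, K, axis=1)        # partial_KK C
    local_var = (dC_dT + r * K[None, :] * dC_dK) / (0.5 * K[None, :]**2 * d2C_dK2)
    return np.sqrt(np.maximum(local_var, 0.0))     # clip noise-induced negatives

S0, r, sigma = 100.0, 0.02, 0.20
K = np.linspace(60.0, 140.0, 81)
T = np.linspace(0.1, 2.0, 39)
C = bs_call(S0, K[None, :], T[:, None], r, sigma)
lv = dupire_local_vol(C, K, T, r)
print(lv[10:12, 38:43])   # interior values should sit close to 0.20

On real market data the second derivative in the denominator is very noisy, which is why practitioners usually work with a smoothed implied-volatility parameterisation rather than raw prices.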
{ "source": [ "https://quant.stackexchange.com/questions/17859", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/16126/" ] }
17,870
I know the title sounds a little extreme but I wonder whether R is phased out by a lot of quant desks at sell side banks as well as hedge funds in favor of Python. I get the impression that with improvements in Pandas, Numpy and other Python packages functionality in Python is drastically improving in order to meaningfully mine data and model time series. I have also seen quite impressive implementations through Python to parallelize code and fan out computations to several servers/machines. I know some packages in R are capable of that too but I just sense that the current momentum favors Python. I need to make a decision regarding architecture of a subset of my modeling framework myself and need some input what the current sentiment is by other quants. I also have to admit that my initial reservations regarding performance via Python are mostly outdated because some of the packages make heavy use of C implementations under the hood and I have seen implementations that clearly outperform even efficiently written, compiled OOP language code. Can you please comment on what you are using? I am not asking for opinions whether you think one is better or worse for below tasks but specifically why you use R or Python and whether you even place them in the same category to accomplish, among others, the following tasks: acquire, store, maintain, read, clean time series perform basic statistics on time series, advanced statistical models such as multivariate regression analyses,... performing mathematical computations (fourier transforms, PDE solver, PCA, ...) visualization of data (static and dynamic) pricing derivatives (application of pricing models such as interest rate models) interconnectivity (with Excel, servers, UI, ...) (Added Jan 2016): Ability to design, implement, and train deep learning networks. EDIT I thought the following link might add more value though its slightly dated [2013] (for some obscure reason that discussion was also closed...): https://softwareengineering.stackexchange.com/questions/181342/r-vs-python-for-data-analysis You can also search for several posts on the r-bloggers website that address computational efficiency between R and Python packages. As was addressed in some of the answers, one aspect is data pruning, the preparation and setup of input data. Another part of the equation is the computational efficiency when actually performing statistical and mathematical computations. Update (Jan 2016) I wanted to provide an update to this question now that AI/Deep Learning networks are very actively pursued at banks and hedge funds. I have spent a good amount of time on delving into deep learning and performed experiments and worked with libraries such as Theano, Torch, and Caffe. What stood out from my own work and conversations with others was that a lot of those libraries are used via Python and that most of the researchers in this space do not use R in this particular field. Now, this still constitutes a small part of quant work being performed in financial services but I still wanted to point it out as it directly touches on the question I asked. I added this aspect of quant research to reflect current trends.
My deal is HFT so what I care about is:
1. read/load data from file or DB quickly in memory
2. perform very efficient data-munging operations (group, transform)
3. visualize easily the data
I think it is pretty clear that 3. goes to R: graphics and ggplot2 and others allow you to plot anything from scratch with little effort. About 1. and 2., I am amazed reading the previous posts to see that people are advocating for Python based on pandas and that no one cites data.table. data.table is a fantastic package that allows blazing-fast grouping/transforming of tables with tens of millions of rows. From this bench you can see that data.table is multiple times faster than pandas and much more stable (pandas tends to crash on massive tables). Example:
R) library(data.table)
R) DT = data.table(x=rnorm(2e7),y=rnorm(2e7),z=sample(letters,2e7,replace=T))
R) tables()
     NAME       NROW NCOL  MB COLS  KEY
[1,] DT   20,000,000    3 458 x,y,z
Total: 458MB
R) system.time(DT[,.(sum(x),mean(y)),.(z)])
   user  system elapsed
  0.226   0.037   0.264
R) setkey(DT,z)
R) system.time(DT[,.(sum(x),mean(y)),.(z)])
   user  system elapsed
  0.118   0.022   0.140
Then there is raw speed: as I work in HFT, neither R nor Python can be used in production. But the Rcpp package allows you to write efficient C++ code and integrate it into R trivially (literally adding 2 lines). I doubt R is fading, given the number of new packages created every day and the momentum the language has...
EDIT 2018-07
A few years later I am amazed by how the R ecosystem has evolved. For in-memory computation you get unmatched tools, from fst for blazing-fast binary read/write to fork or cluster parallelism in one-liners. C++ integration is incredibly easy with Rcpp. You get interactive graphics with the classics like plotly, and crazy features like ggplotly (which just makes your ggplot2 interactive). Having tried Python with pandas, I honestly do not understand how there could even be a match: the syntax is clunky and the performance is poor; I must be too used to R, I guess. Another thing that is really missing in Python is literate programming: nothing comes close to rmarkdown (the best I could find in Python was Jupyter, but that does not even come close). With all the fuss surrounding the R vs Python language war, I realize that the vast majority of people are simply uninformed: they do not know what data.table is, that it has nothing to do with a data.frame, and they do not know that R fully supports tensorflow and keras.... To conclude, I think both tools can do everything, and it seems that the Python language has very good PR...
{ "source": [ "https://quant.stackexchange.com/questions/17870", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2114/" ] }
18,094
I need to calculate a time-dynamic Maximum Drawdown in Python. The problem is that, e.g., ( df.CLOSE_SPX.max() - df.CLOSE_SPX.min() ) / df.CLOSE_SPX.max() can't work, since these functions use all the data and do not, e.g., consider the minimum only from a given maximum onwards on the timeline. Does anyone know how to implement that in Python? This is a short example of the dataframe used:
            CLOSE_SPX  Close_iBoxx  A_Returns  B_Returns    A_Vola    B_Vola
2014-05-15    1870.85     234.3017  -0.009362   0.003412  0.170535  0.075468
2014-05-16    1877.86     234.0216   0.003747  -0.001195  0.170153  0.075378
2014-05-19    1885.08     233.7717   0.003845  -0.001068  0.170059  0.075384
2014-05-20    1872.83     234.2596  -0.006498   0.002087  0.170135  0.075410
2014-05-21    1888.03     233.9101   0.008116  -0.001492  0.169560  0.075326
2014-05-22    1892.49     233.5429   0.002362  -0.001570  0.169370  0.075341
2014-05-23    1900.53     233.8605   0.004248   0.001360  0.168716  0.075333
2014-05-27    1911.91     234.0368   0.005988   0.000754  0.168797  0.075294
2014-05-28    1909.78     235.4454  -0.001114   0.006019  0.168805  0.075474
2014-05-29    1920.03     235.1813   0.005367  -0.001122  0.168866  0.075451
2014-05-30    1923.57     235.2161   0.001844   0.000148  0.168844  0.075430
2014-06-02    1924.97     233.8868   0.000728  -0.005651  0.168528  0.075641
2014-06-03    1924.24     232.9049  -0.000379  -0.004198  0.167852  0.075267
You can get this using a pandas rolling_max to find the past maximum in a window to calculate the current day's drawdown, then use a rolling_min to determine the maximum drawdown that has been experienced. Let's say we wanted the moving 1-year (252 trading day) maximum drawdown experienced by a particular symbol. The following should do the trick:
import pandas as pd
import pandas_datareader as web
import matplotlib.pyplot as pp
import datetime

# Get SPY data for past several years
SPY_Dat = web.DataReader('SPY', 'yahoo', datetime.date(2007,1,1))

# We are going to use a trailing 252 trading day window
window = 252

# Calculate the max drawdown in the past window days for each day in the series.
# Use min_periods=1 if you want to let the first 252 days data have an expanding window
Roll_Max = SPY_Dat['Adj Close'].rolling(window, min_periods=1).max()
Daily_Drawdown = SPY_Dat['Adj Close']/Roll_Max - 1.0

# Next we calculate the minimum (negative) daily drawdown in that window.
# Again, use min_periods=1 if you want to allow the expanding window
Max_Daily_Drawdown = Daily_Drawdown.rolling(window, min_periods=1).min()

# Plot the results
Daily_Drawdown.plot()
Max_Daily_Drawdown.plot()
pp.show()
This yields a plot in which blue is the daily running 252-day drawdown and green is the maximum 252-day drawdown experienced in the past year. Note: with the newest versions of pandas the standalone pd.rolling_max / pd.rolling_min functions have been deprecated, which is why the snippet above calls the .rolling(...).max() / .min() methods on the Series instead.
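(Small addition, not from the original answer.) If you only want the single worst peak-to-trough drawdown over the whole sample rather than a rolling window, a cumulative maximum does it in two lines, reusing the same SPY_Dat frame as above:

running_max = SPY_Dat['Adj Close'].cummax()
drawdown = SPY_Dat['Adj Close'] / running_max - 1.0
print(drawdown.min(), drawdown.idxmin())   # worst drawdown and the date of its trough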
{ "source": [ "https://quant.stackexchange.com/questions/18094", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/16404/" ] }
18,441
Back in the 90's, Goldman Sachs (publicly?) released a series called "Quantitative Strategies Research Notes" — mostly technical papers on topic. Emanuel Derman co-authored almost all of them. Some of them are available online: Regimes of Volatility Trading and Hedging Local Volatility How to Value and Hedge Options on Foreign Indexes Strike-Adjusted Spread The Local Volatility Surface But 15 more papers are apparently missing. Did anyone see them published, perhaps, as a book? Or just a comprehensive collection? References This is the latest list of the publications that I took from a late 1999 paper: Understanding Guaranteed Exchange-Rate Contracts In Foreign Stock Investments. Emanuel Derman, Piotr Karasinski and Jeffrey Wecker Valuing and Hedging Outperformance Options. Emanuel Derman Pay-On-Exercise Options. Emanuel Derman and Iraj Kani The Ins and Outs of Barrier Options. Emanuel Derman and Iraj Kani The Volatility Smile and Its Implied Tree. Emanuel Derman and Iraj Kani Static Options Replication. Emanuel Derman, Deniz Ergener and Iraj Kani Enhanced Numerical Methods for Options with Barriers. Emanuel Derman, Iraj Kani, Deniz Ergener and Indrajit Bardhan The Local Volatility Surface: Unlocking the Information in Index Option Prices. Emanuel Derman, Iraj Kani and Joseph Z. Zou Implied Trinomial Trees of the Volatility Smile. Emanuel Derman, Iraj Kani and Neil Chriss Model Risk. Emanuel Derman, Trading and Hedging Local Volatility. Iraj Kani, Emanuel Derman and Michael Kamal Investing in Volatility. Emanuel Derman, Michael Kamal, Iraj Kani, John McClure, Cyrus Pirasteh and Joseph Zou Is the Volatility Skew Fair? Emanuel Derman, Michael Kamal, Iraj Kani and Joseph Zou Stochastic Implied Trees: Arbitrage Pricing with Stochastic Term and Strike Structure of Volatility. Emanuel Derman and Iraj Kani The Patterns of Change in Implied Index Volatilities. Michael Kamal and Emanuel Derman Predicting the Response of Implied Volatility to Large Index Moves: An October 1997 S&P Case Study. Emanuel Derman and Joe Zou How to Value and Hedge Options on Foreign Indexes. Kresimir Demeterfi Regimes of Volatility: Some Observations on the Variation of S&P 500 Implied Volatilities. Emanuel Derman
Many of them are on my website at emanuelderman.com . Others I probably have anyway. Feel free to email me
{ "source": [ "https://quant.stackexchange.com/questions/18441", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/11701/" ] }
25,942
There is a big body of literature on econometric models like ARIMA , ARIMAX or VAR . Yet to the best of my knowledge practically nobody is making use of that in Quantitative Finance. Yes, there is a paper here and there and sometimes you find an example where stock prices are being used for illustrative purposes but this is far from the main stream. My question Is there a good reason for that? Is it just because of tradition and different schools of thought or is there a good technical explanation? (By the way I was pleased to find an arima tag here... but this is again a case in point: only 8 out of nearly 7,000 questions (~ 0.1% !) use this tag! ...ok, make this 9 now ;-)
It's an interesting question. I particularly agree with the $\mathbb{Q}-\mathbb{P}$ dichotomy mentioned by many. I would add to the other answers that, come to think of it, the Black-Scholes postulated Geometric Brownian Motion could be interpreted as an AR(1) process on the logarithm of the stock price as you discretise the SDE from which it is a solution, which is exactly what you do when running Monte-Carlo simulations (same thing for the Ornstein-Uhlenbeck process as explained here and noted by @Richard). Actually, when taking the continuous-time limit, many more econometric models can be shown to correspond to stochastic processes frequently used by $\Bbb{Q}$ quants (see this paper for instance and the comment of @Kiwiakos below and discussed here with interesting references). So why do we, at least on the sell-side, tend to favour (jump-)diffusion models over econometric models, while the latter have the advantage that volatility/variance is an observable quantity and not a hidden variable, making them easier to calibrate on historical time series, that is, information observed under $\mathbb{P}$ ? Well... essentially because derivatives pricing happens under a risk-neutral measure $\mathbb{Q}$ and not the physical measure $\mathbb{P}$. When working under $\mathbb{Q}$, we do relative valuation . Voluntarily over-simplifying the situation, we appeal to the absence of arbitrage opportunity to claim that any financial instrument can be priced solely by looking at the prices of other securities (typically listed options) that can be combined to perfectly replicate the former instrument's behaviour (or used as a perfect hedge, which is equivalent) . Therefore, it is not important to have a model which can be easily calibrated to historical time series, hence under $\mathbb{P}$ (which is the key feature of econometric models IMHO) but essential to have a model that leads to nice closed form formulas for the price of simple instruments that could be used as a relative pricing basis under $\mathbb{Q}$ (which is the key feature of most jump-diffusion models used by quants IMHO). Consider the GARCH pricing model proposed by Duan for instance. True, it is easily calibrable to historical time series, but: Is the past really useful to understand what will happen in the future, which is the crux of derivatives pricing? Not necessarily, especially since we are in a relative valuation framework: it is the evolution of the market prices at which we can trade the elementary replication blocks that matters, not the historical behaviour of the underlying asset . You need Monte-Carlo simulations to compute European option prices under this model: think about how much computational resources would be needed to calibrate such a model to 1000 vanilla prices several times a day in a live production environment (especially compared to something like Heston were Fast Fourier Transform techniques can be implemented). Summarising (again, with a voluntary over-simplification) Econometric models : easily calibrated under $\mathbb{P}$ (discrete time + observable volatility/variance), yet need simulation methods à la Monte Carlo to be able to compute option prices, even for the most basic types of options. (Jump-)Diffusion models : painful to calibrate to time series (continuous time + hidden Markov models) but admittedly lead to (semi-)closed form formulas for many benchmark instruments (or at least the popular models are the ones that do...), making them easy to calibrate/use under $\mathbb{Q}$.
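(Illustration added, not part of the original answer.) The first remark, that discretising these diffusions gives familiar econometric models, is easy to check by simulation. The sketch below uses the exact discretisation of an Ornstein-Uhlenbeck process, which is literally an AR(1); for geometric Brownian motion the log-price is the corresponding unit-root (random walk with drift) case. All parameter values are arbitrary:

import numpy as np

np.random.seed(0)
kappa, theta, sigma, dt, n = 2.0, 0.0, 0.3, 1.0 / 252, 100_000

# exact discretisation of dX = kappa*(theta - X) dt + sigma dW
phi = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1.0 - phi**2) / (2.0 * kappa))
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta + phi * (x[t - 1] - theta) + sd * np.random.randn()

# an OLS fit of x_t on x_{t-1} (an AR(1) regression) should recover phi
slope = np.polyfit(x[:-1], x[1:], 1)[0]
print(slope, phi)   # the two numbers should be very close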
{ "source": [ "https://quant.stackexchange.com/questions/25942", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/12/" ] }
29,504
Let $$X_t = \int_0^t W_s \,\mathrm d s$$ where $W_s$ is our usual Brownian motion. My questions are the following: Expectation? Variance? Is it a martingale? Is it an Ito process or a Riemann integral? Any reference for practicing tricky problems like this? For the expectation, I know it's zero via Fubini. We can put the expectation inside the integral. Now, for the variance and the martingale questions, do we have any tricks? Thanks!
This type of integral has appeared so many times and in so many places; for example, here , here and here . Basically, for each sample $\omega$ , we can treat $\int_0^t W_s ds$ as a Riemann integral. Moreover, note that \begin{align*} d(tW_t) = W_t dt + tdW_t. \end{align*} Therefore, \begin{align*} \int_0^t W_s ds &= tW_t -\int_0^t sdW_s \tag{1}\\ &= \int_0^t (t-s)dW_s, \end{align*} which can also be treated as a (parametrized) Ito integral. Then, it is easy to see that \begin{align*} E\left(\int_0^t W_s ds\right) = 0, \end{align*} and that \begin{align*} \operatorname{Var}\left(\int_0^t W_s ds\right) &= \int_0^t(t-s)^2 ds\\ &=\frac{1}{3}t^3. \end{align*} Regarding the martingality, note that, from $(1)$ , \begin{align*} \int_0^{t_2} W_s ds -\int_0^{t_1} W_s ds &=t_2W_{t_2}-t_1W_{t_1} + \int_{t_1}^{t_2}sdW_s\\ &=t_2(W_{t_2}-W_{t_1}) + (t_2-t_1) W_{t_1} + \int_{t_1}^{t_2}sdW_s\\ &=(t_2-t_1) W_{t_1} + \int_{t_1}^{t_2}(t_2+s)dW_s, \end{align*} for $t_2>t_1\ge 0$ . Therefore, \begin{align*} E\left(\int_0^{t_2} W_s ds\mid \mathscr{F}_{t_1} \right) &= \int_0^{t_1} W_s ds + (t_2-t_1) W_{t_1}. \end{align*} It is not a martingale. Another way to see this is based the equation \begin{align*} d\left(\int_0^t W_s ds\right) = W_t dt, \end{align*} which is not driftless. EDIT: One other approach for the martingality can proceed as follows. For $t_2>t_1 >0$ , \begin{align*} E\left(\int_0^{t_2} W_s ds \mid \mathscr{F}_{t_1}\right) &= \int_0^{t_1} W_s ds + E\left(\int_{t_1}^{t_2} W_s ds \mid \mathscr{F}_{t_1}\right)\\ &=\int_0^{t_1} W_s ds + \int_{t_1}^{t_2} E\left(W_s \mid \mathscr{F}_{t_1}\right) ds\\ &=\int_0^{t_1} W_s ds + \int_{t_1}^{t_2} E\left(W_s-W_{t_1}+ W_{t_1}\mid \mathscr{F}_{t_1}\right) ds\\ &= \int_0^{t_1} W_s ds + (t_2-t_1)W_{t_1}. \end{align*}
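(Monte-Carlo check added for illustration; the horizon t, the step count and the path count are arbitrary.) Simulating the integral confirms the two moments derived above:

import numpy as np

np.random.seed(0)
t, n_steps, n_paths = 2.0, 1_000, 20_000
dt = t / n_steps

dW = np.sqrt(dt) * np.random.randn(n_paths, n_steps)
W = np.cumsum(dW, axis=1)
I = W.sum(axis=1) * dt            # left-point Riemann sum for int_0^t W_s ds

print(I.mean())                   # close to 0
print(I.var(), t**3 / 3.0)        # close to t^3/3 = 8/3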
{ "source": [ "https://quant.stackexchange.com/questions/29504", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/22776/" ] }
29,572
My company is starting a new initiative aimed at building a financial database from scratch. We would be using it in these ways: Time series analysis of: a company's financial data (ex: IBM's total fixed assets over time), aggregations (ex: total fixed assets for the materials sector over time), etc. Single company snapshot: various data points of a single company Analysis of multiple companies across multiple data fields for a single time frame, usually the current day. Backtesting, rank analysis, data analysis, etc. of ideas and custom factors. Approximate breadth of data: 3000 companies 3500 data fields (ex: total fixed assets, earnings, etc.) 500 aggregation levels Periodicity: daily, monthly, quarterly, annual 20 year look-back that would grow over time Questions: What database should we choose? We are currently limited to free options and we prefer open source (on principle). Currently we use PostgreSQL. How should I structure this schema-wise? I am thinking of breaking up the field types into categories (balance sheet, descriptive, income statement, custom calculations, etc.) so each company would have a table for balance sheet, descriptive, income statement, custom calculations, etc. with each row representing one day and appropriate fields for the category of table for columns/fields. That will be my fully normalized database. Using the fully normalized database, I will then build a data warehouse, temp tables, views, etc. that are not fully normalized to make queries fast for the various use cases described previously. One issue with this approach is the number of tables. If I have, say, 5 categories of company data and 3000 companies I will have 15,000 tables in my fully normalized database for just storing the company data. But still, from my perspective, it seems like the best way to do it. What is the best strategy for indexing and structuring the time series portion of this? I've talked to a few people and I did some research on time series database indexing/structure, but help/references/tips/etc. in this area, even if they duplicate what I have found, would be helpful. I realize this depends on the answer to #1 above, so maybe assume I am staying with PostgreSQL and I will be building out the "time series" functionality specific bells and whistles myself. Notes: In-depth technical answers and references/links much preferred. This is for a small buy side financial investment firm. If you have been down this road before, suggestions outside of the scope of my initial question are welcome. We cannot compromise on the amount of data, so reducing the amount of data isn't an option for us; however, the numbers I supplied are estimates only. If there is a better place to ask this question, please let me know. There is much more to what we want to do, but this represents the core of what we want to do from a data structure perspective.
I am going to recommend something that I have no doubt will get people completely up in arms and probably get people to attack me. It happened in the past and I lost many points on StackOverflow as people downvoted my answer. I certainly hope people are more open-minded in the quant forum. Note - It seems that this suggestion has created some strong disagreement again. Before you read this I would like to point out that this suggestion is for a "small buy side firm" and not a massive multi-user system. I spent 7 years managing a high-frequency trading operation and our primary focus was building systems just like this. We spent a huge amount of time trying to figure out the most efficient way to store, retrieve and analyze order-level data from the NYSE, NASDAQ and a wide variety of ECNs. What I am giving you is the result of that work. Our answer was: Don't Use a Database. A basic structured file system of serialized data chunks works far better. Market time series data is unique in many ways, both in how it is used and how it is stored. Databases were developed for wildly different needs and actually hurt the performance of what you are trying to do. This is in the context of a small to mid-sized trading operation that is focused on data analysis related to trading strategies or risk analytics. If you are creating a solution for a large brokerage or bank, or have to meet the needs of a large number of simultaneous clients, then I imagine that your solution would differ from mine. I happen to love databases. I am using MongoDB right now for part of a new project allowing us to analyze options trades, but my market time series data, including 16 years of options data, is all built into a structured file store. Let me explain the reasoning behind this and why it is more performant. First, let's look at storing the data. Databases are designed to allow a system to do a wide variety of things with data: the basic CRUD functions of Create, Read, Update and Delete. To do these things effectively and safely, many checks and safety mechanisms must be implemented. Before you read data, the database needs to be sure the data isn't being modified, check for collisions, etc. When you do read the data in a database, the server puts a lot of effort into caching that data and determining if it can be served up faster later. There are indexing operations and replication of data to prepare it to be viewed in different ways. Database designers have put huge amounts of effort into designing these functions to be fast, but they all take processing time, and if they are not used they are just an impediment. Market time series data is stored in a completely different way. In fact, I would say it is prepared rather than stored. Each data item only needs to be written once and after that never needs to be modified or changed. Data items can be written sequentially; there is no need to insert anything in the middle. It needs no ACID functionality at all. The items have few to no references out to any other data. The time series is effectively its own thing. As a database does all the magic that makes databases wonderful, it also packs on the bytes. The minimum space data can take up is its own original size. Databases may be able to play some tricks with normalizing data and compression, but those only go so far and slow things down. The indexing, caching and referencing of the data end up packing on the bytes and chewing up storage. Reading is also very simplified. Finding data is as simple as time & symbol.
Complex indexing does it no good. Since time series data is typically read in a linear fashion, a sequential chunk at a time, caching strategies actually slow the access down instead of helping. It takes processor cycles to cache data you aren't going to read again anytime soon. These are the basic structures that worked for us. We created basic data structures for serializing the data. If your major concern is speed and data size, you can go with simple custom binary storage. In another answer, omencat suggested using TeaFiles and that looks like it has some promise also. Our recent need is for more flexibility, so we chose to use a fairly dense, but flexible, JSON format. We broke the data up into fairly obvious chunks. The EOD stock data is a very easy example, but the concept works for our larger datasets also. We use the data for analysis in fairly traditional time series scenarios. It could be referenced as one quote or as a series containing years of data at a time. It was important to break the data down into bite-sized chunks for storage, so we chose to make one "block" of our data equal to one year of EOD stock time series data. Each block is one file that contains a year of OHLC EOD data serialized as JSON. The name of the file is the stock symbol prefixed by an underscore. Note - the underscore prevents issues when the stock symbol conflicts with DOS device names such as COM or PRN. Note, make sure you understand the limitations of your file system. We got in trouble when we put too many files in one place. This led to a directory structure that is effectively its own index. It is broken down by the year of data and then also sorted by the first letter of the stock symbol. This gives us roughly 20 to a few hundred symbol files per directory. It looks roughly like this:
\StockEOD\{YYYY}\{Initial}\_symbol.json
AAPL data for 2015 would be
\StockEOD\2015\A\_AAPL.json
A small piece of its data file looks like this:
[{"dt":"2007-01-03T00:00:00","o":86.28,"h":86.58,"l":81.9,"c":83.8,"v":43674760},
 {"dt":"2007-01-04T00:00:00","o":84.17,"h":85.95,"l":83.82,"c":85.66,"v":29854074},
 {"dt":"2007-01-05T00:00:00","o":85.84,"h":86.2,"l":84.4,"c":85.05,"v":29631186},
 {"dt":"2007-01-08T00:00:00","o":85.98,"h":86.53,"l":85.28,"c":85.47,"v":28269652}
We have a router object that can give us a list of filenames for any data request in just a handful of lines. Each file is read with an async filestream and deserialized. Each quote is turned into an object and added to a sorted list in the system. At that point, we can do a very quick query to trim off the unneeded data. The data is now in memory and can be used in almost any way needed. If the query size gets too big for the computer to handle, it isn't difficult to chunk the process. It takes a massive request to get there. I have had programmers to whom I described this almost go into a rage, telling me how I was doing it wrong, that this was "rolling my own database" and a complete waste of time. In fact, we switched from a fairly sophisticated database. When we did, our codebase to handle this dropped to a small handful of classes and less than 1/4 of the code we used to manage the database solution. We also got nearly a 100x jump in speed. I can retrieve 7 years of stock end-of-day data for 20 symbols in a couple of milliseconds. Our old HF trading system used similar concepts, but in a highly optimized Linux environment, and operated in the nanosecond range.
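(Not from the original answer.) A stripped-down Python sketch of the layout described above may make it concrete; the directory root, field names and symbols are just placeholders:

import json
from pathlib import Path

ROOT = Path("StockEOD")

def block_path(symbol, year):
    # StockEOD/{YYYY}/{Initial}/_SYMBOL.json -- the directory tree is the index
    return ROOT / str(year) / symbol[0].upper() / f"_{symbol.upper()}.json"

def write_block(symbol, year, bars):
    # one block = one year of EOD bars for one symbol, written once and never updated
    path = block_path(symbol, year)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(bars))

def read_range(symbol, start_year, end_year):
    # the "router": turn a request into file names, then just read them in order
    out = []
    for year in range(start_year, end_year + 1):
        path = block_path(symbol, year)
        if path.exists():
            out.extend(json.loads(path.read_text()))
    return out

write_block("AAPL", 2007, [
    {"dt": "2007-01-03", "o": 86.28, "h": 86.58, "l": 81.90, "c": 83.80, "v": 43674760},
    {"dt": "2007-01-04", "o": 84.17, "h": 85.95, "l": 83.82, "c": 85.66, "v": 29854074},
])
print(len(read_range("AAPL", 2007, 2009)))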
{ "source": [ "https://quant.stackexchange.com/questions/29572", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/22557/" ] }
30,065
What is the formula for the forward price of a bond (assuming there are coupons in the interim period, and that the deal is collateralised)? Please also prove it with an arbitrage cashflow scenario analysis! I suppose it is like fwd = (spot - PV of coupons) × (1 + repo × T), but I am not certain at what rate to PV the coupons.
Amazingly, there are several different methods for computing bond forward price – the underlying ideas are the same (forward price = spot price - carry), but the computational details differ a bit based on market convention. Let's start with the basics. Assume between now ($t_0$) and the forward settlement date $t_2$, the bond makes a coupon payment at time $t_1$. Now consider the following series of trades: Today, a trader buys a bond at a price of $P + AI_0$ (spot clean price + spot accrued interest). To fund the purchase, the trader enters into a $t_1$-year term repo agreement at a repo rate of $r$. More specifically, he/she sells the repo by borrowing $P + AI_0$ and delivering the bond as collateral. At time $t_1$ (coupon payment date), the repo balance is $(P + AI_0)(1 + rt_1)$ and the trader receives a coupon payment of $c / 2$ for being the owner of the bond. The trader re-enters into another repo agreement that spans from $t_1$ to $t_2$ on a principal of $(P + AI_0)(1 + rt_1) - c/2$. This new loan, combined with the coupon payment of $c/2$, allows the trader to retire the old repo loan without putting up any additional capital. Finally, at time $t_2$, the trader gets back the bond and repays the repo loan along with interest from $t_1$ to $t_2$: $$ \left((P + AI_0)(1 + rt_1) - \frac{c}{2}\right) \bigl(1 + r(t_2-t_1)\bigr) . $$ These trades are economically no different from buying the bond forward at time $t_2$. Therefore, the forward clean price for settlement at $t_2$ must be $$ F(t_2) = (P + AI_0)(1 + rt_1)\bigl(1 + r(t_2-t_1)\bigr) - \frac{c}{2}\bigl(1 + r(t_2-t_1)\bigr) - AI_{t_2}. $$ The method above is known as the Compounded Method . In the US Treasury market (and most international bond markets), a small approximation is made. Recall for small $rt$, we have $$ (1 + rt_1)(1+r(t_2-t_1))\approx 1 + r(t_1+t_2-t_1) = 1 + rt_2, $$ we therefore have the Proceeds Method : $$ F(t_2) = (P + AI_0)(1 + rt_2) - \frac{c}{2}\bigl(1 + r(t_2-t_1)\bigr) - AI_{t_2}. $$ The Proceeds Method is for all intents and purposes the standard/default way of pricing bond forwards. There's also the "Simple" and "Scientific" methods, but these are rarely used.
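(Numerical illustration added, not part of the original answer; the inputs are made up, not market data.) Both conventions fit in a few lines of Python:

def bond_forward(clean_price, ai0, ai_fwd, coupon, repo, t1, t2):
    # forward clean price for settlement at t2, one coupon of coupon/2 paid at t1
    compounded = ((clean_price + ai0) * (1 + repo * t1) - coupon / 2) \
                 * (1 + repo * (t2 - t1)) - ai_fwd
    proceeds = (clean_price + ai0) * (1 + repo * t2) \
               - (coupon / 2) * (1 + repo * (t2 - t1)) - ai_fwd
    return compounded, proceeds

print(bond_forward(clean_price=101.0, ai0=0.8, ai_fwd=0.3,
                   coupon=4.0, repo=0.02, t1=0.25, t2=0.5))
# the two outputs differ only by the tiny repo-on-repo cross term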
{ "source": [ "https://quant.stackexchange.com/questions/30065", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/17348/" ] }
30,071
It is generally assumed that market prices follow random walks, implying market efficiency. However, one could find that some combinations of the "random walks" are cointegrated. Does this contradict market efficiency? How should it be interpreted?
{ "source": [ "https://quant.stackexchange.com/questions/30071", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/18670/" ] }
30,073
Where can I find the Credit default swap index that JPMorganChase bank puts out? I am able to find some indices for Europe but none for the US. thanks
{ "source": [ "https://quant.stackexchange.com/questions/30073", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/6889/" ] }
30,397
I have already figured out that Delta-hedging essentially turns European options into volatility products where you pay implied vol and get paid realized vol for long positions and you pay realized vol and get paid implied vol for short positions. I have also discovered that market-making in European options generally is conducted by using volatility models to project vol for some underlying asset and then quoting bid and ask prices using the pricing model subject to the projected vol and finally delta-hedging all option positions that get filled by the market. What is the effect of delta-hedging exotic options? Does it have the same effect of turning the exotic options into volatility-based payoff products as opposed to directional price movement payoff products? Do market-makers in exotic options delta-hedge their positions just like is done with vanilla products?
Consider reading Lorenzo Bergomi's excellent book -- or at least the first chapter available here for download -- as it will help you clarify things. Some remarks as to your original question: It is well known that, under a pure diffusion assumption, the total P&L of a delta hedged European option (i.e. an option whose payoff only depends on the value of the underlying asset at a future date $T$) over the horizon $[0,T]$ can be written as: $$ P\&L_{[0,T]} = \int_0^T \frac{1}{2} \underbrace{\Gamma(t,S_t,\sigma^2_{t,\text{impl.}})S_t^2}_{\text{Gamma dollar}}( \sigma^2_{t,\text{real.}} - \sigma^2_{t,\text{impl.}}) dt $$ As such, although a delta hedged European option portfolio is sensitive to the realised vs. implied volatility discrepancy, it is not a pure volatility trade: the Gamma dollar term introduces a dependency on the spot path. This Gamma dollar term should really be seen as some kind of volatility accumulator: only along paths where the Gamma dollar is non-zero will the discrepancy between realised and implied volatility matter. The previous relationship still holds for digital (or binary) options, since they are also European options after all. What you observe is thus simply a consequence of the fact that the Gamma map of a binary call is pretty different from that of a vanilla call (see below): indeed, it is zero over most of the time to maturity/spot domain. As such the contribution of realised vs. implied volatility discrepancies to the total P&L is completely different: only what happens (1) around the strike, (2) near the expiry matters. This is why you did not observe a strong correlation between the resulting P&L and the realised vol over the entire delta-hedging horizon. Now with respect to delta hedging exotic options, quoting from Lorenzo Bergomi's "chapter's digest" and adding some of my own remarks: Delta hedging removes the order-one contribution of $\delta S$ to the P&L of an option position. Although the expectation of the P&L of a delta hedged portfolio is zero (provided you correctly anticipate the volatility - or variance - of future returns), delta hedging is not adequate for reducing the standard deviation of the P&L [...] The sources of the dispersion of this P&L are: (a) the tails of returns, (b) the volatility of realized volatility and the correlation of future realized volatilities. Using options for gamma-hedging (...) In other words, using other vanilla options to locally cancel the gamma dollar term immunizes us against realized volatility (...) Dynamical trading of vanilla options, however, exposes us to uncertainty as to future levels of implied volatilities. Hence the need for stochastic volatility models for modelling the dynamics of implied volatility. Exotic options often depend in a complex way on the dynamics of implied volatilities. An exposure which is usually dealt with by trading vanilla options. See for instance the case of a (relative) forward-start at-the-money call in a homogeneous diffusion model. In that very specific case, we do not care about delta hedging anymore since it is only the dynamics of forward volatility that matters. One may want to neutralise such an exposure using calendar spreads.
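(Added sketch; the Gamma maps referred to above were shown as figures in the original answer and are not reproduced here.) A quick way to see the point is to compute the Gamma dollar of a vanilla call and of a cash-or-nothing (binary) call under Black-Scholes by finite differences; the parameters are illustrative only:

import numpy as np
from scipy.stats import norm

def bs_vanilla(S, K, T, r, sig):
    d1 = (np.log(S / K) + (r + 0.5 * sig**2) * T) / (sig * np.sqrt(T))
    d2 = d1 - sig * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def bs_digital(S, K, T, r, sig):
    # cash-or-nothing call paying 1 if S_T > K
    d2 = (np.log(S / K) + (r - 0.5 * sig**2) * T) / (sig * np.sqrt(T))
    return np.exp(-r * T) * norm.cdf(d2)

def gamma_dollar(pricer, S, K, T, r, sig, h=1e-2):
    gamma = (pricer(S + h, K, T, r, sig) - 2 * pricer(S, K, T, r, sig)
             + pricer(S - h, K, T, r, sig)) / h**2
    return gamma * S**2

K, r, sig = 100.0, 0.0, 0.2
S = np.linspace(60.0, 140.0, 17)
for T in (1.0, 0.1, 0.01):
    print(T, np.round(gamma_dollar(bs_vanilla, S, K, T, r, sig), 1))
    print(T, np.round(gamma_dollar(bs_digital, S, K, T, r, sig), 1))
# the vanilla Gamma dollar stays spread out over spot, while the binary's is essentially
# zero away from the strike, changes sign across it, and concentrates near expiry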
{ "source": [ "https://quant.stackexchange.com/questions/30397", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/24628/" ] }
34,101
Seem to be confused over the difference between PV01 of a bond and DV01 of the bond. PV01, also known as the basis point value (BPV), specifies how much the price of an instrument changes if the interest rate changes by 1 basis point (0.01%). DV01 is the dollar value of one basis point change in the instrument. Is my explanation correct?
They are both price changes in response to a 1 bp change. DV01 is valid for a single bond. It is the price change in response to a 1 bp change in yield of this instrument. It arises from the mathematical relationship between yield and price. PV01 is a more general concept for all fixed income securities , not just bonds but swaps, futures and options, MBS, and portfolios thereof. It is the price change in response to a 1 bp change in yields all along the yield curve (parallel shift in the yield curve). It presupposes an estimate of the yield curve and a mathematical relationship between the price of an instrument and this yield curve. For a single simple bond they are the same.
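(Tiny numerical illustration added; the bond and the yield are made up.) For a single bullet bond, bumping the yield by one basis point gives the DV01 directly:

def price(ytm, coupon=5.0, n=10, face=100.0):
    # annual-pay bullet bond with n years to maturity
    return sum(coupon / (1 + ytm)**t for t in range(1, n + 1)) + face / (1 + ytm)**n

dv01 = price(0.04) - price(0.0401)   # price drop for a 1 bp rise in yield
print(round(dv01, 4))                # roughly 0.085 per 100 face

For this simple case a parallel 1 bp shift of the whole curve moves the price by the same amount, which is the sense in which PV01 and DV01 coincide here.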
{ "source": [ "https://quant.stackexchange.com/questions/34101", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/5839/" ] }
34,111
Factor investing in equity markets is one of the hot topics of these days. Many manufacturers of investment products offer exposure to small cap, momentum, minvol, value and other pure factors or factor blends. Many of them beating the cap-weighted index at relatively low cost. I think of various reasons but I woud like to discuss: What are the reasons not everybody just invests in factor portfolios? What could be limitations, regulations, fears or other reasons not to invest in factor portfolios?
This question goes to whether the historical returns to factors represent: Spurious results, overfitting, data mining... Mispricing Unexploitable effects Compensation for risk Case 1: Spurious results etc... If someone constructs a "stock tickers that begin with AAP or GOO" factor, the highly above average returns would almost certainly reflect a fishing expedition (or conditioning on future information) and would not be reproducible going forward. Under a null of no above average returns, you're going to get portfolios that have above average historical returns with t-stats over 2. Beware. For something like the Fama-French factor $\mathit{HML}$, this seems far less likely since it has continued to hold decades and decades after it was initially discovered. And if a factor works in other markets and asset classes, it may also increase confidence that the effect is really there. For example, Asness et. al. (2013) find value and momentum effects across a broad variety of asset classes. Case 2: Mispricing If a factor reflects mispricing by investors, some kind of psychological bias or error, there's the possibility that investors wisen up and the higher returns vanish! For example, are we likely to see more egregious violations of the law of one price such as in the tech stock carveouts of Lamont and Thaler (2003)? Perhaps, but if investors get smarter, these types of anomalies should go away. Case 3: Unexploitable effects A related idea is that various anomalies can exist if they are unexploitable. There's a large literature on mispricing and short sale constraints. A common question for higher turnover strategies (eg. momentum) is to what extent trading costs eat into estimated returns. For example, Novy-Marx and Velikov (2016) examine transaction costs and anomalies. Another issue is what's the price you can actually trade at? In this blog post, Ernie Chan goes through some examples where a strategy appears to generate positive returns but actually doesn't! . Case 4: Compensation for risk If the factors represent compensation for risk, a risk that investors don't wish to hold, then there's a rational reason for the effect to continue. It doesn't violate economic laws of rationality for there to be positive insurance premiums and for the holders of unpleasant, aggregate risk to earn premiums for bearing that risk. The efficient market hypothesis of Eugene Fama does not imply that expected returns are constant. This brings up a question of whether clients are capable of bearing a risk? When will they want cash? For example, a number of university endowments tried to follow the David Swensen, Yale Model and try to earn premiums for holding highly illiquid investments. When the 2008 financial crisis hit, all types of these illiquid investments in private equity, venture capital etc... became even more illiquid and ceased to pay dividends. Some major university endowments ended up issuing large bonds to raise cash... If a university's plan in a financial crisis is to avoid cuts in programs by tapping an endowment, then investing a large portion of the endowment in illiquid securities may be problematic. If you go down to the level of tiny non-profits, their revenues (from contributions) can be highly correlated to the business cycle. Putting their cash in investments correlated with their revenue could put them at risk of disintegration in a crisis. Declining above average returns... 
McLean and Pontiff (2016) find that many asset pricing anomalies in the literature decline in estimated magnitude post discovery and post publication. Do their results reflect investors learning (and reduced mispricing)? That some prior asset pricing anomalies were spurious or overstated? That compensation for risk has gone down? Some combination of the three? A decline in the estimated magnitude of an effect in studies trying to replicate the results is a pervasive issue in science.
References
Asness, Clifford S., Tobias J. Moskowitz, and Lasse Heje Pedersen, 2013, "Value and Momentum Everywhere," Journal of Finance
Lamont, Owen A., and Richard H. Thaler, 2003, "Can the Market Add and Subtract? Mispricing in Tech Stock Carve-outs," Journal of Political Economy
McLean, David R., and Jeffrey Pontiff, 2016, "Does Academic Research Destroy Stock Return Predictability?" Journal of Finance
Novy-Marx, Robert, and Mihail Velikov, 2016, "A Taxonomy of Anomalies and Their Trading Costs," Review of Financial Studies
{ "source": [ "https://quant.stackexchange.com/questions/34111", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/2587/" ] }
34,662
The original paper by Markowitz from the '60s has ~20,000 citations (definitely popular). However several papers I came across show that a $\frac{1}{n}$ asset allocation gives higher Sharpe ratios (not sure about drawdowns) than mean-variance portfolio optimisation (or any subsequent derivative methods). What is the deal with all the attention portfolio optimisation (a more complex method) is getting if its utility seems low in practice?
Markowitz's concepts attracted a great deal of interest from theorists (and still do), but never had much application in practice. The results from practical application were always disappointing (starting in the 1970's, well before DeMiguel, Garlappi, and Uppal (2007) study of $\frac{1}{N}$ portfolios), mainly because it is so difficult to provide accurate estimates of all the parameters involved (especially the expected returns, which in truth no one knows). Nevertheless it was revolutionary because gave rise to other theories, such as CAPM, multi-factor models, Black-Litterman, etc. which eventually we hope will prove useful. Compare investment textbooks before and after Markowitz and you will see that everything changed at that time. Everyone working in (the asset management side of) QuantFinance today is an intellectual descendant of Markowitz, even though they don't use his method in their everyday work.
{ "source": [ "https://quant.stackexchange.com/questions/34662", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/11655/" ] }
38,862
Which books/papers should we all have in our shelves? There are a couple that I use regularly such as: An Introduction to Financial Option Valuation: Mathematics, Stochastics and Computation Asset Pricing (Cochrane) Which ones do you recommend for which topics?
General Finance Textbooks Options, Futures and Other Derivatives , John Hull The Concepts and Practice of Mathematical Finance , Mark Joshi Paul Wilmott on Quantitative Finance , Paul Wilmott Asset Pricing Asset Pricing (Revised Edition) , Cochrane, John H. Princeton University Press, 2009. Financial Decisions and Markets: A Course in Asset Pricing , Campbell, John Y. Princeton University Press, 2017. Asset pricing and portfolio choice theory , Back, Kerry. Oxford University Press, 2010. Damodaran on Valuation , Damodaran, Aswath, Wiley Finance, 2006 Dynamic Asset Pricing Theory (Third Edition) , Duffie, Darrell. Princeton University Press, 2001. Asset Allocation Introduction to Risk Parity and Budgeting , Roncalli, Thierry, 2013 Asset Management: A Systematic Approach to Factor Investing , Ang, Andrew, Financial Management Association, 2014 Expected Returns: An Investor's Guide to Harvesting Market Rewards , Illmanen, Anti, The Wiley Finance Series, 2011 Option Pricing Theory and Stochastic Calculus Financial Calculus: An Introduction to Derivative Pricing , Martin Baxter and Andrew Rennie Arbitrage Theory in Continuous Time , Tomas Björk Stochastic Calculus for Finance I: The Binomial Asset Pricing Model , Steven Shreve Stochastic Calculus for Finance II: Continuous-Time Models , Steven Shreve Martingale Methods in Financial Modelling , Marek Musiela and Marek Rutkowski Mathematical Methods for Financial Markets , Monique Jeanblanc, Marc Yor, and Marc Chesney Financial Modelling With Jump Processes , Rama Cont and Peter Tankov Option Volatility and Pricing , Sheldon Natenberg Asset Classes Equity Derivatives: Equity derivatives , Marcus Overhaus et al. Equity Hybrid Derivatives , Marcus Overhaus et al. The Volatility Surface , Jim Gatheral Stochastic Volatility Modeling , Lorenzo Bergomi Dynamic Hedging: Managing Vanilla and Exotic Options , Nassim Nicholas Taleb Option Volatility & Pricing , Sheldon Natenberg Option Valuation Under Stochastic Volatility: With Mathematica Code , Alan L. Lewis FX Derivatives: Foreign Exchange Option Pricing , Iain J. Clark FX Options and Smile Risk , Antonio Castagna FX Options and Structured Products , Uwe Wystup Commodity Derivatives: Commodity Option Pricing , Iain J. Clark Commodities and Commodity Derivatives , Helyette Geman Energy and Power Risk Management: New Developments in Modeling, Pricing, and Hedging , Alexander Eydeland, Krzysztof Wolyniec Interest Rate Derivatives: Interest Rate Option Models , Rebonato Interest Rate Models – Theory and Practice (with Smile, Inflation and Credit) , Damiano Brigo and Fabio Mercurio Interest Rate Modeling I, II & III , Leif B. G. Andersen and Vladimir V. Piterbarg Pricing and Trading Interest Rate Derivatives , J H M Darbyshire Inflation Derivatives: Interest Rate Models – Theory and Practice (with Smile, Inflation and Credit) , Damiano Brigo and Fabio Mercurio Credit Derivatives: Credit Risk - Modeling, Valuation & Hedging , Tomasz R. Bielecki and Marek Rutkowski Modelling Single-name and Multi-name Credit Derivatives , Dominic O’Kane Interest Rate Models – Theory and Practice (with Smile, Inflation and Credit) , Damiano Brigo and Fabio Mercurio XVA: XVA: Credit, Funding and Capital Valuation Adjustments , Andrew Green Counterparty Credit Risk, Collateral and Funding , Damiano Brigo, Massimo Morini, and Andrea Pallavicini Quantitative Risk Management Quantitative Risk Management: Concepts, Techniques and Tools , Alexander J. 
McNeil, Rudiger Frey, and Paul Embrechts Mathematics Probability and Stochastic Processes: Probability , A.N. Shiryaev Probability , Leo Breiman Stochastic Calculus and Applications , Samuel N. Cohen and Robert J. Elliott Stochastic Differential Equations , Bernt Oksendal Diffusions Markov Processes and Martingales , L. C. G. Roger and D. Williams Statistics: Statistical Inference , George Casella and Roger Berger Theoretical Statistics - Topics for a Core Course , Robert W. Keener Time Series Analysis , James Hamilton The econometrics of financial markets , Campbell, John Y., Andrew Wen-Chuan Lo, and Archie Craig MacKinlay. Vol. 2. Princeton, NJ: Princeton University Press, 1997. The Elements of Statistical Learning , Hastie, Tibshirani and Friedman Handbook of Markov Chain Monte Carlo , Brooks, Steve, Gelman, Andrew, Jones, Galin , and Meng, Xiao-Li. Analysis of Financial Time Series , Ruey S. Tsay Machine Learning : Machine Learning: A Probabilistic Perspective , Kevin P Murphy Pattern Recognition and Machine Learning , Christopher Bishop Reinforcement Learning: An introduction , Richard S. Sutton and Andrew G. Barto Advances in Financial Machine Learning , Marcos Lopez de Prado Programming C++ Design Patterns and Derivatives Pricing , Mark Joshi Python for Data Analysis , Wes McKinney Applied Computational Economics and Finance , Mario J. Miranda and Paul L. Fackler Modern Computational Finance , Antoine Savine Interviews Quant Job Interview Questions and Answers , Mark Joshi Heard on the Street: Quantitative Questions from Wall Street Job Interviews , Timothy Crack 150 Most Frequently Asked Questions on Quant Interviews , Dan Stefanica, Radoš Radoičić, and Tai-ho Wang An Interview primer for quantitative finance , Dirk Bester Being a Quant My Life as a Quant: Reflections on Physics and Finance , Emanuel Derman The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It , Scott Patterson A Man for All Markets: From Las Vegas to Wall Street, How I Beat the Dealer and the Market , Edward Thorpe The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution , Gregory Zuckerman Cultural Classics Reminiscences of a Stock Operator , Jesse Livermore Liar’s Poker , Michael Lewis Against the Gods , Peter Bernstein
{ "source": [ "https://quant.stackexchange.com/questions/38862", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/16472/" ] }
46,125
I am an applied math postdoc and I have been presented with the option of leaving academia to work in high frequency trading. I wanted to get a feel for the field and the theory underlying it so I scanned through several books in the library and it seems there are almost no books on the mathematical theory of this field. All the books I have looked at contain lots of explanations of the various aspects of trading such as 'market participants', 'limit order books', 'market microstructure', etc..which of course are very important to know, and some relatively basic math on things like 'statistical arbitrage strategies'. But where is the rigorous mathematical underpinning? I would have expected to find books containing the same type of theory as in books on mathematical finance, i.e. a deep treatment of measure theory and probability theory, mathematical statistics, stochastic processes etc.. Why are these topics not covered in HFT books? Is advanced math not needed? If this is the case, what are the main skills needed for a high frequency trader?
Hah! There is no such thing as the “rigorous mathematical underpinning” of high frequency trading - because HFT, like all trading, is not primarily a mathematical endeavour. It’s true that many people who work in HFT have a mathematical background, but that’s because the tools of applied math and statistics are useful when analysing the large amounts of data that are generated by HFT activity. So the math that is useful to know is linear algebra, statistics, time series and optimisation (to some extent it’s useful to be familiar with machine learning, which encompasses all of the above). Don’t go into HFT thinking that you will primarily be doing advanced math. If you are lucky, you will mostly be doing data analysis. More likely, you will spend a lot of time cleaning data, writing code, and monitoring trading systems.
{ "source": [ "https://quant.stackexchange.com/questions/46125", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/6201/" ] }
54,082
I'm thinking of writing a master's thesis about pricing options using Levy processes, but I wonder if these processes are actually used for modeling stock prices or not (and which specifically)? And if not which ones are used? In addition, I wonder if it is difficult to implement the right code in Python or can it be done with little knowledge about programming? And what book / blog do you recommend regarding Python in finance?
I give you a brief outline of some key properties of Lévy processes. Lévy processes have stationary and independent increments but do not necessarily have continuous sample paths. In fact, Brownian motion is the only Lévy process with continuous sample paths. Some Lévy processes (e.g. the Poisson process) have single, rare but large jumps (finite activity) whereas others jump infinitely often during any finite time interval. Such processes de facto only move via (small) jumps (infinite activity). In general, Lévy processes have three components (the Lévy or characteristic triplet):
linear drift
Brownian diffusion
jumps
($\to$ Lévy–Itô decomposition). This also points to the fact that all Lévy processes are semimartingales. Thus, following the general Itô stochastic integration theory, we can make sense out of terms like $\mathrm{d}X_t$ and $\int_0^t Y_s\mathrm{d}X_s$, for an appropriate process $Y_t$ and any Lévy process $X_t$. A nice way of thinking about Lévy processes is as time-changed processes. Take the variance gamma process as an example. You can define that process by explicitly giving its trend/volatility/jump components, or you can take a simple arithmetic Brownian motion $X_t=\theta t+\sigma W_t$ and a Gamma process $\gamma_t$. Then, the process $X_{\gamma_t}=\theta\gamma_t+\sigma W_{\gamma_t}$ is a variance gamma process. In general, you can use a process to alter the "time" of another process ($\to$ subordination). General time-changed Lévy processes can capture volatility clusters and the leverage effect yet remain reasonably tractable. They kind of combine Lévy processes with the ideas of stochastic volatility. Intuitively, you can think about calendar time (using $t$ as time) and business time (using $\gamma_t$ as time) as two different things. So, the time-changed processes are based on business activity (e.g. arriving trades). Intuition is given by the scaling property of Brownian motion: $\sqrt{c}W_t \overset{\mathrm{Law}}{=}W_{ct}$ for any $c>0$. Thus, changes in time result in changes of the scaling of the Brownian motion. In this sense, a time change leads to changing (random) variances, etc. Lévy processes are not trivial processes. You often do not have a transition density in closed form. Instead, the characteristic function is very simple for Lévy processes ($\to$ Lévy-Khintchine formula). Thus, option pricing is often done using Fourier methods: option prices equal discounted expectations with respect to the risk-neutral density, and you can move to the Fourier domain and integrate the characteristic function instead. The same trick is used for stochastic volatility models. Stock prices are often modelled in the form of exponential Lévy processes, so you set $S_t=S_0e^{X_t}$, where $X_t$ is a Lévy process and $S_0>0$. This ensures positivity. To obtain a martingale after discounting, you of course need to correct the drift. Here are some common exponential Lévy processes used in finance:
1. Geometric Brownian motion
2. Merton's (1976) jump diffusion model
3. Kou's (2002) jump diffusion model
4. Normal inverse Gaussian process from Barndorff-Nielsen (1997)
5. Meixner process from Schoutens and Teugels (1998)
6. Generalised hyperbolic model from Eberlein et al. (1998)
7. Variance gamma process from Carr and Madan (1998)
8. CGMY from Carr et al. (2002)
9. Finite moment log stable model from Carr and Wu (2003)
The first one is the only one with continuous sample paths. Numbers 2 and 3 are the only finite-activity models with jumps in that list.
For your thesis, I'd particularly look at Kou's model because it's super tractable and you can price many derivatives easily with it. On the infinitely active side, I think VG and CGMY (its generalisation) are the most popular. If you want a book on Lévy processes, I'd recommend "Financial Modelling with Jump Processes" by Cont and Tankov. It's extremely well written. If you start with the pricing of European-style options, you won't need much programming: a function which outputs the characteristic function and a second function which performs numerical integration (that's probably built in already). That's all you need (see the short sketch below), so that shouldn't be the hardest part of your thesis :) Note that the characteristic functions are honestly quite simple. With respect to Fourier methods in option pricing, there are a couple of approaches: Carr and Madan (1999) introduce the fast Fourier transform approach to option pricing; Bakshi and Madan (2000) give a general pricing formula in the 'Black-Scholes' style; Lewis (2001) provides a general formula (nesting the above approaches) using complex contour integration; Fang and Oosterlee (2009) introduce the COS method, which is one of the fastest (and easiest) approaches. Because Lévy processes have independent increments, they cannot model volatility clusters! However, they can easily incorporate fat tails. Time-changed Lévy processes are not necessarily Lévy processes themselves and can incorporate stochastic volatility and asymmetry between volatility and return changes.
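To make the "characteristic function plus numerical integration" point concrete, here is a minimal Python sketch (not production code) that prices a European call under Merton's (1976) jump diffusion via the 'Black-Scholes style' in-the-money probabilities mentioned above. The parameter values are arbitrary illustrations and the finite integration cut-off at u = 200 is a simplification.

import numpy as np
from scipy.integrate import quad

# Illustrative parameters (not calibrated to anything)
S0, K, r, T = 100.0, 100.0, 0.05, 1.0            # spot, strike, rate, maturity
sigma, lam, muJ, sigJ = 0.2, 0.5, -0.1, 0.15     # diffusion vol, jump intensity, jump mean, jump vol

def cf(u):
    """Characteristic function E[exp(i*u*ln S_T)] under the risk-neutral measure (Merton 1976)."""
    kbar = np.exp(muJ + 0.5 * sigJ ** 2) - 1.0                    # mean relative jump size
    drift = np.log(S0) + (r - 0.5 * sigma ** 2 - lam * kbar) * T  # martingale-corrected drift
    jumps = lam * T * (np.exp(1j * u * muJ - 0.5 * sigJ ** 2 * u ** 2) - 1.0)
    return np.exp(1j * u * drift - 0.5 * sigma ** 2 * u ** 2 * T + jumps)

def Pi(j):
    """In-the-money probabilities under the share measure (j=1) and risk-neutral measure (j=2)."""
    if j == 1:
        f = lambda u: (np.exp(-1j * u * np.log(K)) * cf(u - 1j) / (1j * u * cf(-1j))).real
    else:
        f = lambda u: (np.exp(-1j * u * np.log(K)) * cf(u) / (1j * u)).real
    return 0.5 + quad(f, 1e-8, 200.0, limit=200)[0] / np.pi

call = S0 * Pi(1) - K * np.exp(-r * T) * Pi(2)
print("Merton jump-diffusion call price:", round(call, 4))

Swapping in a different exponential Lévy model (Kou, VG, NIG, ...) only means replacing cf with the corresponding characteristic function; the pricing part stays the same.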
{ "source": [ "https://quant.stackexchange.com/questions/54082", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/46351/" ] }
54,362
I watched a speech by Simon Johnson at UCSB and, at one point, he claimed that Citigroup has failed three times since the 1980s. For example, he claims that Citigroup failed and was saved by the government in 1982 because of "bad loans made in emerging markets". The second such failure is at the end of the 1980s because of "bad loans to commercial real estate". The third is of course the 2008/2009 financial crisis. What strikes me as odd is that the first two alleged failures are not mentioned in the Wikipedia article on Citigroup! He doesn't appear to me to be a crackpot, as he is the former chief economist of the IMF, and an effective whitewashing of the US's biggest bank holding company on Wikipedia seems just as unlikely. Does anyone know which affairs he is referencing?
(I worked there for 23 years.) Simon Johnson is correct. Citi (or its predecessors) was insolvent on those 3 occasions, and would have gone into liquidation without the bailouts by U.S. taxpayers.
{ "source": [ "https://quant.stackexchange.com/questions/54362", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/47205/" ] }
54,365
I have a dataset with 3,000 observations (prices of an asset). I want to study the empirical distribution of the log returns of that time series. How can I do it in Excel? If it's not possible in Excel, then in Python? Wolfram? Thanks, Ciao
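A minimal sketch of one way to do this in Python with pandas, assuming the prices sit in a CSV file called prices.csv with a column named price — adjust the file name and column to your data. (In Excel, the same log returns can be computed with =LN(B3/B2) filled down, and the Analysis ToolPak's Histogram tool then gives the empirical distribution.)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.read_csv("prices.csv")["price"]        # hypothetical file/column names
log_ret = np.log(prices).diff().dropna()           # r_t = ln(P_t) - ln(P_{t-1})

print(log_ret.describe())                          # mean, std, quantiles
print("skewness:", log_ret.skew(), "excess kurtosis:", log_ret.kurt())

log_ret.hist(bins=50, density=True)                # empirical distribution (histogram)
log_ret.plot(kind="kde")                           # kernel density estimate on top
plt.xlabel("log return")
plt.show()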
{ "source": [ "https://quant.stackexchange.com/questions/54365", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/47207/" ] }
61,760
I'm writing my master's thesis about stock price prediction using machine learning methods. During my literature review, I noticed that a lot of research produced on this topic is of poor quality: published in non-finance-related journals, or unpublished/not peer reviewed at all. There is no paper to be found on the topic in leading journals like the Journal of Finance or the Journal of Financial Economics. I'm curious as to why this is the case. Did the academic world move on, having simply accepted a long time ago that markets are generally efficient? Or are the leading journals overlooking a key technique that could effectively forecast stock prices?
I think you're overlooking a third explanation: nobody who found a successful technique to generate alpha has published it. I can think of the following causes: If you're an academic, why share your brilliant idea? These techniques require a lot of data, and financial data can be expensive; researchers who work at firms with access to this data don't share their findings with the public. Academics did already find a lot of signals the old-fashioned way. Despite this, fancy techniques such as AAD and Reinforcement Learning are discussed publicly. These methods don't generate any alpha, however.
{ "source": [ "https://quant.stackexchange.com/questions/61760", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/49718/" ] }
63,140
Why do people suggest using red black trees/balanced binary trees for the levels in a limit order book? Why are they algorithmically ideal?
Why do people suggest using red black trees/balanced binary trees for the levels in a limit order book? Because people are unoriginal and keep referencing the same blog post. Why are they algorithmically ideal? They're not necessarily ideal. In fact, they're rarely used in production trading systems with low latency requirements. However, your source probably had the following considerations: They were given more of an engineering objective rather than a trading objective. Without business constraints or queries that you're supposed to optimize, a reasonable prior is to optimize for the worst case runtime of inserts and deletes, since inserts and deletes often dominate executions . They were designing this order book structure based on sample data from an asset class with sparse prices , like equities. Because of (1) and (2), they needed to take into account the following market properties: New prices are often inserted towards the outside of the book , since (i) the inside levels tend to be dense and (ii) insertions towards the inside are likely to be matched and truncated by the opposite book. Forming a new level gives significant queue priority and orders towards the outside have more time value, so price levels are less likely to be removed by order cancels towards the outside, and more likely to be removed by cancels or executions towards the inside of the book . (3) and (4) would promote an unbalanced and tall BST, which has much worse amortized runtime than its idealized form. There are various ways to mitigate this. Self-balancing is just one naive solution, as red-black trees are very widely implemented in container libraries and a simple way to guarantee $\mathbb{O}\left(\log n\right)$ inserts and deletes of price levels. When evaluating the optimal data structure, I would keep in mind the following three main topics. 1. Start with the business use case Such as: What queries need to be optimized for your application? Sparsity of the book. Statistical distribution of book events. For example: In options instruments , there may be very few order events, so it may be cheaper to just store everything in arrays and linearly walk through them. In liquid futures contracts , most events only affect a few hundred price levels, and price bands might give you a bound on levels that you actually care about, so it is possible to preallocate the levels in an array and represent index prices as an offset from some initial state in number of ticks. Some trading strategies need to act very quickly to the change to the top of the book, and can afford to defer level inserts or deletes outside the BBO till later, so it is unimportant to optimize for level inserts or deletes. 2. Understand the messaging protocol and data feed For example: Some data feeds are bursty, so you might design your application to flush all data events before performing the critical path of your business action (e.g. order placement, model update). The optimal order book structure may differ if events are batched. Successive events in the data feed may have some price ordering. 3. Hardware codesign In practice, when you're operating at memory or cache access time scales or dealing with a small number of events relative to cache size, asymptotic time complexity often goes out of the window and it's more important to look at the actual implementation and real benchmarks, and codesign your order book for the architecture that it is running on. 
In such cases, a simple array or vector with linear access patterns will often outperform any complex data structure with better asymptotic runtime because a simple array makes it easier to exploit hardware optimizations that are more important: Locality Prefetching Instruction pipelining Fitting all relevant/qualifying data into fewer "pages" that have to move up the memory hierarchy, e.g. not chasing pointers across non-contiguous regions of memory. SIMD intrinsics. How does this translate to order book design? For example: The C++ STL implementation of unordered_map will often have worse performance than map for order ID lookup of instruments with a small number of orders. It is possible to represent each price level with an intrusive doubly-linked list, which has $\mathbb{O}\left(1\right)$ lookup of the neighboring nodes, so you can unlink an order that was deleted in $\mathbb{O}\left(1\right)$ . But you will often get better performance by creating a linked list of preallocated arrays, and removing orders by marking them with a tombstone flag. In many of the situations that I described above, a linked list of arrays or an array of arrays will outperform a general purpose design with red-black trees of intrusive doubly-linked lists.
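As a purely illustrative toy — not a low-latency design, and with a made-up tick size, base price and class name — here is what the "preallocate the levels in an array and represent prices as a tick offset" idea from the futures example above might look like in Python:

TICK = 0.25            # assumed price increment of the instrument
BASE = 4000.00         # assumed reference price chosen at the start of the session
N_LEVELS = 2048        # preallocated band of levels around BASE

class ArrayBook:
    """Toy price-level book stored in a flat, preallocated array indexed by tick offset."""
    def __init__(self):
        # qty[i] holds resting quantity at price BASE + (i - N_LEVELS // 2) * TICK
        self.qty = [0] * N_LEVELS

    def _index(self, price):
        return int(round((price - BASE) / TICK)) + N_LEVELS // 2

    def add(self, price, quantity):          # new order or level insert
        self.qty[self._index(price)] += quantity

    def remove(self, price, quantity):       # cancel or execution
        self.qty[self._index(price)] -= quantity

    def depth_at(self, price):
        return self.qty[self._index(price)]

book = ArrayBook()
book.add(4000.25, 10)
book.add(4000.25, 5)
book.remove(4000.25, 10)
print(book.depth_at(4000.25))   # -> 5

Every operation is a constant-time access into a contiguous array, which is the locality/prefetching point being made above; the cost is the preallocated band and the need to handle prices that fall outside it.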
{ "source": [ "https://quant.stackexchange.com/questions/63140", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/11723/" ] }
1
I know that a Turing machine 1 can theoretically simulate "anything", but I don't know whether it could simulate something as fundamentally different as a quantum-based computer. Are there any attempts to do this, or has anybody proved it possible/not possible? I've googled around, but I'm not an expert on this topic, so I'm not sure where to look. I've found the Wikipedia article on quantum Turing machine , but I'm not certain how exactly it differs from a classical TM. I also found the paper Deutsch's Universal Quantum Turing Machine , by W. Fouché et al., but it is rather difficult to understand for me. 1. In case it is not clear, by Turing machine I mean the theoretical concept, not a physical machine (i.e. an implementation of the theoretical concept).
Yes, a quantum computer could be simulated by a Turing machine, though this shouldn't be taken to imply that real-world quantum computers couldn't enjoy quantum advantage, i.e. a significant implementation advantage over real-world classical computers. As a rule-of-thumb, if a human could manually describe or imagine how something ought to operate, that imagining can be implemented on a Turing machine. Quantum computers fall into this category. At present, a big motivation for quantum computing is that qubits can exist in superpositions, $$ \left| \psi \right> = \alpha \left| 0 \right> + \beta \left| 1 \right>, \tag{1} $$ essentially allowing for massively parallel computation. Then there's quantum annealing and other little tricks that are basically analog computing tactics. But those benefits are about efficiency. In some cases, that efficiency is beyond astronomical, enabling stuff that wouldn't have been practical on classical hardware. This causes quantum computing to have major applications in cryptography and such. However, quantum computing isn't currently motivated by a desire for things that we fundamentally couldn't do before. If a quantum computer can perform an operation, then a classical Turing machine could perform a simulation of a quantum computer performing that operation. Randomness isn't a problem either, for two big reasons: Randomness can be more precisely captured by using distribution math anyway. Randomness isn't a real "thing" to begin with; it's merely ignorance. And we can always produce ignorance.
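As a minimal sketch of what such a classical simulation looks like in practice (plain numpy, nothing quantum about the hardware): track the complex amplitudes of the state and multiply by gate matrices. The catch is that n qubits need 2^n amplitudes, so the simulation is possible but not efficient.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)                          # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)     # Hadamard gate

state = H @ ket0               # alpha|0> + beta|1>, with alpha = beta = 1/sqrt(2)
probs = np.abs(state) ** 2     # Born rule: measurement probabilities
print(state)                   # [0.707..., 0.707...]
print(probs)                   # [0.5, 0.5]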
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/-1/" ] }
3
Quantum computers are known to be able to crack in polynomial time a broad range of cryptographic algorithms which were previously thought to be solvable only with resources increasing exponentially with the bit size of the key. An example of that is Shor's algorithm. But, as far as I know, not all problems fall into this category. On Making Hard Problems for Quantum Computers, we can read: Researchers have developed a computer algorithm that doesn't solve problems but instead creates them for the purpose of evaluating quantum computers. Can we still expect a new cryptographic algorithm which will be hard to crack even using a quantum computer? For clarity: the question refers specifically to the design of new algorithms.
The title of your question asks for techniques that are impossible to break, to which the One Time Pad (OTP) is the correct answer, as pointed out in the other answers. The OTP is information-theoretically secure, which means that an adversary's computational abilities are inapplicable when it comes to finding the message. However, despite being perfectly secure in theory, the OTP is of limited use in modern cryptography. It is extremely difficult to use successfully in practice. The important question really is: Can we still expect a new cryptographic algorithm which will be hard to crack using even a quantum computer? Asymmetric Cryptography Asymmetric cryptography includes Public-Key Encryption (PKE), Digital Signatures, and Key Agreement schemes. These techniques are vital to solve the problems of key distribution and key management. Key distribution and key management are non-negligible problems; they are largely what prevent the OTP from being usable in practice. The internet as we know it today would not function without the ability to create a secured communications channel from an insecure communications channel, which is one of the features that asymmetric algorithms offer. Shor's algorithm Shor's algorithm is useful for solving the problems of integer factorization and discrete logarithms. These two problems are what provide the basis for security of widely used schemes such as RSA and Diffie-Hellman. NIST is currently evaluating submissions for Post-Quantum algorithms - algorithms that are based on problems that are believed to be resistant to quantum computers. These problems include: Lattice based problems Shortest vector problem (SVP) Closest vector problem (CVP) Multivariate equations Multivariate polynomial equations Code based schemes Based on the hardness of decoding linear codes Supersingular Isogeny Diffie-Hellman (SIDH) It should be noted that classical algorithms for solving the above problems may exist; it's just that the runtime/accuracy of these algorithms is prohibitive for solving large instances in practice. These problems don't appear to become solvable even when given the ability to solve the problem of order finding, which is what the quantum part of Shor's algorithm does. Symmetric Cryptography Grover's algorithm provides a quadratic speedup when searching through an unsorted list. This is effectively the problem of brute-forcing a symmetric encryption key. Working around Grover's algorithm is relatively easy compared to working around Shor's algorithm: simply double the size of your symmetric key. A 256-bit key offers 128 bits of resistance against brute force to an adversary that uses Grover's algorithm. Grover's algorithm is also usable against hash functions. The solution again is simple: double the size of your hash output (and capacity if you are using a hash based on a sponge construction).
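For intuition about why the OTP is unbreakable yet impractical, here is a minimal Python sketch (the message is just an example): the key must be truly random, as long as the message, kept secret, and never reused — and that last pair of requirements is precisely the key-distribution/key-management burden described above.

import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))                    # key as long as the message, used once

ciphertext = bytes(m ^ k for m, k in zip(message, key))    # encrypt: XOR with the key
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))  # decrypt: XOR with the same key

assert recovered == message
print(ciphertext.hex())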
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/3", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/27/" ] }
12
Is there any way to emulate a quantum computer on my normal computer, so that I will be able to test and try quantum programming languages (such as Q#)? I mean something with which I can really test my hypotheses and get the most accurate results. Update: I'm not really looking to simulate a quantum computer, but I'm not sure if it's possible to efficiently emulate one on a normal non-quantum based PC.
Yes, it's possible (but slow). There are a couple of existing (this is only a partial list) emulators: QDD: A Quantum Computer Emulation Library QDD is a C++ library which provides a relatively intuitive set of quantum computing constructs within the context of the C++ programming environment. QDD is unique in that the its emulation of quantum computing is based upon a Binary Decision Diagram (BDD) representation of the quantum state. jQuantum jQuantum is a program which simulates a quantum computer. You can design quantum circuits with it and let them run. The current state of the quantum register is illustrated. QCE QCE is a software tool that emulates various hardware designs of Quantum Computers. QCE simulates the physical processes that govern the operation of a hardware quantum processor, strictly according to the laws of quantum mechanics. QCE also provides an environment to debug and execute quantum algorithms under realistic experimental conditions. (In addition, Q# only works with MS's QDK , thanks @Pavel) The downside to all of these is simple: they still run on binary (non-quantum) circuits. To the best of my knowledge, there's no easily accessible quantum computer to use for running these things. And since it takes multiple binary bits to express a single qubit, the amount of computational power needed to simulate a quantum program gets large very quickly. I'll quote a paper on the subject ( J. Allcock, 2010 ): Our evaluation shows that our implementations are very accurate, but at the same time we use a significant amount of additional memory in order to achieve this. Reducing our aims for accuracy would allow us to decrease representation size, and therefore emulate more qubits with the same amount of memory. p 89, section 5.1 As our implementations get more accurate, they also get slower. TL;DR: it's possible, and some emulators exist, but none are very efficient for large amounts of qubits.
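A back-of-the-envelope illustration of that growth, assuming a dense state-vector emulation that stores one complex double (16 bytes) per amplitude:

for n in (20, 30, 40, 50):
    bytes_needed = 16 * 2 ** n           # 2**n amplitudes, 16 bytes each
    print(f"{n} qubits: {bytes_needed / 2 ** 30:,.2f} GiB")

The memory requirement doubles with every added qubit, which is roughly why full state-vector emulation becomes impractical somewhere in the mid-40s of qubits, well before runtime even enters the picture.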
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/12", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/40/" ] }
23
Similar to the question Could a Turing Machine simulate a quantum computer?: given a 'classical' algorithm, is it always possible to formulate an equivalent algorithm which can be performed on a quantum computer? If yes, is there some kind of procedure we can follow for this? The resulting algorithm will probably not take full advantage of the possibilities of quantum computing; it's more of a theoretical question.
Yes, it can do so in a rather trivial way: Use only reversible classical logical gates to simulate computations using boolean logic (for instance, using Toffoli gates to simulate NAND gates), use only the standard basis states $\lvert 0\rangle$ and $\lvert 1\rangle$ as input, and only perform standard basis state measurements at the output. In this way you can simulate exactly the same calculations as the classical computer does, on a gate-by-gate basis.
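A tiny classical truth-table check makes the reduction concrete: a Toffoli gate with its target wired to 1 computes NAND of the two controls, and NAND is universal for boolean logic.

def toffoli(a, b, c):
    # reversible: the target bit c flips iff both controls a and b are 1
    return a, b, c ^ (a & b)

for a in (0, 1):
    for b in (0, 1):
        _, _, out = toffoli(a, b, 1)      # target prepared in 1
        assert out == 1 - (a & b)         # NAND(a, b)
        print(a, b, "->", out)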
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/23", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/45/" ] }
70
A quantum computer can efficiently solve problems lying in the complexity class BQP. I have seen a claim that one can (potentially, because we don't know whether BQP is a proper subset of or equal to PP) increase the efficiency of a quantum computer by applying postselection, and that the class of efficiently solvable problems then becomes postBQP = PP. What does postselection mean here?
"Postselection" refers to the process of conditioning on the outcome of a measurement on some other qubit. (This is something that you can think of for classical probability distributions and statistical analysis as well: it is not a concept special to quantum computation.) Postselection has featured quite often (up to this point) in quantum mechanics experiments, because — for experiments on very small systems, involving not very many particles — it is a relatively easy way to simulate having good quantum control or feedforward. However, it is not a practical way of realising computation, because you have to condition on an outcome of one or more measurements which may occur with very low probability. Actually 'selecting' a measurement outcome is nothing you can do easily in quantum mechanics — what one actually does is throw away any outcome which does not allow you to do what you want to do. If the outcome which you are trying to select has probability $0 < p < 1$, you will have to try an expected number $1/p$ times before you manage to obtain the outcome you are trying to select. If $p = 1/2^n$ for some large integer $n$, you may be waiting a very long time. The result that postselection 'increases' (as you say) the power of bounded-error quantum computation from BQP to PP is a well-liked result in the theory of quantum computation, not because it is practical , but because it is a simple and crisp result of a sort which is rare in computational complexity, and is useful for informing intuitions about quantum computation — it has led onward to ideas of "quantum supremacy" experiments , for example. But it is not something which you should think of as an operation which is freely available to quantum computers as a practical technique, unless you can show that the outcomes which you are trying to postselect are few enough and of high-enough probability (or, as with measurement-based computation, that you can simulate the 'desirable' outcome by a suitable adaptation of your procedure if you obtain one of the 'undesirable' outcomes).
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/70", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/18/" ] }
91
Quantum algorithms frequently use bra-ket notation in their description. What do all of these brackets and vertical lines mean? For example: $|ψ⟩=α|0⟩+β|1⟩$ While this is arguably a question about mathematics, this type of notation appears to be used frequently when dealing with quantum computation specifically. I'm not sure I have ever seen it used in any other contexts. Edit By the last part, I mean that it is possible to denote vectors and inner products using standard notation for linear algebra, and some other fields that use these objects and operators do so without the use of bra-ket notation. This leads me to conclude that there is some difference/reason why bra-ket is especially handy for denoting quantum algorithms. It is not an assertion of fact, I meant it as an observation. "I'm not sure I have seen it used elsewhere" is not the same statement as "It is not used in any other contexts".
As already explained by others, a ket $\left|\psi\right>$ is just a vector. A bra $\left<\psi\right|$ is the Hermitian conjugate of the vector. You can multiply a vector with a number in the usual way. Now comes the fun part: You can write the scalar product of two vectors $\left|\psi\right>$ and $\left|\phi\right>$ as $\left<\phi\middle|\psi\right>$. You can apply an operator to the vector (in finite dimensions this is just a matrix multiplication) $X\left|\psi\right>$. All in all, the notation is very handy and intuitive. For more information, see the Wikipedia article or a textbook on Quantum Mechanics.
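To connect this to ordinary linear-algebra notation, a small numpy sketch (kets as column vectors, bras as conjugate transposes):

import numpy as np

psi = np.array([[1], [1j]]) / np.sqrt(2)   # |psi> = (|0> + i|1>)/sqrt(2)
phi = np.array([[1], [0]])                 # |phi> = |0>

bra_phi = phi.conj().T                     # <phi|
print(bra_phi @ psi)                       # scalar product <phi|psi> -> [[0.70710678+0.j]]

X = np.array([[0, 1], [1, 0]])             # an operator (here the Pauli X matrix)
print(X @ psi)                             # the state X|psi>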
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/91", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/162/" ] }
117
This is a follow-up question to @heather's answer to the question : Why must quantum computers be kept near absolute zero? What I know: Superconducting quantum computing : It is an implementation of a quantum computer in a superconducting electronic circuit. Optical quantum computing : It uses photons as information carriers, and linear optical elements to process quantum information, and uses photon detectors and quantum memories to detect and store quantum information. Next, this is what Wikipedia goes on to say about superconducting quantum computing : Classical computation models rely on physical implementations consistent with the laws of classical mechanics. It is known, however, that the classical description is only accurate for specific cases, while the more general description of nature is given by the quantum mechanics. Quantum computation studies the application of quantum phenomena, that are beyond the scope of classical approximation, for information processing and communication. Various models of quantum computation exist, however the most popular models incorporate the concepts of qubits and quantum gates. A qubit is a generalization of a bit - a system with two possible states, that can be in a quantum superposition of both. A quantum gate is a generalization of a logic gate: it describes the transformation that one or more qubits will experience after the gate is applied on them, given their initial state. The physical implementation of qubits and gates is difficult, for the same reasons that quantum phenomena are hard to observe in everyday life. One approach is to implement the quantum computers in superconductors, where the quantum effects become macroscopic, though at a price of extremely low operation temperatures. This does make some sense! However, I was looking for why optical quantum computers don't need "extremely low temperatures" unlike superconducting quantum computers. Don't they suffer from the same problem i.e. aren't the quantum phenomena in optical quantum computers difficult to observe just as for superconducting quantum computers? Are the quantum effects already macroscopic at room temperatures, in such computers? Why so? I was going through the description of Linear optical quantum computing on Wikipedia , but found no reference to "temperature" as such.
I was looking for why optical quantum computers don't need "extremely low temperatures" unlike superconducting quantum computers. Superconducting qubits usually work in the frequency range 4 GHz to 10 GHz. The energy associated with a transition frequency $f_{10}$ in quantum mechanics is $E_{10} = h f_{10}$ where $h$ is Planck's constant. Comparing the qubit transition energy to the thermal energy $E_\text{thermal} = k_b T$ (where $k_b$ is Boltzmann's constant), we see that the qubit energy is above the thermal energy when $$f_{10} > k_b T / h \, .$$ Looking up Boltzmann's and Planck's constants, we find $$h/k_b = 0.048 \, \text{K / GHz} \, .$$ Therefore, we can write $$f_{10} > 1 \, \text{GHz} \, \,\frac{T}{0.048 \, \text{K}}$$ So, for the highest frequency superconducting qubit at 10 GHz, we need $T < 0.48 \, \text{K}$ in order for there to be a low probability that the qubit is randomly excited or de-excited due to thermal interactions. This is why superconducting qubits are usually operated in dilution refrigerators at ~15 millikelvin. Of course, we also need the temperature to be low enough to get the metals superconducting, but for aluminum that happens at 1 K, so actually the constraint we already talked about is more important. On the other hand, suppose the two states of the optical qubit $\left \lvert 0 \right \rangle$ and $\left \lvert 1 \right \rangle$ are the presence and absence of an optical photon. An optical photon has a frequency of around $10^{14}$ Hz, which corresponds to a temperature of 14,309 Kelvin. Therefore, there's an extremely low probability of the thermal environment changing the qubit state by creating or removing a photon. This is why optical light is sort of intrinsically quantum mechanical in nature. Don't they suffer from the same problem i.e. aren't the quantum phenomena in optical quantum computers difficult to observe just as for superconducting quantum computers? Well, the difficulties between superconducting quantum computers and optical quantum computers are different. Optical photons essentially don't interact with each other. To get an effective interaction between two photons, you have to either put them through a nonlinear crystal, or do some kind of photodetection measurement. The challenge with nonlinear crystals is that they're very inefficient; only a very small fraction of photons that go in actually undergo the nonlinear process that causes interaction. The challenge with photodetection is that it's hard to build a photodetector that has high detection efficiency and low dark counts $^{[a]}$. In fact, the best photo-detectors actually need to be operated in cryogenic environments anyway, so some optical quantum computing architectures need cryogenic refrigeration despite the fact that the qubits themselves have very high frequency. P.S. This answer could be expanded quite a bit. If someone has a particular aspect they'd like to know more about, please leave a comment. $[a]$: Dark counts means the times a photodetector thinks it saw a photon even though there really wasn't one. In other words, it's the rate at which the detector counts photons when it's in the dark.
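The same comparison in a couple of lines of Python, using scipy's physical constants; the $3\times10^{14}$ Hz figure below is just a representative optical frequency (roughly a 1 μm wavelength), since the answer above only gives the order of magnitude.

from scipy.constants import h, k       # Planck and Boltzmann constants

def equivalent_temperature(frequency_hz):
    return h * frequency_hz / k        # T at which k_B*T matches the photon energy h*f

print(equivalent_temperature(10e9))    # ~0.48 K    -> 10 GHz superconducting qubit
print(equivalent_temperature(3e14))    # ~14,000 K  -> an optical photon near 1 um wavelength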
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/117", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/26/" ] }
171
I'm admittedly a novice in this field, but I have read that, while the D-wave (one) is an interesting device, there is some skepticism regarding it being 1) useful and 2) actually a 'quantum computer'. For example, Scott Aaronson has expressed multiple times that he is skeptical about whether the 'quantum' parts in the D-wave are actually useful: It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system. Exerpt from this blog . Additionally, the relevant Wikipedia section on skepticism against the D-wave is a mess. So, I ask: I know that D-wave claims to use some sort of quantum annealing. Is there (dis)proof of the D-wave actually using quantum annealing (with effect) in its computations? Has it been conclusively shown that the D-wave is (in)effective? If not, is there a clear overview of the work to attempt this?
There is still a search for problems where the D-Wave shows improvement over classical algorithms. One might recall media splashes in which the D-Wave solved some instances $10^8$ times faster than a classical algorithm, but which forgot to mention that the problem can be solved in polynomial time using minimum weight perfect matching. Denchev showing the $10^8$ speedup: https://arxiv.org/abs/1512.02206 Mandra using MWPM: https://arxiv.org/abs/1703.00622 There is some evidence that there are indeed some quantum effects used by the D-Wave, notably a study by Katzgraber et al. that compares the D-Wave with simulated annealing and the effects of reducing barrier thickness in the energy landscape (to make tunneling more probable). In Fig. 5 of the following paper the barrier thickness is reduced and the D-Wave shows improvement on the class of problems while Simulated Annealing shows no improvement. https://arxiv.org/abs/1505.01545 Full disclosure: Katzgraber was my PhD advisor, so I am most familiar with his work. On the other hand, there have been a few papers on the topic of the D-Wave being a simple thermal annealer with no quantum effects, notably the papers by Smolin, although they are a bit dated now. https://arxiv.org/abs/1305.4904 https://arxiv.org/abs/1401.7087 More recently, Albash et al. discussed the finite temperature as a reason for quantum annealers not functioning competitively. https://arxiv.org/abs/1703.03871
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/171", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/253/" ] }
175
Grover's search algorithm provides a provable quadratic speed-up for unsorted database search. The algorithm is usually expressed by the following quantum circuit: In most representations, a crucial part of the protocol is the "oracle gate" $U_\omega$ , which "magically" performs the operation $|x\rangle\mapsto(-1)^{f(x)}|x\rangle$ . It is however often left unsaid how difficult realizing such a gate would actually be. Indeed, it could seem like this use of an "oracle" is just a way to sweep the difficulties under the carpet. How do we know whether such an oracular operation is indeed realizable? And if so, what is its complexity (for example in terms of complexity of gate decomposition)?
The function $f$ is simply an arbitrary boolean function of a bit string: $f\colon \{0,1\}^n \to \{0,1\}$ . For applications to breaking cryptography, such as [1] , [2] , or [3] , this is not actually a ‘database lookup’, which would necessitate storing the entire database as a quantum circuit somehow, but rather a function such as \begin{equation*} x \mapsto \begin{cases} 1, & \text{if $\operatorname{SHA-256}(x) = y$;} \\ 0, & \text{otherwise,} \end{cases} \end{equation*} for fixed $y$ , which has no structure we can exploit for a classical search, unlike, say, the function \begin{equation*} x \mapsto \begin{cases} 1, & \text{if $2^x \equiv y \pmod{2^{2048} - 1942289}$}, \\ 0, & \text{otherwise}, \end{cases} \end{equation*} which has structure that can be exploited to invert it faster even on a classical computer. The question of the particular cost can't be answered in general because $f$ can be any circuit—it's just a matter of making a quantum circuit out of a classical circuit . But usually, as in the example above, the function $f$ is very cheap to evaluate on a classical computer, so it shouldn't pose a particularly onerous burden on a quantum computer for which everything else about Grover's algorithm is within your budget. The only general cost on top of $f$ is an extra conditional NOT gate $$C\colon \left|a\right> \left|b\right> \to \left|a\right> \left|a \oplus b\right>$$ where $\oplus$ is xor, and an extra ancillary qubit for it. In particular, if we have a circuit $$F\colon \left|x\right> \left|a\right> \lvert\text{junk}\rangle \mapsto \left|x\right> \left|a \oplus f(x)\right> \lvert\text{junk}'\rangle$$ built out of $C$ and the circuit for $f$ , then if we apply it to $\left|x\right>$ together with an ancillary qubit initially in the state $\left|-\right> = H\left|1\right> = (1/\sqrt{2})(\left|0\right> - \left|1\right>)$ where $H$ is a Hadamard gate, then we get \begin{align*} F\left|x\right> \left|-\right> \lvert\text{junk}\rangle &= \frac{1}{\sqrt{2}}\bigl( F\left|x\right> \left|0\right> \lvert\text{junk}\rangle - F\left|x\right> \left|1\right> \lvert\text{junk}\rangle \bigr) \\ &= \frac{1}{\sqrt{2}}\bigl( \left|x\right> \left|f(x)\right> \lvert\text{junk}'\rangle - \left|x\right> \left|1 \oplus f(x)\right> \lvert\text{junk}'\rangle \bigr). \end{align*} If $f(x) = 0$ then $1 \oplus f(x) = 1$ , so by simplifying we obtain $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = \left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ whereas if $f(x) = 1$ then $1 \oplus f(x) = 0$ , so $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = -\left|x\right> \left|-\right> \lvert\text{junk}'\rangle,$$ and thus in general $$F\left|x\right> \left|-\right> \lvert\text{junk}\rangle = (-1)^{f(x)} \left|x\right> \left|-\right> \lvert\text{junk}'\rangle.$$
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/175", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/55/" ] }
1,185
Most reversible quantum algorithms use standard gates like Toffoli gate (CCNOT) or Fredkin gate (CSWAP). Since some operations require a constant $\left|0\right>$ as input and the number of inputs and outputs is equal, garbage qubits (or junk qubits ) appear in the course of the computation. So, a principal circuit like $\left|x\right>\mapsto\left|f(x)\right>$ actually becomes $\left|x\right>\left|0\right>\mapsto\left|f(x)\right>\left|g\right>$, where $\left|g\right>$ stands for the garbage qubit(s). Circuits that preserve the original value ends up with $\left|x\right>\left|0\right>\left|0\right>\mapsto\left|x\right>\left|f(x)\right>\left|g\right>$ I understand that garbage qubits are inevitable if we want the circuit to stay reversible, but many sources${}^1$ claim that it is important to eliminate them. Why is it so? ${}^1$ Due to requests for sources, see for example this arXiv paper , pg 8, which says However, each of these simple operations contains a number of additional, auxiliary qubits, which serve to store the intermediate results, but are not relevant at the end. In order not to waste any unneccesary [sic] space, it is therefore important to reset these qubits to 0 so that we are able to re–use them or this arXiv paper which says The removal of garbage qubits and ancilla qubits are essential in designing an efficient quantum circuit. or the many other sources - a google search produces many hits.
Quantum interference is the heart and soul of quantum computation. Whenever you have junk qubits they're going to prevent interference. This is actually a very simple but very important point. Let's say we have a function $f:\{0,1\}\to\{0,1\}$ which maps a single bit to a single bit. Say $f$ is a very simple function, like $f(x)=x$ . Let's say we had a circuit $C_f$ which inputs $x$ and outputs $f(x)$ . Now, of course, this was a reversible circuit, and could be implemented using a unitary transformation $|x\rangle\to|x\rangle$ . Now, we could feed in $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ and the output would also be $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ . Let us now apply Hadamard transform gate and measure what we get. If you apply the Hadamard transform to this state $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ , you get the $|0\rangle$ state, and you see $0$ with probability $1$ . In this case there was no junk created in the intermediate steps, while converting the classical circuit to a quantum circuit. But, let's say we created some junk in an intermediate step when using a circuit like this one: . For this circuit, if we start off in the state $|x\rangle|0\rangle = \left(\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle \right)|0\rangle$ , after the first step we get $\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle$ . If we apply the Hadamard transform to the first qubit, we end up with: $$\frac{1}{2}|00\rangle + \frac{1}{2}|01\rangle + \frac{1}{2}|10\rangle - \frac{1}{2}|11\rangle$$ If we make a measurement on the first qubit we get $0$ with probability $\frac{1}{2}$ , unlike in the previous case where we could see $0$ with probability $1$ ! The only difference between the two cases was the creation of a junk bit in an intermediate step, which was not gotten rid of, thus leading to a difference in the final result of the computation (since the junk qubit got entangled with the other qubit). We will see a different interference pattern than in the previous case when the Hadamard transform is applied. This is exactly why we don't like to keep junk around when we are doing quantum computation: it prevents interference. Source: Professor Umesh Vazirani's lecture on EdX.
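A quick numerical check of the two cases above (plain numpy, with the two-qubit basis ordered |00>, |01>, |10>, |11>):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Case 1: no junk. Start in |+> = (|0> + |1>)/sqrt(2), apply H, measure.
plus = np.array([1, 1]) / np.sqrt(2)
out1 = H @ plus
print("P(0) without junk:", abs(out1[0]) ** 2)              # 1.0

# Case 2: a junk qubit left entangled with the data qubit: (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
out2 = np.kron(H, I) @ bell                                  # Hadamard on the first qubit only
p0 = abs(out2[0]) ** 2 + abs(out2[1]) ** 2                   # first qubit measured as 0
print("P(0) with junk:", p0)                                 # 0.5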
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1185", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/31/" ] }
1,235
As I understand it, the field of quantum mechanics was started in the early 20th century when Max Planck solved the black-body radiation problem. But I don't know when the idea of computers using quantum effects spread out. What is the earliest source that proposes the idea of quantum computers using qubits?
According to Wikipedia of Timeline of quantum computing , here are the main events: 1960 Stephen Wiesner invents conjugate coding . 1968 A quantum computer with spins as quantum bits was also formulated for use as a quantum spacetime in 1968. Finkelstein, David (1968). "Space-Time Structure in High Energy Interactions". In Gudehus, T.; Kaiser, G. Fundamental Interactions at High Energy. New York: Gordon & Breach. 1973 Alexander Holevo publishes a paper showing that n qubits cannot carry more than n classical bits of information (see: "Holevo's theorem" /"Holevo's bound"). Charles H. Bennett shows that computation can be done reversibly. 1976 Polish mathematical physicist Roman Stanisław Ingarden publishes a seminal paper entitled "Quantum Information Theory" in Reports on Mathematical Physics, vol. 10, 43–72, 1976. 1980 Paul Benioff described quantum mechanical Hamiltonian models of computers Yuri Manin proposed an idea of quantum computing 1981 Richard Feynman in his talk [...], observed that it appeared to be impossible in general to simulate an evolution of a quantum system on a classical computer in an efficient way. He proposed a basic model for a quantum computer that would be capable of such simulations 1982 Paul Benioff proposes the first recognisable theoretical framework for a quantum computer. So in general, the field of quantum computing was initiated by the work of Paul Benioff study and Yuri Manin in 1980, Richard Feynman in 1982 study , and David Deutsch in 1985. Source: Quantum computing at Wikipedia .
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1235", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/11/" ] }
1,289
On reading this Reddit thread I realized that even after a couple months of learning about quantum computing I've absolutely no clue about how a quantum computer actually works. To make the question more precise, let's say we have a superconducting qubit based 5-qubit quantum computer (like the 5-qubit IBM Quantum Computer). I type in $2+3$ using a keyboard onto a monitor (say in a basic calculator app the quantum computer might have). After that it should return me $5$. But is going on at the hardware level? Are some sort of electrical signals corresponding to the inputs $2$,$3$ and $+$ going to the processing unit of the computer? Does that somehow "initialize" the Cooper pair electrons? What happens to the Cooper pair electron qubits after that (guess they'd be worked on by some quantum-gates , which are in turn again black boxes )? How does it finally return me the output $5$ ? I am surprised as to how little I could come up with about the basic working of a quantum computer by searching on the net.
Firstly, a classical computer does basic maths at the hardware level in the arithmetic and logic unit (ALU). The logic gates take low and high input voltages and uses CMOS to implement logic gates allowing for individual gates to be performed and built up to perform larger, more complicated operations. In this sense, typing on a keyboard is sending electrical signals, which eventually ends up in a command (in the form of more electrical signals) being sent to the ALU, the correct operations being performed and more signals sent back, which gets converted to display pixels in the shape of a number on your screen. What about a quantum computer? There are two possible ways that quantum processors get used: by themselves, or in conjunction with a classical processor. However, most (including your example of superconducting) quantum processors don't actually use electrical signals, although this is still how your mouse, keyboard and monitor etc. transmit and receive information. So, there needs to be a way to convert the electric signal to whatever signal the quantum processor uses (which I'll get on to later), as well as some way of telling the processor what you want to do. Both these issues can be solved at once by classical pre- and post- processing, such as in IBM's QISKit . Microsoft is taking a bit more of a top-down approach in Q# , where programs for a quantum processor is written more like a 'classical' program, as opposed to a script, then compiled and potentially optimised for the hardware. That is, if you've got a function, it can perform classical operations, as well as make calls to the quantum processor to perform any required quantum operations. This leads me to the first point: If you're going to ask a computer with access to a quantum processor to calculate something such as $2+3$, one very valid solution would be to just compute it on the classical processor as per usual. OK, let's say that you're forcing the classical processor to use the quantum processor, which in this case is one of IBM's superconducting chips, using transmon qubits, let's say, the IBM QX4 . This is too small to have error correction, so let's ignore that. There are three parts to using a circuit model processor: initialisation, unitary evolution and measurement, which are explained in more detail below. Before that, What is a transmon? Take a superconducting loop to allow for Cooper pairs and add one or two Josephson junctions to give a Cooper pair box island in the region between the two Josephson junctions with Josephson coupling energy $E_J = I_c\Phi_0/2\pi$, where the magnetic flux quantum $\Phi_0 = h/2e$ and $I_c$ is the critical current of the junction. Applying a voltage $V_g$ to this box gives a 'gate capacitance' $C_g$ and makes this a charge qubit . For the Coulomb energy of a single Cooper pair $E_C = \left(2e\right)^2/2C$, where $C$ is the sum of the total capacitance of the island. The Hamiltonian of such a system is given by $$H = E_C\left(n - n_g\right)^2 - E_J\cos\phi,$$ where $n$ is the number of Cooper pairs, $\phi$ is the phase change across the junction and $n_g = C_gV_g/2e$. When performing unitary operations, only the two lowest states of the system are considered, $\left|n\right\rangle = \left|0\right\rangle$ and $\left|n\right\rangle = \left|1\right\rangle$ with respective energies $E_0 =\hbar\omega_0$ and $E_1 = \hbar\omega_1$ and qubit frequency $\omega = \omega_1-\omega_0$, describing the computational basis of a qubit. A typical charge qubit could have $E_C = 5E_J$. 
Adding a large shunting capacitance and increasing the gate capacitance switches this ratio, so that $E_J\gg E_C$ and we have a transmon. This has the advantage of longer coherence times, at a cost of reduced anharmonicity (where energy levels beyond the first two are closer together, potentially causing leakage). Finally, we get to the main question: How do we initialise, evolve and measure a transmon? Single qubit unitary evolution: Applying a microwave pulse $\mathcal E\left(t\right) = \mathcal E_x\left(t\right)\cos\left(\omega_dt\right) + \mathcal E_y\left(t\right)\sin\left(\omega_dt\right)$ for $0<t<t_g$ of frequency $\omega_d$ and making the rotating wave approximation gives the Hamiltonian of the qubit states (in the ideal case) as $$H =\hbar \begin{pmatrix}\omega_1-\omega_d & \frac 12\mathcal E_x\left(t\right) - \frac i2\mathcal E_y\left(t\right)\\ \frac 12\mathcal E_x\left(t\right) + \frac i2\mathcal E_y\left(t\right) & \omega_2-2\omega_d\end{pmatrix}$$ However, due to the lower anharmonicity, the microwave pulses have to be shaped to reduce leakage to higher energy levels in a process known as Derivative Removal by Adiabatic Gate (DRAG). By varying the pulse, different Hamiltonians can be achieved, which, depending on the duration of the pulse, can be used to implement different unitary operations on a single qubit. Measurement/readout: A microwave resonator, with resonance frequency $\omega_r$, can be coupled to the transmon using a capacitor. This interaction causes (decaying) Rabi oscillations to occur in the transmon-resonator system. When the coupling strength of cavity and qubit satisfies $g \ll \omega-\omega_r$, this is known as the dispersive regime. In this regime, the transmittance spectrum of the cavity is shifted by $\pm g^2/\left(\omega-\omega_r\right)$ depending on the state of the qubit, so applying a microwave pulse and analysing the transmittance and reflectance (by computer) can then be used to measure the qubit. Multiple qubit unitary evolution: This idea of coupling a qubit to a microwave resonator can be extended by coupling the resonator to another qubit. As in the single qubit gate case, timings of the coupling as well as microwave pulses can be used to allow the first qubit to couple to the cavity, which is then coupled to the second qubit, in order to perform certain 2-qubit gates. Higher energy levels can also be used to make certain gates easier to implement due to interactions between higher levels caused by the cavity. One such example is shown here, where the cavity causes an interaction between the states $\left|2\right>\left|0\right>$ and $\left|1\right>\left|1\right>$. An avoided crossing between these states means that a 2-qubit phase gate can be implemented, although in general 2-qubit gates are implemented less well (have a lower fidelity) than single qubit ones. Initialisation: Readout, potentially followed by a single qubit Pauli $X$ gate (on each qubit measured to be in state $\left|1\right\rangle$) to ensure that all qubits start in state $\left|0\right\rangle$.
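As a small numerical aside — purely illustrative, with made-up energy scales, and using the convention above in which $E_C$ is the Cooper-pair charging energy — diagonalising the charge-basis Hamiltonian shows how the transmon regime $E_J\gg E_C$ trades anharmonicity for charge-noise insensitivity, compared with the charge-qubit regime $E_C = 5E_J$ mentioned earlier:

import numpy as np

def spectrum(EC, EJ, ng=0.0, ncut=20):
    """Eigenvalues of H = EC*(n - ng)^2 - EJ*cos(phi), truncated to charge states |n| <= ncut."""
    n = np.arange(-ncut, ncut + 1)
    Hmat = np.diag(EC * (n - ng) ** 2)                  # charging term (diagonal in charge basis)
    Hmat += np.diag([-EJ / 2] * (len(n) - 1), k=1)      # -EJ*cos(phi) couples n and n+1
    Hmat += np.diag([-EJ / 2] * (len(n) - 1), k=-1)
    return np.sort(np.linalg.eigvalsh(Hmat))

for label, EC, EJ in [("charge qubit, EC = 5 EJ", 5.0, 1.0),
                      ("transmon,     EJ = 50 EC", 1.0, 50.0)]:
    E = spectrum(EC, EJ)
    f01, f12 = E[1] - E[0], E[2] - E[1]
    print(f"{label}: f01 = {f01:.3f}, anharmonicity f12 - f01 = {f12 - f01:+.3f}")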
As a bonus , it seems a little pointless to go through all that in order to implement gates that could be done on a classical computer anyway, so it turns out that it's possible to approximately implement a quantum adder , which adds two quantum (as opposed to classical) states, with some error, on one of IBM's processors.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1289", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/26/" ] }
1,367
I come from a non-physics background and I am very much interested in pursuing Quantum Computing - especially how to program them. Any guidance on how to get started will be very helpful.
You could start with an introduction to quantum computers such as this one from Voxxed Days Vienna 2018 - it's intended for people with a programming background but little to no prior knowledge in quantum mechanics. After that you can check out the guides in the IBM Quantum Experience or those for the Microsoft Quantum Development Kit . In addition to that, there are loads of videos on YouTube, for example, that can help you understand the topic more deeply.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1367", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/769/" ] }
1,385
This blogpost by Scott Aaronson is a very useful and simple explanation of Shor's algorithm . I'm wondering if there is such an explanation for the second most famous quantum algorithm: Grover's algorithm to search an unordered database of size $O(n)$ in $O(\sqrt{n})$ time. In particular, I'd like to see some understandable intuition for the initially surprising result of the running time!
There is a good explanation by Craig Gidney here (he also has other great content, including a circuit simulator, on his blog). Essentially, Grover's algorithm applies when you have a function which returns True for one of its possible inputs, and False for all the others. The job of the algorithm is to find the one that returns True. To do this we express the inputs as bit strings, and encode these using the $|0\rangle$ and $|1\rangle$ states of a string of qubits. So the bit string 0011 would be encoded in the four qubit state $|0011\rangle$, for example. We also need to be able to implement the function using quantum gates. Specifically, we need to find a sequence of gates that will implement a unitary $U$ such that $U | a \rangle = - | a \rangle, \,\,\,\,\,\,\,\,\,\,\,\,\, U | b \rangle = | b \rangle$ where $a$ is the bit string for which the function would return True and $b$ is any for which it would return False. If we start with a superposition of all possible bit strings, which is pretty easy to do by just Hadamarding everything, all inputs start off with the same amplitude of $\frac{1}{\sqrt{2^n}}$ (where $n$ is the length of the bit strings we are searching over, and therefore the number of qubits we are using). But if we then apply the oracle $U$, the amplitude of the state we are looking for will change to $-\frac{1}{\sqrt{2^n}}$. This is not any easily observable difference, so we need to amplify it. To do this we use the Grover Diffusion Operator, $D$. The effect of this operator is essentially to look at how each amplitude is different from the mean amplitude, and then invert this difference. So if a certain amplitude was a certain amount larger than the mean amplitude, it will become that same amount less than the mean, and vice-versa. Specifically, if you have a superposition of bit strings $b_j$, the diffusion operator has the effect $D: \,\,\,\, \sum_j \alpha_j \, | b_j \rangle \,\,\,\,\,\, \mapsto \,\,\,\,\,\, \sum_j (2\mu \, - \, \alpha_j) \, | b_j \rangle$ where $\mu = \frac{1}{2^n}\sum_j \alpha_j$ is the mean amplitude. So any amplitude $\mu + \delta$ gets turned into $\mu - \delta$. To see why it has this effect, and how to implement it, see these lecture notes. Most of the amplitudes will be a tiny bit larger than the mean (due to the effect of the single $-\frac{1}{\sqrt{2^n}}$), so they will become a tiny bit less than the mean through this operation. Not a big change. The state we are looking for will be affected more strongly. Its amplitude is a lot less than the mean, and so will become a lot greater than the mean after the diffusion operator is applied. The end effect of the diffusion operator is therefore to cause an interference effect on the states which skims an amplitude of $\frac{1}{\sqrt{2^n}}$ from all the wrong answers and adds it to the right one. By repeating this process, we can quickly get to the point where our solution stands out from the crowd so much that we can identify it. Of course, this all goes to show that all the work is done by the diffusion operator. Searching is just an application that we can connect to it. See the answers to other questions for details on how the functions and diffusion operator are implemented.
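A small numerical sketch of exactly this amplitude bookkeeping (10 qubits and an arbitrarily chosen marked index; the oracle and diffusion are applied directly to the amplitude vector rather than built from gates):

import numpy as np

n = 10                                   # 10 qubits -> N = 1024 possible inputs
N = 2 ** n
marked = 123                             # the one input the oracle "recognises"

amps = np.full(N, 1 / np.sqrt(N))        # uniform superposition after Hadamarding everything
steps = int(np.pi / 4 * np.sqrt(N))      # ~ (pi/4) * sqrt(N) Grover iterations

for _ in range(steps):
    amps[marked] *= -1                   # oracle: flip the sign of the marked amplitude
    mean = amps.mean()
    amps = 2 * mean - amps               # diffusion: invert every amplitude about the mean

print("iterations:", steps)
print("probability of measuring the marked item:", amps[marked] ** 2)   # ~0.999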
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1385", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/253/" ] }
1,451
I would like to know how a job for a D-Wave device is written in code and submitted to the device. In the answer it would be best to see a specific example of this for a simple problem. I guess that the "Hello World" of a D-Wave device would be something like finding the ground states of a simple 2D Ising model , since this is the kind of problem directly realized by the hardware. So perhaps this would be a nice example to see. But if those with expertise thing an alternative example would be suitable, I'd be happy to see an alternative.
The 'Hello World' equivalent in the D-Wave world is the 2D checkerboard example. In this example, you are given the following square graph with 4 nodes: Let's define that we colour vertex $\sigma_{i}$ black if $\sigma_{i} = -1$ and white if $\sigma_{i} = +1$. The goal is to create a checkerboard pattern with the four vertices in the graph. There is various ways of defining $h$ and $J$ to achieve this result. First of all, there are two possible solutions to this problem: The D-Wave quantum annealer minimizes the Ising Hamiltonian that we define and it is important to understand the effect of the different coupler settings. Consider for example the $J_{0,1}$ coupler: If we set it to $J_{0,1}=-1$, the Hamiltonian is minimized if both qubits take the same value. We say negative couplers correlate . Whereas if we set it to $J_{0,1}=+1$, the Hamiltonian is minimized if the two qubits take opposite values. Thus, positive couplers anti-correlate . In the checkerboard example, we want to anti-correlate each pair of neighbouring qubits which gives rise to the following Hamiltonian: $$H = \sigma_{0}\sigma_{1} + \sigma_{0}\sigma_{2} + \sigma_{1}\sigma_{3} + \sigma_{2}\sigma_{3}$$ For the sake of demonstration, we also add a bias term on the $0$-th qubit such that we only get solution #1. This solution requires $\sigma_{0}=-1$ and we therefore set its bias $h_{0}=1$. The final Hamiltonian is now: $$H = \sigma_{0} + \sigma_{0}\sigma_{1} + \sigma_{0}\sigma_{2} + \sigma_{1}\sigma_{3} + \sigma_{2}\sigma_{3}$$ So let's code it up! NOTE: You DO NEED access to D-Wave's Cloud Service for anything to work. First of all, make sure you have the dwave_sapi2 ( https://cloud.dwavesys.com/qubist/downloads/ ) Python package installed. Everything is going to be Python 2.7 since D-Wave currently doesn't support any higher Python version. That being said, let's import the essentials: from dwave_sapi2.core import solve_ising from dwave_sapi2.embedding import find_embedding, embed_problem, unembed_answer from dwave_sapi2.util import get_hardware_adjacency from dwave_sapi2.remote import RemoteConnection In order to connect to the D-Wave Solver API you will need a valid API token for their SAPI solver, the SAPI URL and you need to decide which quantum processor you want to use: DWAVE_SAPI_URL = 'https://cloud.dwavesys.com/sapi' DWAVE_TOKEN = [your D-Wave API token] DWAVE_SOLVER = 'DW_2000Q_VFYC_1' I recommend using the D-Wave 2000Q Virtual Full Yield Chimera (VFYC) which is a fully functional chip without any dead qubits! Here's the Chimera chip layout: At this point I am splitting the tutorial into two distinct pieces. In the first section, we are manually embedding the problem onto the Chimera hardware graph and in the second section we are using D-Wave's embedding heuristics to find a hardware embedding. Manual embedding The unit cell in the top left corner on the D-Wave 2000Q chip layout above looks like this: Note, that not all couplers are visualized in this image. As you can see, there is no coupler between qubit $0$ and qubit $1$ which we would need to directly implement our square graph above. That's why we are now redefining $0\rightarrow0$, $1\rightarrow4$, $2\rightarrow7$ and $3\rightarrow3$. We then go on and define $h$ as a list and $J$ as a dictionary: J = {(0,4): 1, (4,3): 1, (3,7): 1, (7,0): 1} h = [-1,0,0,0,0,0,0,0,0] $h$ has 8 entries since we use qubits 0 to 7. 
We now establish connection to the Solver API and request the D-Wave 2000Q VFYC solver: connection = RemoteConnection(DWAVE_SAPI_URL, DWAVE_TOKEN) solver = connection.get_solver(DWAVE_SOLVER) Now, we can define the number of readouts and choose answer_mode to be "histogram" which already sorts the results by the number of occurrences for us. We are now ready to solve the Ising instance with the D-Wave quantum annealer: params = {"answer_mode": 'histogram', "num_reads": 10000} results = solve_ising(solver, h, J, **params) print results You should get the following result: { 'timing': { 'total_real_time': 1655206, 'anneal_time_per_run': 20, 'post_processing_overhead_time': 13588, 'qpu_sampling_time': 1640000, 'readout_time_per_run': 123, 'qpu_delay_time_per_sample': 21, 'qpu_anneal_time_per_sample': 20, 'total_post_processing_time': 97081, 'qpu_programming_time': 8748, 'run_time_chip': 1640000, 'qpu_access_time': 1655206, 'qpu_readout_time_per_sample': 123 }, 'energies': [-5.0], 'num_occurrences': [10000], 'solutions': [ [1, 3, 3, 1, -1, 3, 3, -1, { lots of 3 's that I am omitting}]]} As you can see we got the correct ground state energy ( energies ) of $-5.0$. The solution string is full of $3$'s which is the default outcome for unused/unmeasured qubits and if we apply the reverse transformations - $0\rightarrow0$, $4\rightarrow1$, $7\rightarrow2$ and $3\rightarrow3$ - we get the correct solution string $[1, -1, -1, 1]$. Done! Heuristic embedding If you start creating larger and larger Ising instances you will not be able to perform manual embedding. So let's suppose we can't manually embed our 2D checkerboard example. $J$ and $h$ then remain unchanged from our initial definitions: J = {(0,1): 1, (0,2): 1, (1,3): 1, (2,3): 1} h = [-1,0,0,0] We again establish the remote connection and get the D-Wave 2000Q VFYC solver instance: connection = RemoteConnection(DWAVE_SAPI_URL, DWAVE_TOKEN) solver = connection.get_solver(DWAVE_SOLVER) In order to find an embedding of our problem, we need to first get the adjacency matrix of the current hardware graph: adjacency = get_hardware_adjacency(solver) Now let's try to find an embedding of our problem: embedding = find_embedding(J.keys(), adjacency) If you are dealing with large Ising instances you might want to search for embeddings in multiple threads (parallelized over multiple CPUs) and then select the embedding with the smallest chain length! A chain is when multiple qubits are forced to act as a single qubit in order to increase the degree of connectivity. However, the longer the chain the more likely that it breaks. And broken chains give bad results! We are now ready to embed our problem onto the graph: [h, j0, jc, embeddings] = embed_problem(h, J, embedding, adjacency) j0 contains the original couplings that we defined and jc contains the couplings that enforce the integrity of the chains (they correlate the qubits within the chains). Thus, we need to combine them again into one big $J$ dictionary: J = j0.copy() J.update(jc) Now, we're ready to solve the embedded problem: params = {"answer_mode": 'histogram', "num_reads": 10000} raw_results = solve_ising(solver, h, J, **params) print 'Lowest energy found: {}'.format(raw_results['energies']) print 'Number of occurences: {}'.format(raw_results['num_occurrences']) The raw_results will not make sense to us unless we unembed the problem. 
In case some chains broke, we fix them through a majority vote, as specified by the optional argument broken_chains : unembedded_results = unembed_answer(raw_results['solutions'], embedding, broken_chains='vote') print 'Solution string: {}'.format(unembedded_results) If you run this, you should get the correct result in all readouts: Lowest energy found: [-5.0] Number of occurences: [10000] Solution string: [[1, -1, -1, 1]] I hope this answered your question and I highly recommend checking out all the additional parameters that you can pass to the solve_ising function to improve the quality of your solutions, such as num_spin_reversal_transforms or postprocess .
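(Side note: if you want to sanity-check the logical 4-variable problem without any hardware access, a classical brute force over all $2^4$ spin configurations reproduces the same answer. This is only an illustrative sketch in plain Python with my own variable names; nothing here talks to the D-Wave API.)

from itertools import product

# logical (unembedded) checkerboard problem, same h and J as above
h = [-1, 0, 0, 0]
J = {(0, 1): 1, (0, 2): 1, (1, 3): 1, (2, 3): 1}

def energy(s):
    # Ising energy: sum_i h_i s_i + sum_(i,j) J_ij s_i s_j
    return sum(h[i] * s[i] for i in range(4)) + \
           sum(c * s[i] * s[j] for (i, j), c in J.items())

best = min(product([-1, 1], repeat=4), key=energy)
print(best, energy(best))   # (1, -1, -1, 1) with energy -5, matching the annealer output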
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1451", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/409/" ] }
1,474
From this question, I gathered that the main quantum computing programming languages are Q# and QISKit . What other programming languages are available for programming quantum computers? Are there certain benefits to choosing particular ones? EDIT: I am looking for programming languages, not emulators. Emulators simulate things. Programming languages are a method of writing instructions (either for real objects or for emulators). There may be a single language that works for multiple emulators and vice versa.
Wikipedia list of Quantum Computer programming languages (This answer is not a copy of that webpage, it's more updated and with verified links. In some cases the author's paper or website link is added.) Quantum instruction sets Quil - An instruction set architecture for quantum computing that first introduced a shared quantum/classical memory model. See also PyQuil . OpenQASM - The intermediate representation introduced by IBM for use with their Quantum Experience . Quantum programming languages Imperative languages QCL - One of the first implemented quantum programming languages. Quantum pseudocode - [Not actually a language, but a nice way to represent quantum algorithms and operations.] E. H. Knill. "Conventions for Quantum Pseudocode", unpublished, LANL report LAUR-96-2724 (PDF Source 1 , 2 ), Search at arXiv for all papers referencing Knill's paper. Q|SI> - Original paper in Chinese with English abstract. English version at arXiv: " Q|SI>: A Quantum Programming Environment ". Q language - Software for the Q language . qGCL - " Alternation in Quantum Programming: From Superposition of Data to Superposition of Programs ". QMASM - Specific to D-Wave systems. QMASM Documentation . Author Scott Pakin's edif2qmasm and QMASM webpage at LANL. Functional languages QFC and QPL - Author's website . QML - Main site: http://sneezy.cs.nott.ac.uk/QML/ (not responding, a month later), Archive.Org copy of sneezy.cs.nott.ac.uk , author's PhD thesis: " A functional quantum programming language " (PDF). LIQUi|> - Extension to F# (F Sharp). Quantum lambda calculi - Wikipedia lists a few versions . Quipper - A Haskell based scalable functional programming language for quantum computing. See also Proto-Quipper . A Talk by Peter Selinger (FSCD 2018) titled " Challenges in Quantum Programming Languages " ( .PDF ) discusses these languages. Multi-Paradigm languages Q# (Q Sharp) - A domain-specific programming language used for expressing quantum algorithms. It was initially released to the public by Microsoft as part of the Quantum Development Kit. Also available are Microsoft Quantum Katas , a series of self-paced tutorials aimed at teaching elements of quantum computing and Q# programming at the same time. Strawberry Fields (from XanduAI ) is a full-stack Python library for designing, simulating, and optimizing continuous variable quantum optical circuits. The website Quantum Computing Report has a Tools webpage listing over a dozen links, some new and some repeating the above list. See also QuanTiki's webpage: " List of QC simulators ", for a huge list of simulators and programming languages based on: C/C++, CaML, OCaml, F#, along with GUI based, Java, JavaScript, Julia, Maple, Mathematica, Maxima, Matlab/Octave, .NET, Perl/PHP, Python, Scheme/Haskell/LISP/ML and other online services providing calculators, compilers, simulators, and toolkits, etc. Are there certain benefits to choosing particular ones? If you plan on using a particular quantum computer then one would hope that the programming language developed by the manufacturer is both best suited for that particular machine and well supported. Choosing a language with a larger following means that there are more Forums available and hopefully more bug fixes and support. Unfortunately, that leaves some great niche products to struggle to gain a user base. Trying to find one language that is both powerful/expressive and supported across various platforms is the trick, the answer is an opinion ATM. 
An evaluation of four software platforms: Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit is offered by Ryan LaRose in " Overview and Comparison of Gate Level Quantum Software Platforms " (6 Jul 2018). Updates: Google's Cirq and OpenFermion-Cirq: " Google's AI Blog - Announcing Cirq: An Open Source Framework for NISQ Algorithms ". D-Wave's Leap and Ocean SDK allows access to a D-Wave 2000Q™ System in a cloud environment with access to a 2000+ qubit quantum annealing machine to test and run workloads for free, assuming the core algorithms used go into the open source pool. Apply to login at D-Wave's Leap In webpage. Rigetti Computing's Quantum Cloud Service (QCS) offers a Quantum Machine Image, a virtualized programming, and execution environment that is pre-configured with Forest 2.0, to access up to 16 qubits of a 128 qubit computer. Stay tuned for information on Fujitsu's Digital Annealer , an architecture capable of performing computations some 10,000 times faster than a conventional computer. If they eventually provide a development environment that is cross-compatible with true quantum computers these two paragraphs will remain in this answer, otherwise I will remove them. While their silicon chip is not quantum in nature Fujitsu has partnered with 1Qbit to develop what is described as a " Quantum Inspired AI Cloud Service ", whether their Digital Annealer quacks like a duck (anneals like a D-Wave, and uses compatible code) remains to be seen. Visit here to access the Fujitsu Digital Annealer Technical Service . University of Pennsylvania's QWIRE ( choir ) is a quantum circuit language and formal verification tool, it has a GitHub webpage . A review of: Cirq, Cliffords.jl, dimod, dwave-system, FermiLib, Forest (pyQuil & Grove), OpenFermion, ProjectQ, PyZX, QGL.jl, Qbsolv, Qiskit Terra and Aqua, Qiskit Tutorials, and Qiskit.js, Qrack, Quantum Fog, Quantum++, Qubiter, Quirk, reference-qvm, ScaffCC, Strawberry Fields, XACC, and finally XACC VQE is offered in the paper: " Open source software in quantum computing " (Dec 21 2018), by Mark Fingerhuth, Tomáš Babej, and Peter Wittek. I will return to this answer from time to time to make updates, without excessive bumping .
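To get a rough feel for how the same toy program looks in two of the ecosystems above, here is a Bell-state circuit sketched in Qiskit and in pyQuil. Treat this only as an illustrative sketch assuming reasonably recent versions of both packages are installed; the exact APIs do change between releases.

# Qiskit (IBM): build and print a 2-qubit Bell-state circuit
from qiskit import QuantumCircuit
qc = QuantumCircuit(2, 2)
qc.h(0)            # Hadamard on qubit 0
qc.cx(0, 1)        # CNOT entangling qubits 0 and 1
qc.measure([0, 1], [0, 1])
print(qc.draw())

# pyQuil (Rigetti): the same circuit expressed as a Quil program
from pyquil import Program
from pyquil.gates import H, CNOT
p = Program(H(0), CNOT(0, 1))
print(p)           # prints the Quil instructions "H 0" and "CNOT 0 1"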
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1474", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/1289/" ] }
1,584
Is there a general statement about what kinds of problems can be solved more efficiently using quantum computers (quantum gate model only)? Do the problems for which an algorithm is known today have a common property? As far as I understand, quantum computing helps with the hidden subgroup problem (Shor); Grover's algorithm helps speed up search problems. I have read that quantum algorithms can provide speed-up if you look for a 'global property' of a function (Grover/Deutsch). Is there a more concise and correct statement about where quantum computing can help? Is it possible to give an explanation why quantum physics can help there (preferably something deeper than 'interference can be exploited')? And why it possibly will not help for other problems (e.g. for NP-complete problems)? Are there relevant papers that discuss just that? I had asked this question before over on cstheory.stackexchange.com but it may be more suitable here.
On computational helpfulness in general Without perhaps realising it, you are asking a version of one of the most difficult questions you can possibly ask about theoretical computer science. You can ask the same question about classical computers, only instead of asking whether adding 'quantumness' is helpful, you can ask: Is there a concise statement about where randomised algorithms can help? It's possible to say something very vague here — if you think that solutions are plentiful (or that the number of solutions to some sub-problem are plentiful) but that it might be difficult to systematically construct one, then it's helpful to be able to make choices at random in order to get past the problem of systematic construction. But beware, sometimes the reason why you know that there are plentiful solutions to a sub-problem is because there is a proof using the probabilistic method . When this is the case, you know that the number of solutions is plentiful by reduction to what is in effect a helpful randomised algorithm! Unless you have another way of justifying the fact that the number of solutions is plentiful for those cases, there is no simple description of when a randomised algorithm can help. And if you have high enough demands of 'helpfulness' (a super-polynomial advantage), then what you are asking is whether $\mathsf P \ne \mathsf{BPP}$, which is an unsolved problem in complexity theory. Is there a concise statement about where parallelised algorithms can help? Here things may be slightly better. If a problem looks as though it can be broken down into many independent sub-problems, then it can be parallelised — though this is a vague, "you'll know it when you see it" sort of criterion. The main question is, will you know it when you see it? Would you have guessed that testing feasibility of systems of linear equations over the rationals is not only parallelisable, but could be solved using $O(\log^2 n)$-depth circuits [c.f. Comput. Complex. 8 (pp. 99--126), 1999 ]? One way in which people try to paint a big-picture intuition for this is to approach the question from the opposite direction, and say when it is known that a parallelised algorithm won't help. Specifically, it won't help if the problem has an inherently sequential aspect to it. But this is circular, because 'sequential' just means that the structure that you can see for the problem is one which is not parallelised. Again, there is no simple, comprehensive description of when a parallelised algorithm can help. And if you have high enough demands of 'helpfulness' (a poly-logarithmic upper bound on the amount of time, assuming polynomial parallelisation), then what you are asking is whether $\mathsf P \ne \mathsf{NC}$ , which is again an unsolved problem in complexity theory. The prospects for "concise and correct descriptions of when [X] is helpful" are not looking too great by this point. Though you might protest that we're being too strict here: on the grounds of demanding more than a polynomial advantage, we couldn't even claim that non-deterministic Turing machines were 'helpful' (which is clearly absurd). We shouldn't demand such a high bar — in the absence of techniques to efficiently solve satisfiability, we should at least accept that if we somehow could obtain a non-deterministic Turing machine, we would indeed find it very very helpful . But this is different from being able to characterise precisely which problems we would find it helpful for. 
On the helpfulness of quantum computers Taking a step back, is there anything we can say about where quantum computers are helpful? We can say this: a quantum computer can only do something interesting if it is taking advantage of the structure of a problem, which is unavailable to a classical computer. (This is hinted at by the remarks about a "global property" of a problem, as you mention). But we can say more than this: problems solved by quantum computers in the unitary circuit model will instantiate some features of that problem as unitary operators . The features of the problem which are unavailable to classical computers will be all those which do not have a (provably) statistically significant relationship to the standard basis. In the case of Shor's algorithm, this property is the eigenvalues of a permutation operator which is defined in terms of multiplication over a ring. In the case of Grover's algorithm, this property is whether the reflection about the set of marked states commutes with the reflection about the uniform superposition — this determines whether the Grover iterator has any eigenvalues which are not $\pm 1$. It is not especially surprising to see that in both cases, the information relates to eigenvalues and eigenvectors. This is an excellent example of a property of an operator which need not have any meaningful relationship to the standard basis. But there is no particular reason why the information has to be an eigenvalue. All that is needed is to be able to describe a unitary operator, encoding some relevant feature of the problem which is not obvious from inspection of the standard basis, but is accessible in some other easily described way. In the end, all this says is that a quantum computer is useful when you can find a quantum algorithm to solve a problem. But at least it's a broad outline of a strategy for finding quantum algorithms, which is no worse than the broad outlines of strategies I've described above for randomised or parallelised algorithms. Remarks on when a quantum computer is 'helpful' As other people have noted here, "where quantum computing can help" depends on what you mean by 'help'. Shor's algorithm is often trotted out in such discussions, and once in a while people will point out that we don't know that factorisation isn't solvable in polynomial time. So do we actually know that "quantum computing would be helpful for factorising numbers"? Aside from the difficulty in realising quantum computers, I think here the reasonable answer is 'yes'; not because we know that you can't factorise efficiently using conventional computers, but because we don't know how you would do it using conventional computers. If quantum computers help you to do something that you have no better approach to doing, it seems to me that this is 'helping'. You mention Grover's algorithm, which yields a well-known square-root speedup over brute-force search. This is only a polynomial speedup, and a speedup over a naive classical algorithm — we have better classical algorithms than brute-force search, even for NP-complete problems. For instance, in the case of 3-SAT instances with a single satisfying assignment, the PPSZ algorithm has a runtime of $O(2^{0.386\,n})$, which outperforms Grover's original algorithm. So is Grover's algorithm 'helpful'? Perhaps Grover's algorithm as such is not especially helpful.
However, it may be helpful if you use it to elaborate more clever classical strategies beyond brute-force search: using amplitude amplification , the natural generalisation of Grover's algorithm to more general settings, we can improve on the performance of many non-trivial algorithms for SAT (see e.g. [ACM SIGACT News 36 (pp.103--108), 2005 — free PDF link ]; hat tip to Martin Schwarz who pointed me to this reference in the comments). As with Grover's algorithm, amplitude amplification only yields polynomial speed-ups: but speaking practically, even a polynomial speedup may be interesting if it isn't washed out by the overhead associated with protecting quantum information from noise.
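To make the Grover/amplitude-amplification discussion a little more concrete, here is a tiny classical statevector simulation (my own illustrative sketch, not taken from the references above): for an unstructured search space of size $N$ with one marked item, roughly $\frac{\pi}{4}\sqrt{N}$ iterations of "oracle plus inversion about the mean" push almost all of the amplitude onto the marked item.

import numpy as np

N = 64                                   # size of the search space
marked = 42                              # index of the single marked item (arbitrary choice)

psi = np.full(N, 1 / np.sqrt(N))         # uniform superposition
k = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) Grover iterations

for _ in range(k):
    psi[marked] *= -1                    # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi           # diffusion: inversion about the mean

print(k, psi[marked] ** 2)               # 6 iterations, success probability ~ 0.997

A classical brute-force search over the same space needs on the order of $N$ queries, which is the square-root separation being discussed — but, as noted above, this says nothing about beating the best structured classical algorithms.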
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1584", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/1185/" ] }
1,654
This can be seen as the software complement to How does a quantum computer do basic math at the hardware level? The question was asked by a member of the audience at the 4th meeting of the Spanish Network on Quantum Information and Quantum Technologies . The context the person gave was: " I'm a materials scientist. You are introducing advanced sophisticated theoretical concepts, but I have trouble picturing the practical operation of a quantum computer for a simple task. If I was using diodes, transistors etc I could easily figure out myself the classical operations I need to run to add 1+1. How would you do that, in detail, on a quantum computer? ".
As per the linked question, the simplest solution is just to get the classical processor to perform such operations if possible . Of course, that may not be possible, so we want to create an adder . There are two types of single-bit adder - the half-adder and the full adder . The half-adder takes the inputs $A$ and $B$ and outputs the 'sum' (XOR operation) $S = A\oplus B$ and the 'carry' (AND operation) $C = A\cdot B$. A full adder also has the 'carry in' $C_{in}$ input and the 'carry out' output $C_{out}$, replacing $C$. This returns $S=A\oplus B\oplus C_{in}$ and $C_{out} = C_{in}\cdot\left(A+B\right) + A\cdot B$. Quantum version of the half-adder Looking at the CNOT gate on qubit register $A$ controlling register $B$: \begin{align*}\text{CNOT}_{A\rightarrow B}\left|0\right>_A\left|0\right>_B &= \left|0\right>_A\left|0\right>_B \\ \text{CNOT}_{A\rightarrow B}\left|0\right>_A\left|1\right>_B &= \left|0\right>_A\left|1\right>_B \\\text{CNOT}_{A\rightarrow B}\left|1\right>_A\left|0\right>_B &= \left|1\right>_A\left|1\right>_B \\\text{CNOT}_{A\rightarrow B}\left|1\right>_A\left|1\right>_B &= \left|1\right>_A\left|0\right>_B, \\ \end{align*} which immediately gives the output of the $B$ register as $A\oplus B = S$. However, we have yet to compute the carry, and the state of the $B$ register has changed, so we also need to perform the AND operation. This can be done using the 3-qubit Toffoli (controlled-CNOT/CCNOT) gate, with registers $A$ and $B$ as control registers and the third register $\left(C\right)$ initialised in state $\left|0\right>$, giving the output of the third register as $A\cdot B = C$. Implementing Toffoli on registers $A$ and $B$ controlling register $C$, followed by CNOT with $A$ controlling $B$, gives the output of register $B$ as the sum and the output of register $C$ as the carry. A quantum circuit diagram of the half-adder is shown in figure 1. Figure 1: Circuit Diagram of a half-adder, consisting of Toffoli followed by CNOT. Input bits are $A$ and $B$, giving the sum $S$ with carry out $C$. Quantum version of the full adder Shown in figure 2, a simple way of doing this for single bits is by using $4$ qubit registers, here labelled $A$, $B$, $C_{in}$ and $1$, where $1$ starts in state $\left|0\right>$, so the initial state is $\left|A\right>\left|B\right>\left|C_{in}\right>\left|0\right>$: Apply Toffoli using $A$ and $B$ to control $1$: $\left|A\right>\left|B\right>\left|C_{in}\right>\left|A\cdot B\right>$ CNOT with $A$ controlling $B$: $\left|A\right>\left|A\oplus B\right>\left|C_{in}\right>\left|A\cdot B\right>$ Toffoli with $B$ and $C_{in}$ controlling $1$: $\left|A\right>\left|A\oplus B\right>\left|C_{in}\right>\left|A\cdot B\oplus\left(A\oplus B\right)\cdot C_{in} = C_{out}\right>$ CNOT with $B$ controlling $C_{in}$: $\left|A\right>\left|A\oplus B\right>\left|A\oplus B\oplus C_{in} = S\right>\left|C_{out}\right>$ A final step to get back the inputs $A$ and $B$ is to apply a CNOT with register $A$ controlling register $B$, giving the final output state as $$\left|\psi_{out}\right> = \left|A\right>\left|B\right>\left|S\right>\left|C_{out}\right>$$ This gives the output of register $C_{in}$ as the sum and the output of register $1$ as carry out. Figure 2: Circuit diagram of a full adder. Input bits are $A$ and $B$ along with a carry in $C_{in}$, giving the sum $S$ with carry out $C_{out}$.
Quantum version of the ripple carry adder A simple extension of the full adder is a ripple carry adder, named as it 'ripples' the carry out to become the carry in of the next adder in a series of adders, allowing for arbitrarily-sized (if slow) sums. A quantum version of such an adder can be found e.g. here Actual implementation of a half-adder For many systems, implementing a Toffoli gate is far from as simple as implementing a single qubit (or even two qubit) gate. This answer gives a way of decomposing Toffoli into multiple smaller gates. However, in real systems, such as IBMQX , there can also be issues on which qubits can be used as targets. As such, a real life implementation on IBMQX2 looks like this: Figure 3: Implementation of a half-adder on IBMQX2. In addition to decomposing the Toffoli gate into multiple smaller gates, additional gates are required as not all qubit registers can be used as targets. Registers q[0] and q[1] are added to get the sum in q[1] and the carry in q[2]. In this case, the result q[2]q[1] should be 10. Running this on the processor gave the correct result with a probability of 42.8% (although it was still the most likely outcome).
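If you want to check the half-adder logic without access to real hardware, the Toffoli-then-CNOT construction from figure 1 can be verified with a small matrix simulation. The following is just an illustrative sketch in Python/NumPy with my own naming, not the IBMQX code:

import numpy as np

def basis(a, b, c):
    # |a b c> as an 8-dimensional state vector, with qubit A most significant
    v = np.zeros(8)
    v[(a << 2) | (b << 1) | c] = 1.0
    return v

# 8x8 permutation matrices acting on qubits ordered (A, B, C)
CCX = np.eye(8)
CCX[[6, 7]] = CCX[[7, 6]]          # Toffoli: flip C exactly when A = B = 1

CNOT_AB = np.eye(8)
CNOT_AB[[4, 6]] = CNOT_AB[[6, 4]]  # flip B when A = 1 (C = 0 block)
CNOT_AB[[5, 7]] = CNOT_AB[[7, 5]]  # flip B when A = 1 (C = 1 block)

half_adder = CNOT_AB @ CCX          # Toffoli first, then CNOT, as in figure 1

for A in (0, 1):
    for B in (0, 1):
        out = half_adder @ basis(A, B, 0)
        idx = int(np.argmax(out))          # output is still a computational basis state
        S, C = (idx >> 1) & 1, idx & 1     # register B holds the sum, register C the carry
        print(A, B, '->', 'sum', S, 'carry', C)

The printout reproduces the classical half-adder truth table: the sum is $A\oplus B$ and the carry is $A\cdot B$.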
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1654", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/1847/" ] }
1,657
The term " Church of the Higher Hilbert Space " is used in quantum information frequently when analysing quantum channels and quantum states. What does this term mean (or, alternately, what does the term "Going to To the Church of the Higher Hilbert Space" mean)?
The church of the larger (or higher, or greater) Hilbert space is just a trick that some people like (myself included) for rewriting some operations. The most general operations that you can write down for a system are described by completely positive maps, while we like describing things with unitaries, which you can always do by moving from the original Hilbert space to a larger one (i.e. adding more qubits). Similarly, for measurements, you can turn general measurements into projective measurements by increasing the size of the Hilbert space. Also, mixed states can be described as pure states of a larger system. Example Consider the map that takes a qubit and with probability $1-p$ does nothing, and with probability $p$ applies the bit-flip operation $X$: $$ |\psi\rangle\langle\psi|\mapsto (1-p)|\psi\rangle\langle\psi|+pX|\psi\rangle\langle\psi|X $$ This is not unitary, but you can describe it as a unitary on two qubits (i.e. by moving from a Hilbert space of dimension 2 to a Hilbert space of dimension 4). This works by introducing an extra qubit in the state $\sqrt{1-p}|0\rangle+\sqrt{p}|1\rangle$ and performing a controlled-not controlled by the new qubit and targeting the original one. $$ |\psi\rangle(\sqrt{1-p}|0\rangle+\sqrt{p}|1\rangle)\mapsto|\Psi\rangle=\sqrt{1-p}|\psi\rangle|0\rangle+\sqrt{p}\left(X|\psi\rangle\right)|1\rangle. $$ To get back the action of the map on just the original qubit, you trace out the new qubit: $$ \rho={\rm Tr}_2\left(|\Psi\rangle\langle\Psi|\right)= (1-p)|\psi\rangle\langle\psi|+pX|\psi\rangle\langle\psi|X. $$ In other words, you just ignore the existence of the new qubit after you’ve implemented the unitary! Note that as well as demonstrating the church of the larger Hilbert space for operations, this also demonstrates it for states - the mixed state $\rho$ can be made into the pure state $|\Psi\rangle$ by increasing the size of the Hilbert space.
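For readers who like to check such identities numerically, the example above can be verified in a few lines of NumPy (an illustrative sketch with my own variable names): build the joint pure state, apply the CNOT controlled by the new qubit, trace the new qubit out, and compare with the mixed-state expression.

import numpy as np

p = 0.3
psi = np.array([0.6, 0.8])                      # arbitrary pure state |psi> of the original qubit
X = np.array([[0, 1], [1, 0]])

anc = np.array([np.sqrt(1 - p), np.sqrt(p)])    # new qubit: sqrt(1-p)|0> + sqrt(p)|1>
Psi = np.kron(psi, anc)                         # ordering: |original, new>

# CNOT controlled by the new qubit, targeting the original qubit: |s, c> -> |s XOR c, c>
CNOT = np.zeros((4, 4))
for s in (0, 1):
    for c in (0, 1):
        CNOT[((s ^ c) << 1) | c, (s << 1) | c] = 1

Psi = CNOT @ Psi
rho = np.trace(np.outer(Psi, Psi).reshape(2, 2, 2, 2), axis1=1, axis2=3)   # trace out the new qubit

target = (1 - p) * np.outer(psi, psi) + p * X @ np.outer(psi, psi) @ X
print(np.allclose(rho, target))                 # True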
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1657", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/429/" ] }
1,803
Plain and simple. Does Moore's law apply to quantum computing, or is it similar but with the numbers adjusted (ex. triples every 2 years). Also, if Moore's law doesn't apply, why do qubits change it?
If you take as definition " the number of transistors in a dense integrated circuit doubles about every two years ", it definitely does not apply: as answered here in Do the 'fundamental circuit elements' have a correspondence in quantum technologies? there exist no transistors-as-fundamental-components (nor does any fundamental parallel to transistors exist) in a quantum computer. If you take a more general definition " chip performance doubles approximately every 18 months ", the question makes more sense, and the answer is still that it does not apply , mainly because Moore's law is not a law of fundamental physics. Rather, in the early stages, it was an observation of an established industry. Later, as pointed out in a comment,[1] it has been described as functioning as an " evolving target " and as a " self-fulfilling prophecy " for that same industry. The key is that we do not have an established industry producing quantum computers. We are not in the quantum equivalent of 1965. Arguably we will move faster, but in many aspects we are rather in the XVII-XVIII centuries. For a perspective, check this timeline of computing hardware before 1950 . For a more productive answer, there are a few fundamental differences and a few possible parallels between classical and quantum hardware, in the context of Moore's law: For many architectures, in a certain sense we already work with the smallest possible component. We might develop ion traps (of a fixed size) fitting more ions, but we cannot develop smaller ions: they are of atomic size. Even when we are able to come up with tricks, such as Three addressable spin qubits in a molecular single-ion magnet , they are still fundamentally limited by quantum mechanics. We need control over 8 energy levels to control 3 qubits ($2^n$), which is doable, but not scalable. Precisely because the scalability issue is one of the hardest problems we have with quantum computers -not just having a larger number of qubits, but also being able to entangle them- it's dangerous to extrapolate from current progress. See for illustration the history of NMR quantum computers , which stalled after a very early string of successes. In theory, increasing the number of qubits in the device was trivial. In practice, every time you want to be able to control 1 more qubit you need to double the resolution of your machine, which becomes very unfeasible very quickly. If and when there exists an industry that relies on an evolving technology which is able to produce some kind of integrated quantum chips, then yes, at that point we will be able to draw a real parallel to Moore's law. For a taste of how far we are from that point, see Are there any estimates on how complexity of quantum engineering scales with size? [1] Thanks to Sebastian Mach for that insight and wikipedia link . For more details on that see Getting New Technologies Together: Studies in Making Sociotechnical Order edited by Cornelis Disco, Barend van der Meulen, p. 206 and Gordon Moore says aloha to Moore's Law .
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1803", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/1348/" ] }
1,806
Quantum state teleportation is the quantum information protocol where a qubit is transferred between two parties using an initial shared entangled state, Bell measurement, classical communication and local rotation. Apparently, there is also something called quantum gate teleportation. What is quantum gate teleportation and what is it used for? I am particularly interested in possible applications in simulating quantum circuits.
Quantum gate teleportation is the act of being able to apply a quantum gate on the unknown state while it is being teleported. This is one of the ways in which measurement-based computation can be described using graph states. Usually, teleportation works by having an unknown quantum state $|\psi\rangle$ held by Alice, and two qubits in the Bell state $|\Psi\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$ shared between Alice and Bob. Alice performs a Bell state measurement, getting one of 4 possible answers, and Bob holds on his qubit, depending on the measurement result of Alice, one of the 4 states $|\psi\rangle,X|\psi\rangle,Z|\psi\rangle,ZX|\psi\rangle.$ So, once Bob learns what result Alice got, he can compensate by applying the appropriate Paulis. Let $U$ be a 1-qubit unitary. Assume Alice and Bob share $(\mathbb{I}\otimes U)|\Psi\rangle$ instead of $|\Psi\rangle$ . If they repeat the teleportation protocol, Bob now has one of $U|\psi\rangle,UX|\psi\rangle,UZ|\psi\rangle,UZX|\psi\rangle$ , which we can rewrite as $U|\psi\rangle,(UXU^\dagger)U|\psi\rangle,(UZU^\dagger)U|\psi\rangle,(UZXU^\dagger)U|\psi\rangle.$ The compensations that Bob has to make for a given measurement result are given by the bracketed terms. Often, these are no worse than the compensations you would have to make for normal teleportation (i.e. just the Pauli rotations). For example, if $U$ is the Hadamard rotation, then the corrections are just $(\mathbb{I},Z,X,XZ)$ respectively. So, you can apply the Hadamard during teleportation just by changing the state that you teleport through (there is a strong connection here to the Choi-Jamiołkowski isomorphism ). You can do the same for Pauli gates, and the phase gate $\sqrt{Z}=S$ . Moreover, if you repeat this protocol to build up a more complicated computation, it is often sufficient to keep a record of what these corrections are, and to apply them later. Even when the corrections are not just Pauli gates (as is the case for $T=\sqrt{S}$ ), the compensations may be easier than implementing the gate directly. This is the basis of the construction of the fault-tolerant T gate. In fact, you can do something similar to apply a controlled-NOT between a pair of qubits as well. This time, the state you need is $|\Psi\rangle_{A_1B_1}|\Psi\rangle_{A_2B_2}$ , and a controlled-NOT applied between $B_1$ and $B_2$ . This time, there are 16 possible compensating rotations, but all of them are just about how Pauli operations propagate through the action of a controlled-NOT and, again, that just gives Pauli operations out.
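The claim about the corrections can be checked directly: the compensations Bob needs are the conjugated Paulis $UPU^\dagger$, and for Clifford gates these are again Paulis (up to a phase). A quick NumPy sanity check, offered only as an illustrative sketch:

import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

# For U = H, the Pauli byproducts X, Z become the corrections Z, X
print(np.allclose(H @ X @ H.conj().T, Z))            # True
print(np.allclose(H @ Z @ H.conj().T, X))            # True

# For U = S, an X byproduct becomes i XZ (= Y), still a Pauli up to a phase
print(np.allclose(S @ X @ S.conj().T, 1j * (X @ Z))) # True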
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1806", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/144/" ] }
1,808
For an integer $N$ to be factorised, with $a$ (uniformly) chosen at random between $1$ and $N$, and with $r$ the order of $a\mod N$ (that is, the smallest $r$ with $a^r\equiv 1\mod N$): Why is it that in Shor's algorithm we have to discard the scenario in which $a^{r/2} =-1 \mod N$? Also, why shouldn't we discard the scenario when $a^{r/2} = 1 \mod N$?
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/1808", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/5410/" ] }
2,106
I am studying Quantum Computing and Information, and have come across the term "surface code", but I can't find a brief explanation of what it is and how it works. Hopefully you guys can help me with this.
The surface codes are a family of quantum error correcting codes defined on a 2D lattice of qubits. Each code within this family has stabilizers that are defined equivalently in the bulk, but differ from one another in their boundary conditions. The members of the surface code family are sometimes also described by more specific names: The toric code is a surface code with periodic boundary conditions, the planar code is one defined on a plane, etc. The term ‘surface code’ is sometimes also used interchangeably with ‘planar code’, since this is the most realistic example of the surface code family. The surface codes are currently a large research area, so I’ll just point you towards some good entry points (in addition to the Wikipedia article linked to above). Topological quantum memory (paper) Surface codes: Towards practical large-scale quantum computation (paper) My blog series introducing surface codes The surface codes can also be generalized to qudits. For more on that, see here .
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/2106", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/2422/" ] }
2,151
In the past few days, I have been trying to collect material (mostly research papers) related to Quantum machine learning and its applications, for a summer project. Here are a few which I found interesting (from a superficial reading): Unsupervised Machine Learning on a Hybrid Quantum Computer (J.S. Otterbach et al., 2017) Quantum algorithms for supervised and unsupervised machine learning (Lloyd, Mohseni & Rebentrost, 2013) A Machine Learning Framework to Forecast Wave Conditions (James, Zhang & O'Donncha 2017) Quantum Neuron: an elementary building block for machine learning on quantum computers (Cao, Guerreschi & Aspuru-Guzik, 2017) Quantum machine learning for quantum anomaly detection (Liu & Rebentrost, 2017) However, coming from the more physics-y end of the spectrum, I don't have much background knowledge in this area and am finding most of the specialized materials impenetrable. Ciliberto et al. 's paper: Quantum machine learning: a classical perspective somewhat helped me to grasp some of the basic concepts. I'm looking for similar but more elaborate introductory material. It would be very helpful if you could recommend textbooks, video lectures, etc. which provide a good introduction to the field of quantum machine learning. For instance, Nielsen and Chuang's textbook is a great introduction to the quantum computing and quantum algorithms in general and goes quite far in terms of introductory material (although it begins at a very basic level and covers all the necessary portions of quantum mechanics and linear algebra and even the basics of computational complexity!). Is there anything similar for quantum machine learning? P.S: I do realize that quantum machine learning is a vast area. In case there is any confusion, I would like to point out that I'm mainly looking for textbooks/introductory papers/lectures which cover the details of the quantum analogues of classical machine learning algorithms.
The Nielsen and Chuang of Quantum Machine Learning is this extensive review called " Quantum Machine Learning " published in Nature in 2017. The arXiv version is here and has been updated as recently as 10 May 2018.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/2151", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/26/" ] }
2,282
Is it because we don't know exactly how to create quantum computers (and how they must work), or do we know how to create it in theory, but don't have the tools to execute it in practice? Is it a mix of the above two? Any other reasons?
We know exactly, in theory, how to construct a quantum computer. But that is intrinsically more difficult than constructing a classical computer. In a classical computer, you do not have to use a single particle to encode bits. Instead, you might say that anything less than a billion electrons is a 0 and anything more than that is a 1, and aim for, say, two billion electrons to encode a 1 normally. That makes you inherently fault-tolerant: Even if there are hundreds of millions of electrons more or less than expected, you will still get the correct classification as a digital 0 or a 1. In a quantum computer, this trick is not possible due to the no-cloning theorem: You cannot trivially employ more than one particle to encode a qubit (quantum bit). Instead, you must make all your gates operate so well that they are not just accurate to the single-particle level but even to a tiny fraction of how much they act on a single particle (to the so-called quantum error-correction threshold). This is much more challenging than getting gates accurate merely to within hundreds of millions of electrons. Meanwhile, we do have the tools to, just barely, make quantum computers with the required level of accuracy. But nobody has, as of yet, managed to make a big one, meaning one that can accurately operate on the perhaps hundreds of thousands of physical qubits needed to implement a hundred or so logical qubits, to then be undeniably in the realm where the quantum computer beats classical computers at select problems (quantum supremacy).
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/2282", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/2559/" ] }
2,499
I have a computer science degree. I work in IT, and have done so for many years. In that period "classical" computers have advanced by leaps and bounds. I now have a terabyte disk drive in my bedroom drawer amongst my socks, my phone has phenomenal processing power, and computers have revolutionized our lives. But as far as I know, quantum computing hasn't done anything. Moreover it looks like it's going to stay that way. Quantum computing has been around now for the thick end of forty years, and real computing has left it in the dust. See the timeline on Wikipedia, and ask yourself where's the parallel adder? Where's the equivalent of Atlas, or the MU5? I went to Manchester University, see the history on the Manchester Computers article on Wikipedia. Quantum computers don't show similar progress. Au contraire, it looks like they haven't even got off the ground. You won't be buying one in PC World any time soon. Will you ever be able to? Is it all hype and hot air? Is quantum computing just pie in the sky? Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public? If not, why not?
I'll be trying to approach this from a neutral point of view. Your question is sort of "opinion-based", but yet, there are a few important points to be made. Theoretically , there's no convincing argument (yet) as to why quantum computers aren't practically realizable. But, do check out: How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation - Gil Kalai , and the related blog post by Scott Aaronson where he provides some convincing arguments against Kalai's claims. Also, read James Wotton's answer to the related QCSE post: Is Gil Kalai's argument against topological quantum computers sound? Math Overflow has a great summary: On Mathematical Arguments Against Quantum Computing . However, yes, of course, there are engineering problems . Problems (adapted from arXiv:cs/0602096 ): Sensitivity to interaction with the environment: Quantum computers are extremely sensitive to interaction with the surroundings since any interaction (or measurement) leads to a collapse of the state function. This phenomenon is called decoherence. It is extremely difficult to isolate a quantum system, especially an engineered one for a computation, without it getting entangled with the environment. The larger the number of qubits the harder is it to maintain the coherence. [Further reading: Wikipedia: Quantum decoherence ] Unreliable quantum gate actions: Quantum computation on qubits is accomplished by operating upon them with an array of transformations that are implemented in principle using small gates. It is imperative that no phase errors be introduced in these transformations. But practical schemes are likely to introduce such errors. It is also possible that the quantum register is already entangled with the environment even before the beginning of the computation. Furthermore, uncertainty in initial phase makes calibration by rotation operation inadequate. In addition, one must consider the relative lack of precision in the classical control that implements the matrix transformations. This lack of precision cannot be completely compensated for by the quantum algorithm. Errors and their correction: Classical error correction employs redundancy. The simplest way is to store the information multiple times, and — if these copies are later found to disagree — just take a majority vote; e.g. Suppose we copy a bit three times. Suppose further that a noisy error corrupts the three-bit state so that one bit is equal to zero but the other two are equal to one. If we assume that noisy errors are independent and occur with some probability $p$ , it is most likely that the error is a single-bit error and the transmitted message is three ones. It is possible that a double-bit error occurs and the transmitted message is equal to three zeros, but this outcome is less likely than the above outcome. Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. However, quantum error correcting code(s) protect quantum information against errors of only some limited forms. Also, they are efficient only for errors in a small number of qubits. 
Moreover, the number of qubits needed to correct errors doesn't normally scale well with the number of qubits in which errors actually occur. [Further reading: Wikipedia: Quantum error correction ] Constraints on state preparation: State preparation is the essential first step to be considered before the beginning of any quantum computation. In most schemes, the qubits need to be in a particular superposition state for the quantum computation to proceed correctly. But creating arbitrary states precisely can be exponentially hard (in both time and resource (gate) complexity). Quantum information, uncertainty, and entropy of quantum gates: Classical information is easy to obtain by means of interaction with the system. On the other hand, the impossibility of cloning means that any specific unknown state cannot be determined. This means that unless the system has specifically been prepared, our ability to control it remains limited. The average information of a system is given by its entropy. The determination of entropy would depend on the statistics obeyed by the object. A requirement for low temperatures : Several quantum computing architectures like superconducting quantum computing require extremely low temperatures (close to absolute zero) for functioning. Progress: Around a decade and a half ago, the decoherence time for the so-called "quantum computers" was less than 1 nanosecond. Now, the IBM Quantum Experience 16 qubit version which you can access online has a decoherence time of ~100 μs (see: Demonstration of Envariance and Parity Learning on the IBM 16 Qubit Processor (Davide Ferrari & Michele Amoretti, 2018) ). A decoherence time of 100 μs is already sufficient to run simple quantum algorithms! You can check it out yourself on the 5 qubit and 16 qubit quantum computers which have been made accessible online by IBM. I think Google has been able to achieve even better decoherence times with their superconducting chips (having an equivalent number of qubits). There has been a lot of improvement in the area of quantum error correction in the past decade (requiring a much smaller number of total qubits). See Quantum Error Correction for Quantum Memories (Terhal, 2015) for a brief review. Also, quoting Wikipedia: Quantum error correction - Experimental realization : There have been several experimental realizations of CSS-based codes. The first demonstration was with NMR qubits. Subsequently, demonstrations have been made with linear optics, trapped ions, and superconducting (transmon) qubits. Other error-correcting codes have also been implemented, such as one aimed at correcting for photon loss, the dominant error source in photonic qubit schemes. Preparation of arbitrary quantum states is still a major problem. But now at least we know the exact gate decomposition for any unitary evolution ( Quantum-state preparation with universal gate decompositions (Plesch & Brukner, 2011) ), albeit the number of gates doesn't usually scale well with the number of qubits. There have been further improvements, such as High-fidelity quantum state preparation using neighboring optimal control (Yuchen Peng & Frank Gaitan, 2017) . For some other recently developed high-precision methods of state preparation see The preparation of states in quantum mechanics (Fröhlich, 2016) and Preparation of quantum state (Ali et al., 2017) .
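As a toy illustration of the kind of decomposition these state-preparation references generalize, the single-qubit case with real non-negative amplitudes only needs one $R_y$ rotation whose angle is read off from the target amplitudes (complex amplitudes additionally need an $R_z$). A minimal sketch, with my own variable names:

import numpy as np

a, b = 0.6, 0.8                                  # target state a|0> + b|1>, real and non-negative

theta = 2 * np.arctan2(b, a)                     # rotation angle for Ry
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])

prepared = Ry @ np.array([1.0, 0.0])             # apply Ry to |0>
print(np.allclose(prepared, [a, b]))             # True

For $n$ qubits, the analogous constructions need roughly $2^n$ such rotations in the worst case, which is where the scaling problem mentioned above comes from.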
One of the main constraints we still have is the number of qubits (note that this issue is intrinsically related to the difficulty of maintaining coherence for long periods of time, c.f. Schrödinger's cat and the difficulty of macroscopic superposition state and the excellent answer therein). None of the present-day quantum computers is sufficient to show any considerable improvement in "capability" compared to classical computers. The largest number factorized by a quantum computer to date is 291311 ( High-fidelity adiabatic quantum computation using the intrinsic Hamiltonian of a spin system: Application to the experimental factorization of 291311 (Li et al., 2017) ). Brute-force factorization of 291311 takes at most ~270 divisions, each of which takes ~10 ns on modern CPUs. That's ~3 μs in total, implying that the factorization will be several orders of magnitude faster on your laptop. The practical improvement in time complexity won't be noticeable unless and until the number of qubits increases by at least 10 times or so (I'm not considering the D-Wave machines which have over 1000 qubits, as they use a different mechanism known as quantum annealing which is effective only in a few narrow ranges of problems). But, arguably, even the number of qubits is on a steady rise. Recently Google announced a 72 qubit machine ( Google AI blog: A Preview of Bristlecone, Google's New Quantum Processor ) and Intel announced a 49 qubit chip ( IEEE Spectrum:: CES 2018: Intel's 49-Qubit Chip Shoots for Quantum Supremacy ). Compare that to the 2000s when we only used to have a single-digit number of qubits! Several quantum computing architectures have been discovered in the past couple of decades, for which near-absolute-zero temperatures are not necessary , for example, optical quantum computers, trapped-ion quantum computers, diamond-based quantum computers, etc. Cf. Why do optical quantum computers not have to be kept near absolute zero while superconducting quantum computers do? Conclusion: Whether we will ever have efficient quantum computers which can visibly outperform classical computers in certain areas is something which only time will tell. However, looking at the considerable progress we have been making, it probably wouldn't be too wrong to say that in a couple of decades we should have sufficiently powerful quantum computers. On the theoretical side though, we don't yet know if classical algorithms (can) exist which will match quantum algorithms in terms of time complexity. See my previous answer about this issue. From a completely theoretical perspective, it would also be extremely interesting if someone can prove that all BQP problems lie in BPP or P! I personally believe that in the coming decades we will be using a combination of quantum computing techniques and classical computing techniques (i.e. either your PC will have both classical hardware components as well as quantum hardware, or quantum computing will be totally cloud-based and you'll only access it online from classical computers). Because remember that quantum computers are efficient only for a very narrow range of problems. It would be pretty resource-intensive and unwise to do an addition like 2+3 using a quantum computer (see How does a quantum computer do basic math at the hardware level? ). Now, coming to your point of whether national funds are unnecessarily being wasted on trying to build quantum computers: My answer is NO !
Even if we fail to build legitimate and efficient quantum computers, we will still have gained a lot in terms of engineering progress and scientific progress . Already, research in photonics and superconductors has increased manyfold and we are beginning to understand a lot of physical phenomena better than ever before. Moreover, quantum information theory and quantum cryptography have led to the discovery of a few neat mathematical results and techniques which may be useful in a lot of other areas too (cf. Physics SE: Mathematically challenging areas in Quantum information theory and quantum cryptography ). We will also have understood a lot more about some of the hardest problems in theoretical computer science by that time (even if we fail to build a "quantum computer"). Sources and References: Difficulties in the Implementation of Quantum Computers (Ponnath, 2006) Wikipedia: Quantum computing Wikipedia: Quantum error correction Addendum: After a bit of searching, I found a very nice article which outlines almost all of Scott Aaronson's counter-arguments against quantum computing skepticism. I very highly recommend going through all the points given there. It's actually part 14 of the lecture notes put up by Aaronson on his website. They were used for the course PHYS771 at the University of Waterloo. The lecture notes are based on his popular textbook Quantum Computing Since Democritus .
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/2499", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/1905/" ] }
3,971
Sligthly related to this question , but not the same. Traditional computer science requires no physics knowledge for a computer scientist to be able to research and make progress in the field. Of course, you do need to know about the underlying physical layer when your research is related to that, but in many cases you can ignore it (e.g. when designing an algorithm). Even when architectural details are important (e.g. cache layout), oftentimes it is not necessary to know all the details about them, or how they're implemented at the physical level. Has quantum computing reached this level of "maturity"? Can you design a quantum algorithm, or do actual research in the field, as a computer scientist that doesn't know anything about quantum physics? In other words, can you "learn" quantum computing ignoring the physical side, and is it worth it (in terms of scientific career)?
Speaking as a computer scientist without any physics background making contributions to quantum computing: yes, computer scientists without any physics background can make contributions to quantum computing. Though I think it was always that way; it has nothing to do with the field being "mature". If you understand the postulates of quantum mechanics (operations are unitary matrices, states are unit vectors, measurement is a projection), and know how to apply those in the context of a computation, you can create quantum algorithms. The fact that these concepts were originally derived from physics is historically interesting, but not really relevant when optimizing a quantum circuit. As a concrete example: quantum physics is very heavy on calculus, but quantum computation isn't. Quantum physics does become relevant if you are trying to design algorithms for simulating quantum systems. And some of the concepts you would learn in a quantum physics course should also appear in a quantum computation course. But overall I agree with Scott Aaronson : [My] way to teach quantum mechanics [...] starts directly from the conceptual core -- namely, a certain generalization of probability theory to allow minus signs. Once you know what the theory is actually about, you can then sprinkle in physics to taste [...] [quantum mechanics is] not a physical theory in the same sense as electromagnetism or general relativity. [...] Basically, quantum mechanics is the operating system that other physical theories run on as application software [...] [...] [quantum mechanics is] about information and probabilities and observables, and how they relate to each other.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/3971", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4296/" ] }
3,980
I was looking into applications of Quantum Computing for machine learning and encountered the following pre-print from 2003. Quantum Convolution and Correlation Algorithms are Physically Impossible . The article doesn't appear to have been published in any journals, but it has been cited a few dozen times. The article author makes the case that it is impossible to compute discrete convolution over quantum states. Intuitively this seems incorrect to me, since I know we can perform quantum matrix multiplication, and I know that discrete convolution can be framed simply as multiplication with a Toeplitz (or circulant) matrix. The crux of his argument seems to be that there is no realizable composition of unitary operators for the elementwise (Hadamard) product of two vectors. Where is my disconnect? Is there any reason we cannot in general construct a Toeplitz matrix for discrete convolution in a quantum computer? Or is the article simply incorrect? I have worked through the contradiction that the author presents in his proof of Lemma 14, and it seems to make sense to me.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/3980", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4298/" ] }
4,252
In a three-qubit system, it's easy to derive the CNOT operator when the control & target qubits are adjacent in significance - you just tensor the 2-bit CNOT operator with the identity matrix in the untouched qubit's position of significance: $$C_{10}|\phi_2\phi_1\phi_0\rangle = (\mathbb{I}_2 \otimes C_{10})|\phi_2\phi_1\phi_0\rangle.$$ However, it isn't obvious how to derive the CNOT operator when the control & target qubits are not adjacent in significance: $C_{20}|\phi_2\phi_1\phi_0\rangle.$ How is this done?
For a presentation from first principles, I like Ryan O'Donnell's answer . But for a slightly higher-level algebraic treatment, here's how I would do it. The main feature of a controlled-$U$ operation, for any unitary $U$, is that it (coherently) performs an operation on some qubits depending on the value of some single qubit. The way that we can write this explicitly algebraically (with the control on the first qubit) is: $$ \mathit{CU} \;=\; \def\ket#1{\lvert #1 \rangle}\def\bra#1{\langle #1 \rvert}\ket{0}\!\bra{0} \!\otimes\! \mathbf 1 \,+\, \ket{1}\!\bra{1} \!\otimes\! U$$ where $\mathbf 1$ is an identity matrix of the same dimension as $U$. Here, $\ket{0}\!\bra{0}$ and $\ket{1}\!\bra{1}$ are projectors onto the states $\ket{0}$ and $\ket{1}$ of the control qubit — but we are not using them here as elements of a measurement, but to describe the effect on the other qubits depending on one or the other subspace of the state-space of the first qubit. We can use this to derive the matrix for the gate $\mathit{CX}_{1,3}$ which performs an $X$ operation on qubit 3, coherently conditioned on the state of qubit 1, by thinking of this as a controlled-$(\mathbf 1_2 \!\otimes\! X)$ operation on qubits 2 and 3: $$ \begin{aligned} \mathit{CX}_{1,3} \;&=\; \ket{0}\!\bra{0} \otimes \mathbf 1_4 \,+\, \ket{1}\!\bra{1} \otimes (\mathbf 1_2 \otimes X) \\[1ex]&=\; \begin{bmatrix} \mathbf 1_4 & \mathbf 0_4 \\ \mathbf 0_4 & (\mathbf 1_2 \!\otimes\! X) \end{bmatrix} \;=\; \begin{bmatrix} \mathbf 1_2 & \mathbf 0_2 & \mathbf 0_2 & \mathbf 0_2 \\ \mathbf 0_2 & \mathbf 1_2 & \mathbf 0_2 & \mathbf 0_2 \\ \mathbf 0_2 & \mathbf 0_2 & X & \mathbf 0_2 \\ \mathbf 0_2 & \mathbf 0_2 & \mathbf 0_2 & X \end{bmatrix}, \end{aligned} $$ where the latter two are block matrix representations to save on space (and sanity). Better still: we can recognise that — on some mathematical level where we allow ourselves to realise that the order of the tensor factors doesn't have to be in some fixed order — the control and the target of the operation can be on any two tensor factors, and that we can fill in the description of the operator on all of the other qubits with $\mathbf 1_2$. 
This would allow us to jump straight to the representation $$ \begin{alignat}{2} \mathit{CX}_{1,3} \;&=&\; \underbrace{\ket{0}\!\bra{0}}_{\text{control}} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{target}\!\!\!\!} &+\, \underbrace{\ket{1}\!\bra{1}}_{\text{control}} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\; X\;}_{\!\!\!\!\text{target}\!\!\!\!} \\[1ex]&=&\; \begin{bmatrix} \mathbf 1_2 & \mathbf 0_2 & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ \mathbf 0_2 & \mathbf 1_2 & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \end{bmatrix} \,&+\, \begin{bmatrix} \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} \\ \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & X & \mathbf 0_2 \\ \phantom{\mathbf 0_2} & \phantom{\mathbf 0_2} & {\mathbf 0_2} & X \end{bmatrix} \end{alignat} $$ and also allows us to immediately see what to do if the roles of control and target are reversed: $$ \begin{alignat}{2} \mathit{CX}_{3,1} \;&=&\; \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{target}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\ket{0}\!\bra{0}}_{\text{control}} \,&+\, \underbrace{\;X\;}_{\!\!\!\!\text{target}\!\!\!\!} \otimes \underbrace{\;\mathbf 1_2\;}_{\!\!\!\!\text{uninvolved}\!\!\!\!} \otimes \underbrace{\ket{1}\!\bra{1}}_{\text{control}} \\[1ex]&=&\; {\scriptstyle\begin{bmatrix} \!\ket{0}\!\bra{0}\!\! & & & \\ & \!\!\ket{0}\!\bra{0}\!\! & & \\ & & \!\!\ket{0}\!\bra{0}\!\! & \\ & & & \!\!\ket{0}\!\bra{0} \end{bmatrix}} \,&+\, {\scriptstyle\begin{bmatrix} & & \!\!\ket{1}\!\bra{1}\!\! & \\ & & & \!\!\ket{1}\!\bra{1} \\ \!\ket{1}\!\bra{1}\!\! & & & \\ & \!\!\ket{1}\!\bra{1} & & \end{bmatrix}} \\[1ex]&=&\; \left[{\scriptstyle\begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{matrix}}\right.\,\,&\,\,\left.{\scriptstyle\begin{matrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}}\right]. \end{alignat} $$ But best of all: if you can write down these operators algebraically, you can take the first steps towards dispensing with the giant matrices entirely, instead reasoning about these operators algebraically using expressions such as $\mathit{CX}_{1,3} = \ket{0}\!\bra{0} \! \otimes\!\mathbf 1_2\! \otimes\! \mathbf 1_2 + \ket{1}\!\bra{1} \! \otimes\! \mathbf 1_2 \! \otimes\! X$ and $\mathit{CX}_{3,1} = \mathbf 1_2 \! \otimes\! \mathbf 1_2 \! \otimes \! \ket{0}\!\bra{0} + X \! \otimes\! \mathbf 1_2 \! \otimes \! \ket{1}\!\bra{1}$. There will be a limit to how much you can do with these, of course — a simple change in representation is unlikely to make a difficult quantum algorithm efficiently solvable, let alone tractable by manual calculation — but you can reason about simple circuits much more effectively using these expressions than with giant space-eating matrices.
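If you want to check these expressions numerically rather than by hand, here is a minimal Python/NumPy sketch of the same construction (the ordering below, with qubit 1 as the leftmost tensor factor, is an assumption matching the expressions above):
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# control on qubit 1, target on qubit 3, qubit 2 uninvolved
CX_1_3 = kron(P0, I2, I2) + kron(P1, I2, X)
# roles reversed: control on qubit 3, target on qubit 1
CX_3_1 = kron(I2, I2, P0) + kron(X, I2, P1)

print(CX_1_3.astype(int))
print(CX_3_1.astype(int))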
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/4252", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4153/" ] }
4,263
(Sorry for a somewhat amateurish question) I studied quantum computing from 2004 to 2007, but I've lost track of the field since then. At the time there was a lot of hype and talk of QC potentially solving all sorts of problems by outperforming classical computers, but in practice there were really only two theoretical breakthroughs: Shor's algorithm, which did show significant speed up, but which had limited applicability, and wasn't really useful outside of integer factorization. Grover's algorithm, which was applicable to a wider category of problems (since it could be used to solve NP-Complete problems), but which only showed polynomial speed-up compared to classical computers. Quantum annealing was also discussed, but it wasn't clear whether it was really better than classical simulated annealing or not. Measurement based QC and the graph state representation of QC were also hot topics, but nothing definitive had been proved on that front either. Has any progress in the field of quantum algorithms been made since then? In particular: Have there been any truly ground breaking algorithms besides Grover's and Shor's? Has there been any progress in defining BQP's relationship to P, BPP and NP? Have we made any progress in understanding the nature of quantum speed up other than saying that "it must be because of entanglement"?
Have there been any truly ground breaking algorithms besides Grover's and Shor's? It depends on what you mean by "truly ground breaking". Grover's and Shor's are particularly unique because they were really the first instances that showed particularly valuable types of speed-up with a quantum computer (e.g. the presumed exponential improvement for Shor) and they had killer applications for specific communities. There have been a few quantum algorithms that have been designed since, and I think three are particularly worthy of mention: The BQP-complete algorithm for evaluating the Jones polynomial at particular points. I mention this because, aside from more obvious things like Hamiltonian simulation, I believe it was the first BQP-complete algorithm, so it really shows the full power of a quantum computer. The HHL algorithm for solving linear equations. This is a slightly funny one because it's more like a quantum subroutine, with quantum inputs and outputs. However, it is also BQP-complete and it's receiving a lot of attention at the moment, because of potential applications in machine learning and the like. I guess this is the best candidate for truly ground breaking, but that's a matter of opinion. Quantum Chemistry . I know very little about these, but the algorithms have developed substantially since the time you mention, and it has always been cited as one of the useful applications of a quantum computer. Has there been any progress in defining BQP's relationship to P, BPP and NP? Essentially, no. We know BQP contains BPP, and we don't know the relation between BQP and NP. Have we made any progress in understanding the nature of quantum speed up other than saying that "it must be because of entanglement"? Even back when you were studying it originally, I would say it was more precisely defined than that. There are (and were) good comparisons between universal gate sets (potentially capable of giving exponential speed-up) and classically simulable gate sets. For example, recall that the Clifford gates produce entanglement but are classically simulable. Not that it's straightforward to state precisely what is required in a more pedagogical manner. Perhaps where some progress has been made is in terms of other models of computation. For example, the model DQC1 is better understood - this is a model that appears to have some speed-up over classical algorithms but is unlikely to be capable of BQP-complete calculations (but before you get drawn into the hype that you might find online, there is entanglement present during the computation). On the other hand, the "it's because of entanglement" sort of statement still isn't entirely resolved. Yes, for pure state quantum computation, there must be some entanglement because otherwise the system is easy to simulate, but for mixed separable states, we don't know if they can be used for computations, or if they can be efficiently simulated. Also, one might try to ask a more insightful question: Have we made any progress in understanding which problems will be amenable to a quantum speed-up? This is subtly different because if you think that a quantum computer gives you new logic gates that a classical computer doesn't have, then it's obvious that to get a speed-up, you must use those new gates. However, it is not clear that every problem is amenable to such benefits. Which ones are? There are classes of problem where one might hope for speed-up, but I think that still relies on individual intuition. 
That can probably still be said about classical algorithms. You've written an algorithm x. Is there a better classical version? Maybe not, or maybe you're just not spotting it. That's why we don't know if P=NP.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/4263", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4636/" ] }
5,125
I know that $re^{i\theta} = x + iy$ for any complex number $x + iy$ by Euler's formula. How do you calculate relative and global phase?
As is clearly evident from the Euler form $z=re^{i\theta}$ , a phase has something to do with rotation in the Argand plane but does not affect the magnitude of a complex number. You can make an infinite set of complex numbers with the same magnitude. It can just be regarded as an extra degree of freedom for a given complex number. From the perspective of Quantum Information/Computing, the observable quantities are the probabilities, which are proportional to the squared amplitudes of the complex numbers, $|z|^2=|re^{i\theta}|^2=(re^{i\theta})(re^{-i\theta})=r^2$ , which clearly do not care about the phase $\theta$ . Let's consider the simplest non-trivial example. For any quantum state with two degrees of freedom (qubit): \begin{equation} |\psi\rangle=r_1e^{i\theta_1}|0\rangle+r_2e^{i\theta_2}|1\rangle \end{equation} This is described by two complex numbers with phases $\theta_1$ and $\theta_2$ respectively. It can be rewritten as: \begin{equation} |\psi\rangle=e^{i\theta_1}(r_1|0\rangle+r_2e^{i(\theta_2-\theta_1)}|1\rangle) \end{equation} Now, if you calculate the amplitude $|\psi|^2$ , the factor $e^{i\theta_1}$ in front will vanish by the argument above. This is called a global phase, which is an overall phase in front. The relative phase is the quantity $\theta_2-\theta_1$ or $\theta_1-\theta_2$ , however defined. The relative phase is an observable quantity in quantum theory and it can change when a state evolves in accordance with the Schrödinger equation $i\hbar\frac{d}{dt}|\psi\rangle=\hat{H}|\psi\rangle$ . The relative phase is also of great importance when we consider the density matrix for a state, defined as $\rho=|\psi\rangle \langle \psi|$ , which for the example above is: \begin{equation} \rho=r_1^2|0\rangle\langle0|+r_1r_2e^{i(\theta_1-\theta_2)}|0\rangle\langle1|+r_2r_1e^{i(\theta_2-\theta_1)}|1\rangle\langle0|+r_2^2|1\rangle\langle1| \end{equation} where it is only the relative phase that appears and not the global phase. From the quantum information point of view, this relative phase appearing in the off-diagonal terms of the above matrix carries the coherence information of the system, which is one of the most unique properties of quantum systems. These are some general remarks on relative and global phases. It does not make any sense to talk about a relative phase for a single complex number $z$ . Also, please see the wiki articles on these concepts; they have clear enough content and are a good start. Here you can refer to https://en.wikipedia.org/wiki/Qubit , mainly the Bloch sphere section.
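For a quick numerical illustration of the difference, here is a minimal Python/NumPy sketch (the amplitudes and phases are arbitrary example values):
import numpy as np

r1, r2 = 1/np.sqrt(2), 1/np.sqrt(2)
theta1, theta2 = 0.3, 1.2
psi = np.array([r1*np.exp(1j*theta1), r2*np.exp(1j*theta2)])

# a global phase changes neither the probabilities nor the density matrix
psi_global = np.exp(1j*0.7) * psi
print(np.allclose(np.abs(psi)**2, np.abs(psi_global)**2))                                  # True
print(np.allclose(np.outer(psi, psi.conj()), np.outer(psi_global, psi_global.conj())))     # True

# changing the relative phase changes the off-diagonal (coherence) terms of rho
psi_rel = np.array([r1*np.exp(1j*theta1), r2*np.exp(1j*(theta2 + 0.7))])
print(np.allclose(np.outer(psi, psi.conj()), np.outer(psi_rel, psi_rel.conj())))           # False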
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/5125", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4693/" ] }
5,769
Is the circuit depth the longest sequence of gates applied on one of the qubits? Or is it something more complicated?
The circuit depth is the length of the longest path from the input (or from a preparation) to the output (or a measurement gate), moving forward in time along qubit wires. The stopping points on the path are the gates, the allowed paths that must be considered can enter and exit those gates on any input / output, and the length is the number of jumps from each gate to the next gates along the path. (When considering a unitary circuit, there is often a convention among theorists to ignore the final measurement in the count of the depth, which I actually provided in my original answer to this question. While it might not be entirely appropriate in the context of quantum technologies, this is something you may see here and there.) Put another way: the depth is the smallest amount of time steps required to perform the circuit, if every gate (including each measurement and preparation) is performed at some integer time-step, gates which act on no common qubits are allowed to be performed at the same time-step, and gates acting on at least one qubit in common must act at different time-steps (and in the correct order of course). With a bit of work, it's possible to show that determining this smallest number of time-steps amounts to the first description that I give. Practically speaking, this is best computed with dynamic programming, taking the circuit gate by gate and computing how each gate contributes to the length of the longest path that ends at a given qubit. Example Using the example provided by Hastings: Here is how you can compute the circuit depth, adding one gate at a time, to compute the length of the longest path that ends at a given qubit. Initialise the depth ending at each qubit to 0. Init: [0, 0, 0, 0, 0] For each gate in sequence (consistent with the input/output dependencies of the gates), take the maximum depth $d$ to that point on all of the qubits on which the gate acts, add one, and set the new max-depth on those qubits to that result. (If you are considering a unitary circuit with final measurements, and for some reason you want to use the convention of ignoring the depth-contribution of the measurements, then just don't count the measurements.) In the case of the circuit above, this yields: H 1: [1, 0, 0, 0, 0] H 2: [1, 1, 0, 0, 0] H 3: [1, 1, 1, 0, 0] CU^4 1 4 5: [2, 1, 1, 2, 2] CU^2 2 4 5: [2, 3, 1, 3, 3] CU^1 3 4 5: [2, 2, 4, 4, 4] H 1: [3, 2, 4, 4, 4] CS 1 2: [4, 4, 4, 4, 4] H 2: [4, 5, 4, 4, 4] CT 1 3: [5, 5, 5, 4, 4] CS 2 3: [5, 6, 6, 4, 4] H 3: [5, 6, 7, 4, 4] Meas 1: [6, 6, 7, 4, 4] Meas 2: [6, 7, 7, 4, 4] Meas 3: [6, 7, 8, 4, 4] Take the maximum of the depths ending at each qubit. In this case, it is qubit 3, with a depth of 8. With a little bit of work, by tracing how this depth arose, you can also find one or more particular paths through the circuit which yields this depth ( e.g. H 1 ; CU^4 1 4 5 ; H 1; CS 1 2; H 2; CS 2 3; H 3; Meas 3 ).
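Here is a minimal Python sketch of the same dynamic-programming pass, using the example circuit above written as a plain list of gates and the qubits they touch (this is just an illustration, not a Qiskit API):
# each gate is (name, [qubits it acts on]), listed in execution order
circuit = [
    ("H", [1]), ("H", [2]), ("H", [3]),
    ("CU^4", [1, 4, 5]), ("CU^2", [2, 4, 5]), ("CU^1", [3, 4, 5]),
    ("H", [1]), ("CS", [1, 2]), ("H", [2]), ("CT", [1, 3]),
    ("CS", [2, 3]), ("H", [3]),
    ("Meas", [1]), ("Meas", [2]), ("Meas", [3]),
]

depth_at = {q: 0 for q in range(1, 6)}        # longest path ending at each qubit so far
for name, qubits in circuit:
    d = max(depth_at[q] for q in qubits) + 1  # this gate extends the deepest path among its qubits
    for q in qubits:
        depth_at[q] = d

print(max(depth_at.values()))                 # 8, matching the trace above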
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/5769", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/6071/" ] }
9,204
According to the New Scientist News , the Zapata team is able to factor 1,099,551,473,989 into its factors 1,048,589 and 1,048,601. According to the New Scientist: A quantum computing start-up company called Zapata has worked with IBM to develop a new way to factor large numbers, using it on the largest number that has been factored with a quantum computer so far... The future success of the algorithm used could have big implications What is this new algorithm? How many q-bits are required to factor 1,099,551,473,989?
The claimed new quantum factoring record is $n=a(a+b)$ with $a=1048589=2^{20}+13$ and $b=12$ . It could well be that the form $n=a(a+b)$ with $0\le b\le12$ (or another small upper bound for $b$ ) defines all the sizable numbers the new record-claiming method can factor (with the indicated $3$ qubits on the hardware at hand and the ∼30% success rate, based on the picture in that answer). Such $n$ are scarce: their proportion near $n$ thins out as $6.5/\sqrt n+\mathcal o(1/\sqrt n)$ [*] . The method lacks generality. Further, the factor $a$ of such $n$ is effortlessly found using Fermat's factorization method (described in a fragment of a letter, likely to Mersenne, circa 1643). If $n=a(a+b)$ with $0\le b\le2\sqrt a$ (a much larger class of integers), only the first step of Fermat's factoring is required (for all except very small $n$ ): compute $r=\bigl\lceil\sqrt{4n}\,\bigr\rceil$ , then $a=(r-\sqrt{r^2-4n})/2$ . That's enough to factor 1099551473989 by hand. The picture and that other answer refer to Eric R. Anschuetz, Jonathan P. Olson, Alán Aspuru-Guzik, Yudong Cao's Variational Quantum Factoring ( arXiv:1808.08927 , Aug 2018). That reduces factorization to a combinatorial problem with a number of unknowns proportional to the bit length of the factors. I find nothing suggesting the preprocessing makes that sublinear in the general case, and conclude that Variational Quantum Factoring is exponential in the bit length of $n$ , thus essentially pointless since we have sub-exponential algorithms for factorization on classical computers. Similar stunts of stretching to artificially large numbers what an experimental setup allows have already been pulled. Take the record of 143 = 11×13 by Nanyang Xu, Jing Zhu, Dawei Lu, Xianyi Zhou, Xinhua Peng, and Jiangfeng Du in Quantum Factorization of 143 on a Dipolar-Coupling Nuclear Magnetic Resonance System ( Physical Review Letters , 2012). Their experimental technique factors an integer product of two odd exactly-4-bit integers (thus with two unknown bits per factor). The scarcity of primes in range [8,15] makes 143 the only product of two distinct primes that the technique can factor. Their experimental setup iteratively minimizes a function with a 2-bit input. This achievement (I'm not joking up to this point) has been stretched, without new experiment AFAIK, to:
56153 = 233×241 by Nikesh S. Dattani and Nathaniel Bryans, Quantum factorization of 56153 with only 4 qubits ( arXiv:1411.6758 , 2014)
4088459 = 2017×2027, by Avinash Dash, Deepankar Sarmah, Bikash K. Behera, and Prasanta K. Panigrahi, Exact search algorithm to factorize large biprimes and a triprime on IBM quantum computer ( arXiv:1805.10478 , May 2018)
383123885216472214589586724601136274484797633168671371 = 618970019642690137449562081×618970019642690137449562091 by myself ( crypto.SE , June 2018), using the technique of the above paper.
[*] Proof: for each $a$ , there are $13$ values of $b$ leading to $13$ values of $n$ of the form $n=a(a+b)$ with $0\le b\le12$ . For large enough $a$ there are no two $(a,b)$ leading to the same $n$ [#] . When we increase $n$ by $1$ , $n^2$ increases by $2n+1$ . Taking the inverse, the density of squares near large $n$ is $\displaystyle\frac1{2\sqrt n}+\mathcal o(1/\sqrt n)$ . Thus the density of $n$ of the form $a(a+b)$ with $0\le b\le12$ is $\displaystyle\frac{13}{2\sqrt n}+\mathcal o(1/\sqrt n)$ . [#] Auxiliary proof: Assume $a(a+b)=a'(a'+b')$ with $0\le b\le12$ and $0\le b'\le12$ . Let $c=2a+b$ and $c'=2a'+b'$ .
It follows that $(c-b)(c+b)/4=(c'-b')(c'+b')/4$ , thus $c^2-b^2=c'^2-b'^2$ , thus $c^2-c'^2=b^2-b'^2$ , thus $|(c-c')(c+c')|\le144$ , thus for $c\ge72$ and $c'\ge72$ the only solution is $c=c'$ , hence $b=b'$ . Thus for $a\ge42$ and $a'\ge42$ the only solution is $a=a'$ and $b=b'$ . The bound on $a$ can be further lowered.
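As an illustration of how little work that first Fermat step is, here is a minimal Python sketch applied to the number from the question:
from math import isqrt

n = 1099551473989
r = isqrt(4 * n)
if r * r < 4 * n:              # make r = ceil(sqrt(4n))
    r += 1
s = isqrt(r * r - 4 * n)       # for such n, r^2 - 4n is a perfect square (here 144)
a = (r - s) // 2
print(a, n // a)               # 1048589 1048601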
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/9204", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/4866/" ] }
13,629
I wonder what magic states are, and what a magic state gadget is. These terms frequently appear in a paper I'm reading.
Magic states are certain states that have very nice properties with respect to fault-tolerant quantum computation. In the vast landscape of quantum gates, there is a crude but useful distinction to be made between Clifford gates and all other gates (also referred to as the non-Clifford gates ). The set of Clifford gates is in technical terms the normalizer of the Pauli group, which basically means that it's the set of operations that map the set of Pauli eigenstates to the set of Pauli eigenstates - as the Pauli operators and their eigenstates are very important in quantum computation, we also care deeply about the Clifford gates. There is another reason why we care about Clifford gates. In the scope of Quantum Error correction (specifically stabilizer codes and Fault-tolerance), all Clifford operations on stabilizer codes can be implemented transversally . This is a certain way of implementing (logical) operations on codes that is more or less 'the easiest way' of being fault-tolerant - making them highly desirable. Unfortunately it's impossible (as shown here) to have a complete universal gateset of operations with only transversal implementations, which means that at least one operation in the universal gateset needs to be implemented differently. As is often (but not necessarily) the choice, the set of Clifford operations (or rather, a generating set) is chosen as the transversal gates, and one other (non-Clifford) gate is implemented differently. Implementing these non-Clifford gates in a fault-tolerant manner is very tough and costly - there exist some methods that are on paper fault-tolerant, but lack implementability in one way or another. Magic states are a way of circumventing the need for non-Clifford gates by preparing certain states that kind of 'encode' the non-Clifford action into the state. Intuitively, you can think of this as applying all the necessary non-Clifford gates in a computation in advance, resulting in the magic states; the rest of the computation can then be performed by using only Clifford gates, making the fault-tolerant implementation containable. Without a reference I cannot be completely sure what is meant by a 'magic state gadget', but I think the authors are referring to a gadget that would perform magic state distillation . Such a procedure produces pure magic states from noisy magic states - it was shown that this can be performed in a reasonably scalable fashion, and moreover in a fault-tolerant fashion. This gives a blueprint of a fault-tolerant quantum computer with only Clifford gates (and the magic state distillation gadget). Note that one needs a lot of magic states to perform computations - designs of quantum computers with magic states will most likely have the vast majority of their usable qubits be used for the distillation of magic states - the actual computation will almost be 'an afterthought'. As a closing note, it may very well be that at some point all we care about in quantum computing resources is the distillation of magic states. This is of course an oversimplification, but I use it to emphasize the possible importance of these states.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/13629", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/10028/" ] }
13,644
How can one implement a 3-qubit gate that exchanges the levels $|110\rangle$ and $|000\rangle$ , using elementary gates (CNOT, SWAP, Toffoli, local gates, etc. (everything Qiskit allows)): $$ U=\pmatrix{ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ }=? $$ I tried to decompose the unitary's Hamiltonian $U=\exp(-i\pi H)$ in the Zeeman basis, but it doesn't look very promising.
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/13644", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/5280/" ] }
13,651
Let's say I have a QuantumCircuit with depth $d$ layers. How can I generate a new QuantumCircuit with the last n layers removed. For example, let's say the QuantumCircuit has $d=8$ layers as follows: And let's say the last n=4 layers are desired to be removed then the resulting QuantumCircuit should be as follows leaving only the first 4 layers of the above QuantumCircuit : How can this be done in Qiskit?
{ "source": [ "https://quantumcomputing.stackexchange.com/questions/13651", "https://quantumcomputing.stackexchange.com", "https://quantumcomputing.stackexchange.com/users/12248/" ] }
1
I already asked this question on Stack Overflow, but I would like to know if anyone managed to build a GCC 4.7 toolchain for ARM cross-compilation (for a x86/x86-64 Linux host). There are many instructions for building GCC from source and many available cross-compilers for pre-4.7 GCC versions, just not the latest one. Compiling on Rasp Pi itself works fine, but it's just a bit too slow for practical purposes. I am eager to get compiling and I would like to use the latest and the best tools.
I found these instructions How to build a cross compiler for your Raspberry Pi . It is a great walk through using the crosstool-ng tool, which simplifies configuring a cross-compiler build A LOT (it has a nice curses-based interface) and it supports GCC 4.7. I've followed these steps and ended up with a successful build of a 4.7 cross-compiler. Prerequisites: The following packages are required: bison , flex , gperf , gawk , libtool , automake , g++ ; ensure these are installed before proceeding. First download crosstool-ng from here (I used version 1.15.2). Unpack the distribution and do a ./configure / make / install Create a fresh directory somewhere in the file-system to build in and cd into it. Run ct-ng menuconfig . You will be presented with a nice set of menus to configure your build. Go into Paths and misc options. Enable Try features marked as EXPERIMENTAL. Choose a suitable Prefix directory . This is the directory that your compiler and libraries will get installed in (anything is fine basically, just make sure that the directory is empty). NOTE: It is also important that you have write access to the chosen folder Go to the Target options menu. Target architecture: arm Endianness: little endian Bitness: 32-bit You may also want to set the floating point parameter to softfp (see this for more info), but hardfp is more appropriate for Raspbian. Go to Operating system menu and change Target OS to linux . Go to C compiler menu and choose gcc version 4.7.0 (the article recommends Linaro, but I managed to get it working with vanilla gcc). Also choose additional languages you want to be able to compile (C++, Fortran, ...) Go to C-library menu and choose one. The default is eglibc but that one didn't build fine for me, so I used glibc (the newest version). NOTE: during the build step 13. eglibc can fail to build if subversion is not installed, as the source can't be retrieved from the repository NOTE: eglibc is no longer a part of crosstool-ng as of version 1.21.0 due to its lack of development. See So long to eglibc . Use glibc as default. Exit the configuration tool while saving your changes. Run ct-ng build in the same directory. Wait a while (about 45 minutes in my case) and your cross compiler should be ready. Seems to work great!
{ "source": [ "https://raspberrypi.stackexchange.com/questions/1", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/-1/" ] }
5
The site http://www.raspberrypi.org/quick-start-guide says: You will need an SD card with an operating system preloaded before you can boot the Raspberry Pi. A brand-name (not generic) Class 4 card of 4GB or more is recommended. To obtain an SD card image, and for instructions on how to flash an SD card from a Linux or Windows PC, please refer to http://www.raspberrypi.org/downloads . Which SD cards are compatible with the Raspberry Pi?
This is the most authorative list . I personally bought a Transcend TS8GSDHC10 , but I am still waiting for my Raspberry Pi. You have to be careful and refer to the link, because there have been reported issues with people using microSD in SD adapters and also some of the higher class/speed SD cards suffer data transfer rate issues, so I would refer to this link before buying a new SD card.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/5", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/37/" ] }
25
I'd like to demonstrate how it's possible to 'bootstrap' up from a near-bare installation of Linux to something useful and productive. The Tiny Core and LFS projects demonstrate this well. However they are specific images for x86 based systems Is there a similar project based around ARM based systems, and specifically the Raspberry Pi. If not how would one bootstrap a similar project. It would need to take into account the limited memory, storage and processor speed available and the non-free blob requirements, but should still be possible to build a small custom system. Bonus points if compilation could be done on a separate host via cross-compiling, or using something like dist-cc over a bunch of Pis.
Yes. There is a CLFS for ARM manual available. Don't be worried by the CLFS name, that is what Linux From Scratch calls the manual for compiling LFS on non-x86 systems. Cross Linux From Scratch provides the means to cross-compile an LFS system on many types of systems.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/25", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/21/" ] }
33
Some variation of Linux is the de facto standard for Raspberry Pi. However, smaller, lesser known operating systems do exist and some would seem appropriate for such a small device. Are there any other operating systems that are compatible with the Raspberry Pi?
RISC OS is in the works and there is Qt available now. Some bare metal programmers are working on OSes from scratch as well, but these are more for fun & research than full-blown OSes.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/33", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/8/" ] }
35
What voltage range can it accept? What sort of batteries are appropriate?
Standard USB uses 5V, and the Model B Pi claims to need 700mA. Taken from the Raspberry Pi FAQs : The device should run well off 4 x AA cells. If you were using 1.5V alkaline batteries, you would be oversupplying the board. As with most SoC-based computers, you should use NiMH batteries, as they supply an average 1.25V. This would leave your board at a safe, more controlled, 5V. The Pi will draw the correct amount of Amps it requires from the batteries, so you don't need worry there.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/35", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/68/" ] }
36
On my digital camera, I noticed a performance decrease when I used a larger card; will this happen with the Raspberry Pi too? What's the maximum sized SD card it can handle? I see there is comprehensive list available from this question .
See this link for a list of compatible cards: http://elinux.org/RPi_VerifiedPeripherals#SD_cards . The higher speeds are not necessarily a guarantee of performance. The problem observed at the moment is the support for the higher speeds required by the higher class SD cards, so there have been issues reported. This is a driver issue, and hopefully it should be fixed soon. My advice is to check the list before deciding on what to buy. Updated link: http://elinux.org/RPi_SD_cards
{ "source": [ "https://raspberrypi.stackexchange.com/questions/36", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/68/" ] }
37
Is IPv6 connectivity supported on the Raspberry Pi? If yes, is it enabled by default/how do I turn it on?
There are no hardware limitations for IPv6 connectivity, only software must support it. On Arch distribution, IPv6 is enabled by default, so if you have a router with DHCPv6 or RA, you will automatically get connected to the IPv6 internet. Raspbian supports IPv6, but the kernel module is not loaded by default (which is a crying shame in the wake of recent developments ). IPv6 can be enabled at run time by modprobe ipv6 command or at boot time by appending ipv6 to /etc/modules . I would look into specific distributions' documentations to learn more about configuring IPv6 on the device.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/37", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/-1/" ] }
38
I have a Raspberry Pi model B at home, but I do not have a screen. My plan is to connect it to the Ethernet and then ssh into it. But this means that the SD card with the operating system (Debian Squeeze) has to be prepared first. I see two ways: Prepare the SD with the OS such that the RPi always connects to the Ethernet under a fixed IP address and enables an SSH server. Prepare the SD with the OS such that the RPi connects to the network, enables an SSH server and then broadcasts its IP address so that I can ssh into it. Which of these ways is easier? And how do I do it? Are there other ways? I have the following tools: Ubuntu 10.4, MacOS 10.5, Windows 7, but only the Ubuntu has a cardreader. Unfortunately I cannot access my router's DHCP table, it is completely closed.
To enable ssh at startup, backup boot.rc on the boot partition on the SD image and replace it with boot_enable_ssh.rc I don't know about your router, but you may be able to configure it to reserve a fixed IP address for the MAC address of your Pi.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/38", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/9/" ] }
41
Is it possible to use the GPU for calculations? (e.g. CUDA/OpenCL)
As of 2012, your best bet was to implement your computation as a fragment shader in GLSL ES and find a way to represent the output as a RGBA (32-bit) texture. Eben stated in this 2012 talk that OpenCL is not likely to be implemented, but that there may be an API developed in the future; the answer starts at 21:20 , and Eben says "we may provide some way for people to get some of that general purpose compute out". Recent developments such as the VC4CL project have attempted to implement OpenCL on the VideoCore IV GPU used by the Raspberry Pi, and other related projects now provide access to some of the general compute power of the GPU.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/41", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/35/" ] }
44
I can't get the audio running. I don't hear anything and can not play anything. Is there a solution to enable audio?
Sound does not work with an HDMI monitor
This is caused by some computer monitors which select DVI mode even if an HDMI cable is connected. This fix may be necessary even if other HDMI devices work perfectly on the same monitor (or TV)! Edit the /boot/config.txt file (see Raspberry-Pi Configuration File) and add the following line to the configuration file:
hdmi_drive=2
Sound does not work at all, or in some applications
Sound is disabled by default because the ALSA sound driver is still "alpha" (not fully tested) on the R-Pi. To try out sound, from the command prompt before "startx", type
sudo apt-get install alsa-utils
sudo modprobe snd_bcm2835
sudo aplay /usr/share/sounds/alsa/Front_Center.wav
By default output will be automatic (hdmi if hdmi supports audio, otherwise analogue). You can force it with:
sudo amixer -c 0 cset numid=3 <n>
where n is one of: 0 = auto , 1 = headphones , 2 = hdmi . ( source )
If you are running Debian, try
cd /opt/vc/src/hello_pi
make -C libs/ilclient
make -C libs/vgfont
cd hello_audio
make
./hello_audio.bin
to test analogue output. And to test HDMI:
./hello_audio.bin 1
Also note that you may have to add your user to the audio group to get permission to access the sound card.
gpasswd -a <username> audio
Making the changes permanent
sudo apt-get install alsa-utils is permanent, but sudo modprobe snd_bcm2835 only initialises the driver for the current session. To ensure the module is initialised on boot, add snd_bcm2835 to /etc/modules ( source ).
{ "source": [ "https://raspberrypi.stackexchange.com/questions/44", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/49/" ] }
70
Raspberry Pi has only 256 MB of RAM, so I would like to use swap space (either on SD card or attached USB storage). How do I set it up?
Raspbian uses dphys-swapfile , which is a swap-file based solution instead of the "standard" swap-partition based solution. It is much easier to change the size of the swap. The configuration file is:
/etc/dphys-swapfile
The content is very simple. By default my Raspbian has 100MB of swap:
CONF_SWAPSIZE=100
If you want to change the size, you need to modify the number and restart dphys-swapfile:
/etc/init.d/dphys-swapfile restart
Edit: On Raspbian the default location is /var/swap, which is (of course) located on the SD card. I think that is a bad idea, so I would like to point out that /etc/dphys-swapfile can have the following option too:
CONF_SWAPFILE=/media/btsync/swapfile
The only problem with it is that the USB storage is automounted, so there is a potential race here (automount vs. swapon).
{ "source": [ "https://raspberrypi.stackexchange.com/questions/70", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/-1/" ] }
71
I read that the current should be at least 700 mA. I plugged a power supply with output current of 550 mA, and it seems to work fine. Can something happen to my Raspberry Pi? Does a wrong current influence the performance?
The 700 mA recommendation errs on the side of caution. The Raspberry Pi itself needs around 400 mA. Powering a typical basic keyboard and mouse needs another 100 mA or so. You should be fine. But if you plug anything in that needs some serious power like a Wi-Fi adapter, keep in mind that you may be pushing the limits of your power supply.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/71", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/91/" ] }
72
I've not ordered mine (yet!), but I do some overclocking as a hobby to the extent that my netbook, Android phone, desktop, and even my wife's Blackberry are overclocked. Is there potential to overclock the RPi beyond stock voltage and speeds? I figure I can rig some sort of custom cooling if need be.
Without overvoltage (i.e. at the default 1.2V), most Pis can run at up to 800MHz stably. With overvoltage, 1000MHz is common. WARNING : Setting any of the parameters which over volt your Raspberry Pi will set a permanent bit within the SOC and your warranty is void. So If you care about the warranty do not adjust voltage. References: http://www.raspberrypi.org/faqs http://www.raspberrypi.org/phpBB3/viewtopic.php?f=29&t=6201 http://www.senab.co.uk/2012/05/29/raspberry-pi-overclocking/
{ "source": [ "https://raspberrypi.stackexchange.com/questions/72", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/90/" ] }
86
Has anyone successfully set up their Pi to run on solar power? If so, what would be the cheapest way to sensibly and reliably achieve this, using a combination of solar cells / batteries / voltage regulators? Is it a case of just building panels that give the required current, regulating the voltage and supplying a battery backup, or is there something else I should be aware of?
There are a few things you need to be aware of if you want to build your own solar power supply. The main thing is that the voltage output of solar cells can vary wildly based on the incident sunlight (how sunny it is). Simple unregulated solar panels sold as 'battery chargers' are often designed for deep-cycle 12v 'leisure' batteries, but can be measured at 18v or more open circuit in bright sunlight (even in the UK *8'). The internal resistance of the battery keeps that voltage down to the battery level, but without one the higher-than-nominal voltage could do damage to directly connected electronics if it isn't regulated. As such, you should be looking at either regulating the solar panel or moderating its output with a battery, and ideally both. Using a battery also allows you to accumulate excess energy production and supply power when power from the solar panel dips, which means that you may be able to get away with a solar panel sized for your average power needs, rather than one sized for your maximum power needs. If you want to keep the size of solar panel and battery to a minimum, you may want to calculate the total expected power load, the average power generation throughout the year and the battery capacity to get you through the nights and the winter months. There is some excellent advice about solar charging in answers to my question over on electronics stack exchange, including information on how to calculate whether sunshine in your area is sufficient for your application for a given solar panel and battery combination, using location specific information from gaisma .
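As a rough back-of-the-envelope sizing sketch in Python (every number below is an assumption you would replace with your own measurements):
pi_power_w = 5.0 * 0.7          # assume ~700 mA at 5 V, i.e. ~3.5 W continuous
hours_per_day = 24.0
energy_per_day_wh = pi_power_w * hours_per_day         # ~84 Wh/day

sun_hours_winter = 1.5          # assumed full-sun-equivalent hours per winter day
system_efficiency = 0.7         # assumed charging/regulation losses
panel_w = energy_per_day_wh / (sun_hours_winter * system_efficiency)
print(round(panel_w), "W panel (roughly)")              # ~80 W

battery_v = 12.0
autonomy_days = 2.0             # assumed sunless days to ride through
usable_depth = 0.5              # assumed usable fraction of a lead-acid battery
battery_ah = energy_per_day_wh * autonomy_days / (battery_v * usable_depth)
print(round(battery_ah), "Ah battery (roughly)")        # ~28 Ah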
{ "source": [ "https://raspberrypi.stackexchange.com/questions/86", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/93/" ] }
87
There have been a few articles floating around online over the last few years about building a cluster of computers at home - here for example. The Pi strikes me as a good platform for building / testing this kind of thing due to their low cost; should "generic" guides like this transfer over to the Pi pretty easily, or is there anything specifically I should be aware of when attempting such a project?
I suggest looking at Dispy - the distributed computation python module. To run a program on a number of Raspberry Pi's (nodes) from a PC (server - assume IP is 192.168.0.100):
Install an operating system on each RasPi.
Attach each RasPi to your network. Find the IP (if dynamic), or set up static IPs. (Let's assume that you have three nodes, and their IPs are 192.168.0.50-52.)
Set up Python (if not already), install dispy , then run dispynode.py -i 192.168.0.100 on each RasPi. This will tell dispynode to receive job information from the server.
On the PC (the server), install dispy , then run the following python code:
#!/usr/bin/env python
import dispy
cluster = dispy.JobCluster('/some/program', nodes=['192.168.0.50', '192.168.0.51', '192.168.0.52'])
You can also replace /some/program with a python function - e.g. compute . You can also include dependencies such as python objects, modules and files (which dispy will transfer to each node) by adding depends=[ClassA, moduleB, 'file1']
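As a sketch of what submitting work and collecting results can look like with dispy's JobCluster/submit pattern (the compute function and the node IPs are placeholder examples, not part of the original answer):
import dispy

def compute(n):
    # runs on a node; keep imports inside the function so they exist remotely
    import socket, time
    time.sleep(n)
    return (socket.gethostname(), n)

cluster = dispy.JobCluster(compute, nodes=['192.168.0.50', '192.168.0.51', '192.168.0.52'])
jobs = [cluster.submit(i) for i in range(10)]   # distribute 10 jobs across the nodes
for job in jobs:
    host, n = job()                             # waits for the job and returns its result
    print(host, 'slept for', n, 'seconds')
cluster.print_status()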
{ "source": [ "https://raspberrypi.stackexchange.com/questions/87", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/93/" ] }
93
Is it possible (and feasible) to run .NET applications on the Raspberry Pi with Mono ? If so, how well do they run? Is a basic GUI usable, or does poor performance realistically restrict it to command line applications?
There is a StackOverflow question quite similar to this, Mono on Raspberry Pi . However, through my own research, I haven't been able to find anything specific to .NET, but rather just C#. You can install the runtime using APT on a Debian distro by executing: $ sudo apt-get install mono-runtime You can also (assuming you have some sort of GUI such as LXDE) install a slow Mono IDE with: $ sudo apt-get install monodevelop For Arch Linux ARM you need to install the runtime via Pacman , like so: $ sudo pacman -S mono The Mono IDE can be installed in a similar fashion: $ sudo pacman -S monodevelop
{ "source": [ "https://raspberrypi.stackexchange.com/questions/93", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/93/" ] }
103
As per the title, what are the maximum and minimum operational temperature ratings of the Pi before it stops reliably working? Could this also depend on the SD card in use?
From the RPi FAQ : What is the usable temperature range? The Raspberry Pi is built from commercial chips which are qualified to different temperature ranges; the LAN9512 is specified by the manufacturers being qualified from 0°C to 70°C, while the AP is qualified from -40°C to 85°C. You may well find that the board will work outside those temperatures, but we’re not qualifying the board itself to these extremes.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/103", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/93/" ] }
104
I've noticed that the RPi comes caseless. But does it just come with the computer itself? Or are there cables, instructions, swag (stickers, balloons, etc) that ship with it?
In the US, from RS, I received a brown box with the Raspberry Pi in an anti-static bag, sandwiched between two pieces of foam. Also included were a quick start guide and a regulatory notice. Element 14 shipped theirs in a smaller white box, also in an anti-static bag. There are no cables or SD cards included.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/104", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/90/" ] }
105
Is it possible to reliably run a Pi in an airtight (watertight) case or is some form of ventilation required for cooling? I'm assuming it doesn't chuck out too much heat, but am wondering whether running it indefinitely this way may cause issues.
There are no ventilation or cooling requirements. This has been verified by an RPi admin here .
{ "source": [ "https://raspberrypi.stackexchange.com/questions/105", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/93/" ] }
126
My first thought was to simply start broadcasting WOL magic packets over the network, but my Raspberry Pi is not waking up. So I'm assuming it either doesn't support WOL, or I haven't properly configured it. What do I have to do to enable Wake-on-LAN?
It doesn't support WoL. Considering the device draws so little power, the benefits of shutting it off and waking it with WoL are few and far between. Just leave it on!
{ "source": [ "https://raspberrypi.stackexchange.com/questions/126", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/87/" ] }
164
I understand that the software on the Raspberry Pi is divided into three sections: the closed-source GPU firmware, the patched ARM Linux kernel and the user space software. Is the GPU firmware on the chip or SD card? Is there an easy way to update everything (firmware, kernel, modules)?
EDIT: Since this post was written, the advice has changed. rpi-update should not be used unless specifically advised to by an RPi engineer / beta-testing. It is an unstable version of the firmware. It used to be necessary for updates but isn't anymore. See this answer on another question. What is the GPU firmware and kernel? The kernel is responsible for managing the resources of the Raspberry Pi and runs on the central processing unit (CPU). It allows tasks to run on the CPU. The GPU firmware, on the other hand, manages the graphical processing unit (GPU). The two separate units are on the same chip and share memory, which is segregated at boot time according to hard-coded start.elf files. In order to use the Raspberry Pi, both sets of files must be in the correct locations on the SD card. You can buy preloaded SD cards from the retail partners of the Foundation. Alternatively, the Foundation regularly release new SD cards images at http://www.raspberrypi.org/downloads . You must use Unix's dd or Windows' Win32DiskImager to load this on an existing SD card. It's not possible to compile your own GPU firmware image, because it is closed source, so we rely on the Foundation and Broadcom to supply this. On the other hand, you can compile your own kernel image from source. Cross-compilation is the subject of other questions, such as How do I cross-compile the kernel on a Ubuntu host? Updating the GPU firmware - Debian/Raspbian You can update the firmware using rpi-update by Hexxeh. On Raspbian , you can install it by running sudo apt-get install rpi-update To update the software, run sudo rpi-update Updating userspace and kernel Software - Debian/Raspbian The userspace software must be maintained. It's pretty easy; just run sudo apt-get upgrade If there are any errors, you can try updating the database first by running sudo apt-get update If you don't understand an error, then it's probably best you ask here or try googling. Updating software - Arch Linux The software must be maintained. The advantage of Arch Linux over Debian here is that Arch Linux manages the Raspberry Pi's firmware within the package management system. To update, just run sudo pacman -Syu If there are any errors and you don't understand it, then it's probably best you ask here or try googling. References rpi-update Repository
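If you want to confirm what you are actually running before and after an update, the following commands are a reasonable sketch (output format varies between images, and on some older images vcgencmd lives at /opt/vc/bin/vcgencmd):

uname -r          # running kernel version
vcgencmd version  # GPU firmware build date/version

Comparing the output before and after rpi-update or apt-get upgrade makes it easy to see whether anything actually changed.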
{ "source": [ "https://raspberrypi.stackexchange.com/questions/164", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/35/" ] }
165
How can I connect an SD card containing a Raspberry Pi OS to my Linux PC, and boot the OS in an emulator? Why won't VMware work? What are the limitations of this method? Related: Emulation on a Windows PC
Yes this is completely possible. However, in reality it's a little bit different to how you are thinking. Preamble The SD card contains an image of the operating system, which the device boots from when it is powered on. As I expect you already know, you flash this image onto the SD card in order to create a working system. However, what you can do before you flash the image is have a play around with it using QEMU, which is a processor emulator, and allows us to emulate the ARM instruction set. In this way, any changes you make to the image (installing, compiling etc) will still be there after you flash it to the SD card. I'll now talk you through how to use QEMU to load the image. I will be demonstrating with the Arch Linux image, but the process should be the same regardless. Using QEMU Prerequisites You will need to acquire QEMU for your system. QEMU should only have one requirement: in order for input devices to work you need to have the SDL development package installed, which should be available from your package manager. I recommend downloading the package using your regular package manager: Arch: sudo pacman -S sdl qemu Ubuntu: More recent versions (since 14.04) have a package for it: sudo apt-get install qemu-system-arm For older versions: sudo apt-get install libsdl-dev sudo add-apt-repository ppa:linaro-maintainers/tools sudo apt-get update sudo apt-get install qemu-system Building QEMU yourself Alternatively, you can build QEMU yourself. This is great if you want to try a new version, but it can be slow, and be prepared for lots of errors during compilation! Note that if building QEMU from their website it must be compiled for ARM support, so check your distro repositories first. This can be done like so: mkdir rpi-emu && cd rpi-emu wget http://wiki.qemu.org/download/qemu-1.1.0-1.tar.bz2 tar xvjf qemu-1.1.0-1.tar.bz2 cd qemu-1.1.0-1 ./configure --target-list=arm-softmmu,arm-linux-user make sudo make install Verify that you have ARM support with: qemu-system-arm --version QEMU emulator version 1.0,1, Copyright (c) 2003-2008 Fabrice Bellard Obtaining the Image We're working with Arch Linux, so we will be using the Arch Linux ARM image. But replace this with whatever you wish to work with, or perhaps you already have an image, in which case skip this step. wget http://anorien.csc.warwick.ac.uk/mirrors/raspberrypi.org/images/archlinuxarm/archlinuxarm-29-04-2012/archlinuxarm-29-04-2012.zip unzip archlinuxarm-29-04-2012.zip For QEMU to work we also need the kernel image (which would be inside the .img file). Note: I don't think this step is necessary for Debian. Someone please confirm. Luckily there are precompiled images available, and you can use the one from here (direct download). TODO: Demonstrate how to compile a kernel image here? Starting the VM You should now have: An .img file, which you can verify using the supplied SHA1 checksum (recommended). A kernel image (zImage). QEMU for ARM. The virtual machine can now be started using the following long-winded command: qemu-system-arm -kernel zImage -cpu arm1176 -M versatilepb -serial stdio -append "root=/dev/sda2" -hda archlinuxarm-29-04-2012.img -clock dynticks Note that you will only have several hundred megabytes of storage using this method (whatever is spare on the image). A virtual hard disk can be created by following the QEMU user guide.
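If the few hundred megabytes of spare space in the image is not enough, one approach (sketched here with an arbitrary file name and size) is to create a second disk image with qemu-img and attach it as an extra drive:

qemu-img create -f qcow2 extra-storage.qcow2 4G

Then add -hdb extra-storage.qcow2 to the qemu-system-arm command above; inside the guest the new disk should appear as a second drive (typically /dev/sdb), which you can partition, format and mount as usual.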
{ "source": [ "https://raspberrypi.stackexchange.com/questions/165", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/35/" ] }
169
I'm not all that keen to buy a USB hard disk, but I'm aware that SD cards can only withstand a limited number of write cycles. Are there any steps I can take to extend the life of my SD card while it's being used by my Raspberry Pi?
These methods should increase the lifespan of the SD card by minimising the number of read/writes in various ways: Disable Swap Swapping is the process of using part of the SD card as virtual memory. This will increase the amount of memory available beyond the physical RAM, but it will result in a high number of read/writes. It is unlikely to increase performance significantly. Disable swap with the swapoff command: sudo swapoff --all You must also prevent it from coming back after a reboot: For Raspbian, which uses dphys-swapfile to manage a swap file (instead of a "normal" swap partition), you can simply sudo apt-get remove dphys-swapfile to remove it permanently. It is best to remove it because setting CONF_SWAPSIZE to 0, as explained in this answer, doesn't seem to work and still creates a 100MB swap file after reboot. For other distributions that use a swap partition instead of a swap file, remove the appropriate line from /etc/fstab. Disabling Journaling on the Filesystem Using a journaling filesystem such as ext3 or ext4 WITHOUT a journal is an option to decrease read/writes. The obvious drawback of using a filesystem with journaling disabled is data loss as a result of an ungraceful unmount (i.e. after a power failure, kernel lockup, etc.). You can disable journaling on ext3 by mounting it as ext2. You can disable journaling on ext4 on an unmounted drive like this: tune4fs -O ^has_journal /dev/sdaX e4fsck -f /dev/sdaX sudo reboot The noatime Mount Flag Assign the noatime mount flag to partitions residing on the SD card by adding it to the options section of the partition in /etc/fstab. Reading accesses to the file system will no longer result in an update to the atime information associated with the file. The importance of the noatime setting is that it eliminates the need for the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, as mentioned in the previous section, this can result in measurable performance gains. Note that the write time information of a file will continue to be updated whenever the file is written to with this option enabled. Directories in RAM Highly used directories such as /var/tmp/ and possibly /var/log can be relocated to RAM in /etc/fstab like this: tmpfs /var/tmp tmpfs nodev,nosuid,size=50M 0 0 This will allow /var/tmp to use 50MB of RAM as disk space. The only issue with doing this is that any drives mounted in RAM will not persist past a reboot. Thus if you mount /var/log and your system encounters an error that causes it to reboot, you will not be able to find out why. Directories on an External Hard Disk You can also mount some directories on a persistent USB hard disk. More details of this can be found in this question. The Raspberry Pi can also boot its root partition from an external drive. This could be via USB or Ethernet and means that the SD card will only be used to delegate to a different device during boot. This requires a bit of kernel hacking to accomplish, as I don't think the default kernel supports USB storage. You can find more information at this question, or this external blog post.
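As an illustration only (device names and partition layout differ between images, so check your own /etc/fstab before editing), a root-partition entry with noatime added would look something like:

/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1

After editing, either reboot or remount with sudo mount -o remount / for the flag to take effect.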
{ "source": [ "https://raspberrypi.stackexchange.com/questions/169", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/68/" ] }
192
I would like to understand more about how the kernel works. Part of this is to compile it myself. How do I cross-compile the kernel on an Ubuntu host?
Preparation First, we need to install the required prerequisites. I assume you have sudo access. sudo apt-get install git ncurses-dev make gcc-arm-linux-gnueabi git is the version control system used by the Linux kernel team. ncurses is a library for building console menus. It is necessary for menuconfig. make runs the compilation for us. gcc-arm-linux-gnueabi is the cross-compiler. Next, we need to retrieve the source; run: git clone https://github.com/raspberrypi/linux raspberrypi-linux cd raspberrypi-linux This will clone the source code to a directory called raspberrypi-linux and change to it. Compilation We first need to copy the default config file into place by running cp arch/arm/configs/bcmrpi_cutdown_defconfig .config Then configure the kernel build make ARCH=arm CROSS_COMPILE=/usr/bin/arm-linux-gnueabi- oldconfig Optional: Customise the build using menuconfig make ARCH=arm CROSS_COMPILE=/usr/bin/arm-linux-gnueabi- menuconfig Then run the compilation make ARCH=arm CROSS_COMPILE=/usr/bin/arm-linux-gnueabi- -k References http://elinux.org/RPi_Kernel_Compilation https://github.com/raspberrypi/linux http://packages.ubuntu.com/precise/gcc-arm-linux-gnueabi http://mitchtech.net/raspberry-pi-kernel-compile/
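As a small addition (the -j value below is arbitrary; roughly match it to the number of cores on your build machine), the compilation can be parallelised, and the resulting compressed kernel image should end up under arch/arm/boot/:

make ARCH=arm CROSS_COMPILE=/usr/bin/arm-linux-gnueabi- -k -j4
ls arch/arm/boot/zImage

That zImage is the kernel you would then install onto the SD card's boot partition (typically after converting or renaming it to kernel.img; the exact procedure depends on the firmware version).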
{ "source": [ "https://raspberrypi.stackexchange.com/questions/192", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/86/" ] }
194
Has anyone done some web server benchmarking on their Raspi? I don't have my Raspi yet, but I'm planning on using it as a web server and I'm interested to see stats for: Number of requests per second Latency/response time between requests Throughput (i.e. bytes per second) If these are different for different web server software and OS combinations, I'd also be interested to see a comparison.
I expect that, as Alex says, the benchmarks will show that the fastest Linux webservers will still be the fastest, regardless of architecture. If anyone wants to run benchmarks then the following tutorial has been useful to me: How to perform benchmarks on a web-server Serving Static Pages I have tested the RPi using Apache serving a simple static page: <h1>It works!</h1> As a control group I used my primary webserver which totes the following spec; Intel(R) Xeon(R) CPU X3323 @ 2.50GHz 384MB RAM The results are as follows: Control ab -n 1000 -c 5 http://www.ivings.org.uk/~james/index.html Server Software: Apache/2.2.14 Server Hostname: www.ivings.org.uk Server Port: 80 Document Path: /~james/index.html Document Length: 19 bytes Concurrency Level: 5 Time taken for tests: 17.767 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 294000 bytes HTML transferred: 19000 bytes Requests per second: 56.29 [#/sec] (mean) Time per request: 88.833 [ms] (mean) Time per request: 17.767 [ms] (mean, across all concurrent requests) Transfer rate: 16.16 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 42 44 0.8 44 50 Processing: 44 45 0.9 45 59 Waiting: 44 45 0.9 45 59 Total: 86 89 1.3 88 108 Percentage of the requests served within a certain time (ms) 50% 88 66% 89 75% 89 80% 89 90% 90 95% 91 98% 91 99% 94 100% 108 (longest request) Raspberry Pi ab -n 1000 -c 5 http://86.137.189.68/index.html Server Software: Apache/2.2.22 Server Hostname: 86.137.189.68 Server Port: 80 Document Path: /index.html Document Length: 19 bytes Concurrency Level: 5 Time taken for tests: 23.186 seconds Complete requests: 1000 Failed requests: 0 Write errors: 0 Total transferred: 304000 bytes HTML transferred: 19000 bytes Requests per second: 43.13 [#/sec] (mean) Time per request: 115.930 [ms] (mean) Time per request: 23.186 [ms] (mean, across all concurrent requests) Transfer rate: 12.80 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 40 44 5.6 43 116 Processing: 49 71 156.1 57 2157 Waiting: 47 53 6.7 55 104 Total: 91 116 156.1 99 2198 Percentage of the requests served within a certain time (ms) 50% 99 66% 100 75% 100 80% 100 90% 102 95% 126 98% 150 99% 667 100% 2198 (longest request) Conclusion Note: This is best treated as an estimate. The results show that the Raspberry Pi actually performed damn well considering. It was only slightly less responsive than my primary webserver. It should be fine handling a reasonably large number of requests.
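If anyone wants to extend this, the same ab tool makes it easy to sweep the concurrency level; for example (substitute your own Pi's address):

ab -n 1000 -c 20 http://<pi-address>/index.html

Comparing the requests-per-second figure at, say, -c 5, 20 and 50 gives a rough picture of how gracefully the Pi degrades as the number of simultaneous clients grows.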
{ "source": [ "https://raspberrypi.stackexchange.com/questions/194", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/4/" ] }
208
I am looking for a cheap computer to distribute with my software & hardware. This will make it easier to convince small businesses to change their platform. Instead of buying a $300+ computer, give them a $30 Raspberry Pi... Since the vision of the RPi team was to make cheap computers so children can learn to program, I thought there might be a licence preventing commercial use.
You are within your rights to purchase a Raspberry Pi and use it for whatever purpose you desire. However, the major problem that you could run into is the level of availability. RPis are hot little devices with limited availability. Demand will certainly cool off in the near future, but even over a longer time frame (a year or two) production capacity may not be large enough to fill larger quantity orders (1000+) for commercial applications. Long term, this may be a possibility, but for the foreseeable future the RPi may not be your best choice for an embedded commercial computer system unless you are expecting to integrate the device in a very small number of units and you are willing to wait a long time to receive your own inventory. Only you know your specific needs and can justify the use of the device. UPDATE Yesterday (July 16, 2012), the official Raspberry Pi blog released information stating that production of the RPi boards is being significantly ramped up, that purchase orders should be filled within 4 to 6 week lead times, and that bulk orders of the device can now be placed. Of course, if you need an excessively large number of units (i.e. multi-thousand), receiving your Pis may take more time than the allotted 4-6 weeks. Additionally, Gizmodo has reported that production has been ramped up to 4,000 units per day. I believe this production quantity should meet the demand of most small to mid-sized projects.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/208", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/136/" ] }
213
I recently bought an RPi and it came with Debian, which I think is a Linux distribution. I'm more used to Windows; can I install it?
No. At this point in time, Windows cannot be installed on the Raspberry Pi. Windows is designed for the x86 and x86-64 architectures (32-bit and 64-bit respectively). The RPi has an ARM architecture, which is incompatible. Windows 10 Microsoft have announced that Windows 10 will ship a version that supports the Raspberry Pi 2.
{ "source": [ "https://raspberrypi.stackexchange.com/questions/213", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/86/" ] }
222
One may buy a simple HDMI-to-VGA cable, or an HDMI-to-DVI cable. Example: on Amazon . However, my vague knowledge is that these cables only work for video cards that have special support for this function. Does the Raspberry Pi support such things? Is there any easy way to use the Raspberry Pi on a screen that only takes VGA input without a converter box? What kind of conversion to other video outputs would the Raspberry Pi hardware support, and what cables or other equipment would be necessary for this?
No. The reason a DVI-to-VGA adapter works on your PC & laptop is that DVI includes analog (RGB) pins. The adapter is passive; It just connects the red analog output of the PC to the red analog input of the monitor, ditto for green and blue. They are included on most PCs and laptops for backward compatibility. HDMI-to-DVI cables are also passive, but they carry digital signals only. The analog RGB signals are missing, but that does not matter as the DVI monitor does not need them. HDMI contains no analog signal so it is not possible for any combination of passive adapters and/or cables to convert it to VGA. Active adapters work because they use DSPs and DACs to convert from one standard to the other, but of course they are more expensive than passive cables. Related forum thread: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=30&t=8125
{ "source": [ "https://raspberrypi.stackexchange.com/questions/222", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/28/" ] }
236
My USB keyboard was configured slightly wrong. So far I have only tried it with the Debian image. Certain keys were not in the right place (such as the '~' character). I know how to change the keyboard configuration and that this is trivial for many Linux users. However, in the interest of benefit to the Raspberry Pi community (especially those fairly new to Linux), I was wondering if there's a simple way to automatically set up a keyboard to correct these sorts of issues.
You need to reconfigure your keyboard mappings. At the command line type: sudo dpkg-reconfigure keyboard-configuration Follow the prompts. Then restart your RasPi. sudo reboot Or sudo nano /etc/default/keyboard find the line where it says XKBLAYOUT="gb" and change the gb to the two-letter code for your country (e.g. us). Then restart your RasPi. If, after remapping your keyboard, you get a long delay during the keyboard mapping phase of boot-up, type the following (once) on the command line: sudo setupcon
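If you just want to try a layout for the current session before making it permanent, these commands are commonly available (the layout code us is only an example): on the text console, sudo loadkeys us ; within an X session, setxkbmap us . The permanent change still requires one of the methods above.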
{ "source": [ "https://raspberrypi.stackexchange.com/questions/236", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/20/" ] }
246
Is it possible to power the device without using the micro USB? For example, is it possible to use PoE (Power over Ethernet)?
Yes it is possible. But not with PoE. The board can be powered using the GPIO. Here is a pin diagram showing the pins for power at the top. Maximum permitted current draw from the 3v3 pin is 50mA. Maximum permitted current draw from the 5v pin is the USB input current (usually 1A) minus any current draw from the rest of the board. Model A: 1000mA - 500mA -> max power draw: 500mA Model B: 1000mA - 700mA -> max power draw: 300mA You can read more on elinux .
{ "source": [ "https://raspberrypi.stackexchange.com/questions/246", "https://raspberrypi.stackexchange.com", "https://raspberrypi.stackexchange.com/users/20/" ] }