users as the landscape of AI continues to evolve. By embracing this mindset and approach, teams can navigate the challenges of building AI-powered MVPs amid varying user expectations and technological and economic constraints, and deliver innovative products that captivate users and pave the way for long-term success amidst continuous AI innovation.

Thanks for reading this article! Hope you found value. Stay updated by following me on Medium, subscribing to receive exclusive email updates, or connecting with me on LinkedIn for more insightful content and networking opportunities. Let’s stay connected, continue the conversation, and share this article on Twitter and LinkedIn to reach more professionals in our community. Don’t forget to leave a comment below if you have any questions or thoughts to share!

Notes

Back to the Table of Contents

[1] — “Managing Expectations in the problem space” refers to the challenge of aligning users’ expectations with the actual capabilities of the AI-powered MVP within the specific domain or area where it operates. It involves ensuring that users have a realistic understanding of what the MVP can deliver and how it addresses their needs or challenges within that problem space. This alignment is crucial for the success of the MVP, as unrealistic expectations can lead to user dissatisfaction, disappointment, or skepticism about the product’s value proposition. Therefore, effectively managing expectations involves transparent communication, setting realistic goals, and educating users about the limitations and possibilities of the AI solution within the defined problem space.

[2] — Cross-validation is a technique that divides the data into multiple folds or subsets. The model is trained on a subset of the data (training set) and evaluated on the remaining folds (test sets). This process is repeated multiple times, and the results are averaged to get an estimate of the model’s performance. Cross-validation helps avoid the optimism bias in error estimates, especially for complex models. It is commonly used to evaluate the generalization performance of machine learning models. Read more
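As a concrete illustration of note [2], here is a minimal k-fold cross-validation sketch using scikit-learn; the dataset, model, and fold count are arbitrary placeholders, not a recommendation from the article.

```python
# A minimal k-fold cross-validation sketch (illustrative dataset and model).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Split into 5 folds: train on 4, score on the held-out fold, repeat, then average.
scores = cross_val_score(model, X, y, cv=5)
print("Per-fold accuracy:", scores)
print(f"Mean accuracy: {scores.mean():.3f} (std {scores.std():.3f})")
```

Averaging the per-fold scores is what gives the less optimistic estimate of generalization performance that the note describes.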
[3] — Bootstrapping is a resampling technique that involves creating multiple samples from the original dataset by sampling with replacement. Each bootstrapped sample has the same size as the original dataset, but some data points may be repeated while others are omitted. The model is then trained and evaluated on each of the bootstrapped samples, and the results are averaged to get an estimate of the model’s performance. Bootstrapping is useful for statistical inference, such as estimating the standard error of a statistic or constructing confidence intervals. It is often used for parameter estimation, ensemble learning, and evaluating the uncertainty of model predictions. Read more
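A minimal bootstrap sketch for note [3], assuming the goal is to estimate the spread of a model's accuracy; the dataset, model, and number of resamples are illustrative, and scoring on the full dataset is a simplification for brevity.

```python
# A minimal bootstrap sketch: resample with replacement, refit, collect the metric.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(seed=0)

scores = []
for _ in range(200):  # number of bootstrap samples (illustrative)
    idx = rng.integers(0, len(X), size=len(X))  # same size as the original, sampled with replacement
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    scores.append(accuracy_score(y, model.predict(X)))  # scored on the full set for brevity

print(f"Bootstrap accuracy: mean {np.mean(scores):.3f}, std error {np.std(scores):.3f}")
```

The standard deviation across resamples is the bootstrap estimate of uncertainty mentioned in the note.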
[4] — Explainability refers to the ability to explain or present the behavior of AI models in human-understandable terms. It is crucial for building trust with end-users and providing insights to researchers and developers to identify biases, risks, and areas for performance improvement.

[5] — Interpretability techniques are methods used to make AI models more transparent and understandable, such as Feature importance analysis: measuring the relevance of each input feature (e.g., words, phrases, text spans) to the model’s predictions. It provides insights into how the AI model is making decisions, which can help debug and improve the model’s performance; Model visualization: visualizing the representations and decision-making processes of the AI models, such as through scatter plots, cluster analysis, and Principal Component Analysis.
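To make note [5] concrete, here is a small sketch of one interpretability technique, permutation feature importance, using scikit-learn; the dataset and model are placeholders, not the ones the article has in mind.

```python
# One interpretability technique: permutation feature importance.
# Shuffle one feature at a time and measure how much the held-out score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Features whose shuffling hurts the score most are the ones the model relies on.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```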
[6] — The core message of the article When — and Why — You Should Explain How Your AI Works by Reid Blackman and Beena Ammanath is that while AI adds significant value by identifying complex patterns in data, it often operates as a “black box,” making it difficult for stakeholders to understand its operations and trust its outputs. Explainable AI (XAI) aims to address this issue by providing insights into how AI makes decisions. However, achieving explainability involves trade-offs between accuracy and complexity. Organizations should assess the need for explainability based on factors such as regulatory compliance, end-user understanding, system improvement, and fairness assessment. A framework for prioritizing explainability in AI projects can help organizations make informed decisions about when and how to implement XAI.
[7] — Conducting ethical impact assessments and audits for AI systems to build trust, ensure responsible AI development, and prevent serious social and ethical harm would entail the following key elements:
i) Establishing Ethical Frameworks and Guidelines: Developing clear ethical principles, guidelines, and frameworks to guide the assessment and auditing process. These would typically cover areas like bias, privacy, transparency, accountability, and responsible AI development.
ii) Mapping AI System Interactions: Creating detailed system diagrams or models to map out the various components of the AI system and the interactions between them. This helps identify potential ethical risks and issues that may arise from these interactions.
iii) Ethical Impact Identification: Systematically identifying the potential ethical impacts and risks associated with the AI system through methods like literature analysis, foresight techniques such as scenario building, and stakeholder consultations.
iv) Ethical Impact Evaluation: Assessing the relative importance and severity of the identified ethical impacts. Identifying potential value conflicts and trying to resolve them. Formulating conceptual definitions of the relevant ethical principles and values.
v) Remedial Actions: Developing and implementing design interventions, recommendations, and other remedial actions to mitigate the identified ethical risks and impacts. Documenting and communicating the remedial actions taken.
vi) Review and Audit: Establishing a process to review and audit the ethical impact assessment at various stages. Setting milestones and criteria for the review and audit process. Ensuring proper documentation, follow-up, and sign-off of the ethical impact assessment.
The key goal of this process is to proactively identify and address ethical issues associated with AI systems, rather than addressing them reactively after problems have occurred. Read more
[8] — When you develop your own Generative AI model, you have full control over the data used to train the model. This allows you to ensure data privacy and security, especially for sensitive or proprietary information, by keeping it within your infrastructure. You can implement robust data governance policies and controls to protect user data and comply with industry regulations.

[9] — An in-house Generative AI model gives you complete control over the model’s performance and reliability. You can fine-tune and optimize the model to ensure consistent, high-quality outputs that meet your specific requirements. You’re not reliant on a third-party API provider’s uptime and service level agreements, giving you greater autonomy and control.
[10] — Developing an in-house Generative AI model requires significant upfront investment in infrastructure, talent, and ongoing maintenance. However, as your product scales, the unit economics can become more favorable compared to relying on a third-party API with usage-based pricing. You can optimize the model’s efficiency, tailor the pricing structure, and leverage the model’s capabilities across multiple use cases to improve the overall cost-effectiveness. Here are the key points on how to identify the unit economics for a Generative AI product:
i) Compute Costs: The primary driver of unit economics for Generative AI is the compute cost, which includes the cost of training the model and the cost of inference (running the model to generate outputs). Factors like the size of the model (number of parameters), the amount of training data, and the hardware used (e.g., GPUs, TPUs) all impact the compute costs. A rough rule of thumb is that the compute cost for inference on a transformer-based model like GPT-3 is around $0.0002 to $0.0014 per 1,000 tokens. However, the actual costs can vary significantly based on the specific model and hardware used (a rough cost sketch follows this note).
ii) Pricing and Revenue: The pricing model for a Generative AI product can have a significant impact on the unit economics. This could include per-usage pricing, subscription models, or other creative pricing structures. Estimating the potential revenue per user or transaction is crucial to understanding the unit economics. This involves analyzing factors like willingness to pay, potential use cases, and market size.
iii) Operational Costs: Beyond the compute costs, there are other operational costs to consider, such as data storage, infrastructure maintenance, engineering resources, and customer support. Carefully estimating these ongoing costs is important to understand the overall unit economics of the Generative AI product.
iv) Scalability and Efficiency: As the GenAI product scales, there may be opportunities to improve the unit economics through increased efficiency, such as optimizing the model architecture, leveraging more cost-effective hardware, or developing custom pricing models. Analyzing the potential for scalability and efficiency improvements can help project the long-term unit economics of the Generative AI product.
v) Competitive Landscape: Understanding the competitive landscape and pricing models of similar Generative AI products can provide valuable insights into the unit economics of your offering. Benchmarking against competitors can help you identify opportunities to differentiate your product and potentially improve the unit economics.
For more insight, read this article or watch this video
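As a back-of-the-envelope illustration of the compute-cost point in note [10], here is a tiny per-request unit-economics sketch. Every number is a hypothetical placeholder (the token price simply sits inside the range quoted above), not a benchmark or a recommendation.

```python
# Back-of-the-envelope unit economics for a GenAI feature (all numbers are placeholders).
cost_per_1k_tokens = 0.0010    # assumed blended inference cost (USD), within the range quoted above
tokens_per_request = 1_500     # assumed prompt + completion tokens per request
requests_per_user_month = 200  # assumed usage intensity per paying user
price_per_user_month = 10.00   # assumed subscription price (USD)

cost_per_request = cost_per_1k_tokens * tokens_per_request / 1_000
cost_per_user_month = cost_per_request * requests_per_user_month
gross_margin = (price_per_user_month - cost_per_user_month) / price_per_user_month

print(f"Inference cost per request:        ${cost_per_request:.4f}")
print(f"Inference cost per user per month: ${cost_per_user_month:.2f}")
print(f"Gross margin before other operational costs (note iii): {gross_margin:.0%}")
```

The same skeleton can be re-run with your own model size, hardware, and pricing assumptions; the operational costs from point iii) then come off the printed margin.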
[11] — Planning for product-market fit before launch — A five-step framework for planning for product-market fit — by Nima Torabi
“History is not the past but a map of the past, drawn from a particular point of view, to be useful to the modern traveller.” — Henry Glassie

Almost any discussion of quantum computing — whether in a research article, popular science magazine, or business journal — invokes some comparison to what is called “classical” computing. It’s nearly impossible to talk about quantum computing without using the phrase, so we better start there. The term classical in classical computing is borrowed from conventions in physics. In physics, we often denote pre-1900 physics as “classical” and post-1900 physics as “modern.” Modern physics includes general relativity and quantum physics. General relativity is Einstein’s theory of curved space and time, which explains the force of gravity. This theory, while instrumental in enabling us to comprehend awe-inspiring images of galaxies, has its most direct technological application in GPS, the Global Positioning System crucial for satellite navigation. Yet, even this remarkable application is not standalone — it also requires quantum technology for accurate functioning. Quantum computing theory might have some things to say about places of extreme gravity, like black holes, but we will stay grounded on Earth and won’t say more about general relativity in this book.

The reason is that, in contrast to general relativity, quantum physics has a broader range of applications that span diverse fields. It is the backbone of lasers and light bulbs, the lifeblood of medical scanners and radiation therapy, and the cornerstone of semiconductors and electron microscopes. It governs the precision of atomic clocks and the power of atomic bombs, among many others. And it is the fuel for quantum computers. The potency of quantum physics and its technological implications stem from exploring a new world — the microscopic world of atoms. By uniting three of the four fundamental forces of nature, quantum physics presents us with rules governing the basic constituents of matter and energy. The fourth force, gravity, while undeniably significant, is relatively weak — evident when we effortlessly overcome Earth’s gravitational pull each time we stand up. The paradigm shift from classical to quantum physics propelled humanity from the era of steam engines and machinery to the digital, information-driven age. Now, it is driving us into the quantum age.

In the remainder of this post, I want to give you a bit of context about how and why quantum computing came to be. Until recently, quantum and computing were mostly separate fields of study, although they overlap significantly in their engineering applications. So, we’ll go over the “quantum” and “computing” parts first before bringing them together.
The “quantum” in quantum computing

The story of quantum physics started in 1900 with Max Planck’s “quantum hypothesis.” Planck introduced the idea that energy is quantized, meaning it comes in discrete packets, which came to be called quanta. The problem Planck was studying was dubbed the black body radiation problem. A black body is an idealization of an object that absorbs and emits radiation. It’s a pretty good approximation for hot things in thermal equilibrium, like the Sun or a glowing iron pot, which give off roughly the same spectrum (colors) when at the same temperature. In other words, if you heat something up so that it glows “white hot,” it will be the same temperature as the Sun (roughly 6000 ℃). The problem was that no one could figure out why using the physics of the time (now called “classical” physics). As Planck was doing his calculations, he discovered that if energy has some smallest unit, the formulas worked out perfectly. This was a wild guess he made in desperation, not something that could have been intuited or inferred. But it deviated from classical physics at the most fundamental level, seemingly rendering centuries of development useless, so even Planck himself didn’t take it seriously at first. Indeed, it took many years before others began to pay attention and many more after that before the consequences were fully appreciated.

Perhaps not surprisingly, one of those who did grasp the importance early on was Albert Einstein. Among his illustrious list of contributions to science, Einstein used Planck’s quantum hypothesis to solve the problem of the photoelectric effect. The photoelectric effect is a phenomenon in which electrons are emitted from a material when exposed to light of a certain frequency. Classical physics could not explain why the energy of these ejected electrons depended on the frequency (color) of the light rather than its intensity (brightness) or why there was a cut-off frequency below which no electrons were emitted, regardless of the intensity. Einstein proposed that light itself is quantized into packets of energy, which later came to be called photons. Each photon has energy proportional to its frequency, in accordance with Planck’s quantum hypothesis. When a photon hits an electron, if its energy (determined by its frequency) is sufficient, it can overcome the energy binding the electron to the material and eject it. Quantization had taken hold.
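In modern notation, the quantitative content of these two ideas fits in one line: a photon of frequency f carries energy hf (h is Planck's constant), and an electron bound to the material with energy W (the work function) is ejected with kinetic energy at most

$$E_{\text{photon}} = hf, \qquad K_{\max} = hf - W,$$

so no electrons come out unless hf exceeds W, which is exactly the observed frequency cut-off.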
This marked a profound transition in physics, which is still the source of confusion and debate today. Before photons, light was argued to be either a wave or a particle. Now, it seems that it is both — or neither. This came to be known as wave-particle duality. Until this point, quantum theory seemed to apply only to light. However, in parallel with these developments, experiments were starting to probe the internal structure of atoms in ever more detail. One curiosity was spectral lines. Specific lines or gaps were present when observing the light that the purest forms of elements (hydrogen, helium, lithium, and so on) emitted or absorbed. Many formulas existed, but none could be derived from classical physics or any mechanism until Niels Bohr paid a visit to England in 1911 to discuss the matter with the leading atomic physicists of the day. Bohr’s model proposed that electrons in an atom could only occupy certain discrete energy levels, consistent with Planck’s quantum hypothesis. When an electron jumps from a higher energy level to a lower one, it emits a photon of light with a frequency that corresponds to the difference in energy between the two levels. This model provided a way to calculate the energies of these spectral lines and brought the atom into the realm of quantum physics. He presented the model formally in 1913.
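Bohr's rule can also be written compactly: a jump between two allowed levels with energies E_m > E_n emits a photon whose frequency f satisfies

$$hf = E_m - E_n,$$

which is why each element's discrete set of levels shows up as a discrete set of spectral lines.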
A decade later, Louis de Broglie proposed in his Ph.D. thesis that not just light but all particles, including electrons, have both particle and wave characteristics. This was a revolutionary idea that was quickly put to the test and verified. At this point, it was clear that quantum theory was more than a technique that could be applied as a correction when classical physics didn’t fit the data. Scientists started to theorize abstractly using concepts of quantum physics rather than turning to it only as a last resort. In 1925, Werner Heisenberg invented matrix mechanics to deal with the calculations underpinning the theory. Using it, he showed that it is impossible to measure the position and momentum of a particle simultaneously with perfect accuracy. This uncertainty principle became a fundamental and notorious aspect of quantum theory.
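In its standard modern form, with ħ the reduced Planck constant, the principle bounds the product of the position and momentum uncertainties:

$$\Delta x \, \Delta p \ge \frac{\hbar}{2}.$$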
At the same time, Erwin Schrödinger developed a mathematical equation to describe how de Broglie’s waves might change in time. The variable in the equation — called the wave function — describes the probability of finding a particle in a given location or in a particular state. In contrast to matrix mechanics, Schrödinger’s picture was called wave mechanics. There was a brief debate about which of the two alternatives was correct. However, Paul Dirac showed that both are equivalent by axiomatizing the theory, demonstrating that a simple set of principles can derive all that was known about it. He also combined quantum mechanics with special relativity, leading to the discovery of antimatter and paving the way for further developments in quantum field theory, which ultimately led to the so-called Standard Model of Particle Physics that supported the seemingly unending stream of particle discoveries in high-energy particle accelerator experiments.

However, by the middle of the 20th century, quantum physics was more or less a settled science. The applications of the theory far outstripped the discoveries, and the majority of the people using it resembled engineers more than they did physicists. We’ll get to those applications briefly, but first, we need to jump ahead before jumping back in time. During the 1960s and ’70s, engineers in the laboratories of information technology companies like Bell and IBM began to worry about the limits of the new communications and computing devices being built. These things require energy to function, and energy is the primary concern of physics. Did the laws of physics have anything to say about this? And, if they did, wouldn’t the most fundamental laws (quantum laws) have the most to say? Indeed, they did, and this is where the two theoretical fields of physics and computing converged. But to appreciate it, we must tell the other half of the quantum computing backstory.
The “computing” in quantum computing

Often, in quantum physics, it is useful to compare the results of some theory or experiment to what classical physics might predict. So, there is a long tradition of quantum vs. classical comparison. This tradition was adopted first not by computer scientists or engineers but by quantum physicists who started to study quantum computing in the ’80s. Unlike in physics, the adjective classical in classical computation does not mean it is pre-modern — it is just a way to distinguish it from quantum computation. In other words, whatever quantum computing was going to be compared to, it was bound to be referred to as “classical.” The device I am using to create this is anything but classical in the usual sense of the term. A modern laptop computer is a marvel of engineering, and digital computing would not be possible without quantum engineering. The components inside your computing devices, like your smartphone, are small enough that quantum mechanics plays an important role in how they were conceived, engineered, and function. What makes my laptop “classical” is not the hardware but the software. What the device does at the most abstract level — that is, compute — can be thought of in classical physics terms. Indeed, anything your smartphone can do, a large enough system of levers and pulleys can do. It’s just that your computer can do it much faster and more reliably. A quantum computer, on the other hand, will function differently at both the device level and at the abstract level.

Abstract computational theory, now referred to as theoretical computer science, was born in the 1930s. British World War II codebreaker Alan Turing devised a theoretical model of a computer now known as a Turing machine. This simple, abstract machine was intended to encapsulate the generic concept of computation. He considered a machine that operates on an infinitely long tape, reading, writing, or moving based on a set of predetermined rules. Remarkably, with this simple model, Turing proved that certain problems couldn’t be solved computationally at all. Together, Turing and his doctoral supervisor, Alonzo Church, arrived at the Church-Turing thesis, which states that everything computable can be computed by a Turing machine. Essentially, anything that computes can be simulated by Turing’s theoretical device. (Imagine a modern device that emulates within it a much older device, like a video game console, and you have the right picture.) A more modern version relating to physics states that all physical processes can be simulated by a Turing machine.

In 1945, John von Neumann expanded on Turing’s work and proposed the architecture that most computers follow today. Known as the von Neumann architecture, this structure divides the computer into a central processing unit (CPU), memory, input devices, and output devices. Around the same time, Claude Shannon was thinking about the transmission of information. In work that birthed the field of information theory, Shannon introduced the concept of the “bit,” short for binary digit, as the fundamental unit of information. A bit is a variable that can take on one of two values, often represented as 0 and 1, which we will meet again and again in this book. In his work, Shannon connected the idea of information with uncertainty. When a message reduces uncertainty, it carries information. The more uncertainty a message can eliminate, the more information it contains. Shannon formulated this concept mathematically and developed measures for information, realizing that all types of data — numbers, letters, images, sounds — could be represented with bits, opening the door to the digital revolution. Nowadays, the concept of a bit underlies all of digital computing. Modern computers, for example, represent and process information in groups of bits.
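Shannon's measure of information (entropy) makes this precise. For a message whose possible outcomes occur with probabilities p_i, the information content is

$$H = -\sum_i p_i \log_2 p_i \ \text{bits}, \qquad \text{e.g. a fair coin flip: } H = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \ \text{bit},$$

so learning the outcome of a fair coin flip delivers exactly one bit, the uncertainty between two equally likely alternatives.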
Fast forward to the ’70s, and the field of computer science was in full swing. Researchers devised more nuanced notions of computation, including the extension of what was “computable” to what was computable efficiently. Efficiency has a technical definition, which roughly means that the time to solve the problem does not “blow up” as the problem size increases. A keyword here is exponential. If the amount of time required to solve a problem compounds like investment interest or bacterial growth, we say it is inefficient. It would be computable but highly impractical to solve such a problem.
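A small sketch of why exponential growth is considered "inefficient"; the step counts and machine speed are illustrative and not tied to any particular problem.

```python
# Why exponential growth is "inefficient": compare step counts as the input size n grows.
STEPS_PER_SECOND = 1e9  # assume one billion elementary steps per second (illustrative)
SECONDS_PER_YEAR = 3.15e7

for n in (10, 30, 60, 90):
    quadratic_seconds = n**2 / STEPS_PER_SECOND    # a polynomial-time algorithm
    exponential_seconds = 2**n / STEPS_PER_SECOND  # an exponential-time algorithm
    print(f"n={n:>2}: quadratic ~{quadratic_seconds:.1e} s, "
          f"exponential ~{exponential_seconds:.1e} s "
          f"(~{exponential_seconds / SECONDS_PER_YEAR:.1e} years)")
```

Already at n = 90, the exponential-time algorithm on this hypothetical machine would take tens of billions of years, while the quadratic one finishes in a fraction of a microsecond.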
The connection to physics emerged through what was eventually called the Extended Church-Turing Thesis, which states that a Turing machine can not only simulate any physical process but can do so efficiently. Today, all computers have roughly the same architecture. A single computer can do any computational task. Some problems are hard to solve, of course. But if you can prove it is hard to solve on one computer, then you know there is no point in trying to design a new kind of computer. Why? Because the first computer can efficiently simulate the new one! This is what the Extended Church-Turing Thesis says, and it seems like common sense to the modern digital citizen. But is it true? Maybe not. When we think about simulating a physical process based on quantum physics, it appears that a digital computer cannot do this efficiently. Enter quantum computing.
A brief history of quantum computing

Quantum computing was first suggested by Paul Benioff in 1980. He and others were motivated by the aforementioned interest in the physical limitations of computation. It became clear that the mathematical models of computation did not account for the laws of quantum physics. Once this was understood, the obvious next step was to consider a fully quantum mechanical model of a Turing machine. Parallel to this, Richard Feynman lamented that it was difficult to simulate quantum physics on computers and mused that a computer built on the principles of quantum physics might fare better. At the same time, in Russia, Yuri Manin also hinted at the possibility of quantum computers, noting both their potential access to exponential spaces and the difficulty of coordinating them. However, the idea remained somewhat nebulous for a few years.

In 1985, David Deutsch proposed the universal quantum computer, a model of quantum computation able to simulate any other quantum system. This Quantum Turing Machine paralleled Turing’s original theoretical model of computation based on digital information. This model of a quantum computer is essentially equivalent to the model we work with today. However, it still wasn’t clear at the time whether or not there was an advantage to doing any of this. Finally, in 1992, Deutsch and Richard Jozsa gave the first quantum algorithm providing a provable speed-up — a quantum computer can solve in one run of the algorithm what takes a conventional computer a number of runs that grows with the problem size. It’s an admittedly contrived problem, but it is quite illustrative, and the so-called Deutsch-Jozsa algorithm will be discussed later. One of the concerns at the time was how robust a quantum computer would be to noise and, indeed, any small amount of error in the algorithm destroys the computation. Of course, we could redefine the problem to be an approximate one. That is, the problem specification allows a small amount of error. The Deutsch-Jozsa algorithm then still solves the problem (it solves it perfectly, so also approximately). However, now, a digital computer can easily solve the approximate version of the problem. So, the “quantum speed-up” disappears. Ethan Bernstein and Umesh Vazirani modified the problem in 1993 to one where small errors were allowed. They also devised a quantum algorithm that solved the problem in a single step, while any classical algorithm required many steps. In 1994, Dan Simon devised another problem along the same lines. However, this time, the quantum algorithm used to solve it provided a provable exponential speed-up. That is, as the input size of the problem grows, the best classical algorithm requires a number of steps growing exponentially, while the quantum algorithm requires, at most, a linear number of steps.

Simon’s algorithm was the inspiration for perhaps the most famous quantum algorithm: Shor’s algorithm. In 1994, Peter Shor presented a quantum algorithm that can factor large numbers exponentially faster than the best-known classical algorithm. Since most public-key cryptography is based on the assumed hardness of factoring, Shor sparked a massive interest in quantum computing. The race was on.

Detractors quickly dug in their heels. The most common criticism was the incredible fragility of quantum information. It was (and still occasionally is) argued that a quantum computer would have to be so well isolated from its environment as to make it practically infeasible. Classically, error correction employs redundancy — do the same thing many times, and if an error happens on one of them, the majority still tells the correct answer. However, as we will see later, quantum data cannot be copied, so it would seem error correction is not possible. Peter Shor showed how to encode quantum data into a larger system such that if an error happened to a small part of it, the quantum data could still be recovered by suitable decoding. There was a three-qubit code and a five-qubit code and, not long after, entire families of codes to protect quantum data from errors.
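The simplest of these, the three-qubit bit-flip code, shows the idea. Written in the standard ket notation, where α and β are the amplitudes of the encoded qubit, the state is not copied (that is forbidden) but spread across three qubits, so that a flip of any single qubit can be detected and undone without reading out the encoded information:

$$\alpha\lvert 0\rangle + \beta\lvert 1\rangle \;\longmapsto\; \alpha\lvert 000\rangle + \beta\lvert 111\rangle.$$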
While promising, these were still toy examples that worked in very specific instances. It wasn’t clear whether any required computation could be protected. Recall that one crucial difference between bits and qubits is that the latter form a continuous set. For example, 1 is a different state of information than 1.000000001, and so on. Would we need a continuous set of instructions for quantum computers for every possible transformation of quantum data? Keep in mind that a single instruction suffices to do any computation with bits. It’s typically called the NAND (“not and”) gate, and it produces an output bit of 0 when its two input bits are 1 and outputs 1 otherwise. It’s amazing to think everything your computer can do, which is