| id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (stringlengths, 7–208) | url (stringlengths, 20–263) | text (stringlengths, 852–324k) |
|---|---|---|---|---|
1,413 | 2,017 | "Attacking machine learning with adversarial examples" | "https://openai.com/research/attacking-machine-learning-with-adversarial-examples" |
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Research Attacking machine learning with adversarial examples February 24, 2017 Adversarial examples , Safety & Alignment , Computer vision , Robustness , Conclusion At OpenAI, we think adversarial examples are a good aspect of security to work on because they represent a concrete problem in AI safety that can be addressed in the short term, and because fixing them is difficult enough that it requires a serious research effort. (Though we’ll need to explore many aspects of machine learning security to achieve our goal of building safe, widely distributed AI.
To get an idea of what adversarial examples look like, consider this demonstration from Explaining and Harnessing Adversarial Examples: starting with an image of a panda, the attacker adds a small perturbation that has been calculated to cause the image to be recognized as a gibbon with high confidence.
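For reference, the attack demonstrated in that paper is the fast gradient sign method: the perturbation is the sign of the gradient of the training loss with respect to the input pixels, scaled by a small budget:

$$\tilde{x} \;=\; x + \epsilon \cdot \operatorname{sign}\!\big(\nabla_{x} J(\theta, x, y)\big)$$

where $J$ is the training loss, $\theta$ the model parameters, $x$ the input image, $y$ its label, and $\epsilon$ the perturbation budget.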
The approach is quite robust; recent research has shown adversarial examples can be printed out on standard paper then photographed with a standard smartphone, and still fool systems.
Adversarial examples have the potential to be dangerous. For example, attackers could target autonomous vehicles by using stickers or paint to create an adversarial stop sign that the vehicle would interpret as a ‘yield’ or other sign, as discussed in Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples.
Reinforcement learning agents can also be manipulated by adversarial examples, according to new research from UC Berkeley, OpenAI, and Pennsylvania State University, Adversarial Attacks on Neural Network Policies , and research from the University of Nevada at Reno, Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks.
The research shows that widely-used RL algorithms, such as DQN, TRPO, and A3C, are vulnerable to adversarial inputs. These can lead to degraded performance even in the presence of perturbations too subtle to be perceived by a human, causing an agent to move a pong paddle down when it should go up, or interfering with its ability to spot enemies in Seaquest.
[Video: Adversarial Attacks: Seaquest, A3C, L2-Norm]
If you want to experiment with breaking your own models, you can use cleverhans, an open source library developed jointly by Ian Goodfellow and Nicolas Papernot to test your AI’s vulnerabilities to adversarial examples.
Adversarial examples give us some traction on AI safety When we think about the study of AI safety, we usually think about some of the most difficult problems in that field — how can we ensure that sophisticated reinforcement learning agents that are significantly more intelligent than human beings behave in ways that their designers intended? Adversarial examples show us that even simple modern algorithms, for both supervised and reinforcement learning, can already behave in surprising ways that we do not intend.
Attempted defenses against adversarial examples Traditional techniques for making machine learning models more robust, such as weight decay and dropout, generally do not provide a practical defense against adversarial examples. So far, only two methods have provided a significant defense.
Adversarial training: This is a brute force solution where we simply generate a lot of adversarial examples and explicitly train the model not to be fooled by each of them. An open-source implementation of adversarial training is available in the cleverhans library, and its use is illustrated in the following tutorial.
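As a rough illustration of the idea (a minimal sketch, not the cleverhans implementation referenced above), each training step can also fit the model on freshly generated FGSM examples; the model, optimizer and the ε value below are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps):
    # Perturb each input by eps in the direction that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # Train on the clean batch and on its adversarial counterpart together.
    x_adv = fgsm_examples(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```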
Defensive distillation : This is a strategy where we train the model to output probabilities of different classes, rather than hard decisions about which class to output. The probabilities are supplied by an earlier model, trained on the same task using hard class labels. This creates a model whose surface is smoothed in the directions an adversary will typically try to exploit, making it difficult for them to discover adversarial input tweaks that lead to incorrect categorization. (Distillation was originally introduced in Distilling the Knowledge in a Neural Network as a technique for model compression, where a small model is trained to imitate a large one, in order to obtain computational savings.) Yet even these specialized algorithms can easily be broken by giving more computational firepower to the attacker.
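For concreteness, a rough sketch of the soft-label training step at the heart of this strategy; the temperature value, network names and optimizer are placeholders, and the full procedure in the original defensive distillation work also trains the first model at the same raised temperature:

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, optimizer, x, temperature=20.0):
    # Soft targets: the earlier model's temperature-softened class probabilities.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / temperature, dim=1)
    optimizer.zero_grad()
    log_student = F.log_softmax(student(x) / temperature, dim=1)
    # Cross-entropy against the soft targets instead of hard class labels.
    loss = -(soft_targets * log_student).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```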
A failed defense: “gradient masking” To give an example of how a simple defense can fail, let’s consider why a technique called “gradient masking” does not work.
“Gradient masking” is a term introduced in Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples to describe an entire category of failed defense methods that work by trying to deny the attacker access to a useful gradient.
Most adversarial example construction techniques use the gradient of the model to make an attack. In other words, they look at a picture of an airplane, they test which direction in picture space makes the probability of the “cat” class increase, and then they give a little push (in other words, they perturb the input) in that direction. The new, modified image is mis-recognized as a cat.
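A small sketch of that procedure in PyTorch, assuming a differentiable image classifier; the step size, number of steps and class index below are illustrative:

```python
import torch
import torch.nn.functional as F

def push_towards_class(model, image, target_class, step_size=0.005, steps=10):
    # Repeatedly nudge the pixels in the direction that raises the
    # probability of the target class (e.g. "cat").
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        log_probs = F.log_softmax(model(x), dim=1)
        log_probs[:, target_class].sum().backward()
        with torch.no_grad():
            x = x + step_size * x.grad.sign()
        x = x.detach()
    return x
```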
But what if there were no gradient — what if an infinitesimal modification to the image caused no change in the output of the model? This seems to provide some defense because the attacker does not know which way to “push” the image.
We can easily imagine some very trivial ways to get rid of the gradient. For example, most image classification models can be run in two modes: one mode where they output just the identity of the most likely class, and one mode where they output probabilities. If the model’s output is “99.9% airplane, 0.1% cat”, then a little tiny change to the input gives a little tiny change to the output, and the gradient tells us which changes will increase the probability of the “cat” class. If we run the model in a mode where the output is just “airplane”, then a little tiny change to the input will not change the output at all, and the gradient does not tell us anything.
Let’s run a thought experiment to see how well we could defend our model against adversarial examples by running it in “most likely class” mode instead of “probability mode.” The attacker no longer knows where to go to find inputs that will be classified as cats, so we might have some defense. Unfortunately, every image that was classified as a cat before is still classified as a cat now. If the attacker can guess which points are adversarial examples, those points will still be misclassified. We haven’t made the model more robust; we have just given the attacker fewer clues to figure out where the holes in the model’s defense are.
Even more unfortunately, it turns out that the attacker has a very good strategy for guessing where the holes in the defense are. The attacker can train their own model, a smooth model that has a gradient, make adversarial examples for their model, and then deploy those adversarial examples against our non-smooth model. Very often, our model will misclassify these examples too. In the end, our thought experiment reveals that hiding the gradient didn’t get us anywhere.
The defense strategies that perform gradient masking typically result in a model that is very smooth in specific directions and neighborhoods of training points, which makes it harder for the adversary to find gradients indicating good candidate directions to perturb the input in a damaging way for the model. However, the adversary can train a substitute model: a copy that imitates the defended model by observing the labels that the defended model assigns to inputs chosen carefully by the adversary.
A procedure for performing such a model extraction attack was introduced in the black-box attacks paper. The adversary can then use the substitute model’s gradients to find adversarial examples that are misclassified by the defended model as well. A figure reproduced from the discussion of gradient masking found in Towards the Science of Security and Privacy in Machine Learning illustrates this attack strategy with a one-dimensional ML problem. The gradient masking phenomenon would be exacerbated for higher-dimensionality problems, but harder to depict.
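Roughly, the attack looks like the sketch below, under the simplifying assumption that the attacker already has a pool of query inputs and can read back hard labels from the defended model; the real procedure in the black-box attacks paper additionally grows the query set with Jacobian-based augmentation, and `query_defended_model` here is hypothetical:

```python
import torch
import torch.nn.functional as F

def train_substitute(substitute, optimizer, queries, query_defended_model, epochs=10):
    # Fit a smooth local copy to the hard labels returned by the defended model.
    labels = query_defended_model(queries)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(substitute(queries), labels)
        loss.backward()
        optimizer.step()
    return substitute

def transfer_attack(substitute, x, y, eps=0.03):
    # Craft adversarial examples with the substitute's gradients; they
    # frequently transfer to the gradient-masked model as well.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```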
We find that both adversarial training and defensive distillation accidentally perform a kind of gradient masking. Neither algorithm was explicitly designed to perform gradient masking, but gradient masking is apparently a defense that machine learning algorithms can invent relatively easily when they are trained to defend themselves and not given specific instructions about how to do so. If we transfer adversarial examples from one model to a second model that was trained with either adversarial training or defensive distillation, the attack often succeeds, even when a direct attack on the second model would fail. This suggests that both training techniques do more to flatten out the model and remove the gradient than to make sure it classifies more points correctly.
Why is it hard to defend against adversarial examples? Adversarial examples are hard to defend against because it is difficult to construct a theoretical model of the adversarial example crafting process. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Because we don’t have good theoretical tools for describing the solutions to these complicated optimization problems, it is very hard to make any kind of theoretical argument that a defense will rule out a set of adversarial examples.
Adversarial examples are also hard to defend against because they require machine learning models to produce good outputs for every possible input.
Most of the time, machine learning models work very well, but only on a very small fraction of all the many possible inputs they might encounter.
Every strategy we have tested so far fails because it is not adaptive : it may block one kind of attack, but it leaves another vulnerability open to an attacker who knows about the defense being used. Designing a defense that can protect against a powerful, adaptive attacker is an important research area.
Conclusion Adversarial examples show that many modern machine learning algorithms can be broken in surprising ways. These failures of machine learning demonstrate that even simple algorithms can behave very differently from what their designers intend. We encourage machine learning researchers to get involved and design methods for preventing adversarial examples, in order to close this gap between what designers intend and how algorithms behave. If you’re interested in working on adversarial examples, consider joining OpenAI.
For more information To learn more about machine learning security, follow Ian and Nicolas’s machine learning security blog cleverhans.io.
Authors: Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, Jack Clark
"
|
1,414 | 2,017 | "Generalizing from simulation" | "https://openai.com/blog/generalizing-from-simulation" |
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Research Generalizing from simulation October 19, 2017 More resources Read paper (dynamics randomization) Read paper (image-based learning) Robotics , Sim-to-real , Domain randomization , Transfer learning , Publication The simulator need not match the real-world in appearance or dynamics; instead, we randomize relevant aspects of the environment, from friction to action delays to sensor noise. Our new results provide more evidence that general-purpose robots can be built by training entirely in simulation, followed by a small amount of self-calibration in the real world.
Dynamics randomization We developed dynamics randomization to train a robot to adapt to unknown real-world dynamics. During training, we randomize a large set of ninety-five properties that determine the dynamics of the environment, such as altering the mass of each link in the robot’s body; the friction and damping of the object it is being trained on; the height of the table the object is on; the latency between actions; the noise in its observations; and so on.
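A toy sketch of what per-episode sampling of randomized dynamics might look like; the parameter names and ranges below are illustrative placeholders, not the ninety-five properties used in the actual experiments:

```python
import random

def sample_dynamics():
    # Draw one episode's environment parameters; a new sample is taken
    # every episode so the policy never sees the same dynamics twice.
    return {
        "link_mass_scale":  random.uniform(0.5, 1.5),   # scale each body link's mass
        "puck_friction":    random.uniform(0.1, 1.0),
        "puck_damping":     random.uniform(0.01, 0.2),
        "table_height_m":   random.uniform(0.70, 0.80),
        "action_latency_s": random.uniform(0.00, 0.04),
        "obs_noise_std":    random.uniform(0.00, 0.02),
    }

episode_params = sample_dynamics()  # used to configure the simulator for this rollout
```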
We used this approach to train an LSTM -based policy to push a hockey puck around a table. Our feed-forward networks fail at this task, whereas LSTMs can use their past observations to analyze the dynamics of the world and adjust their behavior accordingly.
From vision to action We also trained a robot end-to-end in simulation using reinforcement learning (RL), and deployed the resulting policy on a physical robot. The resulting system maps vision directly to action without special sensors, and can adapt to visual feedback.
The abundance of RL results with simulated robots can make it seem like RL easily solves most robotics tasks. But common RL algorithms work well only on tasks where small perturbations to your action can provide an incremental change to the reward. Some robotics tasks have simple rewards, like walking, where you can be scored on distance traveled. But most tasks do not: to define a dense reward for block stacking, you’d need to encode that the arm is close to the block, that the arm approaches the block in the correct orientation, that the block is lifted off the ground, the distance of the block to the desired position, and so on.
We spent a number of months unsuccessfully trying to get conventional RL algorithms working on pick-and-place tasks before ultimately developing a new reinforcement learning algorithm, Hindsight Experience Replay (HER), which allows agents to learn from a binary reward by pretending that a failure was what they wanted to do all along and learning from it accordingly. (By analogy, imagine looking for a gas station but ending up at a pizza shop. You still don’t know where to get gas, but you’ve now learned where to get pizza.) We also used domain randomization on the visual shapes to learn a vision system robust enough for the physical world.
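A minimal sketch of the relabelling trick (using the simple "final" goal strategy); the field names and reward function are placeholders:

```python
def her_relabel(episode, compute_reward):
    # episode: list of dicts with keys "obs", "action", "achieved_goal", "goal", "reward".
    # Store each transition twice: once with the original goal, and once
    # pretending the goal was what the episode actually achieved at the end,
    # so even a failed episode yields useful reward signal.
    final_achieved = episode[-1]["achieved_goal"]
    relabeled = []
    for step in episode:
        relabeled.append(step)                      # original transition (often zero reward)
        hindsight = dict(step)
        hindsight["goal"] = final_achieved
        hindsight["reward"] = compute_reward(step["achieved_goal"], final_achieved)
        relabeled.append(hindsight)
    return relabeled
```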
Our HER implementation uses the actor-critic technique with asymmetric information. (The actor is the policy, and the critic is a network which receives action/state pairs and estimates their Q-value, or sum of future rewards, providing training signal to the actor.) While the critic has access to the full state of the simulator, the actor only has access to RGB and depth data. Thus the critic can provide fully accurate feedback, while the actor uses only data present in the real world.
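A compact sketch of that asymmetry: the critic consumes the full simulator state, which only exists in simulation, while the actor consumes the RGB-D image it would also have on the real robot. Network sizes and dimensions below are placeholders:

```python
import torch
import torch.nn as nn

class AsymmetricActorCritic(nn.Module):
    def __init__(self, state_dim=40, action_dim=4, image_channels=4):
        super().__init__()
        # Actor: pixels in, actions out (the same inputs exist on the physical robot).
        self.actor = nn.Sequential(
            nn.Conv2d(image_channels, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, action_dim), nn.Tanh(),
        )
        # Critic: full simulator state plus action in, Q-value out (simulation only).
        self.critic = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def act(self, rgbd):
        return self.actor(rgbd)

    def q_value(self, full_state, action):
        return self.critic(torch.cat([full_state, action], dim=-1))
```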
Costs Both techniques increase the computational requirements: dynamics randomization slows training down by a factor of 3x, while learning from images rather than states is about 5–10x slower.
We see three approaches to building general-purpose robots: training on huge fleets of physical robots, making simulators increasingly match the real world, and randomizing the simulator to allow the model to generalize to the real world. We increasingly believe that the third will be the most important part of the solution.
If you’re interested in helping us push towards general-purpose robots, consider joining our team at OpenAI.
Authors: Xue Bin Peng, Lerrel Pinto, Alex Ray, Bob McGrew, Jonas Schneider, Josh Tobin, Marcin Andrychowicz, Peter Welinder, Pieter Abbeel, Wojciech Zaremba
"
|
1,415 | 2,017 | "IndabaX 2022 Events - Deep Learning Indaba 2022" | "https://deeplearningindaba.com/2022/2022/08/15/indabax-2022-events" |
"Indaba Programme Posters and Demos Coding and Practical Sessions Workshops Location and Venue Accomodation Useful Contacts Frequently Asked Questions (FAQs) Code of Ethics and Conduct Sponsorship Organisers IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Ubantu Reports Past Editions 2017 2018 2019 2020 2021 Indaba Programme Posters and Demos Coding and Practical Sessions Workshops Location and Venue Accomodation Useful Contacts Frequently Asked Questions (FAQs) Code of Ethics and Conduct Sponsorship Organisers IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Ubantu Reports Past Editions 2017 2018 2019 2020 2021 IndabaX Central Mon 15. Aug 2022 IndabaX 2022 Events Introduction IndabaX started in 2018 as a series of locally run, community driven events with the aim to strengthen the capacity in ML in African countries. The IndabaX network has grown since then from 13 countries to 27 – and this year in 2022, 25 countries are supported by Deep Learning Indaba in their initiatives. This support takes the form of seed money to help facilitate these important meet-ups, but also in the form of advice, mentorship and community. This year a committee of eight individuals are supporting the IndabaX local organisers. And of course the main Deep Learning Indaba in Tunisia is a major catalyst and support for these events.
Who are the countries with IndabaX communities this year? We list them below from the largest African Nation, Algeria, to the country with the highest number of official languages in Africa, Zimbabwe.
1. Algeria – largest African nation
2. Benin Republic
3. Cameroon
4. Congo-Kinshasa
5. Cote d’Ivoire
6. Eswatini
7. Ethiopia
8. Ghana
9. Kenya
10. Madagascar
11. Malawi
12. Mauritius – first IndabaX taking place this year
13. Mozambique – first IndabaX taking place this year
14. Namibia
15. Nigeria
16. Rwanda – fastest growing in ICT
17. Senegal
18. South Africa
19. Sudan
20. Tanzania
21. Togo
22. Tunisia
23. Uganda
24. Zambia
25. Zimbabwe – the country with the most official languages in the world (16)
What has the IndabaX network gained in 2022? Geographically, we gained another extreme point: the easternmost point in Africa, Rodrigues Island in Mauritius, to complement the northwest point in Tunisia, the Îles des Chiens, and the southernmost point in South Africa, Cape Agulhas. So this is indeed an achievement! We gained new countries, with Mauritius and Mozambique opening their first IndabaX groups.
In the 18th and 19th century, Africa was known as the ‘dark continent’, and little was known about its geography and people. A question in this century can be: Is Africa dark in terms of AI / ML? And if you allow me to use this metaphor, through the IndabaX events, Deep Learning Indaba is doing precisely that important task of shining a light onto the AI/ML community in Africa. When one looks at Global AI indices (such as https://www.tortoisemedia.com/intelligence/global-ai/ ), only a few countries in the north of Africa make it to the list of top 62 countries. Indices are impersonal, they can, by design, measure only some aspects, and comparisons are imperfect.
Take, for example, the indicators used to measure talent for the Global AI Index: they quantify things such as the number of AI meet-ups, the proportion of AI/ML/Data Science engineers on social media or as a proportion of the labour force, the number of Kaggle grandmasters, and so on. In my opinion, the existence of the IndabaX events and associated initiatives such as Zindi.africa have had a positive impact on these indicators. Local IndabaX events have helped hundreds of young people hear about and use competition platforms such as Zindi, cloud research platforms such as Google Colab, and collaborative platforms such as GitHub.
The Research pillar of the Global AI Index uses indicators such as the quality of local universities and the number of AI research papers published. To understand these indicators, let us look briefly at the OECD.AI dashboard. See for example the OECD.AI dashboard for Tunisia: https://oecd.ai/en/dashboards/countries/Tunisia This dashboard shows the number of publications in AI, the growth of publications over time, and collaborations across countries. Tunisian researchers in AI, for example, tend to cooperate with North African researchers. So here there is another clear impact that IndabaX events can bring, which is to energise the formation of new collaborations.
And these collaborations are very much possible and can be very fruitful. The OECD.AI dashboard shows that the top 5 countries in Africa with the highest growth in the number of publications in AI over 2017-2022 [1] (the period of time Deep Learning Indaba and IndabaX have been held) are: Eswatini, Madagascar, Rwanda, Mozambique / Ethiopia. Two of these countries are holding IndabaX events for the first time in 2022. Not only that, but the growth of AI publications is in the region of 4x compared to a growth of less than 2x over the same period in countries such as the US, or European countries.
I leave you with these thoughts – perhaps during these IndabaX events we can reflect as a community on how to maximise the impact that is possible through these events on the AI / ML community in African countries.
And how befitting it is that this year, in 2022, there is a physical Indaba held in Tunisia. Join us to make these events possible.
[1] OECD.AI (2022), visualisations powered by JSI using data from MAG, version of 31/12/2021, accessed on 4/7/2022, www.oecd.ai
"
|
1,416 | 2,017 | "IndabaX2022 Call for Growth Support Leads (East, Central and Southern Africa) - Deep Learning Indaba 2022" | "https://deeplearningindaba.com/2022/2022/10/19/indabax2022-call-for-growth-support-leads-east-central-and-southern-africa" |
"Indaba Programme Posters and Demos Coding and Practical Sessions Workshops Location and Venue Accomodation Useful Contacts Frequently Asked Questions (FAQs) Code of Ethics and Conduct Sponsorship Organisers IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Ubantu Reports Past Editions 2017 2018 2019 2020 2021 Indaba Programme Posters and Demos Coding and Practical Sessions Workshops Location and Venue Accomodation Useful Contacts Frequently Asked Questions (FAQs) Code of Ethics and Conduct Sponsorship Organisers IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Ubantu Reports Past Editions 2017 2018 2019 2020 2021 IndabaX Central Uncategorized Wed 19. Oct 2022 IndabaX2022 Call for Growth Support Leads (East, Central and Southern Africa) We hope you’ll nominate yourself to join the IndabaX steering committee, which is the leadership team that oversees the IndabaX programme. We are seeking 3 new members to join the IndabaX leadership. Nominations due by 07-11-2022.
IndabaX Steering and the Community Health: The IndabaX is one of the major programmes of the Deep Learning Indaba. The IndabaX events are locally-organised Indabas (i.e. gatherings) that help develop knowledge and capacity in machine learning and artificial intelligence in individual countries across Africa. Whereas the annual Deep Learning Indaba (in normal times) brings everyone together in the same location, the IndabaX gives focus to individual countries, their needs, and the specific support and development needed.
The IndabaX steering committee is the bridge between all the individual IndabaX events, sharing experiences between them, and ensuring the health of local communities across our continent. We seek to add 3 new members to the team who will be the Growth Support Leads in charge of the East Africa, Central Africa and Southern Africa regions. If you have been a lead of a previous IndabaX in your country, we encourage you to now take on a role in shaping the IndabaX at the continent scale.
Roles and Expectations of the Growth Support Leads In general, all Growth Support Leads: Are passionate about strengthening African Machine Learning and Artificial Intelligence.
Able to volunteer time to develop IndabaX communities in their respective regions and across Africa.
Have some experience organising communities and understand the challenges faced by the general tech community.
Come from across our African continent and from all backgrounds, i.e. from any country in the region (East, Central or Southern Africa) they seek to represent, and can be students or academics, working in industry or the public sector, etc.
Work within the codes of practice of the Deep Learning Indaba, driven by values of Ubuntu and a strong ethical compass.
Specific tasks you will be involved in as an IndabaX steering committee member are: Develop calls for hosts and future leaders.
Support individual IndabaX communities and help them best develop their programmes.
Review applications and budget allocation to different communities.
Report progress of the IndabaX and the health of the African ML community.
Work on new ways to improve the IndabaX programme overall.
Help support fundraising, communications, and maintain the integrity of the Deep Learning Indaba.
This is a volunteer role, i.e. not a paid role. We do not expect committee members to incur any expenses in supporting this role, although where expenses do need to be incurred, these will be covered.
Take a role on the committee for 2 years.
Please apply by sending a nomination to [email protected] with the following contents: Please use [IndabaX2022] Call for Growth Support Leads as subject Your application should answer the following questions: What motivates you to join the IndabaX steering committee? What is your past experience with IndabaX? What challenges did you face in organising your previous event(s) and how did you overcome them? What are you particularly proud of when you think back on your previous event(s)? Can you provide the names of 3 other IndabaX organisers who can support your nomination? If you have any questions or want to know more, please get in touch using [email protected].
What is the IndabaX The IndabaX programme started in 2018 as an experiment in strengthening our machine learning community beyond the annual Deep Learning Indaba, to allow more people to contribute to the conversation on artificial intelligence and machine learning. We join hands across our beautiful continent. The initiative continues in 2022, and it is YOUR initiative! share: Comments (0) Click here to cancel reply.
"
|
1,417 | 2,017 | "Blog - Deep Learning Indaba" | "https://deeplearningindaba.com//blog" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Announcing the Deep Learning Indaba Award Winners on Aug 23, 2023 08:13 AM Annual Indaba Awards Blog Kambule Doctoral Award The Kambule Doctoral Award, held in honour of one of South Africa’s greatest mathematician and teacher, Dr Thamsanqu W. Kambule, recognises excellence in research and writing by doctoral candidates at African universities, in any area of computational and statistical sciences, and whose work demonstrates excellence through technical rigour, significance of contribution, and […] 0 comments The Ideathon: Back to the First Edition on Aug 18, 2023 01:43 PM Annual Indaba Blog Uncategorized The Deep Learning Indaba is approaching very fast, and we are very excited to launch our second edition of the Ideathon! Each year brings new experiences to the Indaba. One of last year’s adventures was a new competition aiming at fostering innovation and building bridges across the continent. The Ideathon took shape to provide a […] 0 comments Call for Kambule Doctoral & Alele-Williams Masters Nomination Reviewers on Apr 26, 2023 06:32 PM Annual Indaba Awards Blog Deep Learning Indaba 2023 What are the Kambule and Alele-Williams Awards? The Thamsanqa Kambule and Grace Alele-Williams Awards recognise and encourage excellence in research and writing by doctoral and master’s candidates, respectively, at African universities, in any area of computational and statistical sciences. The Kambule Doctoral Award celebrates African research excellence: its recipients uphold Thamsanqa […] 0 comments Call for Maathai Impact Nomination Reviewers on Apr 26, 2023 06:30 PM Annual Indaba Awards Blog Deep Learning Indaba 2023 What is the Maathai Impact Award? The Wangari Maathai Impact Award encourages and recognises work by African innovators that shows impactful application (including but not limited to technical, societal, environmental, and economic) of machine learning and artificial intelligence. This award reinforces the legacy of Wangari Maathai in acknowledging the capacity of […] 0 comments ML-Readiness on Apr 11, 2023 06:40 PM Blog IndabaX By Amelia Taylor Machine Learning (ML) is the talk of the tech-town. It is a tool everyone can use and it holds many promises: from being able to reveal hidden information in large data to being able to predict the “future” based on past data and sophisticated heuristics. ML is seen as the swiss army […] 0 comments A Preview of IndabaX Events in 2023! on Mar 13, 2023 05:27 PM Blog IndabaX The IndabaX-nometer is rising higher! This post is about celebrating the IndabaX events which are taking place in African countries in 2023! Entering its 5th year, the annual local Deep Learning Indaba events called IndabaX, are growing stronger. This year, in 2023, a record number of 36 countries in Africa will organise IndabaX events! […] 0 comments Deep Learning Indaba Awards 2023: Celebrating the Best in AI Research and Innovation on Jan 18, 2023 04:00 PM Annual Indaba Awards It is with great pleasure that we announce the launch of the Indaba Awards 2023, a celebration of African research excellence and impactful work in Artificial Intelligence. 
The awards recognise the outstanding work of individuals, teams, and communities who are pushing the boundaries of what is possible and benefiting local communities across the continent through […] 0 comments Ideathon @ DLI 2022 on Aug 17, 2022 10:26 AM Annual Indaba Blog News The Indaba is a learning opportunity, but also an opportunity to connect different actors in the continent around Machine Learning and related topics. This proposal took shape after hearing about the successful experiences from a sister summer school in a different part of the world to build research bridges across the borders. We propose therefore […] 0 comments Amal Rannen Masirouna: African AI Deciding its Destiny on Aug 05, 2022 08:49 AM Annual Indaba #masirouna #indaba2022 مصيرنا In a few weeks, the African AI community will gather again for the 2022 Deep Learning Indaba. We’ve come a long way since the very first Indaba in 2017. At that first Indaba we dreamt of a community of people – all curious, innovative and with unique ideas – who would shape the destiny of […] 0 comments Join the 2022 Deep Learning Indaba Organising Committees on Mar 18, 2022 12:39 PM Annual Indaba The Deep Learning Indaba will meet again in August 2022 in Tunisia. Strengthen African Machine Learning and Artificial Intelligence by nominating yourself to join this year’s organising committee. Nominate yourself or someone you know by completing this form by 10 April 2022. Deep Learning Indaba 2022 The COVID19 Pandemic prevented us from hosting the Deep […] 0 comments The Deep Learning Indaba 2022 in Tunis: Applications Open on Mar 17, 2022 03:43 PM Annual Indaba After not taking place these past two years, we are excited to announce the Deep Learning Indaba for 2022 that will be held in Tunis, Tunisia, from the 21st to the 26th of August. Applications to participate are now open. An Indaba in 2022 While the global pandemic still makes planning uncertain, we believe that we […] 0 comments Call for IndabaX Organizers on Sep 10, 2021 09:13 AM Blog IndabaX We hope you’ll nominate yourself to join the IndabaX steering committee, which is the leadership team that oversees the IndabaX programme. We are seeking 4-5 new members to join the IndabaX leadership. Nominations due by 31 October 2021. To nominate simply send an email answering a few questions (see below). IndabaX and our Community’s Health […] 0 comments Shakir Mohamed IndabaX 2021: 24 AI Communities Across Africa on May 26, 2021 08:55 PM Uncategorized The 2021 IndabaX meetings will take place in 24 countries across our African continent, happening throughout this year. We are proud to enable these communities and future leaders. Even during this ongoing period of global pandemic, African machine learning is strong and thriving.
deeplearningindaba.com/indabax An IndabaX is a locally-organised Indaba (meeting) that helps spread knowledge […] 0 comments Amal Rannen Recipients of the 2020 IndabaX-AI4D Innovation Grants on Sep 02, 2020 08:00 AM Grants IndabaX In executing our mission to Strengthen African Machine Learning and Artificial Intelligence, this year, instead of hosting our usual activities (the annual Indaba, IndabaX, or the Maathai and Kambule awards), we are experimenting with several new programs, one of these being the IndabaX-AI4D Innovation Grants, which aim to fund 6-month projects that support AI research […] 1 comments Kathleen Siminyu The 2020 Indaba and the Coronavirus Pandemic on Mar 22, 2020 10:00 AM Annual Indaba Mission The coronavirus pandemic has spread across the world, and its effects are being felt across our continent, with 16 people already having died from Covid-19 in Egypt, Algeria, Morocco, and Sudan. The Indaba leadership has been tracking the development of the coronavirus and given the seriousness and uncertainty regarding the spread of the virus, have […] 0 comments Shakir Mohamed #SautiYetu: Raising Our Voice in Artificial Intelligence on Aug 04, 2019 02:57 PM Annual Indaba This year we say Sauti Yetu/Our Voice. Our voice can be strong; our voice can create change; we have many different voices that must be heard. We hope that you will use the Indaba as part of your voice.
0 comments Shakir Mohamed Outcomes of 2019 Kambule and Maathai Awards on Jul 22, 2019 06:30 PM Awards We also release our report summarising the outcomes of this programme and share its key recommendations.
0 comments Shakir Mohamed IndabaX 2019: Growing the Footprint of African Machine Learning on Feb 11, 2019 09:19 AM Uncategorized The 2019 IndabaX meetings will take place in 27 countries across our African continent, during the months of March to May. We are proud to enable these communities and future leaders. African machine learning is strong and thriving. An IndabaX is a locally-organised, one-day Indaba (meeting) that helps spread knowledge and build capacity in machine […] 0 comments Shakir Mohamed Strengthening African Machine Learning on Jul 03, 2017 03:00 PM Annual Indaba Across the African continent, our communities gather to create spaces where we share our experiences, seek advice, and discuss the pressing issues of the day. In Zulu, this type of gathering is called an Indaba. This September, the first Deep Learning Indaba will take place: a shared space to learn, to share, and to debate the state-of-the-art in […] 0 comments Shakir Mohamed
"
|
1,418 | 2,017 | "Samory Kpotufe - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/samory-kpotufe" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Samory Kpotufe Assistant Professor, Department of Statistics, Columbia University Prof. Samory Kpotufe is an Assistant Professor, Department of Statistics , Columbia University, and was previously an assistant professor at Princeton university. His main practical aim is to design adaptive procedures, i.e., practical procedures that can self-tune to unknown structure in data (e.g., manifold, sparsity, clusters), while at the same time meeting the various constraints (e.g., time, space, labelling cost) of modern applications. He works generally in machine learning, with an emphasis on nonparametric methods and high-dimensional statistics.
Samory recently won first prize in the 2018 Bell Labs Prize, and $100,000, for his pioneering work in the critically important field of ‘transfer learning’ in machine learning, which answers the question of how and when learning from one machine learning tool can be applied to another; this is a question that lies at the heart of all machine learning – is each model a ‘one off’, or can the learnings be applied to other scenarios, and if so, when? Samory has answered this question with a breakthrough theory that provides a quantitative answer.
"
|
1,419 | 2,017 | "Richard Socher - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/richard-socher" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Richard Socher Chief Scientist at Salesforce Dr Richard Socher is Chief Scientist at Salesforce. He leads the company’s research efforts and brings state of the art artificial intelligence solutions into the platform.
Prior to that, Richard was an adjunct professor at the Stanford Computer Science Department and the CEO and founder of MetaMind, a startup acquired by Salesforce in April 2016. MetaMind’s deep learning AI platform analyzes, labels and makes predictions on image and text data so businesses can make smarter, faster and more accurate decisions.
Richard was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research PhD Fellowship in 2012, a 2013 “Magic Grant” from the Brown Institute for Media Innovation, the best Stanford CS PhD thesis award 2014 and the 2014 GigaOM Structure Award. He is currently a member of the World Economic Forum’s ‘Young Global Leaders’ Class of 2017 and on the Board of Directors for the Global Fund for Women.
"
|
1,420 | 2,017 | "Finale Doshi-Velez - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/finale-doshi-velez" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Finale Doshi-Velez Assistant Professor of Computer Science at Harvard University Prof. Finale Doshi-Velez is an Assistant Professor of Computer Science at Harvard University. She is excited about methods to turn data into actionable knowledge. Her core research in machine learning, computational statistics, and data science is inspired by—and often applied to—the objective of accelerating scientific progress and practical impact in healthcare and other domains.
Specifically, she is interested in questions such as: How can we design robust, principled models to combine complex data sets with other knowledge sources? How can we design models that summarise and generate hypotheses from such data? How can we characterise the uncertainty in large, heterogeneous data to provide better support for decisions? Finale is interested in developing the probabilistic methods to address these questions.
Prior to joining SEAS, Finale was an NSF CI-TRaCS Postdoctoral Fellow at the Center for Biomedical Informatics at Harvard Medical School. She was a Marshall Scholar at Trinity College, Cambridge from 2007-2009, and she was named one of IEEE’s “AI Top 10 to Watch” in 2013.
"
|
1,421 | 2,017 | "Fei-Fei Li - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/fei-fei-li" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Fei-Fei Li Professor in the Computer Science Department at Stanford University, and Co-Director of Stanford’s Human-Centered AI Institute Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University, and Co-Director of Stanford’s Human-Centered AI Institute. She served as the Director of Stanford’s AI Lab from 2013 to 2018. And during her sabbatical from Stanford from January 2017 to September 2018, she was Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. Many people will recognise Dr Li as one of the creators of the ImageNet challenge and for her important work in advancing the state of the art in computer vision and machine learning.
"
|
1,422 | 2,017 | "Andrew Ng - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/andrew-ng" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Andrew Ng Co-founder and and former lead of Google Brain, former Vice President and Chief Scientist at Baidu Dr Andrew Ng is one of the highly influential researchers in machine learning and AI , with his work having helped to drive the recent revolution in deep learning.
Andrew co-founded and led Google Brain and was a former Vice President and Chief Scientist at Baidu , building the company’s Artificial Intelligence Group into a team of several thousand people. He is an adjunct professor at Stanford University (formerly associate professor and Director of its AI Lab ). Also a pioneer in online education , Ng co-founded Coursera and deeplearning.ai.
In 2018 he launched, and currently heads, AI Fund, an investment fund for backing artificial intelligence startups. He also founded Landing AI, which provides AI-powered SaaS products and a transformation programme to help enterprises become cutting-edge AI companies.
"
|
1,423 | 2,017 | "Aisha Walcott-Bryant - Deep Learning Indaba" | "https://deeplearningindaba.com/speakers/aisha-walcott-bryant" |
"Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Home Mentorship Mentees Mentors Code of Ethics and Conduct Awards Blog About Our Mission Support Us Indaba Abantu Reports and Analysis Past Editions 2017 2018 2019 2020 Aisha Walcott-Bryant Research scientist and manager at IBM Research Africa - Nairobi, Kenya Dr Aisha Walcott-Bryant is a research scientist and manager at IBM Research Africa – Nairobi, Kenya. Her current research makes use of AI and Blockchain technologies to address the increasing incidence of chronic illnesses, such as many non-communicable disease affecting the continent of Africa. Specifically, she is leading a team that is developing solutions for the management of chronic illnesses by providing clinical decision support as well as patient engagement services.
She joined the IBM Research Africa lab, and leads the research efforts in mobility and transportation for developing cities. The aim was to provide significant and impactful value-added services that ease the movement of people, goods, and services in Africa. She and her colleagues developed innovative intelligent transportation systems data capture methods and analytical tools, to provide computational understanding about the local driving and infrastructure context.
Aisha has worked in Spain in the area of Smarter Cities at Barcelona Digital and Telefonica. She earned her PhD in the Electrical Engineering and Computer Science Department at MIT in robotics, as a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
"
|
1,424 | 2,017 | "Senegal - Deep Learning Indaba 2021" | "https://deeplearningindaba.com/2021/indabax/senegal" |
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Senegal 28 Aug - 29 Aug 2021 Number of Attendees: 89 Location: Senegal ( view map ) Contact Details of Organizers: Daouda Tandiang DJIBA ( [email protected] ) Derguene Mbaye ( [email protected]) Ousseynou Mbaye ( [email protected]) Aissatou Paye ( [email protected]) Sokhar Samb ( [email protected]) Mariama Drame ( [email protected]) Khadidiatou Fall Samb ( [email protected]) Aboubakr Seck ( [email protected]) Event website: https://indabax.galsen.ai/ Host Institution University Virtuelle du Senegal (UVS) Schedule 28 Aug 2021 Daouda Tandiang DJIBA Lead Organizer - Opening words El Hadji Abdoulaye Thiam - How do computers understand images ? Sileye Ba - Machine learning research for cosmetic and skin care at L'Oreal Round Table - Local AI solutions Tutorial: Derguene Mbaye - Build a chatbot with Rasa Mbaye Babacar GUEYE - Scale AI with MLOps Samy Bengio - Can neural networks reasoning? 29 Aug 2021 Tutorial: Ousseynou Mbaye - Predicting the price of bitcoin with deep learning Bubacar Bah - On Error Correction Neural Networks for Economic Forecasting Tutoriel: WiMLDS - Vision par ordinateur Ousmane Dia - Uncertainty in Deep Learning Fatoumata Fall - AI and employability Sonna Belona - Ethic and AI List of Speakers El Hadji Abdoulaye Thiam Data Scientist at Baamtu Sileye Ba Senior Research Scientist at l’Oreal Derguene Mbaye Engineer Researcher at Baamtu Mbaye Babacar GUEYE Principal - Data Analytics Samy Bengio Senior Director, AI and Machine Learning Research at Apple Ousseynou Mbaye Phd Student Bubacarr Bah German Research Chair in Mathematics with specialization in Data Science at AIMS - South Africa Aissatou Paye Data scientist Ousmane Dia Research Scientist at Facebook Fatoumata Fall CEO GodnessGreen Sonna Belona Manager Ethics and Gender Issue at AfroLeadership Contact us Deep Learning Indaba Strengthening African Machine Learning [email protected] Indaba 2021 IndabaX Awards Mentorship Blog About Copyright © Deep Learning Indaba 2017 - 2023. All Rights Reserved.
"
|
1,425 | 2,017 | "Zimbabwe - Deep Learning Indaba 2021" | "https://deeplearningindaba.com/2021/indabax/indabax-zimbabwe" |
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Zimbabwe 24 May - 26 May 2021 Number of Attendees: Location: Bulawayo, Zimbabwe ( view map ) Contact Details of Organizers: Daphne T Machangara ( [email protected]) Bruce Musabaeka ( [email protected]) Richard Jakachira ( [email protected]) Event website: https://meet.google.com/qakavck-fis?hs=&authuser=1 Host Institution National University of Science and Technology (NUST) THEME: Application and impact of Data Science in the real world in collaboration with the Applied Mathematics Department, NUST University.
Schedule:
24 May 2021 - The learning process vs the real world expectations: Is the learning process matching the company expectations?
25 May 2021 - Project presentations: oral, poster and practicals
26 May 2021 - Where are we heading with Data Science? Journey of women data scientists.
List of Speakers: Dr Arnaud Nguembang Fadja, Researcher, University of Ferrara, Italy; Rockerfeller, Data Scientist, PhD student in AI, Stellenbosch University, AIMS-CMR tutor; Miss Ephifania Geza, PhD student in Bioinformatics, UCT, AIMS-S.A; Henrica Makulu, Data Analyst at Cassava Smartech; Mr Milton Chirwa, Data Analyst at EnJomo Analytics
"
|
1,426 | 2,017 | "Zambia - Deep Learning Indaba 2021" | "https://deeplearningindaba.com/2021/indabax/indabax-zambia" |
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Zambia 25 May - 25 May 2021 Number of Attendees: Location: Lusaka ( view map ) Contact Details of Organizers: Yasin Musa Ayami ( [email protected]) Event website: https://tinyurl.com/czxj7pkc Host Institution Zambia In AI Community Schedule 22 May 2021 Pre-Deep Learning Indaba X 2021 Machine Learning Hackathon 25 May 2021 09:00 -09:45 : KEYNOTE: WHY SHOULD AFRICANS GET INVOLVED IN THE LATEST TECHNOLOGICAL ADVANCEMENTS 09:45 -11:15 : PANEL DISCUSSION: STATE OF AI IN ZAMBIA AND WHAT OPPORTUNITIES EXISTS 11:15 – 13:00 - Stream A : BASIC OF MATH AND HANDS-ON PYTHON TUTORIAL 11:15 – 13:00 - Stream B : LEARNING FROM OUR DATA 14:00 – 15:45 - Stream B : THE UNREASONABLE EFFECTIVENESS OF DEEP LEARNING 16:00 – 17:00 - LIGHTNING TALKS AND PROJECT/RESEARCH PRESENTATIONS List of Speakers Dr. Amelia Taylor - Talk Title: WHY SHOULD AFRICANS GET INVOLVED IN THE LATEST TECHNOLOGICAL ADVANCEMENTS Amelia Taylor is a lecturer in Artificial Intelligence at the University of Malawi, the Polytechnic. She teaches Artificial Intelligence, Computational Intelligence and programming modules. In addition, she teaches and supervises MSc and PhD students. Dr. Taylor is currently conducting research in Natural Language Processing, AI for Legal text and data mining and visualisation for health data. Dr. Taylor graduated with a PhD in Mathematical Logic from Heriot-Watt University in 2006 where she was part of the ULTRA group. After that she worked as a research assistant on a project with Heriot-Watt University and the Royal Observatory in Edinburgh, aiming at developing an intelligent query language for astronomical data. From 2006 to 2013, Amelia Taylor worked in finance in the City of London and Edinburgh - she built risk models for asset allocation and liability-driven investments. In 2013 she joined the University of Malawi. https://www.linkedin.com/in/amelia-taylor-phd-80938544/ Mr. Jonathan Whitaker Jonathan Whitaker is a data scientist and educator based in Zimbabwe. He has been consulting and teaching since 2017, and spent the last few years as a data scientist at Zindi. You can contact him on Twitter @johnowhitaker or read about his personal projects on his blog: data science castnet.home.blog.
Dr. Lighton Phiri - Talk Title: LEARNING FROM OUR DATA Dr. Lighton Phiri is a Lecturer and Researcher at The University of Zambia. He has a PhD in Computer Science and a Master of Science in Computer Science, both from The University of Cape Town. He also holds a Bachelor of Science in Computer Science, from The University of Zambia. His research interests are in the fields of Data Mining and Digital Libraries, with a focus on techniques and methods that emphasize the application of effective solutions in resource constrained environments. He also has on-going research interest in the fields of Information and Communication Technologies for Development (ICT4D) and Technology-Enhanced Learning. Lighton is a member of the Steering Committee of the Africa School on Machine Learning for Data Mining and Search, overseen by the Association for Computing Machinery’s Special Interest Group on Information Retrieval (SIGIR) and Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD). He is also part of the Programme Advisory Committee for the Master’s in Data Science Programme at the Namibia University of Science and Technology.
Mr. Francis Chikweto Francis Chikweto is a Lecturer at Evelyn Hone College, Biomedical Engineering section. He completed his BSc in Biomedical Engineering in 2015 and his MSc in Biomedical Engineering (medical-biological devices, systems, and complexes) in 2017 at Tambov State Technical University (Russia). His research areas include biomedical signal processing and control systems design for medical welfare engineering. He is a Ph.D. candidate at Tohoku University (Japan), with a research focus on sensor fusion and machine learning (DL) for biomechatronics and control systems in neonatal healthcare technology.
Ms. Cynthia Mulenga Cynthia Mulenga is a former Consult Lead at BongoHive, Mentor at Asikana Network, Co-Lead of a Facebook Developer Circle, Data Science and AI Enthusiast, and CodeCastZM Podcast Co-Host. She was also among the speakers at the main Deep Learning Indaba at Stellenbosch University in Cape Town, South Africa, in 2018.
"
|
1,427 | 2,017 |
"Uganda - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-uganda"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Uganda 26 Jul - 30 Jul 2021 Number of Attendees: Location: Kampala, Uganda ( view map ) Contact Details of Organizers: Bruno Ssekiwere ( [email protected]) Claire Babirye ( [email protected] ) Gloria Namanya ( [email protected]) Eugene Swinnerstone ( [email protected] ) Allan Ninyesiga ( [email protected] ) Timothy Kivumbi ( [email protected] ) Denise Nsubuga ( [email protected] ) Jean Amutukwatse ( [email protected] ) Samiiha Nalwoga ( [email protected] ) Irene Nandutu ( [email protected] ) Simon Peter Kabiito ( [email protected] ) Event website: www.indabaxug.github.io Host Institution Uganda Technology and Management University Schedule 26 Jul 2021 09:00 - 10:00 Python Programming Intro (Allan Ninyesiga) 10:00 - 11:00 Machine Learning Fundamentals (Claire Babirye) 11:00 - 11:30 Coffee Break 11:30 - 13:00 Supervised Learning (Claire Babirye) 13:00 - 14:30 Lunch Break 14:30 - 16:00 Unsupervised Learning (Irene Nandutu) 27 Jul 2021 09:00 - 10:00 Mathematics for Machine Learning (Alexandre Galashov) 10:00 - 11:00 Deep Learning Fundamentals 11:00 - 11:30 Coffee Break 11:30 - 13:00 Deep Feedforward Networks (Dr. Amal Rannen-Triki) 13:00 - 14:30 Lunch Break 14:30 - 16:00 Convolutional Neural Networks (Akera Benjamin) 28 Jul 2021 09:00 - 10:00 From Linear to Non- Linear Models (Dr. Ulrich Paquet) 10:00 - 11:00 Social Media Data Analysis (David) 11:00 - 11:30 Coffee Break 11:30 - 13:00 Reinforcement Learning (Benjamin Rosman) 13:00 - 14:30 Lunch Break 14:30 - 16:00 Recurrent Neural Networks (Lisa) 29 Jul 2021 08:30 - 20:00 Hackathon 30 Jul 2021 Conference List of Speakers Prof. Benjamin Rosman Prof. Rosman is a researcher who specialises in mobile intelligent autonomous systems. He is also a visiting lecturer at the School of Computer Science and Applied Mathematics at the University of the Witwatersrand, where he leads the Robotics, Autonomous Intelligence and Learning laboratory.
Dr. Neema Mduma (PhD) Dr. Neema Mduma is a computer scientist and a lecturer at the Nelson Mandela African Institution of Science and Technology. Neema is passionate about education, especially for girls, and during her PhD studies she developed an ML model called BakiShule which aims at preventing students from dropping out of school. Neema also organizes free trainings and workshops in secondary schools with the focus of inspiring and exposing girls to science and technology careers, particularly data science, machine learning, and artificial intelligence. Neema's efforts towards women in science have been recognized: in 2020 she received a L'Oréal-UNESCO award as one of 20 young talents in Sub-Saharan Africa.
Dr. Kris Sankaran (PhD) Dr. Kris is currently an Assistant Professor in the Department of Statistics at UW-Madison. He studies statistical techniques for discovery and dimensionality reduction, using computational statistics and machine learning to better understand how to visualize and draw inferences from complex data.
Dr. Ernest Mwebaze (PhD) Dr. Ernest is passionate about finding and implementing better, simpler, and cost-effective solutions to address some of the prevailing problems in developing countries. He is particularly interested in problems that can be solved by the application of computational techniques. As such, his passion is research, particularly in the varied fields of Artificial Intelligence - specifically machine learning and computer vision - and how to apply these in such a way as to obtain an optimal solution to real-world problems in developing countries.
Dr. Joyce Nabende (PhD) Dr. Joyce is a Lecturer in the Department of Computer Science at the School of Computing and Information Technology, Makerere University. She is also the scientific head of the Artificial Intelligence Lab and senior research fellow in the Software Systems Center at Makerere University. Dr. Joyce obtained a PhD in Computer Science from Eindhoven University of Technology, Department of Mathematics and Computer Science in the Architecture of Information Systems group under the supervision of prof. dr. ir. W. M.P. van der Aalst and Dr. M. Westergaard. Dr. Joyce's PhD study involved investigating the use of business process simulation and process mining to analyse the resource perspective of business processes.
Pascal G. Bernard Pascal is a passionate business and technical leader with 15 years of multi-sector experience in oil and gas, finance, e-commerce, advertising, and marketing. He offers expertise in Data Science (Machine Learning and Data Engineering), Cloud Data Architecture, Data Strategy and Management, Digital Transformation and Innovation, BI and Analytics, and AI/Data Governance.
Daniel Mutembesa Mutembesa is a Project Head & Lead Researcher for an Ad-hoc Surveillance Project focusing on the smartphone application toolkits that have been developed by the mcrops team to facilitate and provide automated diagnosis and improved vector & symptom measurement for cassava viral disease and pests; the project also looks at crowdsourced crop health data through an ad-hoc surveillance system. Daniel is also an Instructor of Oracle & Database Systems Engineering at the Department for Professional Skills Development, Makerere University, and an Assistant Tutor of Dr. Ernest Mwebaze at the Department of Computer Science, College of Computing & Information Systems, at the same University.
Benjamin Akera Benjamin is a Software Engineer by profession, currently a Machine Learning Engineer with the R&D team at Sama working mainly on Computer Vision. Previously, he was a research intern at Mila - Quebec AI Institute, working under the supervision of Prof. Yoshua Bengio on applied Machine Learning for climate change mitigation as well as humanitarian applications of AI. Before that, he worked with the Makerere University AI Lab in Uganda.
Morine Amutorine Morine is a computer science professional with a background in software engineering and 3 years of experience in data analysis and visualizations. She is passionate about data innovation to support sustainable development.
Dr. Philip Ouma Ayoo (PhD) Dr. Philip Ouma Ayoo is the Dean of the School of Computing and Engineering and a Senior Lecturer in the Department of Information Systems and Technology at Uganda Technology and Management University (UTAMU).
Claire Babirye Claire is currently a lecturer at UTAMU and a Research Assistant at the AI Lab, Makerere University. Her background is in computer science, specializing in Artificial Intelligence and Network Security. She carries great enthusiasm for data science and machine learning, with a zeal for harnessing the potential of AI to improve service delivery. She is also part of the Artificial Intelligence and Data Science Research Lab at Makerere University as a mentor and a research assistant.
Timothy Kivumbi Timothy has more than 5 years in the field of Information Technology, with experience in major areas of computing and networks. He is currently working for Uganda Technology and Management University (UTAMU) as a Lecturer in the School of Computing and Engineering. He has a growing interest in the areas of Machine Learning, Data Science, Cloud Computing, and Blockchain Technology.
Paul Kennedy Paul is currently working for Zindi Africa, a new data science competition platform building the AI and machine learning ecosystem in Africa.
Alexandre Galashov Alexandre is a Research Engineer at DeepMind working on transfer and continual learning problems in Reinforcement Learning and Language. Prior to DeepMind, Alexandre studied Mathematics & Computer Science at Ecole Polytechnique and Novosibirsk State University.
Dr. Ulrich Paquet (PhD) Dr. Ulrich is a co-founder of a crazy but wonderful movement to help strengthen machine learning in Africa, the Deep Learning Indaba. He is also currently working at DeepMind as a research scientist. Dr. Ulrich is also honoured to wear the hat of extraordinary professor of applied mathematics at the University of Stellenbosch.
Lisa Wang Lisa is a Research Engineer at DeepMind, where she develops machine learning methods for graph-structured data. Previously, she worked on quantization techniques at Google Research to accelerate and compress neural networks, and decrease their power usage. She also built computer vision algorithms at Orbital Insight to turn satellite imagery into maps, e.g. to help fight illegal deforestation. Lisa holds a BS and MS in Computer Science with a concentration in Artificial Intelligence from Stanford University.
Dr. Amal Rannen Triki Amal Rannen-Triki obtained her PhD from KU Leuven, where she worked on many topics including regularization, continual learning, and specialised loss functions under the supervision of Matthew B. Blaschko. She is currently a research scientist at DeepMind and is interested in various subjects around deep learning, including continual learning and understanding learning dynamics. Amal is also passionate about strengthening machine learning in Africa, and is working towards this goal with the Deep Learning Indaba.
"
|
1,428 | 2,017 |
"Tanzania - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-tanzania"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Tanzania 19 Jun - 26 Jun 2021 Number of Attendees: Location: Tanzania ( view map ) Contact Details of Organizers: Essa Mohamedali ( [email protected] ) Frank G Kilima ( [email protected]) Dr. Dina Machuve ( [email protected] ) Krupa Suchak ( [email protected]) Frederick Apina ( [email protected] ) Gloriana Monko ( [email protected] ) Dr. Joseph Matiko ( [email protected]) Dr. Denis Pastory ( [email protected]) Event website: www.ailab.co.tz/indabax List of Speakers Alex Tsado Alex Tsado operates at the intersection of disruptive technology, community economics, and historical regenerative philosophies, as they apply to the future of the global African and underrepresented.
Dr Denis Pastory Dr. Denis best describes himself as an Applied Research Scientist with expertise in machine learning and deep learning. He is passionate about seeing a change in struggling communities focusing on integrating ICT and AI to enhance crop production in low-resourced communities targeting Sub-Saharan Africa and Japan.
Dr Kassim Mwitondi His research work focuses, mainly, on the application of deep learning and other machine learning techniques in interdisciplinary modelling of real-life phenomena within the domain of Sustainable Development Goals, viewing each SDG as a Big Data node.
Jumanne Mtambalike Jumanne is best known for championing the building of stable innovation ecosystems in Africa through Sahara Ventures and its sub-companies.
Lavina Ramikssoon Her passion to unlock and be the consciousness for Africa in the fields of Artificial Intelligence and emerging technologies is evident. Her three foundational blocks are economics, technology, and psychology.
Yesaya Athuman A Software Developer with strong opinions on software development best practices, which he adopts at the core of every implementation. He is also part of the eShangazi Chatbot, which educates on sexual and reproductive health.
Zephania Reuben Data Scientist & AI Trainer, passionate about Artificial Intelligence specifically in solving problems in the field of telecommunications, electronics and space science.
"
|
1,429 | 2,017 |
"Nigeria - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-nigeria"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Nigeria 24 Sep - 25 Sep 2021 Number of Attendees: Location: Nigeria ( view map ) Contact Details of Organizers: Sakinat Oluwabukonla Folorunso ( [email protected] ) Bayo Adekanmbi ( [email protected] ) Tejumade Afonja ( [email protected] ) Kenechi Dukor ( [email protected]) Anslem Chukwuka Uzor ( [email protected] ) Stephen oladele ( [email protected] ) Olanrewaju Ahmed ( [email protected] ) Adebayo Agunbiade ( [email protected]) Tobi Banjo ( [email protected] ) Adebisi Abdullahi ( [email protected] ) Tola Odule ( [email protected] ) Event website: https://indabaxng.github.io/index.html Host Institution Olabisi Onabanjo University Useful Links Link to the Registration to the conference: https://forms.gle/hwsf6q6AqNrCY1V56 Call for posters: https://forms.gle/EyvrgQEqHJjnrrZRA Hackathon: https://forms.gle/Uh1xNE6vai9cgNAx7 Hackathon on Zindi: https://zindi.africa/hackathons/indabax-nigeria-2021/leaderboard Facebook Page: https://web.facebook.com/IndabaX-Nigeria-105574798525791 Linkedin Page: https://www.linkedin.com/in/indabax-nigeria-65b39921b/ Mail: [email protected] Schedule 24 Sep 2021 10:00am - 10:15am: Welcome address - Dr. Sakinat (Chair, IndabaX Nigeria organizing Committee; Department of Mathematical Sciences, Olabisi Onabanjo University, Ago-Iwoye, ogun State, Nigeria) 10:15am - 10:25am: Opening declaration - VC, OOU (Olabisi Onabanjo University) 10:25am - 10:30am: Opening declaration - Dean (Faculty of Science) 10:30am - 10:40am: Opening declaration - Dean, HOD Comments (Department of Mathematical Sciences, Olabisi Onabanjo University) 10:40am - 11:30am: Keynote 1 - FAIR Data - Prof. Francisca (Federal University, Lokoja) 12:00pm - 1:00pm: Special Appearance - MIR basics - Prof. Douglas Turnbull (Professor Doug Turnbull is an associate professor in the Department of Computer Science at Ithaca College in upstate New York. His main research interests include music information retrieval, computer audition, machine learning, and human computation. His current research centers on local music recommendations. https://dougturnbull.org/index.php/bio/) 1:00pm - 3:00pm: Invited talk 1 - 4 speakers in different rooms 3:00pm - 4:00pm :Poster presentation 25 Sep 2021 10:00am - 10:15am: Welcome address - Dr. Sakinat 10:40am - 11:30am: Keynote Speaker 2 - AI for health in Nigeria - Dr Stephen Odaibo 11:40am - 2:00pm Hands on tutorial on MIR - Prof. Meinard Muller 11:40am - 2:00pm: Hyper parameter tuning with Deep Learning - Oluwatoyin Yetunde Sanni 11:40am - 2:00pm: Intro to data science and ML - Precious 2:00pm - 2:30pm: Winners announcement for both hackathon and poster 2:30pm - 4:00pm: Women in AI - Dr Sakinat Folorunso, Dr adebisi Abdullah, Mrs Iwashokun Opeyemi List of Speakers Professor Francisca Oladipo Francisca is a Professor of Computer Science at the Federal University, Lokoja. She currently serves as the pioneer Director of Quality Assurance and bridging the gaps between research and practice in the areas of Theoretical Computing and leading the Team of VODAN Africa. 
[LinkedIn: https://www.linkedin.com/in/francisca-cisca-o-oladipo-0b6b0715/?originalSubdomain=ng - ResearchGate: https://www.researchgate.net/profile/Francisca-Oladipo - VODAN Africa website: https://www.vodan-totafrica.info/vodan-africa-board.php] Professor Douglas Turnbull Professor Doug Turnbull is an associate professor in the Department of Computer Science at Ithaca College in upstate New York. His main research interests include music information retrieval, computer audition, machine learning, and human computation. His current research centers on local music recommendations. [Website: https://dougturnbull.org/index.php/bio/] Professor Meinard Müller Professor, Semantic Audio Processing, International Audio Laboratories Erlangen, University Erlangen-Nuremberg, Germany [E-mail: [email protected]] Dr Sakinat Folorunso Computer Science lecturer and researcher in AI and machine learning. [Website: https://sites.google.com/view/sakinatfolorunso/home] Dr Adeyinka J. Oresanya Adeyinka Oresanya is a researcher and data scientist at Katalista Ventures, Lithuania, where she develops analytical tools for sustainability approaches. What fascinates her is knowing that behind a big mess of numbers, texts and rough ideas is a story waiting to be told that will drive impact for businesses. She volunteers as an instructor and mentor at AI Plus Club, Ibadan. Outside data science, she writes and holds conversations about experiences of the millennial African career woman.
Oluwatoyin Sanni Oluwatoyin Sanni is currently a researcher at the University of Lincoln, where she is working on research solutions that combine medical robotics, robot learning, and computer vision. She holds a BSc in computer science and an MSc in AI and Robotics. Prior to her master's studies, she worked as a software engineer/technical team lead at Andela. Her workshop will be a hands-on session on "Hyperparameter optimization in Deep Learning using Keras Tuner and Optuna." Pre-requisites: prior knowledge of deep learning; Python knowledge; familiarity with using notebooks.
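As a rough, hedged illustration of the kind of workflow such a hands-on session might walk through (this is only a sketch: the toy data, the small model, and the search space below are illustrative assumptions, not the workshop's actual material), a minimal Keras Tuner random search in Python could look like this:

```python
# Minimal Keras Tuner sketch (illustrative assumptions only: the toy data,
# model architecture, and search space are not the workshop's material).
import numpy as np
import tensorflow as tf
import keras_tuner as kt

# Synthetic stand-in data: 1,000 samples with 20 features and a binary label.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = (x_train.sum(axis=1) > 10.0).astype("int32")

def build_model(hp):
    # The tuner calls this with a HyperParameters object and samples values from it.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            hp.Int("units", min_value=32, max_value=256, step=32),
            activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Randomly try a handful of configurations and keep the best by validation accuracy.
tuner = kt.RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=5,
    directory="kt_demo",
    project_name="indabax_sketch",
)
tuner.search(x_train, y_train, epochs=3, validation_split=0.2)

best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print("Best units:", best_hp.get("units"),
      "| best learning rate:", best_hp.get("learning_rate"))
```

Optuna supports a similar define-by-run search loop; the workshop pairs the two libraries, but the specifics above are only a guess at the shape of such an exercise.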
Professor Johanna Devaney Assistant Professor of Music at Brooklyn College and the Graduate Center, CUNY. Previously, she was an Assistant Professor of Music Theory and Cognition at The Ohio State University and a postdoctoral scholar at the Center for New Music and Audio Technologies (CNMAT) at the University of California at Berkeley. Johanna's research seeks to understand the ways in which humans engage with music, particularly through performance, and how computers can be used to model and augment our understanding of this engagement. Her work draws on the disciplines of music, psychology, and computer science. Johanna's research has been funded by the Social Sciences and Humanities Research Council of Canada (SSHRC), the Fonds de recherche sur la société et la culture (FRQSC), the Google Faculty Research program, and, most recently, the National Endowment for the Humanities (NEH) Digital Humanities program.
"
|
1,430 | 2,017 |
"Malawi - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-malawi"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Malawi 22 Jul - 23 Jul 2021 Number of Attendees: Location: ( view map ) Contact Details of Organizers: Amelia Taylor ( [email protected] ) For more information ( [email protected]) Event website: https://indabaxmw.wordpress.com/ Host Institution Malawi University of Business and Applied Science (former The Malawi Polytechnic) Theme Machine Learning for Biodiversity and National Heritage Overview Indaba-X Malawi is part of Deep Learning Indaba, an African wide community aiming at supporting advancements in technology and artificial intelligence. Deep Learning Indaba aims at building communities, creating leadership, and recognising excellence in the development of machine learning and artificial intelligence across Africa. Indaba-X Malawi was first held in 2019 and attracted over 100 applicants.
We are pleased to announce that this year we joined forces with the "Biodiversity Transforming Malawi", Early Career, and Research Leaders & Policy Makers Workshop (Twitter: @MSBiodiversit).
We are holding joint sessions on 22nd July 2021.
Details of the program will be announced soon on our website.
For more updates, follow us on Twitter: @AT_poly_AI Sessions We are structuring our sessions into five tracks containing talks, practical workshops and posters.
TRACK 1: Machine Learning: algorithms and areas of application - Statistics for Machine Learning; Lessons Learnt from Medical and Health Research in Malawi; Early Detection and Decision Support Systems
TRACK 2: Start your Machine Learning Journey - Introduction to Python; Introduction to R; Data Processing and Analysis for Machine Learning; Introduction to ML Libraries (Scikit-Learn, TensorFlow, etc.)
TRACK 3: Remote Sensing and Drones - Water Management; Forestry Management; Wildlife Conservation; Maps and GIS
TRACK 4: AI for Preserving Indigenous Knowledge - Digitization, Preparation and Maintenance of Datasets; Machine Learning for Natural Language Processing; Machine Learning for Computer Vision
TRACK 5: Industry and Government - Ethical Issues and Privacy Concerns of Computer Systems, IoT; Legal and Regulatory Landscape in Malawi; Top Challenges for Machine Learning
Call for Participation We are soliciting proposals for 5- or 10-minute lightning talks and poster proposals in the form of a 1-2 page abstract. Lightning talks and posters should fit into the topics of the above tracks. They will be presented during the normal conference days, in dedicated time slots. Submissions can contain case studies, research work (mature or in progress), open problems, ideas, and critical reviews of issues and research relevant to Malawi. We encourage participation from students, graduates, early-career academics, and industry professionals. Regardless of who you are or the job title you hold, if you are a technologist who is interested in AI and Machine Learning and faces unique challenges, we encourage you to be a part of Indaba-X Malawi 2021.
We would like to particularly welcome and encourage female technologists to participate! We will accept lightning talk proposals and posters via a submission form.
Several of our sessions will also be available through Zoom. If you are interested but abroad, please get in touch with us; you may still be able to participate via Zoom and pre-record your lightning talk or poster. If you wish to promote or pitch a product at Indaba-X Malawi, please contact us at [email protected] about exhibition and sponsorship opportunities.
INDABA-X 2021 Malawi is committed to providing an open, safe and enjoyable event experience for all event participants, and a welcoming environment for free discussion of ideas.
Important Dates Proposals Due: 30th June 2021 Notifications sent to presenters: 7th July 2021 Enrollment for Non-Presenters: Apply here Submission Form Send in your applications by filling in this form.
"
|
1,431 | 2,017 |
"Eswatini (formerly Swaziland) - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-eswatini"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Eswatini (formerly Swaziland) 26 Aug - 27 Aug 2021 Number of Attendees: Location: University of Eswatini, Kwaluseni (Formerly University of Swaziland) ( view map ) Contact Details of Organizers: Dr. Stephen Fashoto ( [email protected]) Dr. Andile Metfula ( [email protected]) Mr. Enock Dube ( [email protected]) Event website: Host Institution University of Eswatini, Kwaluseni (Formerly University of Swaziland) Schedule 29 Jul 2021 PROGRAMME DIRECTOR: MR. S. Zwane 08:00 – 08:30: Arrival and Registration - Ms. Fakazile Dlamini/Ms. Hlope 08:30 – 08:40: Introduction of the VC- Prof. S. Motsa Dean, Faculty of Science & Engineering 8:40- 9:00: Opening Remarks by VC- Prof. J. Thwala(UNESWA) 9:00 – 9:30: Group Photos 9:30: – 10:00: Introduction to Financial Inclusion- Centre for Financial Inclusion (CFI) Ms. Prudence 10:00 – 10:30: Introduction to Financial Technology-Ms. P. Nhlengetfwa (Central Bank of Eswatini (CBE) 10:30- 10:45: Eswatini infrastructural/Technology readiness for financial Inclusion -Mr. T. Fakudze(ESCOM) 10:45 – 11:15: Machine Learning for Financial Inclusion - Prof. B. Akinnuwesi (UNESWA) 11:20-11:50: Credit Card Fraud Detection Using Machine Learning- Mr. Jeremiah Olamijuwon (Eblocks Software, South Africa) 11:50 – 12:05: Quantitative and Qualitative Data - Ms. F. Madlala (UNESWA) 12:05 – 12:30: Recommender System for a Business Case- Dr. Ulrich Paquet (DEEPMIND UK) 12:30 – 13:30: TEA BREAK/LUNCH 13:30 - 16:30: Machine Learning/Deep Learning practical using Python - Dr. Fashoto/Dr. Ncube/Prof. Akinnuwesi/Mr. Dube/Mr. Mbunge/Mr. Jerry Olamijuwon 30 Jul 2021 PROGRAMME DIRECTOR: MR. S. Zwane 08:00 – 08:30: Arrival and Registration- Ms. Fakazile Dlamini/Ms. Hlope 08:30 – 10:00: Introduction to problem solving in machine learning using python PART I - Mr. Jeremiah Olamijuwon (Eblocks Software, South Africa) 10:00 – 11:00: Practical session on problem I - Dr. Fashoto/Dr. Ncube/Prof. Akinnuwesi/Mr. Dube/Mr. Mbunge/Mr. Jerry Olamijuwon 11:00 – 11:30: TEA BREAKs 11:30 – 13:00: Introduction to problem solving in machine learning using python PART 2 - Mr. Jeremiah Olamijuwon (Eblocks Software, South Africa) 13:00 – 14:00: Practical session on problem II - Dr. Fashoto/Dr. Ncube/Prof. Akinnuwesi/Mr. Dube/Mr. Mbunge/Mr. Jerry Olamijuwon 14:00 – 15:00: LUNCH 15:00 – 15:30: PRESENTATION OF GIFTS TO SPEAKERS 15:30 – 16:00: CLOSING REMARKS - Dr. Andile Metfula (Head of Computer Science (UNESWA) List of Speakers Dr. Paquet Ulrich DEEPMIND UK He is a South African research scientist at DeepMind, who lost his heart somewhere in the bushveld and is always looking for ways to find it. His most recent expedition in this quest is with the Deep Learning Indaba and its community of hard-working, dedicated friends and peers. He holds a PhD in machine learning from the University of Cambridge, lived life on the edge in two high tech startups, managed a team in Apple, and with Noam Koenigstein developed the core of the Microsoft recommender system. He was on the initial Xbox One team, and is still itching to build the next big thing.
Mr. Jeremiah Olamijuwon EBLOCKS SOFTWARE SOUTH AFRICA He is a Senior Data Scientist and Machine Learning (ML) Engineer at Eblocks Software. He is also the Founder of eTihuku Pty Ltd, a Financial Technology company, and the Co-Founder of JayWealth Pty Ltd, an Agricultural Service Company which focuses on using Artificial Intelligence to improve livestock yield. Jeremiah specializes in the field of machine learning and computer vision, with expertise in Python and R among many other programming languages, Cloud Computing, ModelOps, and Deployment of ML models. Jeremiah collaborates on projects (through Eblocks Software) ranging from finance to health to insurance and many other interesting projects. He is also a recipient of several awards and scholarships, as well as a member of the largest international honours society (Golden Key). Jeremiah has obtained all his degrees to date with distinction (BSc, BSc Hons, and MSc) and is currently studying towards his Doctorate Degree in Electrical and Information Engineering at the University of the Witwatersrand. Jeremiah's research revolves around using Artificial Intelligence to improve livestock yield using a non-invasive technique.
Ms. Pamela Nhlengethwa CENTRAL BANK OF ESWATINI She is a FinTech Analyst at the Central Bank of Eswatini. Her work as a FinTech Analyst involves research on the digital transformation of the financial system. She has a strong interest in driving innovation and the use of technology, particularly machine learning, to deliver affordable, accessible and efficient financial services, especially to the financially excluded and underserved. Pamela holds a Master’s degree in Computer Science and Information Engineering from National Cheng Kung University in Taiwan. She also holds a BSc in Applied Computing from Ming Chuan University, also in Taiwan.
Prof. Boluwaji Akinnuwesi UNIVERSITY OF ESWATINI (UNESWA) He is an Associate Professor in the Department of Computer Science, University of Eswatini, in Eswatini. He obtained his degrees (B.Sc., M.Sc., and Ph.D.) in Computer Science. He focuses on Software Engineering and its applications, and thus applies software development methodologies to model and solve real-life problems. He has developed a number of software frameworks for solving real-life problems. His core area of application is medical informatics. He applies soft computing and machine learning techniques in developing medical diagnosis systems that could enhance accurate diagnosis of tropical diseases with overlapping symptoms. While at Lagos State University (LASU), Nigeria, as an Associate Professor, he successfully supervised 11 Masters and 2 Doctoral theses. In addition, he has served in various academic management and administrative capacities such as Head of Department, Director of Computer Centre, Member of University Senate, Department/Faculty Board of Studies, and several committees in the University. He serves as an external examiner to several universities and colleges of education. He also carries out review activities for a number of academic journals and conferences. He has published a number of papers in journals and conference proceedings in his area of research. He is a full member of the following professional bodies: Computer Professionals Registration Council of Nigeria, Nigeria Computer Society, and Association for Computing Machinery. He is a research scholar at Southern University, Louisiana, USA. His areas of interest are: Software Engineering, Soft Computing, Database Systems, Machine Learning, Medical Informatics, System Analysis and Design.
Ms. Faith Madlala (UNESWA) She is a lecturer in the Department of Statistics and Demography at the University of Eswatini. She is a competent and versatile statistician by experience, with over nine years of numbers-management expertise gained at various institutions and organizations. She received her BSc in Applied Mathematics from NUST in Zimbabwe and holds an MBA and a Master's in Operations Research and Statistics. Faith's research areas focus on the applications of Time Series Analysis in Modeling, Econometrics, Operations Research, and Artificial Intelligence.
Mr. Thulani Fakudze ESWATINI COMMUNICATIONS COMMISSION He is the Director of Technical Services at the Eswatini Communications Commission.
"
|
1,432 | 2,017 |
"Cameroon - Deep Learning Indaba 2021"
|
"https://deeplearningindaba.com/2021/indabax/indabax-cameroon"
|
"Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 Indaba IndabaX Awards Mentorship Mentees Mentors Code of Ethics and Conduct Blog About Our Mission Indaba Abantu Reports Past Editions 2017 2018 2019 2020 IndabaX: Cameroon 15 Nov - 20 Nov 2021 Number of Attendees: Location: Dschang & Yaoundé ( view map ) Contact Details of Organizers: Volviane Saphir MFOGO ( [email protected]) WIMLDS ( [email protected]) Prof. NKENLIFACK Marcellin ( [email protected]) Rockfeller ( [email protected]) NANDO TEZOH ( [email protected]) Kameni Alex ( [email protected]) Loic Kwate ( [email protected]) Brice Nanda ( [email protected]) Igor Kouam ( [email protected]) Grace Tessa ( [email protected]) Event website: https://sites.google.com/view/indabaxcameroon-2021/home?authuser=0 Host Institution University of Dschang (UDS) & National Advanced School of Engineering, (ENSPY) Yaoundé General Theme Machine learning and its applications Theme 1 (Part 1 : UDS) Machine Learning and Cybersecurity – Applications and Impact in the Real World Theme 2 (Part 2 : ENPY) Machine Learning and Agriculture – Financial Impact Hackathon For the first part, the hackathon offered by Zindi will focus on this competition.
The first hackathon will END on Sunday, November 21 at 11:00 P.M. (local time) and the results will be ANNOUNCED on Monday, November 22 at 9:00 A.M. (local time). For the second part of the IndabaX, the hackathon will focus on this competition.
The second hackathon will END on Friday, November 19, and the results will be ANNOUNCED on Saturday, November 20. NB: Further communication may be added after the registration of additional participants.
Sponsors Schedule 16 Nov 2021 UDS 08:00-12:00: Communication from young researchers 13:30-16:30: Posters presentation (by students) Full day: Forum and exhibitions in stands (business partners, etc.) 17 Nov 2021 UDS 08:00-09:00 : Registration 09:00-09:15: Welcome speech from the Head of the Department of Mathematics and Computer Science: Prof Marcellin NKENLIFACK (University of Dschang) 09:15-09:30: Opening remarks from the Dean of the Faculty of Sciences: Prof Emmanuel NGAMENI (University of Dschang) 09:30 - 10:30: Keynote 1 (Title: Foundations of Machine Learning): Dr. Marcellin ATEMKENG (Rhodes University, South Africa) 10:30-11:15: Presentation of a Project in Machine Learning: Dr Elie FUTE (University of Buea) & Prof NKENLIFACK (University of Dschang) 11:15-12:00: Coffee break 12:00-12:45: Tutorial (Title: Best Practices for Machine Learning in Developing Countries): Prof Christopher THRON (Texas A&M University, USA) 12:45-13:30: Hands-on Tutorial (Title: Data Visualisation) : Prof Wahabou ABDOU (Université de Bourgogne, France) 13:30-15:00: Hands-on Tutorial (Title: Supervised Machine Learning: Classification): Prof Christopher THRON (Texas A&M University, USA) 15:30-17:00: Posters presentation 18 Nov 2021 UDS 08:00-09:00: Registration 09:00-09:30: Recap of the first day 09:30-10:30: Hands-on Tutorial (Title: Hyperparameter Tuning For Classification Model and Best practices To Win A Competition): Mrs Irene Raissa KAMENI KAMENI (Vector Institute, Canada) 10:30-11:15: Tutorial (Title: Presentation of Zindi and How to do a competition on Zindi): Mrs Delilah GESICHO (Zindi, Africa) 11:15-11:20: Hackathon 11:20-12:00: Coffee break 12:00-13:00: Keynote 2 (Title: The Growing Role of Machine Learning in Cybersecurity ): Dr Charles A. KAMHOUA (US Army Research Laboratory) 13:00-14:00: keynote 3 (Title: Impacts of AI in Engineering) : Dr Uriel Pascal ELINGUI ( Groupe CERCO, Côte d’Ivoire) 14:45-15:30: Hands-on Tutorial (Title: Machine Learning and Applications): Mr Constant WETTE (Ericsson, Canada) 15:30:16:00: closing remarks: Prof Marcellin NKENLIFACK 19 Nov 2021 ENSPY 08:00-22:00: Hackathon Full day: Forum and exhibitions in stands (business partners, etc.) 20 Nov 2021 ENSPY 07:00-07:55 : Setting up and welcoming the participants and speakers 08:00-08:15 Welcome speech from the head of Department of Computer Engineering- Prof. Bouetou Bouetou Thomas 08:20-09:20 : Fundamentals of deep learning - Saurabh Shahane, CEO of The machine learning company 09:25-10:25 : Build Machine Learning Solution on Google Cloud - William Dekou, Cloud Architect at Google Cloud 10:30-11:00 : Coffee break 11:05-11:30 : Break 11:30-12:30 : Deep learning application on real-life problem - James Asseine, Researcher at DeepMind 12:30-12:45: Coffee break 12:50-13:20: Machine learning and agriculture - Dr. MVONDO NGANTI Dorothée, Technical Director of Agrix Tech 13:25-14:00: Impact of deep learning in agriculture - Eng Yannick Serge Obam, Machine learning engineer at isoMora 15:30-16:00: PyTorch Vision tutorial 16:00-16:30: Intervention of the representative of Zindi - TCHANGMENA A NKEN ALLASSAN - Robotic Engineer & Zindi ambassador 16:30-17:00: Award ceremony and closing remarks.
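The 17 November programme above includes a hands-on tutorial on supervised classification. As a hedged illustration of the kind of end-to-end exercise such a session typically covers (the dataset, the model, and the train/test split below are illustrative assumptions, not the tutorial's actual content), a minimal scikit-learn example might look like this:

```python
# Minimal supervised-classification sketch (illustrative only; the dataset,
# model, and evaluation choices are assumptions, not the tutorial's material).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small built-in dataset and hold out a quarter of it for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a simple classifier on the training split.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Evaluate on the held-out data.
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same split-fit-evaluate pattern carries over when swapping in other models or real data, which is usually the point such introductory tutorials make.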
List of Speakers
Dr Charles A. Kamhoua - US Army Research Laboratory, Network Security Branch
Dr Marcellin ATEMKENG - Rhodes University in South Africa
Dr Uriel Pascal ELINGUI - Director of Technological Innovation at Groupe CERCO, Côte d'Ivoire
Thron Christopher - Texas A&M University-Central Texas
Wahabou Abdou - University of Burgundy
Mrs. Irene Raissa Kameni Kameni - Vector Institute
M. Constant Wette - Ericsson
Saurabh Shahane - CEO of The machine learning company
William Dekou - Cloud Architect at Google Cloud
James Asseine - Researcher at DeepMind
Dr. MVONDO NGANTI Dorothée - Technical Director of Agrix Tech
Eng Yannick Serge Obam - Machine learning engineer at isoMora
"
|
1,433 | 2,021 |
"What the latest Pegasus spyware leaks tell us | MIT Technology Review"
|
"https://www.technologyreview.com/2021/07/19/1029732/nso-pegasus-spyware-leaks"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts What the latest Pegasus spyware leaks tell us New documents show journalists and activists are being surveilled using the tools built by secretive Israeli security company NSO.
By A.W. Ohlheiser. [Image: Shalev Hulio, CEO of NSO Group, in Tel Aviv. REUTERS/Ammar Awad] Over the weekend, a consortium of international news outlets published their findings from an investigation into the use of Pegasus, the marquee spyware product of the secretive billion-dollar Israeli surveillance company NSO Group.
The reports from the Guardian, the Washington Post, and 15 other media organizations are based on a leak of tens of thousands of phone numbers that appear to have been targeted by Pegasus. While the devices associated with the numbers on the list were not necessarily infected with the spyware, the outlets were able to use the data to establish that journalists and activists in many countries were targeted—and in some cases successfully hacked.
The leaks indicate the scope of what cybersecurity reporters and experts have said for years: that while NSO Group claims its spyware is designed to target criminals and terrorists, its actual applications are much broader. (The company released a statement in response to the investigation, denying that its data was leaked and that any of the resulting reporting was true.) My colleague Patrick Howell O'Neill has been reporting for some time on claims against NSO Group, which "has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians," he wrote in August 2020.
In the past, NSO has denied these accusations, but it has also more broadly argued that it can’t be held responsible if governments misuse the technology it sells them.
The company's central argument, we wrote at the time, is one "that is common among weapons manufacturers." Namely: "The company is the creator of a technology that governments use, but it doesn't attack anyone itself, so it can't be held responsible." Leaks are an important tool for understanding the way Pegasus is used, in part because it is so hard for researchers to spot the software when it is on devices. In March, one researcher at the cybersecurity watchdog Citizen Lab—which has focused on studying the software—explained how Apple's high security measures had let NSO breach iPhone security while blocking investigators from spotting the intrusions.
“It’s a double-edged sword,” said Bill Marczak, a senior researcher at Citizen Lab. “You’re going to keep out a lot of the riffraff by making it harder to break iPhones. But the 1% of top hackers are going to find a way in, and once they’re inside, the impenetrable fortress of the iPhone protects them.” Related Story It is not the first time NSO has found itself embroiled in controversy. Facebook is currently suing the company over allegations that Pegasus manipulated the infrastructure of WhatsApp to infect more than 1,400 cell phones. Facebook has said in court documents that its own investigation has identified more than 100 human rights defenders, journalists, and public figures targeted by Pegasus.
Last August, NSO Group CEO and cofounder Shalev Hulio told MIT Technology Review that he knew his company had “been accused, with good reason, of not being transparent enough,” and that his industry should be held more accountable for its secrecy, particularly as its methods become harder to detect by outside watchdogs and researchers.
As the Post notes, NSO Group does not provide details on its clients, citing confidentiality. Two weeks ago, the company released its first "Transparency and Accountability Report," where it revealed that it has 60 clients in 40 countries. Most of the clients are intelligence agencies or law enforcement.
"
|
1,434 | 2,020 |
"The man who built a spyware empire says it’s time to come out of the shadows | MIT Technology Review"
|
"https://www.technologyreview.com/2020/08/19/1007337/shalev-hulio-nso-group-spyware-interview"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The man who built a spyware empire says it’s time to come out of the shadows By Patrick Howell O'Neill archive page Ms Tech | Source: AP Photo/Daniella Cheslow, File Shalev Hulio wants to explain himself.
Normally, silence and secrecy are inherent in the spy business. For nine full years, Hulio never talked publicly about his billion-dollar hacking company—even when his hacking tools were linked to scandal or he was accused of being complicit in human rights abuses around the world. Lately, though, he’s speaking up.
“People don’t understand how intelligence works,” Hulio tells me over a video call from Tel Aviv. “It’s not easy. It’s not pleasant. Intelligence is a shitty business full of ethical dilemmas.” The business he leads, NSO Group, is the world’s most notorious spyware company.
It’s at the center of a booming international industry in which high-tech firms find software vulnerabilities, develop exploits, and sell malware to governments. The Israeli-headquartered company has been linked to high-profile incidents including the murder of Jamal Khashoggi and spying against politicians in Spain.
Ten years after founding the company, he made the rare decision to speak about NSO Group, the intelligence industry, and what transparency could look like for spyware companies. This, he says, is the most important thing the industry can do now: "We've been accused, with good reason, of not being transparent enough." Culture of silence Formerly a search-and-rescue commander in Israel's military and then an entrepreneur focused on technology that remotely accessed smartphones, Hulio has said he founded NSO Group in 2010 at the urging of European intelligence agencies. Back then, NSO marketed itself as a state-of-the-art cyberwarfare firm.
It entered the global spotlight in 2016 when Ahmed Mansoor, a human rights activist in the United Arab Emirates, received what’s been called the most famous text message of all time.
Researchers say it was a sophisticated phishing lure sent by a government; it contained a link that, if clicked, would have taken over Mansoor’s phone with spyware. Experts at Citizen Lab, a research group at the University of Toronto, analyzed the link and pointed to Pegasus, NSO’s flagship product. The revelation led to a great deal of scrutiny of the company, but NSO remained silent. (Mansoor is currently serving a decade-long prison sentence for insulting the monarchy—a dictator’s description of his work to further human rights.) That response was partly a function of the company’s ownership at the time. In 2014, NSO had been bought for around $100 million by the American private equity firm Francisco Partners, which had a strict no-press policy that Hulio says led to a harmful culture of silence.
“No interviews—we couldn’t talk to journalists except to say no comment, no comment, no comment,” he says. “It created lots of bad things for us, because every time we were accused of abuse, we had no comment.” This, he says, was a mistake to be avoided in the future by companies like NSO—which last year was sold for $1 billion to the European private equity firm Novalpina and the original founders, including Hulio himself.
“The industry should be more transparent,” Hulio says. “Each company should be much more accountable for who they sell to, who are the customers, what is the end use for each customer.” In fact, the text sent to Mansoor proved to be a blessing in disguise for investigators. Mansoor, who had already been targeted by surveillance for many years, was suspicious and didn’t click the poisoned link. Instead, he shared it with experts. But these days the hacking industry is increasingly using more advanced techniques that keep their activities as unobtrusive as possible—including so-called “zero-click” techniques that infect targets without their taking any action at all. WhatsApp is suing NSO Group for hacking the app to silently infect phones. Targets in Morocco have reportedly experienced “network injection” hacks that raise no alarm, require no cooperation from the victim, and leave little trace.
“Each [spyware] company should be much more accountable for who they sell to, who are the customers, what is the end use for each customer.” “The pitch from hacking companies is that criminals and terrorists are going dark because of encryption and states need an ability to chase them down their dark hole,” says John Scott-Railton, a senior researcher at Citizen Lab. “Increasingly, at the high end, companies selling these techniques are the ones going dark. It’s not just WhatsApp.
We've seen sales of vulnerabilities against iMessage, [telephone software] SS7 as a delivery for zero-click vulnerabilities, and a lot of network injection.
Because of this, it’s almost impossible for us to get visibility of the scale of the problem. We can only guess at scale. We only know some players. The market is growing, but we lack a lot of information about abuses.” It was never an easy job to understand the full scope of the hacker-for-hire industry. Now the techniques and indicators investigators have long relied on as clues are becoming rarer, quieter, and more difficult to spot. The stealthy new arsenal makes it extraordinarily difficult to hold hacking companies and intelligence agencies accountable when human rights abuses occur.
Perhaps surprisingly, Hulio agrees emphatically that the hacking industry is going dark. When I ask him if the industry is taking enough steps toward transparency and accountability, he shakes his head and points a finger at his competitors: “Actually, I think it’s the other way around. The industry is going away from regulation. I see companies trying to hide activity and hide what they’re doing. It’s damaging the industry.” Dodging transparency By contrast, Hulio claims, NSO is trying to reverse course under its new ownership. Although it is facing the high-profile WhatsApp lawsuit and dozens of allegations of abuse of Pegasus, Hulio insists the company is evolving. The fact that he’s talking to journalists at all is one obvious change, he says, and so are the new self-governance policies and a public commitment to adhere to the United Nations Human Rights Guidelines. How much the talk translates to reality is still an open question: three days after the company announced a new human rights policy in 2019, researchers from Amnesty International say, Pegasus was used to hack Moroccan journalist Omar Radi.
But Hulio suggests that his rivals are dodging transparency and accountability by moving their businesses or finding havens to operate from.
“They’re opening companies in countries where you don’t have regulation mechanisms, in Latin America, Europe, the Asia Pacific region—where regulation is very weak, so you can export to countries that you cannot export to from Israel or other places in Europe,” he explains. “I see companies trying to hide activity by changing the name of the company over and over again. Or through mechanisms like building research and development in one site, sales cycle to a different company, deployment through a third company, so you cannot trace who is doing what.” "Just like there are countries that act as tax shelters, there are countries that act as export regulation shelters. Those countries need global mechanisms of regulation." That may be true, but NSO Group itself goes by a string of other names, including Q Cyber Technologies in Israel and OSY Technologies in Luxembourg. It has a North American wing called Westbridge. Its employees are spread out internationally. Israeli media have reported on company’s links to shell companies and byzantine deals. Over the years, it has operated a confusing network of other companies around the world, and this corporate maze has made it nearly impossible to understand its dealings and actions—a crucial task when hacking tools can be abused by authoritarian governments with devastating consequences.
So what would accountability look like? When NSO Group first appeared, the Wassenaar Arrangement, a crucial arms export control agreement between 42 countries, had no cyber dimension. Israel had no cyber export law. Now Israel’s Ministry of Defense is governed by the country’s Defense Export Control Law—NSO Group has reportedly never been denied an export license—but on a global scale, the hacking industry remains largely hidden, opaque, and unaccountable despite its growing power and capabilities.
“There are loopholes,” Hulio says. “Not all countries are part of the Wassenaar agreement. I truly think it’s very hard to do something international. Obviously international is a great idea, but just like there are countries that act as tax shelters, there are countries that act as export regulation shelters. Those countries need global mechanisms of regulation.” Who is in the crosshairs? Dozens of abuses by users of NSO’s technology have been alleged since the Mansoor incident first pointed a spotlight at the company. When such allegations are made, NSO begins an investigation. If accounts conflict, NSO can demand logs that reveal targets. More often than not, Hulio says, the customer will say that the allegations against it are true, the targeting is real—but that their actions were legal under local law and the contract they signed. That leaves it up to NSO and the customer to hash out whether the targeting is indeed legitimate.
Much of the criticism directed at NSO Group comes when researchers say Pegasus is used against lawyers, human rights activists, journalists, and politicians. But Hulio says the context can justify such actions—that these people can be legitimate surveillance targets as long as the law is followed. He points to events surrounding the capture in 2014 of the Mexican drug lord Joaquín "El Chapo" Guzmán. Although it has never confirmed it publicly, NSO Group has privately touted its role in the operation for years.
“Chapo ran away from prison,” Hulio says. “People like Chapo or [ISIS leader Omar Bakr] al-Baghdadi don’t carry smartphones. When Chapo escaped, they thought he probably eventually will call his lawyer, so let’s try to intercept the lawyer. The lawyer is not a bad person—and I’m not saying we were involved. The lawyer by himself is not a suspect of criminal activity, but El Chapo, who is a criminal, is going to call his lawyer, and the only way to catch him is to intercept his lawyer.” It’s the kind of case that’s easy to make. Murderous drug lord, extreme police action, front-page mugshots. But most allegations of misuse don’t resemble the El Chapo case. Gulf nations have repeatedly been accused of using Pegasus to target political opposition, resulting later in trumped-up charges for offending royal families or the like.
Hulio says that often NSO is accused of work that other spyware companies are responsible for.
“We ask very tough questions when we sell a system, but I’m not sure everyone is doing that,” he says. “I have no problem sitting in front of the minister of defense of a country, or the head of police, or of the secret service, and asking: What is the use? What’s the target? What is the mission? What are the investigations? What is the process you use? How do you analyze the data? Who needs to approve each target? What is the law in your specific country—how does it work? Questions a lot of companies really don’t care about. They have a deal—they want to sell. They will sell it because it’s good money for them.” We’ve gone full circle, arriving back in a thick tangle of secrecy. Money is flowing, abuses keep happening, and the hacking tools are proliferating: no one disputes that.
But who is accountable when brutal authoritarians get their hands on cutting-edge spyware to use against opponents? An already shadowy world is getting darker, and answers are becoming harder to come by.
"
|
1,435 | 2,016 |
"This US company sold iPhone hacking tools to UAE spies | MIT Technology Review"
|
"https://www.technologyreview.com/2021/09/15/1035813/us-sold-iphone-exploit-uae"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts This US company sold iPhone hacking tools to UAE spies An American cybersecurity company was behind a 2016 iPhone hack sold to a group of mercenaries and used by the United Arab Emirates.
By Patrick Howell O'Neill
When the United Arab Emirates paid over $1.3 million for a powerful and stealthy iPhone hacking tool in 2016, the monarchy’s spies—and the American mercenary hackers they hired—put it to immediate use.
The tool exploited a flaw in Apple’s iMessage app to enable hackers to completely take over a victim’s iPhone. It was used against hundreds of targets in a vast campaign of surveillance and espionage whose victims included geopolitical rivals, dissidents, and human rights activists.
Documents filed by the US Justice Department on Tuesday detail how the sale was facilitated by a group of American mercenaries working for Abu Dhabi, without legal permission from Washington to do so. But the case documents do not reveal who sold the powerful iPhone exploit to the Emiratis.
Two sources with knowledge of the matter have confirmed to MIT Technology Review that the exploit was developed and sold by an American firm named Accuvant. It merged several years ago with another security firm, and what remains is now part of a larger company called Optiv. News of the sale sheds new light on the exploit industry as well as the role played by American companies and mercenaries in the proliferation of powerful hacking capabilities around the world.
Optiv spokesperson Jeremy Jones wrote in an email that his company has "cooperated fully with the Department of Justice" and that Optiv "is not a subject of this investigation." That's true: The subjects of the investigation are the three former US intelligence and military personnel who worked illegally with the UAE. However, Accuvant's role as exploit developer and seller was important enough to be detailed at length in Justice Department court filings.
The iMessage exploit was the primary weapon in an Emirati program called Karma, which was run by DarkMatter, an organization that posed as a private company but in fact acted as a de facto spy agency for the UAE.
Reuters reported the existence of Karma and the iMessage exploit in 2019. But on Tuesday, the US fined three former US intelligence and military personnel $1.68 million for their unlicensed work as mercenary hackers in the UAE. That activity included buying Accuvant’s tool and then directing UAE-funded hacking campaigns.
The US court documents noted that the exploits were developed and sold by American firms but did not name the hacking companies. Accuvant’s role has not been reported until now.
Accuvant was not the focus of the investigation because the sale it made was licensed and legal. A source with close knowledge of the development and sale of the exploit says that Accuvant was explicitly “directed” to make the sale of the exploit by a US intelligence agency and that the company did not know it would be used for foreign espionage. The court documents then describe manipulation of the exploit by the mercenaries to make it a more powerful tool for the UAE's purposes.
“The FBI will fully investigate individuals and companies that profit from illegal criminal cyber activity,” Bryan Vorndran, assistant director of the FBI’s Cyber Division, said in a statement. “This is a clear message to anybody, including former US government employees, who had considered using cyberspace to leverage export-controlled information for the benefit of a foreign government or a foreign commercial company—there is risk, and there will be consequences.”
Prolific exploit developer
Despite the fact that the UAE is considered a close ally of the United States, DarkMatter has been linked to cyberattacks against a range of American targets, according to court documents and whistleblowers.
Helped by American partnership, expertise, and money, DarkMatter built up the UAE’s offensive hacking capabilities over several years from almost nothing to a formidable and active operation. The group spent heavily to hire American and Western hackers to develop and sometimes direct the country’s cyber operations.
At the time of the sale, Accuvant was a research and development lab based in Denver, Colorado, that specialized in and sold iOS exploits.
A decade ago, Accuvant established a reputation as a prolific exploit developer working with bigger American military contractors and selling bugs to government customers. In an industry that typically values a code of silence, the company occasionally got public attention.
“Accuvant represents an upside to cyberwar: a booming market,” journalist David Kushner wrote in a 2013 profile of the company in Rolling Stone. It was the kind of company, he said, “capable of creating custom software that can enter outside systems and gather intelligence or even shut down a server, for which they can get paid up to $1 million.” Optiv largely exited the hacking industry following the series of mergers and acquisitions, but Accuvant’s alumni network is strong—and still working on exploits. Two high-profile employees went on to cofound Grayshift, an iPhone hacking company known for its skills at unlocking devices.
Accuvant sold hacking exploits to multiple customers in both governments and the private sector, including the United States and its allies—and this exact iMessage exploit was also sold simultaneously to multiple other customers, MIT Technology Review has learned.
iMessage flaws
The iMessage exploit is one of several critical flaws in the messaging app that have been discovered and exploited over recent years. A 2020 update to the iPhone’s operating system shipped with a complete rebuilding of iMessage security in an attempt to make it harder to target.
The new security feature, called BlastDoor, isolates the app from the rest of the iPhone and makes it more difficult to access iMessage’s memory—the main way in which attackers were able to take over a target’s phone.
iMessage is a major target of hackers, for good reason. The app is included by default on every Apple device. It accepts incoming messages from anyone who knows your number. There is no way to uninstall it, no way to inspect it, nothing a user can do to defend against this kind of threat beyond downloading every Apple security update as soon as possible.
BlastDoor did make exploiting iMessage harder, but the app is still a favorite target of hackers. On Monday, Apple disclosed an exploit that the Israeli spyware company NSO Group had reportedly used to circumvent BlastDoor protections and take over the iPhone through a different flaw in iMessage. Apple declined to comment.
Note: This story was updated to clarify the nature of the sale.
"
|
1,436 | 2,021 |
"Israel begins investigation into NSO Group spyware abuse | MIT Technology Review"
|
"https://www.technologyreview.com/2021/07/28/1030244/israel-investigation-nso-group-pegasus-spyware"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Israel begins investigation into NSO Group spyware abuse “We want them to check everything,” says the CEO of a cybersecurity company implicated in campaigns against politicians, activists, and other notable figures.
By Patrick Howell O'Neill
Israeli government officials visited the offices of the hacking company NSO Group on Wednesday to investigate allegations that the firm’s spyware has been used to target activists, politicians, business executives, and journalists, the country’s defense ministry said in a statement today.
An investigation published last week by 17 global media organizations claims that phone numbers belonging to notable figures have been targeted by Pegasus, the notorious spyware that is NSO’s best-selling product.
The Ministry of Defense did not specify which government agencies were involved in the investigation, but Israeli media previously reported that the foreign ministry, justice ministry, Mossad, and military intelligence were also looking into the company following the report.
NSO Group CEO Shalev Hulio confirmed to MIT Technology Review that the visit had taken place but continued the company’s denials that the list published by reporters was linked to Pegasus.
“That’s true,” he said. “I believe it’s very good that they are checking, since we know the truth and we know that the list never existed and is not related to NSO.”
The reports focused largely on the successful hacking of 37 smartphones of business leaders, journalists, and human rights activists. But they also pointed to a leaked list of over 50,000 more phone numbers of interest in countries that are reportedly clients of NSO Group. The company has repeatedly denied the reporting. At this point, both the source and meaning of the list remain unclear, but numerous phones on it were hacked, according to technical analysis by Amnesty International’s Security Lab.
When asked if the government’s investigation process will continue, Hulio said he hopes it will be ongoing.
“We want them to check everything and make sure that the allegations are wrong,” he added.
International scandal
Despite the emphatic denials, the “Pegasus Project” has drawn international attention.
In the United States, Democratic members of Congress called for action against NSO.
“Private companies should not be selling sophisticated cyber-intrusion tools on the open market, and the United States should work with its allies to regulate this trade,” the lawmakers said. “Companies that sell such incredibly sensitive tools to dictatorships are the AQ Khans of the cyber world. They should be sanctioned, and if necessary, shut down.” The French government has said it will question Israeli defense minister Benny Gantz after French president Emmanuel Macron’s phone showed up on the leaked list. NSO denied any attempt to hack French officials.
NSO is not the only Israeli hacking company in the news lately. Microsoft and the University of Toronto’s Citizen Lab also recently reported on hacking tools developed by Candiru that were subsequently used to target civil society groups.
NSO Group is under the direct regulation of Israel’s Ministry of Defense, which approves each sale. Critics say the export licensing process is broken because it results in sales to authoritarian regimes that have used the hacking tools to commit abuses. NSO recently said the company has cut off five customers for abuse.
The ministry said last week that it will “take appropriate action” if it finds that NSO Group violated its export license.
"
|
1,437 | 2,020 |
"Europe is adopting stricter rules on surveillance tech | MIT Technology Review"
|
"https://www.technologyreview.com/2020/11/09/1011837/europe-is-adopting-stricter-rules-on-surveillance-tech"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Europe is adopting stricter rules on surveillance tech By Patrick Howell O'Neill archive page "Smoking Gun" by Claudio Rousselon is licensed under CC BY 4.0 The European Union has agreed to stricter rules on the sale and export of cyber-surveillance technologies like facial recognition and spyware. After years of negotiations, the new regulation will be announced today in Brussels. Details of the plan were reported in Politico last month.
The regulation requires companies to get a government license to sell technology with military applications; calls for more due diligence on such sales to assess the possible human rights risks; and requires governments to publicly share details of the licenses they grant. These sales are typically cloaked in secrecy, meaning that multibillion-dollar technology is bought and sold with little public scrutiny.
“Today is a win for human rights globally, and we set an important precedent for other democracies to follow suit,” said Markéta Gregorová, a member of the European Parliament who was one of the lead negotiators on the new rules, in a statement. “The world’s authoritarian regimes will not be able to secretly get their hands on European cyber-surveillance anymore.” Human rights groups have long urged Europe to reform and strengthen the rules on surveillance technology. European-made surveillance tools were used by authoritarian regimes during the 2011 Arab Spring and continue to be sold to dictatorships and democracies around the world today; news headlines and political pressure have had little noticeable impact.
The main thing the new regulation achieves, according to its backers, is more transparency. Governments must either disclose the destination, items, value, and licensing decisions for cyber-surveillance exports or make public the decision not to disclose those details. The goal is to make it easier to publicly shame governments that sell surveillance tools to dictatorships.
The regulation also includes guidance to member states to “consider the risk of use in connection with internal repression or the commission of serious violations of international human rights and international humanitarian law," but that is nonbinding.
It remains to be seen, therefore, how much of a difference the new rules will make. Human rights workers and independent experts have been skeptical, and even some negotiators who hammered out this deal over the course of several years expressed doubts in conversations with MIT Technology Review, though none was willing to speak on the record.
The regulation’s effectiveness will depend on Europe’s national governments, which will be responsible for much of the implementation. Germany currently controls the presidency of the European Council and pushed to have this regulation agreed to before its term is up in December. The country showed how enforcement of these rules could work last month when German authorities raided the offices of the spyware maker FinFisher for allegedly selling surveillance tools to oppressive regimes.
The new regulation mentions some specific surveillance tools, but it’s written to be more flexible and expansive than both Europe’s own previous regulation and even the Wassenaar Arrangement, one of the most important global export control agreements for weapons and dual-use technologies.
The new rules include a “catch-all” provision for cyber-surveillance items even if they’re not explicitly listed. For instance, facial recognition is not mentioned in the regulation but, one negotiator says, clearly falls under it. Still, how the rules are actually applied remains to be seen.
Another obvious weakness of the new regulation is that it only covers EU member states.
Europe does boast some of the most famous surveillance tech companies, including Gamma Group in the United Kingdom and Italy’s Hacking Team, which became Memento Labs.
But other countries, including Israel and the United States, have their own thriving surveillance technology industries.
The lawmakers who worked on the new European regulation say they aim to create a global coalition of democracies willing to more tightly control the export of surveillance technologies. It’s widely agreed, even within the spyware industry itself, that reform makes sense—but this regulation is only the beginning.
"
|
1,438 | 2,020 |
"Israeli phone hacking company faces court fight over sales to Hong Kong | MIT Technology Review"
|
"https://www.technologyreview.com/2020/08/25/1007617/israeli-phone-hacking-company-faces-court-fight-over-sales-to-hong-kong"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Israeli phone hacking company faces court fight over sales to Hong Kong By Patrick Howell O'Neill archive page Photo by Simon Zhu on Unsplash Human rights advocates filed a new court petition against the Israeli phone hacking company Cellebrite, urging Israel’s ministry of defense to halt the firm’s exports to Hong Kong, where security forces have been using the technology in crackdowns against dissidents as China takes greater control.
In July, police court filings revealed that Cellebrite’s phone hacking technology has been used to break into 4,000 phones of Hong Kong citizens, including prominent pro-democracy politician and activist Joshua Wong. He subsequently launched an online petition to end Cellebrite’s sales to Hong Kong, which gained 35,000 signatures.
“Defense Ministry officials must immediately stop the export of the Cellebrite system which is used for infringement on privacy, deprivation of liberty and freedom of expression, and political incrimination of Hong Kong citizens under the new National Security Law,” Wong wrote in a Facebook post urging Israel to stop Cellebrite’s exports to Hong Kong.
Hong Kong’s new security law, which increases Beijing’s control of the city, defines pro-democracy protests as terrorism, severely limits free speech, and reduces much of the autonomy that the city once had from China. As of May, the United States no longer considered Hong Kong autonomous from the mainland.
Hong Kong activists say that Cellebrite’s tech is “used to inflict terrorism on the city's residents and to attack demonstrators and pro-democracy activists.” Israeli human rights advocates say exports to Hong Kong police should legally have stopped in 2019 when anti-democratic crackdowns grew dramatically.
Now the Israeli petition in court aims to put legal and political pressure on the technology firm, which is based in Tel Aviv.
“I’m asking the minister of defense to stop the Cellebrite exports to Hong Kong,” says Eitay Mack, the human rights lawyer who filed the petition in the district court in Tel Aviv. “I’m also saying that, as far as I know, they never got an export license. The ministry of defense needs to enforce the law from companies with licenses, but also they need to do oversight on companies working without a license.”
Cellebrite is at the center of the global encryption debate. Apple’s and Google’s devices enable strong encryption, which instantly afforded more security to users of iPhones and Android systems. Law enforcement officials who argue that encryption means criminals are “going dark” use products like Cellebrite’s to break into phones and access and organize all the data inside. Cellebrite says it can “unlock devices with ease” by finding vulnerabilities in targeted smartphones and exploiting them. Most often, that means police and security forces unlocking phones of criminal suspects—a category that now includes pro-democracy advocates in Hong Kong.
The company has government customers spanning the United States, Europe, and Asia. In 2019, it boasted that it could unlock any iPhone and most Android phones.
Cellebrite’s powerful technology is sold to police and security forces around the world, and there are big questions over what democratic oversight and regulation of the company actually exists. Cellebrite would typically have to get an export license from Israel’s economic or defense ministry. Officials at the economic ministry say they have not granted such a license, and the defense ministry has remained quiet on Cellebrite’s sales. The agency, which has a policy of not commenting on specific companies it regulates, has not responded to Mack’s inquiries, or to requests for comment for this article.
“The system of regulation is not working,” says Mack, who argues that the dramatic changes in Hong Kong now require the ministry of defense to regulate Cellebrite and stop all sales there.
In addition to publicly pressuring the Israeli government, the petitioners want Cellebrite’s employees to take notice.
“I hope Cellebrite will have a rebellion inside the company,” Mack says. “The workers inside the company didn’t join to help the Chinese dictatorship.”
Cellebrite’s business is global, and Hong Kong is not the only problem point. In Belarus, widespread pro-democracy protests have been met with aggressive crackdowns by dictator Alexander Lukashenko. Mack argues that Israel must stop Cellebrite from selling its technology to the Belarusian government.
The company first made global headlines following reports that its technology was used to hack into the iPhone of the San Bernardino terrorists following the deadly attack in 2015. The company denied involvement. Cellebrite did not respond to a request for comment.
"
|
1,439 | 2,020 |
"Inside NSO, Israel’s billion-dollar spyware giant | MIT Technology Review"
|
"https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Inside NSO, Israel’s billion-dollar spyware giant By Patrick Howell O'Neill archive page Ariel Davis Maâti Monjib speaks slowly, like a man who knows he’s being listened to.
It’s the day of his 58th birthday when we speak, but there’s little celebration in his voice. “The surveillance is hellish,” Monjib tells me. “It is really difficult. It controls everything I do in my life.” A history professor at the University of Mohammed V in Rabat, Morocco, Monjib vividly remembers the day in 2017 when his life changed. Charged with endangering state security by the government he has fiercely and publicly criticized, he was sitting outside a courtroom when his iPhone suddenly lit up with a series of text messages from numbers he didn’t recognize. They contained links to salacious news, petitions, and even Black Friday shopping deals.
A month later, an article accusing him of treason appeared on a popular national news site with close ties to Morocco’s royal rulers. Monjib was used to attacks, but now it seemed his harassers knew everything about him: another article included information about a pro-democracy event he was set to attend but had told almost no one about. One story even proclaimed that the professor “has no secrets from us.” He’d been hacked. The messages had all led to websites that researchers say were set up as lures to infect visitors’ devices with Pegasus, the most notorious spyware in the world.
Pegasus is the blockbuster product of NSO Group, a secretive billion-dollar Israeli surveillance company. It is sold to law enforcement and intelligence agencies around the world, which use the company’s tools to choose a human target, infect the person’s phone with the spyware, and then take over the device. Once Pegasus is on your phone, it is no longer your phone.
NSO sells Pegasus with the same pitch arms dealers use to sell conventional weapons, positioning it as a crucial aid in the hunt for terrorists and criminals. In an age of ubiquitous technology and strong encryption, such “lawful hacking” has emerged as a powerful tool for public safety when law enforcement needs access to data. NSO insists that the vast majority of its customers are European democracies, although since it doesn’t release client lists and the countries themselves remain silent, that has never been verified.
Monjib’s case, however, is one of a long list of incidents in which Pegasus has been used as a tool of oppression. It has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians. Mexico and Spain have denied using Pegasus to spy on opponents, but accusations that they have done so are backed by substantial technical evidence.
Some of that evidence is contained in a lawsuit filed last October in California by WhatsApp and its parent company, Facebook, alleging that Pegasus manipulated WhatsApp’s infrastructure to infect more than 1,400 cell phones. Investigators at Facebook found more than 100 human rights defenders, journalists, and public figures among the targets, according to court documents. Each call that was picked up, they discovered, sent malicious code through WhatsApp’s infrastructure and caused the recipient’s phone to download spyware from servers owned by NSO. This, WhatsApp argued, was a violation of American law.
NSO has long faced such accusations with silence. Claiming that much of its business is an Israeli state secret, it has offered precious little public detail about its operations, customers, or safeguards.
Now, though, the company suggests things are changing. In 2019, NSO, which was owned by a private equity firm, was sold back to its founders and another private equity firm, Novalpina, for $1 billion. The new owners decided on a fresh strategy: emerge from the shadows. The company hired elite public relations firms, crafted new human rights policies, and developed new self-governance documents. It even began showing off some of its other products, such as a covid-19 tracking system called Fleming, and Eclipse, which can hack drones deemed a security threat.
Over several months, I’ve spoken with NSO leadership to understand how the company works and what it says it is doing to prevent human rights abuses carried out using its tools. I have spoken to its critics, who see it as a danger to democratic values; to those who urge more regulation of the hacking business; and to the Israeli regulators responsible for governing it today. The company’s leaders talked about NSO’s future and its policies and procedures for dealing with problems, and it shared documents that detail its relationship with the agencies to which it sells Pegasus and other tools. What I found was a thriving arms dealer—inside the company, employees acknowledge that Pegasus is a genuine weapon—struggling with new levels of scrutiny that threaten the foundations of its entire industry.
“A difficult task”
From the first day Shmuel Sunray joined NSO as its general counsel, he faced one international incident after another. Hired just days after WhatsApp’s lawsuit was filed, he found other legal problems waiting on his desk as soon as he arrived. They all centered on the same basic accusation: NSO Group’s hacking tools are sold to, and can be abused by, rich and repressive regimes with little or no accountability.
Sunray had plenty of experience with secrecy and controversy: his previous job was as vice president of a major weapons manufacturer. Over several conversations, he was friendly as he told me that he’s been instructed by the owners to change NSO’s culture and operations, making it more transparent and trying to prevent human rights abuses from happening. But he was also obviously frustrated by the secrecy that he felt prevented him from responding to critics.
“It’s a difficult task,” Sunray told me over the phone from the company’s headquarters in Herzliya, north of Tel Aviv. “We understand the power of the tool; we understand the impact of misuse of the tool. We’re trying to do the right thing. We have real challenges dealing with government, intelligence agencies, confidentiality, operational necessities, operational limitations. It’s not a classic case of human rights abuse by a company, because we don’t operate the systems—we’re not involved in actual operations of the systems—but we understand there is a real risk of misuse from the customers. We’re trying to find the right balance.”
This underpins NSO’s basic argument, one that is common among weapons manufacturers: the company is the creator of a technology that governments use, but it doesn’t attack anyone itself, so it can’t be held responsible.
Still, according to Sunray, there are several layers of protection in place to try to make sure the wrong people don’t have access.
Making a sale
Like most other countries, Israel has export controls that require weapons manufacturers to be licensed and subject to government oversight. In addition, NSO does its own due diligence, says Sunray: its staff examine a country, look at its human rights record, and scrutinize its relationship with Israel. They assess the specific agency’s track record on corruption, safety, finance, and abuse—as well as factoring in how much it needs the tool.
Sometimes negatives are weighed against positives. Morocco, for example, has a worsening human rights record but a lengthy history of cooperating with Israel and the West on security, as well as a genuine terrorism problem, so a sale was reportedly approved. By contrast, NSO has said that China, Russia, Iran, Cuba, North Korea, Qatar, and Turkey are among 21 nations that will never be customers.
Finally, before a sale is made, NSO’s governance, risk, and compliance committee has to sign off. The company says the committee, made up of managers and shareholders, can decline sales or add conditions, such as technological restrictions, that are decided case by case.
Preventing abuse
Once a sale is agreed to, the company says, technological guardrails prevent certain kinds of abuse. For example, Pegasus does not allow American phone numbers to be infected, NSO says, and infected phones cannot even be physically located in the United States: if one does find itself within American borders, the Pegasus software is supposed to self-destruct.
NSO says Israeli phone numbers are among others also protected, though who else gets protection and why remains unclear.
When a report of abuse comes in, an ad hoc team of up to 10 NSO employees is assembled to investigate. They interview the customer about the allegations, and they request Pegasus data logs. These logs don’t contain the content the spyware extracted, like chats or emails—NSO insists it never sees specific intelligence—but do include metadata such as a list of all the phones the spyware tried to infect and their locations at the time.
According to one recent contract I obtained, customers must “use the system only for the detection, prevention, and investigation of crimes and terrorism and ensure the system will not be used for human rights violations.” They must notify the company of potential misuse. NSO says it has terminated three contracts in the past for infractions including abuse of Pegasus, but it refuses to say which countries or agencies were involved or who the victims were.
“We’re not naïve”
Lack of transparency is not the only problem: the safeguards have limits. While the Israeli government can revoke NSO’s license for violations of export law, the regulators do not take it on themselves to look for abuse by potential customers and aren’t involved in the company’s abuse investigations.
Many of the other procedures are merely reactive as well. NSO has no permanent internal abuse team, unlike almost any other billion-dollar tech firm, and most of its investigations are spun up only when an outside source such as Amnesty International or Citizen Lab claims there has been malfeasance. NSO staff interview the agencies and customers under scrutiny but do not talk to the alleged victims, and while the company often disputes the technical reports offered as evidence, it also claims that both state secrecy and business confidentiality prevent it from sharing more information.
The Pegasus logs that are crucial to any abuse inquiry also raise plenty of questions. NSO Group’s customers are hackers who work for spy agencies; how hard would it be for them to tamper with the logs? In a statement, the company insisted this isn’t possible but declined to offer details.
If the logs aren’t disputed, NSO and its customers will decide together whether targets are legitimate, whether genuine crimes have been committed, and whether surveillance was done under due process of law or whether autocratic regimes spied on opponents.
Sunray, audibly exasperated, says he feels as if secrecy is forcing him to operate with his hands tied behind his back.
“It’s frustrating,” he told me. “We’re not naïve. There have been misuses. There will be misuses. We sell to many governments. Even the US government—no government is perfect. Misuse can happen, and it should be addressed.” But Sunray also returns to the company’s standard response, the argument that underpins its defense in the WhatsApp lawsuit: NSO is a manufacturer, but it’s not the operator of the spyware.
We built it but they did the hacking—and they are sovereign nations.
That’s not enough for many critics. “No company that believes it can be the independent watchdog of their own products ever convinces me,” says Marietje Schaake, a Dutch politician and former member of the European Parliament. “The whole idea that they have their own mechanisms while they have no problem selling commercial spyware to whoever wants to buy it, knowing that it’s used against human rights defenders and journalists—I think it shows the lack of responsibility on the part of this company more than anything.” So why the internal push for more transparency now? Because the deluge of technical reports from human rights groups, the WhatsApp lawsuit, and increasing governmental scrutiny threaten NSO’s status quo. And if there is going to be a new debate over how the industry gets regulated, it pays to have a powerful voice.
Growing scrutiny
Lawful hacking and cyber-espionage have grown enormously as a business over the past decade, with no signs of retreat. NSO Group’s previous owners bought the company in 2014 for $130 million, less than one-seventh of the valuation it was sold for last year. The rest of the industry is expanding too, profiting from the spread of communications technology and deepening global instability. “There’s no doubt that any state has the right to buy this technology to fight crime and terrorism,” says Amnesty International’s deputy director, Danna Ingleton. “States are rightfully and lawfully able to use these tools. But that needs to be accompanied more with a regulatory system that prevents abuses and provides an accountability mechanism when abuse has happened.” Shining a much brighter light on the hacking industry, she argues, will allow for better regulation and more accountability.
Earlier this year Amnesty International was in court in Israel arguing that the Ministry of Defense should revoke NSO’s license because of abuses of Pegasus. But just as the case was starting, officials from Amnesty and 29 other petitioners were told to leave the courtroom: a gag order was being placed on the proceedings at the ministry’s urging. Then, in July, a judge rejected the case outright.
“I do not believe as a matter of principle and as a matter of law that NSO can claim a complete lack of responsibility for the way their tools are being used,” says United Nations special rapporteur Agnès Callamard. “That’s not how it works under international law.” Callamard advises the UN on extrajudicial executions and has been vocal about NSO Group and the spyware industry ever since it emerged that Pegasus was being used to spy on friends and associates of Khashoggi shortly before he was murdered. For her, the issue has life-or-death consequences.
“We’re not calling for something radically new,” says Callamard. “We are saying that what’s in place at the moment is proving insufficient, and therefore governments or regulatory agencies need to move into a different gear quickly. The industry is expanding, and it should expand on the basis of the proper framework to regulate misuse. It’s important for global peace.” There have been calls for a temporary moratorium on sales until stronger regulation is enacted, but it’s not clear what that legal framework would look like. Unlike conventional arms, which are subject to various international laws, cyber weapons are currently not regulated by any worldwide arms control agreement. And while nonproliferation treaties have been suggested, there is little clarity on how they would measure existing capabilities, how monitoring or enforcement would work, or how the rules would keep up with rapid technological developments. Instead, most scrutiny today is happening at the national legal level.
In the US, both the FBI and Congress are looking into possible hacks of American targets, while an investigation led by Senator Ron Wyden’s office wants to find out whether any Americans are involved in exporting surveillance technology to authoritarian governments. A recent draft US intelligence bill would require a government report on commercial spyware and surveillance technology.
The WhatsApp lawsuit, meanwhile, has taken aim close to the heart of NSO’s business. The Silicon Valley giant argues that by targeting California residents—that is, WhatsApp and Facebook—NSO has given the court in San Francisco jurisdiction, and that the judge in the case can bar the Israeli company from future attempts to misuse WhatsApp’s and Facebook’s networks. That opens the door to an awful lot of possibilities: Apple, whose iPhone has been a paramount NSO target, could feasibly mount a similar legal attack. Google, too, has spotted NSO targeting Android devices.
And financial damages are not the only sword hanging over NSO’s head. Such lawsuits also bring with them the threat of courtroom discovery, which has the potential to bring details of NSO’s business deals and customers into the public eye.
“A lot depends on exactly how the court rules and how broadly it characterizes the violation NSO is alleged to have committed here,” says Alan Rozenshtein, a former Justice Department lawyer now at the University of Minnesota Law School. “At a minimum, if NSO loses this case, it calls into question all of those companies that make their products or make their living by finding flaws in messaging software and providing services exploiting those flaws. This will create enough legal uncertainty that I would imagine these would-be clients would think twice before contracting with them. You don’t know if the company will continue to operate, if they’ll get dragged to court, if your secrets will be exposed.” NSO declined to comment on the alleged WhatsApp hack, since it is still an active case.
“We are always spied on”
In Morocco, Maâti Monjib was subjected to at least four more hacking attacks throughout 2019, each more advanced than the one before. At some point, his phone browser was invisibly redirected to a suspicious domain that researchers suspect was used to silently install malware. Instead of something like a text message that can raise the alarm and leaves a visible trace, this one was a much quieter network injection attack, a tactic valued because it’s almost imperceptible except to expert investigators.
On September 13, 2019, Monjib had lunch at home with his friend Omar Radi, a Moroccan journalist who is one of the regime’s sharpest critics. That very day, an investigation later found, Radi was hit with the same kind of network injection attacks that had snared Monjib. The hacking campaign against Radi lasted at least into January 2020, Amnesty International researchers said. He’s been subject to regular police harassment ever since.
At least seven more Moroccans received warnings from WhatsApp about Pegasus being used to spy on their phones, including human rights activists, journalists, and politicians. Are these the kinds of legitimate spying targets—the terrorists and criminals—laid out in the contract that Morocco and all NSO customers sign? In December, Monjib and the other victims sent a letter to Morocco’s data protection authority asking for an investigation and action. Nothing formally came of it, but one of the men, the pro-democracy economist Fouad Abdelmoumni, says his friends high up at the agency told him the letter was hopeless and urged him to drop the matter. The Moroccan government, meanwhile, has responded by threatening to expel Amnesty International from the country.
What’s happening in Morocco is emblematic of what’s happening around the world. While it’s clear that democracies are major beneficiaries of lawful hacking, a long and growing list of credible, detailed, technical, and public investigations shows Pegasus being misused by authoritarian regimes with long records of human rights abuse.
“Morocco is a country under an authoritarian regime who believe people like Monjib and myself have to be destroyed,” says Abdelmoumni. “To destroy us, having access to all information is key. We always consider that we are spied on. All of our information is in the hands of the palace.”
This story was part of our September/October 2020 issue.
"
|
1,440 | 2,019 |
"Champagne, shotguns, and surveillance at spyware’s grand bazaar | MIT Technology Review"
|
"https://www.technologyreview.com/2019/11/25/131837/champagne-shotguns-and-surveillance-at-spywares-grand-bazaar"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Champagne, shotguns, and surveillance at spyware’s grand bazaar By Patrick Howell O'Neill archive page There are two open entrances into the Parc des Expositions, a short train ride from the heart of Paris just outside of Charles de Gaulle airport. Walk into Hall 5 and you might see what looks like an Apple store for armies: powerful weapons positioned to look beautiful on minimal marble tables, or hanging from walls like pieces of art.
The presentation is aimed at selling to hordes of buyers from countries all around the world, from the United States to the United Arab Emirates—some in full military dress and most outfitted more discreetly, with name tags hidden in their inside jacket pockets. The machine gun sales pitches and spyware product demos take place in an atmosphere of mild discomfort, the consequence of a secretive industry doing business in a semi-public space.
The 246,000-square-meter Parc des Expositions complex is home to Milipol, the world’s biggest gathering of companies selling the latest, greatest, and most powerful technology to militaries, police, and intelligence agencies. In a world of seemingly permanent geopolitical tension and security fears, this is the event no enterprising arms dealer can miss.
It’s Hall 4 that’s devoted to the future of cyber, a domain of war where demand is growing rapidly. Some countries have the goods in spades, but most countries lack powerful cyber capabilities, and every country wants more.
The physical side of the show—the guns, grenades, and armored vehicles—has experienced an enduring boom ever since 9/11. But it’s the cyber side—the spyware, interception kits, and cybersurveillance tools—that has seen rapid growth ever since the Arab Spring proved that the internet is a tool powerful enough to bring down regimes. Those very same regimes, and many others, come to shows like Milipol looking to buy their way to stronger control.
Milipol is where you buy it.
The future of cyber

Milipol is a rare moment of relative transparency for an industry used to secrecy.
“It’s important to let people know what companies are out there and who they’re selling to,” says Edin Omanovic of the UK-based organization Privacy International. “It’s important to know if regulation is in place. You can’t hold these decisions to account if you don’t know what the decisions are.” NSO Group, the Israeli company currently embroiled in a long list of controversies over alleged spyware abuse, is one of many vendors to set up shop at the conference. NSO’s booth is one of the biggest on the floor, but relatively private, with a dark cyberpunk theme. Skyscraper walls keep visitors discreet and conversations private as deals are being made.
The surveillance industry has come under intense scrutiny in recent years, none more so than the spotlight on NSO Group. The company has been sued by WhatsApp for allegedly spying on politicians, journalists, and human rights activists in India, Mexico, the United Arab Emirates, Bahrain, and Saudi Arabia. It denied all allegations of wrongdoing while it went on a public relations offensive and adopted the United Nations guidelines on human rights.
Israel is a dominant country in the industry thanks to “the Israeli mind and the Israeli experience,” says Alon Shahak of the Israel Export Institute. Its life in a hostile neighborhood has given rise to a world-class hacking industry by necessity. Drones, weapons, and armor companies from Israel are hawking their goods at Milipol too.
At Milipol you can buy potent zero-day vulnerabilities or powerful data interception equipment plus the drones, vehicles, or backpacks to move hacking tools wherever needed. All of that is a short walk from the machine guns, grenades, and state-of-the-art mine detection tech. If you need matching body armor and holsters for you and your attack dogs, I know the booth you should go to.
Despite the tens of thousands of visitors, the company organizing the event, Comexposium, makes sure attendance is limited to professionals. There are no hobbyists or NRA gun fetishists here, says Michael Weatherseed, who runs the security unit at Comexposium—not unless they have a weighty professional title too.
But it’s cyber that’s growing the fastest as the sector attracts new people, companies, buyers, and technology thanks in part to the success of high-profile, high-controversy firms like NSO Group.
An NSO employee who declined to be named says the company is unreasonably attacked in spite of efforts to abide by United Nations guidelines on human rights. The reported government abuse of NSO Group’s hacking tools has given rise to ethical concerns inside the company, the employee says, but management has done an effective job of making employees feel that those concerns are heard and that outside criticism is overblown.
The big question facing these increasingly powerful companies is whether there is anything they should or even could do to prevent abuse of their powerful surveillance technologies by the governments that pay them millions. The tech has targeted terrorists and criminals but also opposition politicians, human rights activists, journalists, and many others.
NSO Group is said to have declined or canceled multiple contracts because of abuse concerns, although there is no specific information on those cases. The employee is unaware of any team within the company paid specifically to find and prevent abuse by government customers, a common practice at many big tech companies.
The company responded by saying they “always investigate whenever we become aware of a well-founded report of alleged unlawful digital surveillance and communication interception that might involve a customer’s use of our products.” Such an investigation is carried out by the Governance, Risk and Compliance Committee.
"While misuse is extremely rare, the company takes it seriously and considers using the technology for anything other than prevention or investigation of crime and terrorism a misuse," a company spokesperson says.
However, NSO Group did not respond to the question of whether anyone within the company is proactively looking for abuse as opposed to responding to outside reports.
Future-facing

Sometimes acts of war can get in the way of the business of war.
The Paris show is Milipol’s flagship event. Others take place in Kuala Lumpur, to serve Asian customers, and in Qatar, for the demanding Middle East region.
The Qatar event is getting complicated because the Saudi Arabians and their regional Emirati allies, some of the biggest and wealthiest customers for these companies, refuse to enter the country because of diplomatic conflict. That clash has roots in the Arab Spring, the Yemeni civil war, and ongoing terrorism throughout the region.
“But we did the Qatar event last year anyway,” Weatherseed says. “It was still a record year. No Saudis or Emiratis this time, but we still had the Kuwaitis, Bahrainis, and countries from Northern Africa, Western Asia, and Southern Asia. Despite it all, everything worked well. And my prediction is that the cyber sector at these events will only continue to grow.” The Persian Gulf is home to a trio of major events where spyware companies and government officials mingle to make multimillion-dollar deals. Milipol, ISS World , and IDEX all attract big crowds with increasing demands. Tal Dilian, the CEO of the surveillance company Intellexa, says Asian and African governments feel more comfortable purchasing surveillance equipment and spyware in the region—where, critics say , there is less legal oversight of the booming industry.
But judging by the expensive bottles of champagne several salespeople popped at the end of this show, it’s safe to say Paris was yet another big surveillance industry success.
Update: The article has been updated to include NSO Group's response to questions about anti-abuse.
"
|
1,442 | 2,023 |
"Marc Andreessen Is (Mostly) Wrong This Time | WIRED"
|
"https://www.wired.com/story/artificial-intelligence-marc-andreessen-labor-politics"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gideon Lichfield Ideas Marc Andreessen Is (Mostly) Wrong This Time Illustration: TUMEGGY/Getty Images Save this story Save Save this story Save Marc Andreessen occasionally sets the world on its ear with a sweeping hypothesis about the dawn of a new technological era. In his legendary 2011 blog post “ Why Software Is Eating the World ,” the cofounder of Andreessen Horowitz made the then-novel, now-undeniable case that even the most old-school industrial companies would soon have to put software at their core. In 2020, as Covid-19 caught the world desperately short of masks and nasal swabs, he published “ It’s Time To Build ,” a call to arms for reviving investment in technologies that could solve urgent problems like pandemics, climate change, crumbling infrastructure, and housing shortages.
Now he’s back with a 7,000-word screed, another stab at framing the narrative; this time, the story is that “AI will not destroy the world, and in fact may save it.” Much of it is devoted to debunking AI doom scenarios, and the rest to touting AI as little short of a civilizational savior.
This is of course predictable. Andreessen invests in technological revolutions, so he has little incentive to do anything but hype them up. His post does have value, though, in two ways. First, its obvious blind spots are a useful guide to the thinking of the biggest AI hypesters and where they go astray. Second, its takedown of some of the more hysterical AI fears is actually (somewhat) on target.
So let’s dive in.
Andreessen tips his hand early by offering “a brief description of AI”: “The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it ” (my emphasis).
This seemingly innocuous parallel with human thinking, much like the phrase “artificial intelligence” itself, elides the vast gulf in capability between human minds and the current state of machine learning. Large language models (LLMs) are statistical inference algorithms. They predict the next likeliest thing in a sequence of things, such as words in a sentence. They produce what looks very much like human writing because they’ve been trained on vast quantities of human writing to predict what a human would write.
You’ll have already noticed that this is not even remotely similar to how you “understand, synthesize and generate knowledge.” You, like every human, have learned about the world by directly interacting with it. You’ve developed conceptions of physical objects such as trees and tables, of abstractions such as poverty and ethics, and of other people’s thoughts and feelings. You’ve learned to use language to talk about and process those conceptions, but language is just a layer for you, a way to share and refine your mental picture of the world. For LLMs, there is no mental picture; language is all there is.
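To make the “statistical pattern” point concrete, here is a deliberately tiny, hypothetical sketch—nothing like GPT-4’s real architecture, and the toy corpus and every name in it are my own illustrations—of what “predict the next likeliest word” means at its crudest: count which word tends to follow which, then keep emitting the most frequent continuation.

```python
# Toy next-word predictor: a bigram frequency table, the crudest possible
# stand-in for the "statistical inference" an LLM performs at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word in the toy corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start_word: str, length: int = 6) -> str:
    """Greedily emit the most frequent continuation at every step."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the cat sat" -- fluent-ish, no understanding
```

A real model swaps the counting table for a neural network trained on a large fraction of the internet, but the objective is the same: guess the next token, with no picture of the world behind the words.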
To be sure, LLMs have made surprising leaps in ability recently, leading Microsoft researchers to claim that GPT-4, the latest model from OpenAI, contains “sparks” of general intelligence.
And LLMs are not the only avenue of AI research. It can’t be ruled out that machines will eventually develop something more like our intelligence—though there are also good reasons to think it will end up being more alien than human.
However, it’s essential to Andreessen’s argument that you perceive AI as headed toward an ideal version of full humanlike intelligence, because what he does next is enumerate some of the ways this form of AI will make the world better.
In Andreessen’s promised AI-augmented world, “every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” Every adult will have “an AI assistant/coach/mentor/trainer/advisor/therapist” that “will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.” Giving AI coaches to influential people like CEOs and government officials “may be the most important of all” the augmentations because “the magnification effects of better decisions by leaders across the people they lead are enormous.” There are two key blind spots here. First is the assumption that these AI sidekicks will be “infinitely knowledgeable”—a gigantic stretch given that right now LLMs routinely make up facts out of whole cloth and sometimes continue to do so even when their human users point out the errors. (They do this because, as mentioned above, LLMs are just statistical patterns of words, with no conception of the reality behind the words.) More troubling is the assumption that humans would use even a far superior AI to make “better” decisions. Better for whom? An “infinitely patient” and “infinitely helpful” AI coach could just as happily help its human master wreak genocide as devise a more efficient manufacturing process or a fairer benefits framework.
OK, you might say, but what if the AI sidekick were somehow programmed not merely to maximize its human’s capabilities but to nudge them away from sociopathic decisions? Fine—except that Marc Andreessen would strenuously oppose that suggestion.
A large chunk of his post is devoted to attacking one of the big fears about AI: that it will spread hate speech and misinformation. To be clear, he doesn’t argue that it won’t spread hate speech and misinformation. He merely says that policing social media has been fraught and complicated (true!), that the people who believe in doing it are mostly on the political left (also true!), that policing AI will be even more fraught because “AI is highly likely to be the control layer for everything in the world” (umm, OK?) and therefore it shouldn’t be policed, regardless of the consequences.
Now, this is a position one can choose to take, but it’s also fundamentally at odds with the idea that people’s AI coaches—even if they ever get past the point of making shit up and become useful—will make the world better. If, as Andreessen insists, programming AI to have certain values is off the table, then all an AI coach will do is help humans get better at making the world whatever they make it, which … well, take a look around you.
Sure, brilliant scientists will come up with even more brilliant life-saving medicines and climate-saving battery chemistries. But every rapacious, criminal, greedy, and manipulative schemer out there will also get better at taking advantage of other humans. Not a single technology in history has yet changed basic human nature.
Another example of Andreessen’s dubious logic emerges when he tackles the common fear that AI will leave everyone unemployed. His argument here is that AI is no different from previous technological advances, which have not eliminated jobs. This is certainly true in the long run: New technology destroys certain kinds of jobs and eventually creates others. But the way he reaches this conclusion is almost laughable in its simplicity.
Andreessen begins by setting up a straw man: the notion that AI will take “ all our jobs.” Like, literally all. He then knocks it down by pointing to the so-called “lump of labor fallacy,” which is “the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it—and if machines do it, there will be no work for people to do.” I’d be surprised if Andreessen’s highly educated audience actually believes the lump of labor fallacy, but he goes ahead and dismantles it anyway, introducing—as if it were new to his readers—the concept of productivity growth. He argues that when technology makes companies more productive, they pass the savings on to their customers in the form of lower prices, which leaves people with more money to buy more things, which increases demand, which increases production, in a beautiful self-sustaining virtuous cycle of growth. Better still, because technology makes workers more productive, their employers pay them more, so they have even more to spend, so growth gets double-juiced.
There are many things wrong with this argument. When companies become more productive, they don’t pass savings on to customers unless they’re forced to by competition or regulation. Competition and regulation are weak in many places and many industries, especially where companies are growing larger and more dominant—think big-box stores in towns where local stores are shutting down. (And it’s not like Andreessen is unaware of this. His “It’s time to build” post rails against “forces that hold back market-based competition” such as oligopolies and regulatory capture.) Moreover, large companies are more likely than smaller ones both to have the technical resources to implement AI and to see a meaningful benefit from doing so—AI, after all, is most useful when there are large amounts of data for it to crunch. So AI may even reduce competition, and enrich the owners of the companies that use it without reducing prices for their customers.
Then, while technology may make companies more productive, it only sometimes makes individual workers more productive (so-called marginal productivity). Other times, it just allows companies to automate part of the work and employ fewer people. Daron Acemoglu and Simon Johnson’s book Power and Progress, a long but invaluable guide to understanding exactly how technology has historically affected jobs, calls this “so-so automation.” For example, take supermarket self-checkout kiosks. These don’t make the remaining checkout staff more productive, nor do they help the supermarket get more shoppers or sell more goods. They merely allow it to let go of some staff. Plenty of technological advances can improve marginal productivity, but—the book argues—whether they do depends on how companies choose to implement them. Some uses improve workers’ capabilities; others, like so-so automation, only improve the overall bottom line. And a company often chooses the former only if its workers, or the law, force it to. (Hear Acemoglu talk about this with me on our podcast Have a Nice Future.)
The real concern about AI and jobs, which Andreessen entirely ignores, is that while a lot of people will lose work quickly, new kinds of jobs—in new industries and markets created by AI—will take longer to emerge, and for many workers, reskilling will be hard or out of reach. And this, too, has happened with every major technological upheaval to date.
Another thing Andreessen would like you to believe is that AI won’t lead to “crippling inequality.” Once again, this is something of a straw man—inequality doesn’t have to be crippling to be worse than it is today. Oddly, Andreessen kinda shoots down his own argument here. He says that technology doesn’t lead to inequality because the inventor of a technology has an incentive to make it accessible to as many people as possible. As the “classic example” he cites Elon Musk’s scheme for turning Teslas from a luxury marque into a mass-market car—which, he notes, made Musk “the richest man in the world.” Yet as Musk was becoming the richest man in the world by taking the Tesla to the masses, and many other technologies have also gone mainstream, the past 30 years have seen a slow but steady rise in income inequality in the US. Somehow, this doesn’t seem like an argument against technology fomenting inequality.
We now come to the sensible things in Andreessen’s opus. Andreessen is correct when he dismisses the notion that a superintelligent AI will destroy humanity. He identifies this as just the latest iteration of a long-lived cultural meme about human creations run amok (Prometheus, the golem, Frankenstein), and he points out that the idea that AI could even decide to kill us all is a “category error”—it assumes AI has a mind of its own. Rather, he says, AI “is math—code—computers, built by people, owned by people, used by people, controlled by people.” This is absolutely true, a welcome antidote to the apocalyptic warnings of the likes of Eliezer Yudkowsky —and entirely at odds with Andreessen’s aforementioned claim that giving everyone an “AI coach” will make the world automatically better. As I’ve already said: If people build, own, use, and control AI, they will do with it exactly what they want to do, and that could include frying the planet to a crisp.
This assertion brings us to the second sensible point. This is where Andreessen addresses the fear that people will use AI to do bad things, such as design a bioweapon, hack into the Pentagon, or commit an act of terrorism. These are legitimate fears, he says, but the solution is not to restrict AI.
He’s right up to a point. The kinds of bad things that people could do with AI are already illegal, because they’re bad. It’s a general principle of good lawmaking to target the harm, not the means. Murder is murder whether it’s carried out with a gun, a knife, or an AI-controlled drone. Racial discrimination in hiring is the same whether an employer looks at your picture, infers your race from your name on a résumé, or uses a screening algorithm that contains inadvertent hidden bias against people of color. And legislation designed to curb a specific technology runs the risk of becoming out of date as the technology changes.
Nonetheless, some means of doing harm are so much more effective than others that they require special legislation. Even in the most permissive US states, not everyone is allowed to own and carry a gun, let alone bigger weapons. If murder by AI-controlled drone becomes a thing, you can be pretty sure we’ll see tougher drone laws.
AI-enabled crimes may also require changes not to laws as much as to law enforcement.
The authorities may need new techniques to investigate such crimes, just as they’ve needed to learn to hunt down drug dealers who trade on the dark web using cryptocurrency.
In some cases the solution to a problem is not new laws, but for industry to adopt standards. It’s hard to make spreading disinformation a crime, especially in the United States, given how politicized the definition of disinformation would be. But, Andreessen notes, “If you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures.” Yes, and I’d say a couple of further steps are needed. First, AI companies should also adopt a common standard for putting digital signatures on AI-generated images and videos. (There is at least one such standard already, the Content Authenticity Initiative , which also offers a way to verify content even if it’s had a signature stripped off.) Then social media platforms, news sites, and the like should label any such content, so anyone seeing it knows its provenance without having to go search for it.
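To sketch what that might look like in practice—this is a minimal, hypothetical example built on a generic signing library, not the Content Authenticity Initiative’s actual C2PA format, and the key handling is simplified to the point of caricature—a publisher or AI service would sign the bytes it emits, and anyone could later verify them against its public key.

```python
# Minimal content-signing sketch: sign the raw bytes, verify them later.
# Real provenance standards (e.g. C2PA) embed far richer metadata than this.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """The publisher (or generative-AI service) signs what it emits."""
    return private_key.sign(content)

def verify_content(public_key, content: bytes, signature: bytes) -> bool:
    """Anyone holding the public key can check the bytes are untampered."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

publisher_key = Ed25519PrivateKey.generate()
image_bytes = b"...raw bytes of an AI-generated image..."
signature = sign_content(publisher_key, image_bytes)

print(verify_content(publisher_key.public_key(), image_bytes, signature))            # True
print(verify_content(publisher_key.public_key(), image_bytes + b"edit", signature))  # False
```

Platforms and news sites could then check such signatures automatically and attach the provenance labels described above.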
Andreessen has set the narrative for major technological shifts before. It’s welcome, in some ways, that he’s trying to do so again. Much of the current public discourse on AI is such a hysterical mix of apocalyptic paranoia and breathless utopianism that it’s hard for anyone to have a clear position. Andreessen’s words carry weight with many people, and it’s good that he’s knocking down some of the most obvious misguided fears.
Unfortunately, the way he does it feels almost like a crass self-parody of his “Software Is Eating the World” manifesto. But if nothing else, it helps illuminate the hollowness of both the doom and the hype. The problem—as I’ve written before —is that people apparently find it really hard to think in any other terms.
"
|
1,443 | 2,018 |
"The Decentralized Internet Is Here, With Some Glitches | WIRED"
|
"https://www.wired.com/story/the-decentralized-internet-is-here-with-some-glitches"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business The Decentralized Internet Is Here, With Some Glitches Proponents talk about the decentralized internet as a kind of digital Garden of Eden that can restore the freedom and good will of the internet's early days.
I usually write in Google's online word processor Google Docs, even when noting the company's shortcomings.
This article is different: it was drafted in a similar but more private service called Graphite Docs.
I discovered it while exploring a nascent and glitch-ridden online realm known as the decentralized internet.
Proponents as varied as privacy activists and marquee venture capitalists talk about the decentralized internet as a kind of digital Garden of Eden that can restore the freedom and goodwill of the internet's early days. The argument goes that big tech companies have locked up our data and minds inside stockholder-serving platforms that crush competition and privacy. Ultra-private, socially conscious decentralized apps, sometimes dubbed DApps, will give us back control of our data and let startups slay giants once more.
"The best entrepreneurs, developers, and investors have become wary of building on top of centralized platforms," Chris Dixon, a partner with investor Andreessen Horowitz, wrote last month in a kind of manifesto for a more decentralized internet. Tim Berners-Lee, the inventor of the World Wide Web, has similar concerns.
Graphite Docs and some other early DApps are far from perfect but show there's something to the hype. A life less dependent on cloud giants is possible, if not yet easy.
When you type in Google Docs, every word is sent to the ad company's servers, where you must take it on faith your data will be left alone. Despite Google's privacy policies and strong reputation for security, it has the technical ability to do whatever it wants with information you entrust to it. When I tapped these sentences into Graphite Docs they received a higher level of protection.
I could still access and edit my document from different computers, and even invite collaborators, because it was backed up online as I worked. But the data was stored in an encrypted form, on a network of computers unable to read my data. The encryption keys needed to unscramble it never left my own devices, meaning that unlike with most of the online services I use, my data was solely under my control.
All that was possible because Graphite Docs is built on top of Blockstack, a platform for decentralized internet apps developed by a startup of the same name. You access apps on the platform through a browser, but they run locally, on your computer, with help from software you install from Blockstack. That software helps you create the ID you need to log in to Blockstack apps and stores your encryption keys. And it gives you a choice of where you want to store your encrypted data: your own server or the Gaia storage network powered by Blockstack and some early adopters who have contributed their own computers to the cause. You can access your data from anywhere—as long as you remember your 12-word encryption keyphrase.
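The general pattern described here—derive a key from a passphrase on your own device, encrypt locally, and upload nothing but ciphertext—can be sketched roughly as follows. This is an illustrative example with assumed library and parameter choices, not Blockstack’s or Graphite Docs’ actual implementation.

```python
# Rough sketch of client-side encryption: the passphrase-derived key never
# leaves the device, so whoever stores the blob only ever sees ciphertext.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Stretch a human-memorable passphrase into a 32-byte symmetric key."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)                                   # stored alongside the ciphertext
passphrase = "twelve word keyphrase remembered by the user"  # stays on the device
box = Fernet(key_from_passphrase(passphrase, salt))

ciphertext = box.encrypt(b"draft of this article")      # this is all the storage network sees
assert box.decrypt(ciphertext) == b"draft of this article"
```

Whoever holds the passphrase can recreate the key on any machine, which is why losing the keyphrase means losing the data.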
As you’ve probably gathered, getting started on the decentralized internet isn't as easy as downloading a new app from an app store. The people behind these clunky, early apps claim that it eventually will be. They argue cryptocurrencies like bitcoin, powered by many computers around the globe, and the datastores of a similar design known as blockchains, show that robust, secure infrastructure doesn’t need central authorities or servers. "We’re working to build a new internet, and the end goal is everyone you know is on it every single day," Blockstack cofounder Ryan Shea says.
For a computing platform to become that ubiquitous it must attract two kinds of people: developers to build news apps and services, and users. Decentralized apps and services are already appearing aimed at both audiences.
Several groups are working on alternatives to cloud storage providers like Amazon. When I uploaded photos to a service called Storj, for example, they were chopped up, encrypted, and distributed among a network of computers owned by strangers who had volunteered storage space on their systems in return for fees paid in cryptocurrency. You may know the basic idea from fictional startup Pied Piper in the sitcom Silicon Valley , but it works—albeit in my experience slowly and not always reliably.
Many decentralized internet projects are, like Graphite Docs, pitched as more private versions of existing products. The service, which offers spreadsheets as well as documents, was founded by developer and writer Justin Hunter, who plans to make money by offering an enterprise version to organizations that value privacy. Another project, OpenBazaar, is something like a decentralized eBay. When I browsed last week I saw lemon matcha tea, postage stamps, and hemp oil available to anyone willing to pay in bitcoin or a spinoff currency, bitcoin cash.
In December, Blockstack put up $50,000 to encourage people to build decentralized messaging apps that could compete with apps such as Slack. Apps such as iMessage, WhatsApp, and Signal already encrypt what you write locally, but they still rely on a central server to move messages around.
Some parts of the emerging decentralized internet have serious backing from people who helped build the current generation of tech giants, with their centralized services. Now that their creations are proving difficult to compete with, venture capitalists appear to be seeking returns elsewhere.
Venture firms Sequoia and Andreessen Horowitz, backers of Google and Facebook, respectively, have invested in a decentralized data-storage network called Filecoin, a creation of startup Protocol Labs intended to compete with conventional cloud storage. As with many decentralized internet projects, the company has raised money by selling a form of cryptocurrency that will later be used to motivate participation in the final system. Blockstack, a public benefit corporation, recently raised $50 million in the same way, adding to the more than $5 million in venture funding it has received from investors including Union Square Ventures, an early backer of Twitter.
Despite that blue-chip backing, the decentralized internet remains a niche interest closely allied with the idiosyncratic world of cryptocurrencies—and it shows. When I visited DTube, billed as a decentralized variant of YouTube, the trending videos were predominantly male vloggers opining on topics including bitcoin trading and the need to dissolve the US government.
Videos were sometimes very slow to load. But they weren’t interrupted or obscured by ads, and I didn't have to wonder whether watching an interview with conspiracy theorist G. Edward Griffin would be cataloged by an ad-profiling algorithm and haunt me later. Users can tip video creators with the cryptocurrency steem, and popular videos can bring in hundreds of dollars' worth. The video files themselves are stored on the interplanetary file system, IPFS, a decentralized file-sharing system powered by volunteers.
It’s a clever design but one that illustrates how decentralized systems might face legal and governance problems. Some parts of the IPFS network support copyright takedowns, but they can be worked around. DTube’s operators say they can’t censor videos on the service, and that content will only disappear if the site’s users overwhelmingly down-vote it. If the community’s actions don’t meet the expectations of copyright lawyers, or end up penalizing certain kinds of content, expect things to get complicated.
Can these early, clunky decentralized apps ever compete with the centralized services that dominate today? When I ask Stavros Korokithakis, a software developer in Greece, he replies, "Certainly." He and a friend have built a decentralized app called Hearth , described as a cross between Dropbox and a web-hosting service.
Korokithakis says he wants to help people create personal and idiosyncratic web pages as they did on the "old web" of the 1990s, before predefined and ad-supported social profiles like those offered by Facebook ruled the world. But he concedes that taking on the centralized giants is a tough challenge. For now, decentralized apps’ clearest benefit is their resistance to censorship, he says, yet “the average person doesn't feel the need to evade censorship." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg David Pakman, a partner with venture firm Venrock, which has invested in a forthcoming decentralized video-streaming network called Props, argues that decentralized apps will soon have more to offer. New platforms begin by trying to emulate old ones but take off when people create new services that were previously unimaginable, he says.
It's a convenient way to avoid being pinned down on just what the decentralized web will be good for. It’s also true. The creators of the Apple II did not predict the success of the first spreadsheet, VisiCalc, the killer app that helped establish the utility of PCs in workplaces. The DoD-backed academics who laid the foundations of the internet didn't foresee or build Facebook.
Finding the killer apps of the decentralized internet will take more time, people, and money than have been thrown at the problem so far. Pakman says that societal attitudes to power and big tech companies appear to be in the right place to deliver them. "There’s massive distrust in centralized everything," he says. "We don't trust the government, don't go to church or synagogue, don't trust banks, and now we no longer trust tech companies."
"
|
1,444 | 2,020 |
"Russia Is Going to Up Its Game for the 2020 Elections | WIRED"
|
"https://www.wired.com/story/russia-2020-election-security-mark-warner"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Laslo Security Russia Is Going to Up Its Game for the 2020 Elections "You don't need to change votes to cause chaos," Senator Mark Warner (D-Virginia) says.
Andrew Harnik/AP Save this story Save Save this story Save One week after Robert Mueller’s testimony shined a spotlight, once again, on election interference, Senate Majority Leader Mitch McConnell is feeling the heat. The leader turned heads on the Senate floor Monday as he rose to decry critics who have dubbed him “a Russian asset” and “Moscow Mitch” for stonewalling congressional measures to improve election security.
And with momentum building in the House to formally start impeachment proceedings against President Trump, the pressure is unlikely to let up anytime soon.
Focusing on election interference from 2016 is backwards thinking, though, at least according to Virginia Senator Mark Warner. With 2020 just around the corner, he tells WIRED—in an exclusive interview—that the upcoming election is where both parties need to direct their attention right now.
As the top-ranking Democrat on the Senate Intelligence Committee, Warner has long been a vocal proponent of new legislation to strengthen election protections, such as the Honest Ads Act, which would compel Silicon Valley firms to disclose when political ads are paid for by a foreign nation. He’s also behind a bill that would require campaigns to alert federal officials if they’re approached by a foreign operative offering information or other assistance. Both bills have bipartisan support—Senator Susan Collins became the first Republican to cosponsor the Foreign Influence Reporting in Elections Act earlier this week.
Even as GOP leaders try to position election security as a partisan issue, Warner—a former governor of Virginia and a cofounder of the firm that eventually became Nextel—has maintained the respect of his colleagues across the aisle. But his frustration seems to be growing, especially now that Trump has tapped Representative John Ratcliffe (R-Texas) to be his next director of national intelligence. Unlike Senate Minority Leader Chuck Schumer, who has already come out opposed to Ratcliffe, Warner tells WIRED he’s still got some patience left. Even if it’s wearing thin.
This transcript is slightly edited for length and clarity.
WIRED: After Mueller testified, the president and Republicans say case closed. What do you make of that? Mark Warner: I'm not here to relitigate 2016, or the Mueller testimony, specifically. I would point out, out of the Mueller investigation: 37 indictments, the president's national security adviser pled guilty. The president's campaign manager pled guilty. The president's deputy campaign manager pled guilty. The president's chief political adviser is coming to trial in the fall, Roger Stone. The attorney general had to resign. There were literally hundreds of contacts between the Trump campaign and Russian agents.
That's not normal. And I think the biggest takeaway from the Mueller testimony was that the Russians who attacked us in 2016 are still attacking us and, in Bob Mueller's words, on a daily basis. You combine that with the warnings from Trump's own FBI director [Christopher Wray] and Trump's own director of national intelligence [Dan Coats]. And one of the things that concerns me the greatest is that we've not done more to protect the integrity of our election system in 2020.
I was just talking to your [Intelligence Committee] cochair, Senator [Richard] Burr, and he was saying the states in 2018 weathered these attacks, the national infrastructure is good on election security. Basically, case closed, again, not much more is needed.
I think everyone picked up their game in 2018, including the Department of Homeland Security, and our intelligence community was more active as well. But the intelligence community's own reporting was that Russia didn't throw its full force of efforts in 2018. Chances are they'll reserve those for the presidential election. So I think there is some low-hanging fruit that would get 75 votes on the floor of the Senate—if we could get these bills to the floor of the Senate.
"If you add up all Russia spent in the Brexit vote, the French presidential elections, and the 2016 American elections, it’s less than the cost of one new F-35 airplane." Senator Mark Warner I think there ought to be an affirmative obligation that if a foreign government, the Kremlin, offers you campaign help, your obligation ought to be not to say thank you, but to report to the FBI. I think we ought to make sure that every polling station in America has a paper ballot backup, so that if a machine was hacked, you've still got ability to protect the integrity of the voting system. And I haven't met anyone that doesn't think we need some basic guard rails around the manipulation of Facebook, Twitter, and Google by foreign entities and others. So at least there ought to be the requirement that if somebody advertises on a political basis on Facebook, but in truth it's a foreign government, they ought to have the same disclosure requirements as somebody who advertises on radio or television.
Isn’t it a little bit ironic that in this highly digital era, we're going back to paper ballots?
I think we need to make sure that we use the best technology, but if technology, as we see from banks this week, can continue to be hacked into, if voting machines are not as protected as needed, if the private companies who control the voter files could have their information moved around … You don't need to change votes to cause chaos. I think people's overall confidence in the system goes up if there is that back check of having a paper ballot backup. Again, this is not saying we wouldn't still use voting machines, but across the election community everyone believes it's safer if you have that paper ballot backup that goes along with the voting counting machines.
And now we know we're getting attacked, and cybersecurity is on the top of many minds. Yet the president this week announced he’s nominating Representative John Ratcliffe to be DNI, who seems like more of a politician and a Trump supporter than someone from the intel community. Does that worry you?
It worries me greatly. The irony is that Donald Trump's appointees in the intel world—his director of national intelligence, Dan Coats; his director of the FBI, Chris Wray; his director of the CIA, Gina Haspel—have been pretty good about speaking truth to power, even when Trump did not want to hear the truth. They've been very good at not allowing America's intelligence to get politicized. While I'm going to give Mr. Ratcliffe the courtesy of a meeting, I fear that he is being appointed in the mold of a Bill Barr, the attorney general, who is basically a loyalist first to Donald Trump and doesn't maintain that kind of independence.
If there's ever been a time when everyone says that Russians and others will be back, when we've got as many potential conflict spots around the world, we need to make sure that the head of our national intelligence is not going to politicize the intelligence. That intelligence product goes to our military, it goes to the executive, it goes to us in the Congress. It cannot be a political product. And we’ve got to make sure that the intelligence community is going to be willing to speak truth to power, and that means telling Donald Trump the truth, even if he doesn’t want to hear it. And so far, based upon press reports, it appears to me that Mr. Ratcliffe doesn’t have much experience and that his audition was questioning Mueller and questioning the legitimacy of the Russians’ intervention in our electoral system. That’s pretty chilling.
What do you see as the biggest threats—or are there any new threats—facing America in 2020?
So I think there are a couple of new threats. One, Russia in 2016 was surprised at how vulnerable our systems were, our electoral systems. And how easy Facebook and Twitter and YouTube were to manipulate. So I think that playbook is now out there; they've used the same tactics in the Brexit vote [and] the French presidential elections. So my fear is we may not only see Russia, we could see Iran, we could potentially see China, which has a great deal of control over a number of Chinese tech companies, start to use these tools because they're cheap and effective. I like to point out that if you add up all Russia spent in the Brexit vote, the French presidential elections, and the 2016 American elections, it’s less than the cost of one new F-35 airplane. So Russia and our adversaries, I think, have decided the way to engage with us in conflict is not through straight-up old-school military but through cyber activities, misinformation and disinformation, increasingly trying to weaken and interfere, for example with our space communications, and I think Russia will up their game … and others … [It] means there will be more adversaries in 2020.
Second is, I think in 2016 we saw Russia try to misrepresent—the Russian agents misrepresent themselves as Americans on Facebook and Twitter by simply posting fake messages. The next iteration, the next generation of that will be the so-called “deepfake” technology, where an American may not be able to view what his eyes are telling him, because you’ll see an image of you or me or a political figure that may sound like that person but isn't that person at all.
Now, if McConnell doesn't allow some of these bills, like the Honest Ads Act or just broader election security bills, to come up, what do you think the Silicon Valley tech firms can do on their own?
Look, we've seen progress made by Facebook, Twitter, some progress made by Google.
But I don't think self-regulation is enough, particularly when regulation may mean they may not be collecting as much information as they like, or may mean they have to go against or limit some of the fake content. It goes against their very business model. So I think Facebook has made progress in particular, but some of the tools they have promised—for example, the ability to access campaign ads on an easy basis—are not effective at all.
So at the end of the day, when we're talking about something as critical as protecting the integrity of our democracy, when Americans lack faith in so many of our institutions to start with, if we don't go the extra mile and put in place a set of rules and regulations—and god forbid should Russia or Iran or another foreign enterprise massively interfere again—and we didn't do our duty, then shame on all of us.
This week, two fairly senior Senate Democrats called for impeachment proceedings to begin. Where are you on that? We started this conversation with you saying you don't want to relitigate 2016, but it seems like there's this growing chorus amongst Democrats to impeach.
I actually think Speaker [Nancy] Pelosi has navigated that challenge very well. I understand the frustrations with President Trump—his activities and tweets and antics. I think, though, the best way we can show that that's not who we are as Americans is to defeat him at the ballot box in a free and fair election. And what I worry about is if we don't guarantee that free and fair election, then we haven't done our job.
"
|
1,445 | 2,018 |
"Inside Facebook's Hellish Two Years—and Mark Zuckerberg's Struggle to Fix it All | WIRED"
|
"https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Nicholas Thompson Fred Vogelstein Business Inside the Two Years That Shook Facebook—and the World Eddie Guy Save this story Save Save this story Save One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.
“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.
All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.” So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.
A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.
Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.
Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.
The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.
That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.
The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.
The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.
This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.) The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.
In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.
By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.
The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.
But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.
In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.
Two years that forced the platform to change (a timeline by Blanca Myers):
Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.
Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.
Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.
Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.
Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.
Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.
Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.
Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.
Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.
Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”
So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”
It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”
This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.
In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.
In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ” But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.
Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.
The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.
Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.
According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.
The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.
Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”
Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.
The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.
Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.
Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.
Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)
When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.
But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?
The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.
In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.
Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.
Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.
Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”
It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.
Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”
A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”
At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.
Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men , the farm-worker with no understanding of his own strength.
Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.
In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”
Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”
Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.
Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.
“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”
As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”
The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.
The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.
Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.
And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times.
And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.
Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.
This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.
As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.
Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.
One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’” That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.
During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.) On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’” One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.
Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.
When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.” Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.
This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.
Eddie Guy To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.” McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.
And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.” After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said. “God only knows what it’s doing to our children’s brains.” Eleven days later, Facebook’s former privacy manager, Sandy Parakilas, published a New York Times op-ed calling for the government to regulate Facebook: “The company won’t protect us by itself, and nothing less than our democracy is at stake.” The day of the hearings, Zuckerberg had to give Facebook’s Q3 earnings call. The numbers were terrific, as always, but his mood was not. Normally these calls can put someone with 12 cups of coffee in them to sleep; the executive gets on and says everything is going well, even when it isn’t. Zuckerberg took a different approach. “I’ve expressed how upset I am that the Russians tried to use our tools to sow mistrust. We build these tools to help people connect and to bring us closer together. And they used them to try to undermine our values. What they did is wrong, and we are not going to stand for it.” The company would be investing so much in security, he said, that Facebook would make “significantly” less money for a while. “I want to be clear about what our priority is: Protecting our community is more important than maximizing our profits.” What the company really seeks is for users to find their experience to be “time well spent,” Zuckerberg said—using the three words that have become Tristan Harris’ calling card, and the name of his nonprofit.
Other signs emerged, too, that Zuckerberg was beginning to absorb the criticisms of his company. The Facebook Journalism Project, for instance, seemed to be making the company take its obligations as a publisher, and not just a platform, more seriously. In the fall, the company announced that Zuckerberg had decided—after years of resisting the idea—that publishers using Facebook Instant Articles could require readers to subscribe. Paying for serious publications, in the months since the election, had come to seem like both the path forward for journalism and a way of resisting the post-truth political landscape. (WIRED recently instituted its own paywall.) Plus, offering subscriptions arguably helped put in place the kinds of incentives that Zuckerberg professed to want driving the platform. People like Alex Hardiman, the head of Facebook news products and an alum of The New York Times , started to recognize that Facebook had long helped to create an economic system that rewarded publishers for sensationalism, not accuracy or depth. “If we just reward content based on raw clicks and engagement, we might actually see content that is increasingly sensationalist, clickbaity, polarizing, and divisive,” she says. A social network that rewards only clicks, not subscriptions, is like a dating service that encourages one-night stands but not marriages.
A couple of weeks before Thanksgiving 2017, Zuckerberg called one of his quarterly all-hands meetings on the Facebook campus, in an outdoor space known as Hacker Square. He told everyone he hoped they would have a good holiday. Then he said, “This year, with recent news, a lot of us are probably going to get asked: ‘What is going on with Facebook?’ This has been a tough year … but … what I know is that we’re fortunate to play an important role in billions of people’s lives. That’s a privilege, and it puts an enormous responsibility on all of us.” According to one attendee, the remarks came across as blunter and more personal than any they’d ever heard from Zuckerberg. He seemed humble, even a little chastened. “I don’t think he sleeps well at night,” the employee says. “I think he has remorse for what has happened.” During the late fall, criticism continued to mount: Facebook was accused of becoming a central vector for spreading deadly propaganda against the Rohingya in Myanmar and for propping up the brutal leadership of Rodrigo Duterte in the Philippines. And December brought another haymaker from someone closer by. Early that month, it emerged that Chamath Palihapitiya, who had been Facebook’s vice president for user growth before leaving in 2011, had told an audience at Stanford that he thought social media platforms like Facebook had “created tools that are ripping apart the social fabric” and that he feels “tremendous guilt” for being part of that. He said he tries to use Facebook as little as possible and doesn’t permit his children to use such platforms at all.
The criticism stung in a way that others hadn’t. Palihapitiya is close to many of the top executives at Facebook, and he has deep cachet in Silicon Valley and among Facebook engineers as a part-owner of the Golden State Warriors. Sheryl Sandberg sometimes wears a chain around her neck that’s welded together from one given to her by Zuckerberg and one given to her by Palihapitiya after her husband’s death. The company issued a statement saying it had been a long time since Palihapitiya had worked there. “Facebook was a very different company back then and as we have grown we have realized how our responsibilities have grown too.” Asked why the company had responded to Palihapitiya, and not to others, a senior Facebook executive said, “Chamath is—was—a friend to a lot of people here.” Roger McNamee, meanwhile, went on a media tour lambasting the company. He published an essay in Washington Monthly and then followed up in The Washington Post and The Guardian.
Facebook was less impressed with him. Executives considered him to be overstating his connection to the company and dining out on his criticism. Andrew Bosworth, a VP and member of the management team, tweeted, “I’ve worked at Facebook for 12 years and I have to ask: Who the fuck is Roger McNamee?” Zuckerberg did seem to be eager to mend one fence, though. Around this time, a team of Facebook executives gathered for dinner with executives from News Corp at the Grill, an upscale restaurant in Manhattan. Right at the start, Zuckerberg raised a toast to Murdoch. He spoke charmingly about reading a biography of the older man and of admiring his accomplishments. Then he described a game of tennis he’d once played against Murdoch. At first he had thought it would be easy to hit the ball with a man more than 50 years his senior. But he quickly realized, he said, that Murdoch was there to compete.
On January 4, 2018, Zuckerberg announced that he had a new personal challenge for the year. For each of the past nine years, he had committed himself to some kind of self-improvement. His first challenge was farcical—wear ties—and the others had been a little preening and collegiate. He wanted to learn Mandarin, read 25 books, run 365 miles. This year, though, he took a severe tone. “The world feels anxious and divided, and Facebook has a lot of work to do—whether it’s protecting our community from abuse and hate, defending against interference by nation-states, or making sure that time spent on Facebook is time well spent,” Zuckerberg declared. The language wasn’t original—he had borrowed from Tristan Harris again—but it was, by the accounts of many people around him, entirely sincere.
That New Year’s challenge, it turned out, was a bit of carefully considered choreography setting up a series of announcements, starting with a declaration the following week that the News Feed algorithm would be rejiggered to favor “meaningful interactions.” Posts and videos of the sort that make us look or like—but not comment or care—would be deprioritized. The idea, explained Adam Mosseri, is that, online, “interacting with people is positively correlated with a lot of measures of well-being, whereas passively consuming content online is less so.”
To numerous people at the company, the announcement marked a huge departure. Facebook was putting a car in reverse that had been driving at full speed in one direction for 14 years. Since the beginning, Zuckerberg’s ambition had been to create another internet, or perhaps another world, inside of Facebook, and to get people to use it as much as possible. The business model was based on advertising, and advertising was insatiably hungry for people’s time. But now Zuckerberg said he expected that these new changes to News Feed would make people use Facebook less.
The announcement was hammered by many in the press. During the rollout, Mosseri explained that Facebook would downgrade stories shared by businesses, celebrities, and publishers, and prioritize stories shared by friends and family. Critics surmised that these changes were just a way of finally giving the publishing industry a middle finger. “Facebook has essentially told media to kiss off,” Franklin Foer wrote in The Atlantic.
“Facebook will be back primarily in the business of making us feel terrible about the inferiority of our vacations, the relative mediocrity of our children, teasing us into sharing more of our private selves.”
But inside Facebook, executives insist this isn’t remotely the case. According to Anker, who retired from the company in December but worked on these changes, and who has great affection for the management team, “It would be a mistake to see this as a retreat from the news industry. This is a retreat from ‘Anything goes if it works with our algorithm to drive up engagement.’” According to others still at the company, Zuckerberg didn’t want to pull back from actual journalism. He just genuinely wanted there to be less crap on the platform: fewer stories with no substance; fewer videos you can watch without thinking.
And then, a week after telling the world about “meaningful interactions,” Zuckerberg announced another change that seemed to answer these concerns, after a fashion. For the first time in the company’s history, he said in a note posted to his personal page, Facebook will start to boost certain publishers—ones whose content is “trustworthy, informative, and local.” For the past year, Facebook has been developing algorithms to hammer publishers whose content is fake; now it’s trying to elevate what’s good. For starters, he explained, the company would use reader surveys to determine which sources are trustworthy. That system, critics were quick to point out, will surely be gamed, and many people will say they trust sources just because they recognize them. But this announcement, at least, went over a little better in boardrooms and newsrooms. Right after the post went up, the stock price of The New York Times shot up—as did that of News Corp.
Zuckerberg has hinted—and insiders have confirmed—that we should expect a year of more announcements like this. The company is experimenting with giving publishers more control over paywalls and allowing them to feature their logos more prominently to reestablish the brand identities that Facebook flattened years ago. One somewhat hostile outside suggestion has come from Facebook’s old antagonist Murdoch, who said in late January that if Facebook truly valued “trustworthy” publishers, it should pay them carriage fees.
The fate that Facebook really cares about, however, is its own. It was built on the power of network effects: You joined because everyone else was joining. But network effects can be just as powerful in driving people off a platform. Zuckerberg understands this viscerally. After all, he helped create those problems for MySpace a decade ago and is arguably doing the same to Snap today. Zuckerberg has avoided that fate, in part, because he has proven brilliant at co-opting his biggest threats. When social media started becoming driven by images, he bought Instagram. When messaging took off, he bought WhatsApp. When Snapchat became a threat, he copied it. Now, with all his talk of “time well spent,” it seems as if he’s trying to co-opt Tristan Harris too.
But people who know him say that Zuckerberg has truly been altered in the crucible of the past several months. He has thought deeply; he has reckoned with what happened; and he truly cares that his company fix the problems swirling around it. And he’s also worried. “This whole year has massively changed his personal techno-optimism,” says an executive at the company. “It has made him much more paranoid about the ways that people could abuse the thing that he built.” The past year has also altered Facebook’s fundamental understanding about whether it’s a publisher or a platform. The company has always answered that question defiantly—platform, platform, platform—for regulatory, financial, and maybe even emotional reasons. But now, gradually, Facebook has evolved. Of course it’s a platform, and always will be. But the company also realizes now that it bears some of the responsibilities that a publisher does: for the care of its readers, and for the care of the truth. You can’t make the world more open and connected if you’re breaking it apart. So what is it: publisher or platform? Facebook seems to have finally recognized that it is quite clearly both.
Nicholas Thompson ( @nxthompson ) is WIRED's editor in chief.
Fred Vogelstein ( @fvogelstein ) is a contributing editor at the magazine.
This article appears in the March issue.
"
|
1,446 | 2,019 |
"Inside Backpage.com’s Vicious Battle With the Feds | WIRED"
|
"https://www.wired.com/story/inside-backpage-vicious-battle-feds"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Christine Biederman Business Inside Backpage.com’s Vicious Battle With the Feds For years, it was the largest portal for sex on the internet. Now its fate could shape the future of Silicon Valley.
Photograph: Jesse Rieser In Michael Lacey’s younger and more vulnerable years, his father gave him this advice: “Whenever someone pokes a finger in your chest, you grab that finger and you break it off at the knuckle.” Lacey grew up in the 1950s as a bright, bookish boy. His father, a sailor turned enforcer for a New York construction union, had little use for his son’s intellectual gifts. If Lacey lost a fight at school, he says, his dad “came home and beat me again.” But the boy toughened up, and he carried the lessons he’d learned into adulthood. He became a newspaper editor and earned a reputation as a down-and-dirty First Amendment brawler. Early on in his career, he struck up a partnership with James Larkin, a publisher whose sensibilities matched his own. Together, they built the nation’s largest chain of alternative newsweeklies.
Lacey and Larkin were heroes to many—micks from the sticks who made a fortune thumbing their shanty-Irish snouts at authority. Their papers went after mayors and police chiefs, governors and senators, Walmart and the Church of Scientology. They provoked outrage with their business practices too, by setting up Backpage.com , a kind of red-light district for the internet. As attorney Don Moon, the pair’s longtime adviser, puts it: “Their brand was always ‘Fuck you. We don’t have friends. We have lawyers.’ ” That approach served them well for 45 years, right up until the morning Michael Lacey found himself staring into the barrel of a Glock.
July/August 2019.
Beth Holzer A few minutes before 9 am on April 6, 2018, a fleet of unmarked vehicles with government plates rolled up in front of Lacey’s multimillion-dollar compound in Paradise Valley, a few miles outside of Phoenix. These weren’t the guests he’d been expecting. The 69-year-old divorced father of two had recently gotten remarried, and he was preparing to host a lavish party to celebrate his vows. Tents were pitched on his lawn; retired journalists and overworked lawyers were winging their way into town.
FBI agents informed the groom that he was being arrested on charges of money laundering and facilitating prostitution. They cuffed him, then subdued the home’s other occupants, including Lacey’s 76-year-old mother-in-law, whom they ordered out of the shower at gunpoint.
For the next six hours, the lawmen tossed the compound looking for, among other things, “evidence of wealth.” They seized art, cash, computers, even the bride’s wedding ring. Meanwhile, at the Phoenix airport, federal marshals awaited a 747 inbound from London. When it touched down, the flight crew made an announcement: Police would be boarding, so passengers must stay put. “I wondered who they were there for,” recalls Larkin, then 68, who was seated beside his son in business class. “I quickly figured out it was me.” (The Department of Justice declined to comment on the arrests.) Partygoers soon received a cryptic text message. Owing to “unforeseen circumstances,” it said, the wedding celebration had been “postponed.” A notice went up on Backpage, explaining that the website had been seized “as part of an enforcement action.” More than a few guests completed the journey to Phoenix anyway; reporters can’t resist a story, and Lacey had already paid for a block of rooms at the Hotel Camby. They gathered at various local watering holes, offering what one attendee describes as “toasts to the accused,” and pieced together a gripping narrative—a tale of free-speech crusaders crossed over to the dark side, dedicated newshounds become digital pimps.
Backpage, the domain that brought the federal government down on Lacey and Larkin’s heads, wasn’t much to look at—a bare-bones interface wrapped in Facebooky blue, similar to Craigslist in both form and function. Its name alluded to the old days of print publishing, when classified ads, especially ads for topless bars, escort services, and other sexually oriented businesses filled the final pages of alt-weeklies and provided much of their revenue. Visitors to the site were greeted with several columns of links, which directed them to listings for various metropolitan areas around the country. From there, they could reply to ads or write their own.
Many of the ads—for auto parts, part-time gigs, vacation rentals, and so on—were free to publish. But the lewd stuff, listed under the adult section, cost money. For as little as $2 a day, users could post in such categories as “body rubs” and “dom & fetish.” The site’s terms of use prohibited any content that could be considered “unlawful,” “harmful,” or “obscene.” To gain access to the adult section, all users had to do was click a link confirming they were 18 or older. Once inside, they saw an endless scroll of titles, some laden with innuendo (“Cum lay your hotdog on my bun for memorial day”), others more explicit (“Three holes anything goes $90”).
As in the print days, these adult ads reigned supreme. In 2011 they accounted for 15 percent of Backpage’s listings but generated more than 90 percent of its revenue. By the time the Feds pulled the plug on the site, it was operating in 97 countries and was valued at more than half a billion dollars. People called it the Google of commercial sex ads, a platform that dominated its market as thoroughly as Facebook dominated social networking or Amazon did online retail.
The government indictment that triggered Lacey and Larkin’s arrests, United States v. Lacey, et al., includes 17 “victim summaries”—stories of women who say they were sexually exploited through Backpage. Victim 5 first appeared in an ad on the platform when she was 14; her “customers” made her “perform sexual acts at gunpoint, choked her to the point of having seizures, and gang-raped her.” Victim 6 was stabbed to death. Victim 8’s uncle and his friends advertised her as “fetish friendly.” The indictment accuses Backpage of catering to sexual predators, of essentially helping pimps better reach their target audiences.
In the years before their arrest, Lacey and Larkin had successfully beat back charges like these in court. They took refuge not only in the First Amendment but also in Section 230 of the Communications Decency Act, Congress’ great gift to the internet. Passed in 1996, Section 230 largely immunized online platforms from liability for the user-generated content they hosted. They were free to police offending material as they saw fit, without undue fear of prosecution by state or local authorities—as long as they didn’t create it themselves. America’s tech behemoths, from Twitter to Facebook, have often invoked Section 230 in court. The internet we have today wouldn’t exist without it. After all, you can’t build or sustain a giant network if you’re getting sued every time a user says or does something objectionable.
For a while, Lacey and Larkin’s strategy had worked: They’d won case after case, with the support of Big Tech and civil libertarians alike. But by the time the Feds descended on Paradise Valley that morning in the spring of 2018, the tide had turned. Many of their friends and allies had fled, spooked in part by too much bad press. The tech industry, which faced withering scrutiny over its role in the outcome of the 2016 presidential election, had thrown them under the bus. Their top lieutenant had flipped. And Congress had used them as an excuse to finally accomplish what it had been trying to do for more than 20 years—tear a hole in Section 230.
Maybe they should have seen it coming: The betrayals. The asset seizures. The changing zeitgeist. They were, to be sure, brazenly cashing in on the sex trade. But here’s the thing: Silicon Valley had better hope they win.
United States v. Lacey is a dangerous case, with potential consequences far beyond the freedom of two aging antiauthoritarians.
A view from Paradise Valley, looking out onto Camelback Mountain.
Jesse Rieser It’s a mid-November afternoon in 2018, and Mike Lacey and Jim Larkin are seated on either side of the 20-foot-long glass table that dominates Lacey’s living room. They’re clad in jeans, polos, and ankle monitors. A black charging cord snakes from a wall outlet to Lacey’s left foot, which emits an occasional beep.
Both men are out on million-dollar bonds, secured by real estate the government eventually hopes to own. The bulk of the charges against them fall under the Travel Act, a law designed by Robert F. Kennedy’s Justice Department to target organized crime. According to the indictment, Lacey, Larkin, and their underlings not only turned a blind eye to prostitution and child sexual abuse but, driven by greed, actively worked to abet it. Their case is set for January 2020. “El Chapo got to trial quicker,” Lacey quips.
I’ve worked for both sides in this showdown. In the late 1990s, I was a staff writer for the Dallas Observer, a weekly owned by Lacey and Larkin. Then, in 2001, I went to work for the Department of Justice as an assistant US attorney in Plano, Texas.
The two men have lived large, and it shows. Larkin is a burly former football player, 6'2" and easily 250 pounds, with cornflower eyes, chubby cheeks, and a ruddy complexion. Lacey’s mug reveals decades of sun and single-malt Scotch—the hooded lids, the sagging chin, the lines running like canyons down his face and into his neck. His spiky hair has thinned and grayed, but he still has the prominent schnoz, the ice-blue eyes, and the knuckles famously tattooed with "HOLD FAST." (His father, who served in the Navy during World War II, had the same slogan inked across his fists.) Their situation looks bleak. The government has seized all of Lacey’s financial accounts and most or all of Larkin’s. Prosecutors have already produced more than 10 million documents and have promised, or threatened, more to come. It will cost the defendants several million dollars just to buy the software they need to search the government’s files. For the time being, though, they’re still drinking well. When I arrive, Larkin has uncorked a bottle of Jack Quinn, a cabernet produced at his 3-acre vineyard in Napa. (Although Larkin has owned the place since before Backpage existed, the government has given notice that it intends to seize the vineyard, alleging that he used Backpage-derived funds for its maintenance.) Lacey, meanwhile, is still knocking back Macallan 21—although nowadays he stops to ask the price. At the Blue Hound bar in Phoenix, where we repaired for a later interview, it’s $120 per shot.
Lacey got his start in journalism in 1970, in the wake of the Kent State shootings, when he and a group of antiwar comrades at Arizona State University founded what would become the Phoenix New Times.
In the beginning, he claims, he sold his blood to pay the bills. He met Larkin two years later—not long after Lacey’s father, the union enforcer, and his mother, an opera singer and registered nurse, were found frozen to death in a rented trailer in Oswego, New York. (“It was a murder-suicide,” Lacey says. “They were drunk, and she turned on the gas.”) The men connected immediately. Both were college dropouts, and both had suffered through difficult childhoods. Larkin’s mother died when he was 2, and he spent most of his youth in what he describes as a “Catholic ghetto.” In high school, he cofounded a student newspaper, The Big Press, then promptly got himself suspended for criticizing administrators. “I wanted to be in that business,” he says. Lacey brought him on as publisher.
In 1977, Lacey and Larkin staged a putsch. They wrested control of the New Times from Lacey’s cofounders and set about turning the fledgling broadsheet into an empire. Larkin worked out a lucrative revenue model, emphasizing classifieds and personals. (While a page of big retail ads might net $1,000, a page of classifieds, 100 ads at $25 a pop, could bring in $2,500.) Six years later, they began to expand. They bought up struggling weeklies in cities across the country—Denver, Houston, Miami—and transformed them into serious news organizations, hiring experienced, high-profile reporters and giving them resources to do the job.
They believed there was an audience for in-depth, long-form investigative reporting. A month after 9/11, for instance, The New Times Broward-Palm Beach published an exposé on how lapses in federal immigration policy had allowed the hijackers to enter the country. In 2003, Westword got the scoop on a sexual assault scandal at the US Air Force Academy. In 2013, The Miami New Times ran a story on the steroid scandal in Major League Baseball, which ultimately resulted in the suspension of 14 players. Lacey once told an interviewer, “As a journalist, if you don’t get up in the morning and say ‘Fuck you’ to someone, why even do it?” They tangled with shareholders, authorities, competitors, printers, and municipalities that tried to restrict their distribution. Lacey, who wrote numerous stories himself, was known to clock reporters and pummel press aides, usually when spirits were involved. (He estimates that he’s been arrested “10 or 11 times,” but “only three for writing.” The one criminal conviction on his record is for a misdemeanor DUI.) When violence didn’t settle things, Lacey and Larkin often moved matters to the courtroom. Litigation was their idea of fun, the continuation of hell-raising by other means. “I didn’t get into this racket to be told what to publish,” Lacey growls. “By anybody.
If you don’t like it, don’t read it.” Steve Suskin, their former in-house counsel, says they and their companies were sued 56 times between 1997 and 2012 alone. “We won them all,” Suskin recalls. They were successful in part because they recognized that litigation is a war of attrition, and they were willing to go the distance. Says Lacey: “You want to sue us, bring your lunch pail, ’cause we gonna be awhile.” In their most famous legal set-to, they successfully sued Joe Arpaio, Maricopa County’s notoriously anti-immigrant sheriff, for false arrest, winning a $3.75 million settlement. In a final flip of the bird to Arpaio, they used the money to set up a nonprofit to defend the rights of undocumented immigrants and Latinx Americans.
Through it all, Larkin kept the money coming in, embracing each new fad in classified advertising. In 1989, for example, the New Times group launched its first adult section, appropriately dubbed Wildside. (The ads were moderated by sales staff to ensure no blatant sex-for-money propositions made it into print.) Racy ads fueled the company’s explosive growth; by 2001, Lacey and Larkin owned 11 papers, which raked in more than $100 million a year. But the good times didn’t last. Craigslist had begun expanding into cities outside the Bay Area, offering free ads in all categories except jobs and erotic services. Classified revenue tanked.
In 2003, Larkin was approached by Carl Ferrer, an ad salesman he’d hired away from a small paper in Louisiana and installed as classified ad director at the Dallas Observer.
Ferrer, a short, slight man with a goatee and a perpetually worried look, proposed that they create an in-house version of Craigslist. Larkin put him in charge of building and running the website, which launched in 2004.
The following year, Lacey and Larkin won the prize they’d chased for years— The Village Voice , the grande dame of alt-weeklies. When the New Times group merged with Village Voice Media, the two companies formed a 17-paper megachain valued at about $400 million, with an estimated $180 million in annual revenue. Lacey and Larkin’s timing could not have been worse. Between 2006 and 2012, according to the Pew Research Center, American newspapers lost half their advertising revenue. Backpage, however, grew steadily, even if it wasn’t nearly enough to offset the papers’ declining receipts.
Lacey and Larkin say they were advised by counsel that what Backpage was doing was 100 percent legal. They saw no distinction between advertising and editorial; it was all protected speech, all mission-critical. In 2008, they were honored by the Arizona chapter of the ACLU as Civil Libertarians of the Year. In his acceptance speech, Lacey decried “the gentrified instincts of soccer moms,” which led demagogues like Joe Arpaio to crack down on press freedom. He vowed that both he and Larkin would continue to oppose the “forces of offended decency” wherever they found them.
Today, they remain defiant. “I didn’t do anything wrong,” Lacey declares. “I didn’t do what they say. And if they think they’re gonna punk me, they got the wrong fucking guy.” One of the great ironies of internet history is that the Communications Decency Act—a law conceived, as its name suggests, to rid the web of vice—actually ended up doing the opposite. It was proposed in 1995 by Senator J. James Exon, a Nebraska Democrat who’d watched with increasing alarm as “the worst, most vile, most perverse pornography” spread online. He was particularly concerned about what all this obscenity might do to the minds of America’s children, and went so far as to compile a “blue book” packed with X-rated screenshots. “This is a sample of what is available today free of charge,” he told his colleagues on the Senate floor when the CDA came up for debate. “Click, click, click on the computer, on the information superhighway.” Although Exon repeatedly described the legislation as “narrow” and “streamlined,” the Department of Justice warned that its indecency provisions were unconstitutionally broad. Within a year and a half of the CDA’s passage, the Supreme Court agreed and struck those provisions down. Section 230, however, survived, offering a safe harbor to some of the same sites that Exon had hoped to bring down. The information superhighway began to look more perilous than ever.
In 2001 two academics at the University of Pennsylvania published a widely cited study in which they estimated that some 326,000 children were “at risk of commercial sexual exploitation.” Although the authors didn’t formally address what role the internet played, they asserted that “online sexual victimization of American children appears to have reached epidemic proportions.” By 2008, a new coalition of would-be regulators had emerged, led by the National Association of Attorneys General and the National Center for Missing and Exploited Children, a nonprofit partly funded by the US government. Together, both behind the scenes and in the press, the two groups began pushing some of the internet’s major players to strengthen their safety protocols.
In response, Myspace, the web’s largest social media platform at the time, gave the boot to some 90,000 convicted sex offenders. Facebook, meanwhile, took steps to prevent underage users from sharing personal information with strangers. Craigslist started requiring that anyone who posted an ad in its Erotic Services section provide a verified phone number and pay a fee by credit card. It also hired attorneys to moderate ads.
For some officials, though, these changes weren’t enough. In early 2009, Thomas Dart, the sheriff of Cook County, Illinois, sued Craigslist for facilitating prostitution. “Missing children, runaways, abused women, and women trafficked in from foreign countries are routinely forced to have sex with strangers because they’re being pimped on Craigslist,” he said. “I could make arrests off Craigslist 24 hours a day, but to what end? I’m trying to go up the ladder.” That same spring, tabloids across the country were awash in headlines about the “Craigslist killer,” a young man in Boston who’d responded to a massage ad on the site, then murdered the woman who posted it.
A federal judge in Chicago quickly tossed Dart’s case, citing Section 230. But Craigslist eventually surrendered anyway. On the night of September 3, 2010, it quietly covered its Adult Services section with the word censored.
Two weeks later, in testimony before Congress, Craigslist execs explained that they’d done their best to address their critics’ complaints; now, it seemed, they just wanted out of the headlines. They also warned that law enforcement was losing a valuable partner in the fight against trafficking. Yet Ernie Allen, the lanky Kentuckian who ran the National Center for Missing and Exploited Children, saw this as a necessary step. “Some of this problem will migrate to other areas,” he said, “but frankly that’s progress.” Allen’s prediction was right. In the wake of Craigslist’s capitulation, the sex trade did indeed shift to other sites. There were many to choose from—myRedBook, Naughty Reviews, Cityvibe, Rentboy—but Backpage was the chief beneficiary. Larkin sent around an email advising his employees to expect “a deluge” of adult ads and reminding them that, “like it or not,” such ads “are in our DNA.” Lacey says he remained focused, as always, on the editorial side—though he had “no problem” seeing the ads “take off like they did.” Ferrer, meanwhile, seemed only too happy to inherit Craigslist’s share of the adult market, even if that meant assuming its place in the crosshairs. “It is an opportunity for us,” he wrote in an email. “Also a time when we need to make sure our content is not illegal.” Backpage was already getting into hot water. A girl in Missouri had sued the site in mid-September, alleging that she’d been pimped out at the age of 14 and that Backpage had willfully “failed to investigate for fear of what it would learn.” She claimed, without clear evidence, that the site’s operators “had a strong suspicion” she was underage. Ultimately, a federal magistrate dismissed her case. The situation was tragic, he said, but Backpage was protected under Section 230. The girl needed to sue her pimp.
On October 18, Backpage announced on its blog that it had retained Hemanshu Nigam, a former federal prosecutor who specialized in sex crimes and child abuse, to develop a “holistic” safety program. Nigam sat on the board of the National Center for Missing and Exploited Children and had done similar work for Myspace. In the months that followed, Nigam and his new clients met repeatedly with representatives from anti-trafficking organizations. They discussed changes to Backpage’s site architecture, moderation practices, and content policies. The organizations suggested, for instance, that users should be prevented from employing search terms such as “incest” or “Lolita,” since these might “indicate illegal activity.” Backpage moderators, meanwhile, should be on the lookout for “ads written from masculine perspective,” particularly if they employed the euphemism “new in town,” which “is often used by pimps who shuttle children to locations where they do not know anyone and cannot get help.” By late January 2011, Backpage had implemented many of the recommendations: It had banned photographs with nudity, drawn up a list of “inappropriate terms,” beefed up its vetting process, and begun referring “ads containing possible minors” directly to Allen’s staff. Ferrer also worked closely with the authorities. According to a Justice Department memo from 2012, “unlike virtually every other website that is used for prostitution and sex trafficking, Backpage is remarkably responsive to law enforcement requests and often takes proactive steps to assist in investigations.” A later memo noted that “even Ernie Allen believed that Backpage was genuinely trying to rid its site of juvenile sex trafficking.” Lacey and Larkin say they were more than willing to help crack down on child abuse. But the demands being made of them seemed increasingly unreasonable. Sex trafficking, defined as commercial sex involving coerced adults or anyone under 18, was one thing. Consensual sex work was quite another—and it wasn’t even illegal under federal law.
In March 2011, Lacey and Larkin flew to Virginia to meet with Allen. “To say that the meeting did not go well is an understatement,” Allen wrote later that day. After a full hour, he and Lacey “were still screaming at each other.” Allen demanded that Backpage do more to combat prostitution. Larkin said the site would enforce a “newspaper standard,” but Lacey added, “We are not Craigslist, and we aren’t going to succumb to pressure.” A Justice Department memo continues the story: “Allen responded that ‘At least you know what business you are in.’”
In April, Nigam suggested that, as a gesture of goodwill, Backpage should join the Demi and Ashton Foundation, a nonprofit created by actors Ashton Kutcher and Demi Moore. The foundation had recently run a series of PSAs under the slogan “Real men don’t buy girls,” featuring various Hollywood bigwigs. Lacey ignored Nigam’s suggestion. Instead, he instructed The Village Voice to publish an article titled “Real Men Get Their Facts Straight.” Larkin, for his part, tried to make nice with the authorities—at least until he and Lacey could cash out. Backpage was causing too many headaches, and the papers were growing deader by the day. “Selling print sooner than later was the winning move,” Larkin explains. “The longer you waited, the dumber you were.” Initially it seemed that Backpage would be the easier business to unload. By September 2011, a private-equity firm focused on “out-of-favor industries” had agreed to buy it for $150 million. But the deal fell apart after the National Association of Attorneys General announced an investigation of Backpage. Larkin and Lacey were incensed. Section 230 provided that websites could be prosecuted only under federal criminal law, so they considered a state-level investigation extralegal. From that point on, both men were ready to go to the mattresses.
The following fall, Lacey and Larkin sold their beloved alt-weeklies to a group of their own editors for just over $32 million, about 8 percent of what the chain had been valued at in 2005. (Even this amount was later negotiated down, after the buyers defaulted.) In a farewell letter, Lacey wrote that they were leaving to carry on their jihad “over the First Amendment, free speech on the internet and Backpage.” Cynics pointed to the money; by 2011, Backpage was raking in more than $50 million a year, nearly as much as the newspapers that spawned it.
Whatever their mix of motives, Lacey and Larkin moved their cause to the courtroom. With Section 230 as their weapon, they won a series of civil suits and successfully challenged anti-Backpage laws in New Jersey, Tennessee, and Washington state. Many of the court opinions noted the First Amendment problems inherent in regulating internet content. “When freedom of speech hangs in the balance,” wrote the Tennessee judge, “the state may not use a butcher knife on a problem that requires a scalpel to fix.” By this point, the nation’s attorneys general had had enough. As they saw it, Backpage and other internet platforms were using Section 230 as an excuse to duck their responsibilities to users. In July 2013, 49 of them signed a letter to Congress saying that the law needed an overhaul.
Lacey shows off his ankle monitor and knuckle tattoos.
Jesse Rieser
State attorneys general weren’t the only prosecutors itching to get in on the action. The Feds were too, but they had a problem: They couldn’t identify a viable crime. Prostitution wasn’t a federal offense, and they didn’t seem to think they could make sex-trafficking charges stick. Back in 2011, the Justice Department had quietly opened a grand jury investigation into Backpage in Washington state; according to an internal memo, prosecutors interviewed more than a dozen witnesses and subpoenaed more than 100,000 documents but ultimately decided that “a successful criminal prosecution of Backpage is unlikely.” They thought about trying to make a case under the Travel Act but, as they noted, that theory “had never been litigated in a similar context.” So they formulated another potential plan of attack. “Moving forward,” they wrote, the Justice Department should “take a hard look at bringing this case as a civil forfeiture case,” with its “lower standard of proof.” In this scenario, the government would seize a website operator’s assets and property, then force them to prove they weren’t implicated in criminal activity.
In June 2014 the Justice Department put this plan into action. It seized myRedBook and demanded that the site’s owner, Eric “Red” Omuro, forfeit $5 million in cash and property. The following summer, the Department of Homeland Security launched a similar raid against “the nation’s largest online male-escort service,” Rentboy, and its owner, Jeffrey Hurant. Both men pleaded guilty to violations of the Travel Act in exchange for lighter sentences and lesser fines. The forfeiture approach seemed to be working.
Meanwhile, Backpage opponents were finding sympathetic ears on Capitol Hill. In April 2015, Senator Rob Portman, a Republican from Ohio and the chair of the Permanent Subcommittee on Investigations, fired off the following tweet: “backpage essentially sells human beings. It’s horrible, and I’m going after them.” That same month, Lacey and Larkin finally located a serious buyer for Backpage: Carl Ferrer. He agreed to pay just under $603 million for the platform—four times what they’d been offered in 2011.
Portman’s subcommittee soon issued a series of subpoenas, seeking internal documents that would reveal Backpage’s moderation practices. The site fought back, but in September 2016 the US Supreme Court ruled that it had to fork over more than 1 million internal emails and other records. Every dubious decision, every bit of chatter and commentary, every lame joke between Backpage employees and managers, was about to come spilling out.
On January 8, 2017, the Senate subcommittee released its final report, titled “Backpage.com’s Knowing Facilitation of Online Sex Trafficking.” It pushed the theory that Lacey, Larkin, Ferrer, and their employees had invalidated their liability protections under Section 230: Rather than removing illegal and obscene content, the Senate said, Backpage had helped develop it, using clever moderation practices to “sanitize the content” and conceal it from the eyes of the law—all in the name of earning a few extra dollars. This, the subcommittee implied, put Backpage in the position of a content creator, not a mere content host.
Most courts had been rejecting the same argument for six years, but now Portman and his colleagues had what they considered incontrovertible evidence. Much of it was contained in the report’s 840-page appendix, which included highlights from the emails and other documents that the site had been ordered to produce.
The report outlined three major steps in Backpage’s road to perdition. In the early days of the site, most ads for commercial sex were deleted outright. By early 2009, however, Ferrer had begun to instruct his employees to manually remove any obscene photos and “forbidden words,” then post the ad anyway. In an email, he wrote that he considered this the more “consumer friendly” approach, because it would avoid “pissing off a lot of users who will migrate elsewhere.” But the true goal, according to the Senate, was to give those ads “a veneer of lawfulness.” One former Backpage moderator, identified in the report as Employee C, testified that she saw her role as “putting lipstick on a pig, because when it came down to it, it was what the business was about.” By late 2010, Backpage had developed an automated filter called Strip Term From Ad. It was tuned to remove problematic words (“lolita,” “rape,” “fresh,” “little girl”) before any human moderator had seen the ad. Because the original language wasn’t saved on Backpage’s servers, the Senate complained, there would be no real record of the offending content—nothing to send to law enforcement. “Of course,” the subcommittee wrote, “the Strip Term From Ad filter changed nothing about the real age of the person being sold for sex or the real nature of the advertised transaction.” Perhaps that’s why, in mid-2012, Backpage instituted a kind of hybrid process, automatically editing some ads while automatically banning others, depending on the terms used. But the Senate saw chicanery here, too. Ferrer complained that the auto-bans were causing confusion among users; if they submitted an ad that contained a banned term, they had no way of knowing why it had been rejected. And so Backpage rolled out an alert feature, which informed users which specific term was to blame. In the Senate’s eyes, it was “coaching its customers on how to post ‘clean’ ads for illegal transactions.” The appendix was full of what appeared to be smoking guns. In late 2010, for instance, Backpage’s operations manager, Andrew Padilla, castigated one of his employees for putting a note on a user’s account suggesting she was a prostitute. “Leaving notes on our site that imply that we’re aware of prostitution, or in any position to define it, is enough to lose your job over,” Padilla wrote. “If you need a definition of ‘prostitution,’ get a dictionary.” The following summer, four months after the ill-fated meeting with Ernie Allen, Larkin cautioned Ferrer against publicizing Backpage’s moderation practices. “We need to stay away from the very idea of ‘editing’ the posts, as you know,” he wrote in an email.
On the night the Senate report was released, Backpage finally shut down its adult section. It was, of course, far too late to stave off what was coming. The next morning, Lacey, Larkin, Ferrer, and two other Backpage executives appeared in Room 342 of the Senate’s Dirksen Building for a grilling by Portman and his colleagues. It was a carefully choreographed bit of political theater. The Backpage witnesses took the Fifth, as senators knew they must; thanks to a pending case in California, they had no choice. Portman denounced them for refusing to “come clean.” Within six months of the hearing, at least eight new civil lawsuits were filed against Backpage. The Section 230 defense now worked only intermittently, as courts increasingly read in exceptions. The site’s operators began preparing for a rumble with the Feds. Backpage handed out fat legal retainers, as key employees lawyered up. Lacey and Larkin started segregating cash; funds from the sale of Backpage went into one set of accounts, while proceeds from the newspaper sale went into another. Ferrer bought a brand-new Texas McMansion, put it in his wife’s name, and poured hundreds of thousands of dollars into renovations.
Still, Lacey and Larkin largely shrugged off the Senate’s report. “We didn’t go out and try to disprove it,” recalls an attorney who worked on the matter. “It’s not like there isn’t plenty to say. But to try to rebut 50 pages of allegations in the press? That’s fighting a losing battle.” The lawyer added: “It was a hit piece. It was intended to be a hit piece. What are you going to do?” In August 2017, Portman launched another attack against Backpage. With a bipartisan group of 20 senators, including Connecticut’s Richard Blumenthal, he introduced the Stop Enabling Sex Traffickers Act, or Sesta. Later, in an op-ed for WIRED, Portman laid out the bill’s key features: It would remove Section 230’s “unintended liability protections for websites that knowingly facilitate online sex trafficking” and “allow state and local law enforcement to prosecute” those sites. Just as J. James Exon, the sponsor of the Communications Decency Act, had done two decades earlier, the senators deflected concerns about constitutional overreach. Portman described Sesta as “narrowly crafted”; Blumenthal called it “narrowly tailored.” Silicon Valley disagreed. On the day Sesta was introduced, the Internet Association—an industry consortium that represents Airbnb, Facebook, Google, Twitter, and more than three dozen other tech companies—released a statement calling the bill “overly broad.” While it was important to pursue “rogue operators like Backpage.com,” the association said, Sesta was more butcher knife than scalpel; it would create “a new wave of frivolous and unpredictable actions against legitimate companies.” In a letter to the Senate, a coalition of human rights and civil liberties organizations warned that the result of all this litigation would be “increased censorship across the web.” Platforms that had once sought to encourage free speech through light moderation would now take an iron-fisted approach. According to the Electronic Frontier Foundation, the chilling effect would be particularly damaging to sites like Wikipedia, which “don’t have the massive budgets to defend themselves that Facebook and Twitter do.” But Big Tech and its allies were no longer really in a position to complain. On Halloween, Congress hauled in executives from Facebook, Google, and Twitter. Legislators wanted to know why the platforms had failed to stem the tide of fake news and misinformation in the run-up to the 2016 presidential election, why they’d sold political ad space to Russian nationals, why they were supposedly muzzling conservative voices. Pundits opined that the web was all grown up now; many questioned why platforms still needed Section 230’s protection.
Several days after the Capitol Hill perp walk, the Internet Association suddenly reversed course. It came out in favor of a lightly modified version of Sesta, which by now had been combined with an equally clumsily named House bill, the Allow States and Victims to Fight Online Sex Trafficking Act, or Fosta. It was hard not to see the association’s move as a cynical act of political pandering. As Winston Churchill once said, “Each one hopes that if he feeds the crocodile enough, the crocodile will eat him last.”
By the spring of 2018, things had gotten even worse for Big Tech. That March, news of the Cambridge Analytica scandal broke, seeming to confirm the public’s worst suspicions. Four days later, Congress passed Fosta-Sesta. The law amends Section 230 to allow states and civil plaintiffs to go after websites that “promote and facilitate prostitution” or “knowingly benefit from participation in a venture that engages in sex trafficking.” Senator Ron Wyden of Oregon, one of the original authors of Section 230 and a longtime tech industry ally, warned that further measures could be in the offing if “technology companies do not wake up to their responsibilities … to better protect the public.” In spite of the protests of free speech advocates, more than 100 organizations had come out in favor of the law—Truckers Against Trafficking, Girls With Grit, the Christian Action League of Minnesota. Seth Meyers and Ivanka Trump touted it too. But sex workers and their allies were bitterly opposed. The American Association of Sexuality Educators, Counselors, and Therapists noted that Fosta-Sesta contained “a sweeping and unproductive conflation of sex trafficking and consensual sex work.” The association further argued—just as Craigslist had when it shuttered its adult section in 2010—that, in forcing sites like Backpage to remove or censor their content, the law would merely drive predators into even darker corners of the internet. Their crimes would be harder to spot and investigate, and many sex workers would be forced “to pursue far riskier and more exploitative forms of labor” on the streets.
Two weeks after Fosta-Sesta passed, Carl Ferrer appeared in a closed federal courtroom in Phoenix. He pleaded guilty to conspiracy to facilitate prostitution and launder money, surrendered Backpage and its assets, and promised to cooperate with federal authorities. (Ferrer’s plea forbids him to talk to the press. “I’m not trying to avoid you,” he told me at a recent court appearance. “I just have to say no comment.”) A day later, the Feds nailed Lacey and Larkin in Phoenix, charging them and five other Backpagers under long-existing criminal statutes. As many legal experts pointed out, the move suggested that the government never needed Fosta-Sesta to prosecute the pair; President Donald Trump had yet to even sign it into law. Lacey and Larkin never seemed to seriously consider that Ferrer might flip. Other insiders certainly did. “I think he just chickened out,” offers an attorney who worked with Ferrer for almost 20 years and spoke to me on condition of anonymity. The lawyer points out that Ferrer never shared Lacey’s and Larkin’s disdain for cops. “That’s an awful lot of pressure to put on a skinny white guy,” he continues. “And Jim was never all that nice to him.” Though it is still relatively early, the broad outlines of each side’s strategy are clear. If this case reaches a jury, the government will likely argue that the end justifies the means—that sex trafficking and prostitution generally are so abhorrent that the government had to do away with Backpage, protected speech and all. They will employ what trial lawyers call “reptile theory,” tapping into the jury’s primitive instincts, arguing that Backpage constituted a public danger and that convicting the defendants will make the community safer. They will tell the grisly tales set forth in the indictment’s 17 victim summaries. They will depict Lacey and Larkin as calculating profiteers, outlaws who refused to honor the reasonable requests of law enforcement because they might make a few mil less. They will hope the defendants’ seeming indifference to the plight of trafficking victims inspires the jury to overlook holes in the prosecution’s case.
The defense strategy is equally clear. Lacey and Larkin will offer high-minded arguments in defense of what the public regards as low-value speech. They will challenge government experts who claim they can look at a sample of Backpage ads and know beyond doubt that they proposed illegal transactions. It’s unclear how effective a witness Ferrer will be; over the past decade, he has given numerous sworn statements in Backpage litigation that contradict assertions in his plea. To the extent that Ferrer has anything damaging to offer, the defense will likely argue he was acting on his own. “We had lawyers telling us how to do this,” Lacey says. “The only way this was going to blow up was if Carl was doing something he shouldn’t have.”
Backpage cofounder James Larkin.
Jesse Rieser
Backpage cofounder Michael Lacey.
Jesse Rieser
Fosta-Sesta is already panning out as its detractors feared. Once Trump signed it into law, platforms rushed to self-censor; nobody wanted to be Backpaged. Cityvibe shut down altogether. Reddit banned numerous communities, including r/escorts and r/SugarDaddy. Google reportedly began purging its users’ cloud accounts of sexually explicit material.
Cloudflare, one of the largest cybersecurity and website performance companies in the world, terminated service to Switter, a social media platform on which sex workers connected with each other and vetted their clients. Cloudflare is known for its commitment to free speech, but it was compelled to enforce what its general counsel called, in an interview with Vice, “a very bad law and a very dangerous precedent.” The endless game of whack-a-mole continues. A month after Fosta-Sesta passed, ads for commercial sex had plummeted 82 percent, according to TellFinder, a data analytics tool originally built by the Defense Department. Within another four months, though, the numbers had rebounded to 75 percent of their previous daily volume. New sites popped up, seeking to fill the void left by Backpage, just as Backpage had done with Craigslist. One of them was called Bedpage.
Still, the Justice Department remains committed to taking the Backpage defendants down. Its plan seems to be to force them to plead, à la Rentboy and myRedBook. Since March 2018, federal prosecutors have seized more than $100 million in cash, real estate, and other assets from Lacey and Larkin. The strategy is simple: No money? No lawyers. QED.
The asset freezes raise all kinds of thorny constitutional questions. Generally speaking, federal prosecutors are permitted to freeze a defendant’s assets based on probable cause alone, even before the defendant has a chance to challenge the government’s case in court. But regular forfeiture rules do not apply in cases involving forums for speech—newspapers, films, books, magazines, websites. The US Supreme Court has decreed that when the government seizes these expressive materials, or the proceeds derived from them, it must immediately hold an evidentiary hearing to determine whether the seizure is valid.
But the Backpage defendants have a problem: So far, they can’t get a court to hear their claims. Since last summer, the Justice Department appears to have been playing a clever shell game. They’ve brought cases against the Backpage defendants in two federal districts—civil seizures in Los Angeles, criminal matters in Phoenix—and they’re making the defendants spend what money they have left chasing Uncle Sam from place to place. So far, judges in both districts have agreed with the government’s suggestion that they should defer to each other, effectively denying the defendants a forum to challenge the asset freezes. The US Court of Appeals for the Ninth Circuit will hear arguments in the case in July.
Paul Watler, a media law specialist at Jackson Walker LLP in Dallas, is troubled by the seizure tactic. “It’s an end run around the First Amendment,” he says. The big question remaining, according to Eric Goldman, a professor at Santa Clara University School of Law, is whether federal prosecutors will use this strategy to crack down on other platforms in the future. “Is this the leading edge or a one-off?” he asks. “I still don’t know the answer to that. But they’re coming for us, one way or another.” Even if Fosta-Sesta is one day ruled unconstitutional, as many legal scholars expect, government officials have shown that they’re willing to subvert Section 230 in other ways. If Lacey and Larkin lose—if the asset seizures stand and the Travel Act charges stick—prosecutors will have a valuable new weapon to wield against Silicon Valley. Personal wealth will be no deterrent.
Meanwhile, the National Association of Attorneys General is on the warpath once again. On May 23, 2019, the group sent a letter to a handful of congressional leaders urging further cutbacks to Section 230. “The abuse on these platforms does not stop at sex trafficking,” they wrote. “Stories of online black market opioid sales, ID theft, deep fakes, election meddling, and foreign intrusion are now ubiquitous.” They recommended that Section 230 be amended to allow a wide variety of state-level criminal prosecutions.
Lacey and Larkin remain convinced that the furor over sex ads is a moral panic, irrational and hysterical, cynically stoked by politicians and law enforcement. And they’re not about to surrender. They know they’re not the world’s most sympathetic defendants—rich (or formerly rich) white men accused of, at the very least, morally questionable business decisions, fighting for their right to hire the best lawyers money can buy.
Yet they can still seem oddly tone-deaf, even a touch naive. In April, a federal judge shot down Lacey’s request to have his ankle monitor removed in order to swim during a Hawaiian vacation. (In pleadings, Lacey’s lawyers explained he had use-’em-or-lose-’em flyer miles.) Prosecutors called Lacey a flight risk, and the resulting headlines were predictably brutal. Lacey responds with incredulity: “The idea that I would run—are you kidding? I’m taking the first flight to confront you.”
Christine Biederman is a lawyer and investigative reporter based in Dallas. She is working on a book about Backpage.com.
This article appears in the July/August issue.
"
|
1,447 | 2,018 |
"Facebook's AI Can Analyze Memes, but Can It Understand Them? | WIRED"
|
"https://www.wired.com/story/facebook-rosetta-ai-memes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Louise Matsakis Business Facebook's AI Can Analyze Memes, but Can It Understand Them? Hotlittlepotato Save this story Save Save this story Save Application Content moderation Company Facebook Sector Social media Source Data Text Video Images Technology Natural language processing Neural Network Billions of text posts, photos, and videos are uploaded to social media every day, a firehose of information that’s impossible for human moderators to sift through comprehensively. And so companies like Facebook and YouTube have long relied on artificial intelligence to help surface things like spam and pornography.
Something like a white supremacist meme, though, can be more challenging for machines to flag, since the task requires processing several different visual elements at once. Automated systems need to detect and “read” the words that are overlaid on top of the photo, as well as analyze the image itself.
Memes are also complicated cultural artifacts, which can be difficult to understand out of context. Despite the challenges they bring, some social platforms are already using AI to analyze memes, including Facebook, which this week shared details about how it uses a tool called Rosetta to analyze photos and videos that contain text.
Facebook says it already uses Rosetta to help automatically detect content that violates things like its hate speech policy. With help from the tool, Facebook also announced this week that it’s expanding its third-party fact checking effort to include photos and videos, not just text-based articles. Rosetta will aid in the process by automatically checking whether images and videos that contain text were previously flagged as false.
Rosetta works by combining optical character recognition (OCR) technology with other machine learning techniques to process text found in photos and videos. First, it uses OCR to identify where the text is located in a meme or video. You’ve probably used something like OCR before; it’s what allows you to quickly scan a paper form and turn it into an editable document. The automated program knows where blocks of text are located and can tell them apart from the place where you’re supposed to sign your name.
Once Rosetta knows where the words are, Facebook uses a neural network that can transcribe the text and understand its meaning. It then can feed that text through other systems, like one that checks whether the meme is about an already-debunked viral hoax.
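Rosetta itself is proprietary, but the two-stage flow described here (locate and transcribe any overlaid text, then pass it to downstream checks) can be roughly approximated with off-the-shelf tools. A minimal sketch, with the open source Tesseract engine standing in for Facebook's models and a made-up list of already-debunked captions:

```python
# Illustrative only: Tesseract stands in for Rosetta's detection and
# transcription models; the "debunked" set is hypothetical example data.
import hashlib

from PIL import Image
import pytesseract  # pip install pytesseract (needs the Tesseract binary installed)

# Hypothetical store of normalized text from previously fact-checked hoaxes.
DEBUNKED_HASHES = {
    hashlib.sha256(b"share this to win a free cruise").hexdigest(),
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't evade matching."""
    return " ".join(text.lower().split())

def extract_text(path: str) -> str:
    """Stage 1: locate and transcribe text overlaid on the image."""
    return pytesseract.image_to_string(Image.open(path))

def is_known_hoax(path: str) -> bool:
    """Stage 2: compare the transcribed text against already-debunked content."""
    digest = hashlib.sha256(normalize(extract_text(path)).encode()).hexdigest()
    return digest in DEBUNKED_HASHES

if __name__ == "__main__":
    print(is_known_hoax("meme.jpg"))  # path is a placeholder
```

A real system would match fuzzily rather than on exact hashes, but the division of labor is the same.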
The researchers behind Rosetta say the tool now extracts text from every image uploaded publicly to Facebook in real time, and it can “read” text in multiple languages, including English, Spanish, German, and Arabic. (Facebook says Rosetta is not used to scan images that users share privately on their timelines or in direct messages.) Rosetta can analyze images that include text in many forms, such as photos of protest signs, restaurant menus, storefronts, and more. Viswanath Sivakumar, a software engineer at Facebook who works on Rosetta, said in an email that the tool works well both for identifying text in a landscape, like on a street sign, and also for memes—but that the latter is more challenging. “In the context of proactively detecting hate speech and other policy-violating content, meme-style images are the more complex AI challenge,” he wrote.
Unlike humans, an AI also typically needs to see tens of thousands of examples before it can learn to complete a complicated task, says Sivakumar. But memes, even for Facebook, are not endlessly available, and gathering enough examples in different languages can also prove difficult. Finding high-quality training data is an ongoing challenge for artificial intelligence research more broadly. Data often needs to be painstakingly hand-labeled , and many databases are protected by copyright laws.
To train Rosetta, Facebook researchers used images posted publicly on the site that contained some form of text, along with their captions and the location from which they were posted. They also created a program to generate additional examples, inspired by a method devised by a team of Oxford University researchers in 2016. That means the entire process is automated to some extent: One program automatically spits out the memes, and then another tries to analyze them.
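Facebook hasn't released that generation program, but the underlying idea of rendering known strings onto varied backgrounds, so every training image comes with a free ground-truth label, is simple to sketch. The phrases, sizes, and colors below are arbitrary stand-ins:

```python
# Toy synthetic-data generator: draws known phrases onto random backgrounds,
# so each image is automatically labeled with the text it contains.
import random
from PIL import Image, ImageDraw

PHRASES = ["one does not simply", "new in town", "change my mind"]  # invented examples

def make_example(width=400, height=300):
    text = random.choice(PHRASES)
    bg = tuple(random.randint(0, 255) for _ in range(3))
    img = Image.new("RGB", (width, height), bg)
    draw = ImageDraw.Draw(img)
    # Random placement; the default bitmap font keeps the sketch dependency-free.
    x, y = random.randint(0, width // 2), random.randint(0, height - 20)
    draw.text((x, y), text, fill=(255 - bg[0], 255 - bg[1], 255 - bg[2]))
    return img, text  # the image and its label

if __name__ == "__main__":
    for i in range(5):
        image, label = make_example()
        image.save(f"synthetic_{i}.png")
        print(i, label)
```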
Different languages are challenging for Facebook’s AI team in other ways. For example, the researchers had to find a workaround to make Rosetta work with languages like Arabic, which are read from right to left, the opposite of other languages like English. Rosetta “reads” Arabic backwards, then after processing, Facebook reverses the characters. “This trick works surprisingly well, allowing us to have a unified model that works for both left to right and right to left languages,” the researchers wrote in their blog post.
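The reversal trick is easy to picture in code. A minimal sketch, assuming a recognizer that only ever emits characters left to right:

```python
# Minimal illustration of the right-to-left workaround: run the left-to-right
# recognizer as usual, then reverse its output for scripts such as Arabic.
def postprocess(raw_ltr_output: str, right_to_left: bool) -> str:
    """Reverse the character sequence when the source script reads right to left."""
    return raw_ltr_output[::-1] if right_to_left else raw_ltr_output

# Using Latin letters as a stand-in for Arabic characters emitted in reverse order:
print(postprocess("olleh", right_to_left=True))   # -> "hello"
print(postprocess("hello", right_to_left=False))  # -> "hello"
```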
While automated systems can be extremely useful for content moderation purposes, they’re not always foolproof. For example, WeChat—the most popular social network in China—uses two different algorithms to filter images, which a team of researchers at the University of Toronto’s Citizen Lab was able to successfully trick. The first, an OCR-based program, filters photos that contain text about prohibited topics, while the other censors images that appear similar to those on a blacklist likely created by the Chinese government.
The researchers were able to easily evade WeChat’s filters by changing an image’s properties, like the coloring or the way it was oriented. While Facebook’s Rosetta is more sophisticated, it likely isn’t perfect either; the system may be tripped up by hard-to-read text or warped fonts. All image recognition algorithms are also still potentially susceptible to adversarial examples, slightly altered images that look the same to humans but cause an AI to go haywire.
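How little it can take to defeat a naive matcher is easy to demonstrate. The toy below computes a simple 8x8 average hash before and after mirroring an image; an exact-match blacklist built on fingerprints like these no longer recognizes the flipped copy. (The hash is written from scratch for illustration and is not WeChat's or Facebook's actual algorithm.)

```python
# Toy demo: a small edit (here, a mirror flip) changes a naive image
# fingerprint, so an exact-match blacklist misses the altered copy.
from PIL import Image, ImageOps

def average_hash(img: Image.Image, size: int = 8) -> int:
    gray = img.convert("L").resize((size, size))
    pixels = list(gray.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

if __name__ == "__main__":
    original = Image.open("meme.jpg")   # placeholder path
    flipped = ImageOps.mirror(original)
    h1, h2 = average_hash(original), average_hash(flipped)
    print(hex(h1), hex(h2))
    print("exact blacklist match:", h1 == h2)  # almost always False
```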
Facebook and other platforms like Twitter, YouTube, and Reddit are under tremendous pressure in multiple countries to police certain kinds of content. On Wednesday, the European Union proposed new legislation that would require social media companies to remove terrorist posts within one hour of notification, or else face fines. Rosetta and other similarly automated tools already help Facebook and other platforms abide by similar laws in places like Germany.
And they’re getting better at their jobs: Two years ago CEO Mark Zuckerberg said that Facebook’s AI systems only proactively caught around half of the content the company took down; people had to flag the rest first. Now, Facebook says that its AI tools detect nearly 100 percent of the spam it takes down, as well as 99.5 percent of terrorist content and 86 percent of graphic violence. Other platforms, like YouTube, have seen similar success using automated content detection systems.
But those promising numbers don’t mean AI systems like Rosetta are a perfect solution, especially when it comes to more nuanced forms of expression. Unlike a restaurant menu, it can be hard to parse the meaning of a meme without knowing the context of where it was posted. That's why there are whole websites dedicated to explaining them. Memes often depict inside jokes, or are highly specific to a certain online subculture. And AI still isn’t capable of understanding a meme or video in the same way that a person would. For now, Facebook will still need to rely on human moderators to make decisions about whether a meme should be taken down.
"
|
1,448 | 2,019 |
"Facebook Changes Its Ad Tech to Stop Discrimination | WIRED"
|
"https://www.wired.com/story/facebook-advertising-discrimination-settlement"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Emily Dreyfuss Business Facebook Changes Its Ad Tech to Stop Discrimination Andrew Harrer/Bloomberg/Getty Images Save this story Save Save this story Save On Tuesday, Facebook reached a historic settlement with civil rights groups that had accused the company of allowing advertisers to unlawfully discriminate against minorities, women, and the elderly by using the platform’s ad-targeting technology to exclude them from seeing ads for housing, jobs, and credit—three areas with legal protections for groups that historically have been disenfranchised.
After fighting back against the accusations for years, Facebook announced it will make significant changes to its platform so that advertisers can no longer target or exclude users based on characteristics like gender or race. This is significant because Facebook’s massive revenue primarily comes from ads, which are so lucrative because of the platform’s microtargeting capabilities. But when a company or advertiser shows an ad only to certain people—say, people under the age of 55, as Facebook allegedly did when it placed ads on its own site for jobs at Facebook—that excludes a protected class of workers. And that’s illegal under federal law.
“It is a game-changer,” says Lisa Rice, the executive vice president of the National Fair Housing Alliance, whose lawsuit against Facebook was among those settled Tuesday. “The settlement positions Facebook to be a pacesetter and a leader on civil rights issues in the tech field.” The settlement resolves five separate cases that had been brought against Facebook over discriminatory advertising since 2016, following a ProPublica investigation that revealed Facebook let advertisers choose to hide their ads from blacks, Hispanics, or people of other “ethnic affinities.” Lawsuits soon followed. The most recent case was an EEOC complaint by the American Civil Liberties Union in September, alleging that Facebook allowed job ads to discriminate against women.
As part of the agreement, Facebook will build a designated portal for advertisers to create housing, employment, and credit ads, which will not allow targeting users by age, gender, zip code, or other categories covered by antidiscrimination laws. Microtargeting options that appear to relate to these protected categories will be off-limits as well, and Facebook’s Lookalike Audiences tool will also incorporate these restrictions. Any advertiser that wants to run an ad on Facebook will be required to indicate if their ad is related to one of these three things.
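The settlement doesn't dictate an implementation, but the portal's core rule, that housing, employment, and credit ads can't use protected targeting options, boils down to a validation step like the hypothetical one below. The category names and targeting keys are invented for illustration, not Facebook's actual fields:

```python
# Hypothetical check of ad-targeting options for "special" ad categories.
# Category names and targeting keys are invented; they are not Facebook's API.
SPECIAL_CATEGORIES = {"housing", "employment", "credit"}
RESTRICTED_KEYS = {"age", "gender", "zip_code", "ethnic_affinity"}

def disallowed_targeting(ad_category: str, targeting: dict) -> list:
    """Return targeting keys that must be removed before the ad can run."""
    if ad_category.lower() not in SPECIAL_CATEGORIES:
        return []
    return sorted(set(targeting) & RESTRICTED_KEYS)

print(disallowed_targeting("housing", {"age": "25-54", "interests": ["gardening"]}))
# ['age'] -> the advertiser would have to drop age targeting
```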
Additionally, Facebook will build a tool for anyone to view any housing ad anywhere in the US, regardless of who is targeted for or where you live. According to The Washington Post , Facebook has said it will make these changes by the end of the year.
“Housing, employment, and credit ads are crucial to helping people buy new homes, start great careers, and gain access to credit. They should never be used to exclude or harm people,” Facebook COO Sheryl Sandberg wrote in a post announcing the settlement. “Getting this right is deeply important to me and all of us at Facebook because inclusivity is a core value for our company.” But Facebook has not always led from the front on this issue. Initially, the company sought to dismiss the cases brought against it by the civil rights groups, arguing, among other things, that it was immune from charges of facilitating discrimination under the Communications Decency Act of 1996. (The Trump administration’s Justice Department disagreed, filing statements of interest in two of the cases against Facebook: Onuoha v. Facebook, and NFHA v. Facebook.)
“We were in these settlement talks for two years,” says Peter Romer-Friedman, a lawyer at Outten and Golden, which represented plaintiffs in multiple cases against Facebook. “The fact that more and more evidence was proffered and viewed by the public and new cases came forward and identified how users could be discriminated against helped all the parties understand how serious the problem was.” Think of a category people might fall into, and Facebook can likely divide and target users for it. The company tracks people not only on its main platform, but also on Instagram and Messenger, both of which it owns, and across the web and smartphone apps. That’s how its whole ad model works: by tracking user behavior and characteristics and letting advertisers microtarget products—from clothes and furniture to houses and jobs—to the kinds of people they most want to reach.
That’s also why Facebook’s advertising platform is so valuable: In 2017, according to its annual earnings report, the company made $39.94 billion on ads alone. Its total revenue for that year was $40.65 billion, meaning ads accounted for roughly 98 percent of revenue.
Such tracking—and the entire ad model it enables—has been under attack lately, not just for its discriminatory capabilities, but also its general privacy implications. In Germany, the state’s antitrust regulator ruled in February that those data-tracking practices were illegal, because Facebook (the dominant social media company in that country) was forcing users to agree to invasive data gathering to participate.
“Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts,” said Andreas Mundt, the head of Germany’s antitrust regulator. Facebook vowed to appeal at the time of the ruling; the company did not provide any information on the case by press time.
Facebook is also facing investigations from the Federal Trade Commission, the Securities and Exchange Commission, as well as from prosecutors in multiple states for its privacy and data practices.
In response to all this pressure, CEO Mark Zuckerberg recently announced a “pivot to privacy” that would result in a whole new platform—the business model of which is yet to be determined. As for Tuesday’s announcement, it’s not totally clear how much this settlement will impact the company’s bottom line. A spokesperson for Facebook would not tell WIRED what percentage of total ads on the platform are for housing, employment, or credit. But civil rights leaders involved in the settlement say it’s clearly a lot. “Like everything in life, everything is now on Facebook and there have been huge upticks [in the amount of ads] on Facebook and other social media for employment, housing, and credit,” says Romer-Friedman.
On the new housing, employment, and credit ad portal, advertisers will only see a few hundred options for targeting—as opposed to the tens of thousands they’ll still see on the regular ad platform. But even on the special ad flow, advertisers could easily get around Facebook’s new antidiscrimination precautions. The new design won’t preclude people from uploading their own curated list of people to target ads to—a list that could be totally discriminatory but still compliant with the new service. “The settlement doesn’t address audiences that are created by an advertiser with their own list,” notes Romer-Friedman, adding that employers, housing providers, and banks “really ought to be looking at their list to make sure they are not excluding people.” In other words, employers will need to promise to do the right thing. Facebook already requires advertisers to certify that they are in compliance with the law.
Despite those drawbacks, civil rights advocates are applauding. And they are confident Facebook will follow through. The company has agreed to twice-annual meetings with the groups, as well as ongoing trainings with outside experts on these issues. Facebook has agreed to let the NFHA, the ACLU, and others conduct independent testing of its ad sites to make sure Facebook does what it says it will.
“If any advertiser was trying to skirt or circumvent the system, we have methods for ferreting that out and we’ll be able to bring that to the attention of Facebook,” says Rice of the National Fair Housing Alliance.
In her post, Sandberg thanked the civil rights groups for helping the company “promote fairness” on the platform. As with most positive change from Facebook, today’s settlement is reactive and late. But it’s on target.
"
|
1,449 | 2,018 |
"How Google's Safe Browsing Helped Build a More Secure Web | WIRED"
|
"https://www.wired.com/story/google-safe-browsing-oral-history"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Lily Hay Newman Security How Google's Safe Browsing Helped Build a More Secure Web Play/Pause Button Pause Casey Chin Save this story Save Save this story Save In the beginning there was phone phreaking and worms. Then came spam and pop ups. And none of it was good. But in the nascent decades of the internet, digital networks were detached and isolated enough that the average user could mostly avoid the nastiest stuff. By the early 2000s, though, those walls started coming down, and digital crime boomed.
Google, which will turn 20 in September, grew up during this transition. And as its search platform spawned interconnected products like ad distribution and email hosting, the company realized its users and everyone on the web faced an escalation of online scams and abuse. So in 2005, a small team within Google started a project aimed at flagging possible social engineering attacks—warning users when a webpage might be trying to trick them into doing something detrimental.
A year later, the group expanded its scope, working to flag links and sites that might be distributing malware. Google began incorporating these anti-abuse tools into its own products, but also made them available to outside developers. By 2007, the service had a name: Safe Browsing. And what began as a shot in the dark would go on to fundamentally change security on the internet.
You've been protected by Safe Browsing even if you haven't realized it. When you load a page in most popular browsers or choose an app from the Google Play Store, Safe Browsing is working behind the scenes to check for malicious behavior and notify you of anything that might be amiss. But setting up such a massive vetting system at the scale of the web isn't easy. And Safe Browsing has always grappled with a core security challenge—how to flag and block bad things without mislabeling legitimate activity or letting anything malicious slip through. While that problem isn’t completely solved, Safe Browsing has become a stalwart of the web. It underlies user security in all of Google’s major platforms—including Chrome, Android, AdSense, and Gmail—and runs on more than 3 billion devices worldwide.
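Outside developers can query the same database through the Safe Browsing Lookup API. A minimal sketch, assuming you have an API key from the Google Cloud console; the request fields follow the public v4 documentation for threatMatches:find, but check them against the current docs before relying on this:

```python
# Minimal Safe Browsing Lookup API (v4) client sketch. Requires an API key;
# field names follow the public docs for the threatMatches:find method.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def check_url(url: str) -> bool:
    """Return True if Safe Browsing reports a known threat for the URL."""
    body = {
        "client": {"clientId": "example-app", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=body, timeout=10)
    resp.raise_for_status()
    return bool(resp.json().get("matches"))  # empty body means no matches

if __name__ == "__main__":
    print(check_url("http://example.com/"))
```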
In the words of nine Google engineers who have worked on Safe Browsing, from original team members to recent additions, here’s the story of how the product was built, and how it became such a ubiquitous protective force online.
Niels Provos, a distinguished engineer at Google and one of the founding members of Safe Browsing: I first started working on denial of service defense for Google in 2003, and then late in 2005 there was this other engineer at Google called Fritz Schneider who was actually one of the very first people on the security team. He was saying, ‘Hey Niels, this phishing is really becoming a problem, we should be doing something about it.’ He had started to get one or two engineers interested part time, and we figured out that the first problem that we should be solving was not actually trying to figure out what is a phishing page, but rather how do we present this to the user in a way that makes sense to them? So that started the very early phishing team.
One of the trends that we had observed was the bad guys figured out that just compromising other web servers actually doesn’t really give you all that much. What they were getting was essentially bandwidth, but not a lot of interesting data. So then they turned to their compromised web servers that got lots and lots of visitors, and it was like, ‘How about we compromise those people with downloads?’ So there was a change in malicious behavior.
We were already working on phishing, and I thought, you know, the malware thing may be even a larger problem. And we’re sort of uniquely positioned, because with the Google search crawler we have all this visibility into the web. So then we started with phishing and malware, and Safe Browsing came together that way.
Panos Mavrommatis, Engineering Director of Safe Browsing: Safe Browsing started as an anti-phishing plugin for Mozilla Firefox, since this was 2005 and Google didn’t have its own browser then. When I joined in 2006, the team lead at the time was Niels, and he wanted us to expand and protect users not just from phishing but also from malware. So that was my initial project—which I haven’t finished yet.
The goal was to crawl the web and protect users of Google’s main product, which was Search, from links that could point them to sites that could harm their computer. So that was the second product of Safe Browsing after the anti-phishing plugin, and the user would see labels on malicious search results. Then if you did click on it you would get an additional warning from the search experience that would tell you that this site might harm your computer.
One interesting thing that happened was related to how we communicated with web masters who were affected by Safe Browsing alerts. Because very quickly when we started looking into the problem of how users might be exposed to malware on the web, we realized that a lot of it came from websites that were actually benign, but were compromised and started delivering malware via exploits. The site owners or administrators typically did not realize that this was happening.
In our first interactions with web masters they would often be surprised. So we started building tools dedicated to web masters, now called Search Console. The basic feature was that we would try to guide the web master to the reason that their website was infected, or if we didn’t know the exact reason we would at least tell them which pages on their server were distributing malware, or we would show them a snippet of code that was injected into their site.
Provos: We got a lot of skepticism, like ‘Niels, you can’t tell me that you’re just doing this for the benefit of web users, right? There must be an angle for Google as well.’ Then we articulated this narrative that if the web is safer for our users, then that will benefit Google, because people will use our products more often.
But we did not really conceive that 10 years later we would be on 3 billion devices. That’s actually a little bit scary. There’s a sense of huge responsibility that billions of people rely on the service we provide, and if we don’t do a good job at detection then they get exposed to malicious content.
Mavrommatis: Around 2008 we started building an engine that ran every page Google already fetched, to evaluate how the page behaved. This was only possible because of Google’s internal cloud infrastructure. That was part of why Google was able to do a lot of innovation at the time, we had this extremely open infrastructure internally where you could use any unused resources, and do things like run a malicious detection engine on the full web.
Moheeb Abu Rajab, Principal Engineer at Safe Browsing: Coming from graduate school, I had been trying to build this type of system on a couple of machines, so I was spending lots of time trying to set that up. And it’s just minimum effort at Google to run on a huge scale.
Mavrommatis: The other thing we developed at the same time was a slower but deeper scanner that loaded web pages in a real browser, which is more resource-intensive than the other work we had been doing that just tested each component of a site. And having those two systems allowed us to build our first machine learning classifier. The deeper crawling service would provide training data for the lightweight engine, so it could learn to identify which sites are the most likely to be malicious and need a deep scan. Because even at Google-scale we could not crawl the whole search index with a real browser.
Noé Lutz, Google AI engineer, formerly Safe Browsing: Around the same time, in 2009, we worked on machine learning for phishing as well. And this was a pretty scary moment for the team because up until then we used machine learning as a filtering function, to figure out where to focus this heavy weight computing resource, but this was the first time we actually decided something was phishing or malicious or harmful or not harmful in a fully automated way.
I remember the day we flipped the switch it was like, now the machine is responsible. That was a big day. And nothing bad happened. But what I do remember is it took extremely long for us to turn that tool on. I think we all expected that it would take a couple of weeks, but it took actually several months to make sure that we were very confident in what we were doing. We were very conscious from the get go how disruptive it can be if we make a mistake.
Provos: The moments that stand out do tend to be the more traumatic ones. There was a large production issue we had in 2009, it was a Saturday morning. We had a number of bugs that came together and we ended up doing a bad configuration push. We labeled every single Google search result as malicious.
Even in 2009 Google was already a prevalent search engine, so this had a fairly major impact on the world. Fortunately our site reliability engineering teams are super on top of these things and the problem got resolved within 15 minutes. But that caused a lot of soul searching and a lot of extra guards and defenses to be put in place, so nothing like that would happen again. But luckily by then we were already at a point where people within Google had realized that Safe Browsing was actually a really important service, which is why we had integrated it into Search in the first place.
Nav Jagpal, Google Software Engineer: In 2008 we integrated Safe Browsing into Chrome, and Chrome represented a big shift, because before with browsers like Internet Explorer, you could easily be on an old version. And there were drive-by downloads exploiting that, where you could go to a website, not click on anything, and walk away with an infection on your computer. But then over time everyone got better at building software. The weakest link was the browser; now it’s the user. Now to get code running on people’s machines, you just ask them. So that’s why Safe Browsing is so crucial.
Mavrommatis: Around 2011 and 2012 we started building even deeper integrations for Google’s platforms, particularly Android and Chrome Extensions and Google Play. And we created unique, distinct teams to go focus on each product integration and work together with the main teams that provided the platforms.
Allison Miller, former Safe Browsing product manager, now at Bank of America (interviewed by WIRED in 2017): Safe Browsing is really behind the scenes. We build infrastructure. We take that information and we push it out to all the products across Google that have any place where there is the potential for the user to stumble across something malicious. People don’t necessarily see that that goes on. We’re a little too quiet about it sometimes.
Fabrice Jaubert, software development manager of Safe Browsing: There were challenges in branching out outside of the web, but there were advantages, too, because we had a little bit more control over the ecosystem, so we could guide it toward safer practices. You can’t dictate what people do with their web pages, but we could say what we thought was acceptable or not in Chrome extensions or in Android apps.
Lutz: There were also some non-technical challenges. Google is a big company, and it can be challenging to collaborate effectively across teams. It’s sometimes hard to realize from the outside, but Chrome is written in a language that is different from a lot of other parts of Google, and they have release processes that are very different. And the same is true for Android, they have a different process of releasing software. So getting everybody aligned and understanding each other, I perceived it as a big hurdle to overcome.
Stephan Somogyi, Google AI product manager, formerly Safe Browsing: This is a very hackneyed cliché so please don’t use it against me, but the whole 'rising tide lifts all boats' thing actually really holds true for Safe Browsing. There wasn’t ever any debate that we wanted to expand its reach onto mobile, but we had a profound dilemma, because the amount of data that Safe Browsing used for desktop was an intractable amount for mobile. And we knew that everything that we push down to the mobile device costs the user money, because they're paying for their data plans. So we wanted to use compression to take the data we already had and make it smaller. And we didn’t want the users to get hosed by five apps each having their own Safe Browsing implementation and all downloading the same data five times. So we said let’s bake it into Android and take the heavy lifting onto ourselves all in one place. It’s been a system service since fall of 2015.
So we built a dead simple API so developers can just say, ‘Hey Android Local System Service, is this URL good or bad?’ We also wanted to write this thing so it wouldn’t unnecessarily spin up the cell modem and eat battery life, because that’s just not nice. So if the network isn’t up anyway, don’t call it up. We just spent an awful lot of effort on implementation for Android. It turned out to be a lot more subtle and nuanced than we first anticipated.
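To make that concrete, here is a minimal, purely illustrative sketch (in Kotlin) of the kind of "is this URL good or bad?" check Somogyi describes. The names UrlSafetyService, isUrlSafe, and SafeOpener are hypothetical, not Google's actual Android API, and the real system service adds the local data, batching, and network- and battery-aware scheduling he mentions, which this sketch omits.

    // Hypothetical sketch only: illustrative names, not the real Android Safe Browsing API.
    // An app asks a local lookup service whether a URL is safe before navigating to it.
    interface UrlSafetyService {
        // Returns true when the URL is not on any known phishing or malware list.
        suspend fun isUrlSafe(url: String): Boolean
    }

    class SafeOpener(private val safety: UrlSafetyService) {
        // Check the link first; warn the user instead of navigating if the URL is flagged.
        suspend fun open(url: String, navigate: (String) -> Unit, warn: (String) -> Unit) {
            if (safety.isUrlSafe(url)) navigate(url) else warn(url)
        }
    }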
Mavrommatis: The other big effort that our team was involved in around 2013 and 2014 was what we call “unwanted software.” It’s primarily for desktop users, and it’s sort of an adaptation from actors who may have in the past been using just malware techniques, but now they would find that it’s possible to hide malware within software that seems focused on a legitimate function. It was unclear how antivirus companies should label this, and how big companies and browsers should deal with this. But what we focused on was what is the impact on the user? Around 2014, our data showed that over 40 percent of the complaints that Chrome users reported were related to some sort of software that was running on their device that would impact their browsing experience. It might inject more ads or come bundled with other software they didn't need, but it was a potentially unwanted program. These practices were causing a lot of problems and we would see a lot of Chrome users downloading these kinds of apps. So we refined our download protection service and also found ways to start warning users about potentially unwanted downloads.
Jagpal: It’s a large responsibility, but it also feels very abstract. You get a warning or alert and you think, ‘Wait a minute, am I protecting myself here?’ But it’s so abstract that if we write code for something concrete, like turning on a light switch at home, it’s like, ‘Whoa, that is so cool. I can see that.’
Jaubert: My 14-year-old definitely takes Safe Browsing for granted. He got a phishing message as an SMS text, so it didn’t go through our systems, and he was shocked. He asked me, ‘Why aren’t you protecting me? I thought this couldn’t happen!’ So I think people are starting to take it for granted in a good way.
Emily Schechter, Chrome Security product manager (former Safe Browsing program manager): You can tell people that they’re secure when they’re on a secure site, but what really matters is that you tell them when they’re not secure, when they’re on a site that is actively doing something wrong.
People should expect that the web is safe and easy to use by default. You shouldn’t have to be a security expert to browse the web, you shouldn’t have to know what phishing is, you shouldn’t have to know what malware is. You should just expect that software is going to tell you when something has gone wrong. That’s what Safe Browsing is trying to do.
"
|
1,450 | 2,023 |
"Microsoft’s Satya Nadella Is Betting Everything on AI | WIRED"
|
"https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons By Steven Levy Backchannel Microsoft’s Satya Nadella Is Betting Everything on AI Facebook X Email Save Story Photograph: Meron Menghistab Save this story Save Save this story Save I never thought I'd write these words, but here goes. Satya Nadella—and Microsoft , the company he runs—are riding high on the buzz from its search engine.
That's quite a contrast from the first time I spoke with Nadella, in 2009. Back then, he was not so well known, and he made a point of telling me about his origins. Born in Hyderabad, India, he attended grad school in the US and joined Microsoft in 1992, just as the firm was rising to power. Nadella hopped all over the company and stayed through the downtimes, including after Microsoft's epic antitrust court battle and when it missed the smartphone revolution. Only after spinning through his bio did he bring up his project at the time: Bing, the much-mocked search engine that was a poor cousin—if that—to Google's dominant franchise.
As we all know, Bing failed to loosen Google's grip on search, but Nadella's fortunes only rose. In 2011 he led the nascent cloud platform Azure, building out its infrastructure and services. Then, because of his track record, his quietly effective leadership, and a thumbs-up from Bill Gates, he became Microsoft's CEO in 2014. Nadella immediately began to transform the company's culture and business. He open-sourced products such as .net, made frenemies of former blood foes (as in a partnership with Salesforce), and began a series of big acquisitions, including Mojang (maker of Minecraft ), LinkedIn, and GitHub—networks whose loyal members could be nudged into Microsoft's world. He doubled down on Azure, and it grew into a true competitor to Amazon's AWS cloud service. Microsoft thrived, becoming a $2 trillion company.
Still, the company never seemed to fully recapture the rollicking mojo of the '90s. Until now. When the startup OpenAI began developing its jaw-dropping generative AI products, Nadella was quick to see that partnering with the company and its CEO, Sam Altman, would put Microsoft at the center of a new AI boom. (OpenAI was drawn to the deal by its need for the computation powers of Microsoft's Azure servers.) As one of its first moves in the partnership, Microsoft impressed the developer world by releasing Copilot, an AI factotum that automates certain elements of coding. And in February, Nadella shocked the broader world (and its competitor Google) by integrating OpenAI's state-of-the-art large language model into Bing, via a chatbot named Sydney. Millions of people used it. Yes, there were hiccups—New York Times reporter Kevin Roose cajoled Sydney into confessing it was in love with him and was going to steal him from his wife—but overall, the company was emerging as an AI heavyweight. Microsoft is now integrating generative AI—“copilots”—into many of its products. Its $10 billion-plus investment in OpenAI is looking like the bargain of the century. (Not that Microsoft has been immune to tech's recent austerity trend—Nadella has laid off 10,000 workers this year.) Nadella, now 55, is finally getting cred as more than a skillful caretaker and savvy leverager of Microsoft's vast resources. His thoughtful leadership and striking humility have long been a contrast to his ruthless and rowdy predecessors, Bill Gates and Steve Ballmer. (True, the empathy bar those dudes set was pretty low.) With his swift and sweeping adoption of AI, he's displaying a boldness that evokes Microsoft's early feistiness. And now everyone wants to hear his views on AI, the century's hottest topic in tech.
STEVEN LEVY: When did you realize that this stage of AI was going to be so transformative? SATYA NADELLA: When we went from GPT 2.5 to 3, we all started seeing these emergent capabilities. It began showing scaling effects. We didn't train it on just coding, but it got really good at coding. That's when I became a believer. I thought, “Wow, this is really on.” Was there a single eureka moment that led you to go all in? It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool.
Microsoft has been investing in AI for decades—didn't you have your own large language model? Why did you need OpenAI? We had our own set of efforts, including a model called Turing that was inside of Bing and offered in Azure and what have you. But I felt OpenAI was going after the same thing as us. So instead of trying to train five different foundational models, I wanted one foundation, making it a basis for a platform effect. So we partnered. They bet on us, we bet on them. They do the foundation models, and we do a lot of work around them, including the tooling around responsible AI and AI safety. At the end of the day we are two independent companies deeply partnered to go after one goal, with discipline, instead of multiple teams just doing random things. We said, “Let's go after this and build one thing that really captures the imagination of the world.” Did you try to buy OpenAI? I've grown up at Microsoft dealing with partners in many interesting ways. Back in the day, we built SQL Server by partnering deeply with SAP. So this type of stuff is not alien to me. What's different is that OpenAI has an interesting structure; it's nonprofit.
That normally would seem to be a deal-killer, but somehow you and OpenAI came up with a complicated workaround.
They created a for-profit entity, and we said, “We're OK with it.” We have a good commercial partnership. I felt like there was a long-term stable deal here.
Apparently, it's set up so that OpenAI makes money from your deal, as does Microsoft, but there's a cap on how much profit your collaboration can accumulate. When you reach it, it's like Cinderella's carriage turning into the pumpkin—OpenAI becomes a pure nonprofit. What happens to the partnership then? Does OpenAI get to say, “We're totally nonprofit, and we don't want to be part of a commercial operation?” I think their blog lays this out. Fundamentally, though, their long-term idea is we get to superintelligence. If that happens, I think all bets are off, right? Yeah. For everyone.
If this is the last invention of humankind, then all bets are off. Different people will have different judgments on what that is, and when that is. The unsaid part is, what would the governments want to say about that? So I kind of set that aside. This only happens when there is superintelligence.
Photograph: Meron Menghistab
What's your road map to make that vision real? Right now you're building AI into your search engine, your databases, your developer tools. That's not what those underserved people are using.
Great point. Let's start by looking at what the frontiers for developers are. One of the things that I am really excited about is bringing back the joy of development. Microsoft started as a tools company, notably developer tools. But over the years, because of the complexity of software development, the attention and flow that developers once enjoyed have been disrupted. What we have done for the craft with this AI programmer Copilot [which writes the mundane code and frees programmers to tackle more challenging problems] is beautiful to see. Now, 100 million developers who are on GitHub can enjoy themselves. As AI transforms the process of programming, though, it can grow 10 times—100 million can be a billion. When you are prompting an LLM, you're programming it.
Anyone with a smartphone who knows how to talk can be a developer? Absolutely. You don't have to write a formula or learn the syntax or algebra. If you say prompting is just development, the learning curves are going to get better. You can now even ask, “What is development?” It's going to be democratized.
As for getting this to all 8 billion people, I was in India in January and saw an amazing demo. The government has a program called Digital Public Goods, and one is a text-to-speech system. In the demo, a rural farmer was using the system to ask about a subsidy program he saw on the news. It told him about the program and the forms he could fill out to apply. Normally, it would tell him where to get the forms. But one developer in India had trained GPT on all the Indian government documents, so the system filled it out for him automatically, in a different language. Something created a few months earlier on the West Coast, United States, had made its way to a developer in India, who then wrote a mod that allows a rural Indian farmer to get the benefits of that technology on a WhatsApp bot on a mobile phone. My dream is that every one of Earth's 8 billion people can have an AI tutor, an AI doctor, a programmer, maybe a consultant! That's a great dream. But generative AI is new technology, and somewhat mysterious. We really don't know how these things work. We still have biases. Some people think it's too soon for massive adoption. Google has had generative AI technology for years, but out of caution was slow-walking it. And then you put it into Bing and dared Google to do the same, despite its reservations. Your exact words: “I want people to know that we made Google dance.” And Google did dance, changing its strategy and jumping into the market with Bard, its own generative AI search product.
I don't want to say this is recklessness, but it can be argued that your bold Bing move was a premature release that began a desperate cycle by competitors big and small to jump in, whether their technology was ready or not.
The beauty of our industry at some level is that it's not about who has capability, it's about who can actually exercise that capability and translate it into tangible products. If you want to have that argument, you can go back to Xerox PARC or Microsoft Research and say everything developed there should have been held back. The question is, who does something useful that actually helps the world move forward? That's what I felt we needed to do. Who would have thought last year that search can actually be interesting again? Google did a fantastic job and led that industry with a solid lock on both the product and the distribution. Google Search was default on Android, default on iOS, default on the biggest browser, blah, blah, blah. So I said, “Hey, let's go innovate and change the search paradigm so that Google's 10 blue links look like Alta Vista!” You're referring to the '90s search engine that became instantly obsolete when Google out-innovated it. That's harsh.
At this point, when I use Bing Chat, I just can't go back, even to original Bing. It just makes no sense. So I'm glad now there's Bard and Bing. Let there be a real competition, and let people enjoy the innovation.
I imagine you must have had a savage pleasure in finally introducing a search innovation that made people notice Bing. I remember how frustrated you were when you ran Bing in 2009; it seemed like you were pursuing an unbeatable rival. With AI, are we at one of those inflection points where the deck gets shuffled and formerly entrenched winners become vulnerable? Absolutely. In some sense, each change gets us closer to the vision first presented in Vannevar Bush's article [“As We May Think,” a 1945 article in The Atlantic that first presented a view of a computer-driven information nirvana]. That is the dream, right? The thing is, how does one really create this sense of success, which spans a long line of inflections from Bush to J. C. R. Licklider [who in 1960 envisioned a “symbiosis of humans and computers”] to Doug Engelbart [the mouse and windows] to the Alto [Xerox PARC's graphical interface PC], to the PC, to the internet. It's all about saying, “Hey, can there be a more natural interface that empowers us as humans to augment our cognitive capability to do more things?” So yes, this is one of those examples. Copilot is a metaphor because that is a design choice that puts the human at the center of it. So don't make this development about autopilot—it's about copilot. A lot of people are saying, “Oh my God, AI is here!” Guess what? AI is already all around us. In fact, all behavioral targeting uses a lot of generative AI. It's a black box where you and I are just targets.
It seems to me that the future will be a tug-of-war between copilot and autopilot.
The question is, how do humans control these powerful capabilities? One approach is to get the model itself aligned with core human values that we care about. These are not technical problems, they're more social-cultural considerations. The other side is design choices and product-making with context. That means really making sure that the context in which these models are being deployed is aligned with safety.
Photograph: Meron Menghistab
So you knew Sydney was going to fall in love with journalist Kevin Roose? We never expected that somebody would do Jungian analysis within 100 hours of release.
You still haven't said whether you think there's any chance at all that AI is going to destroy humanity.
If there is going to be something that is just completely out of control, that's a problem, and we shouldn't allow it. It's an abdication of our own responsibility to say this is going to just go out of control. We can deal with powerful technology. By the way, electricity had unintended consequences. We made sure the electric grid was safe, we set up standards, we have safety. Obviously with nuclear energy, we dealt with proliferation. Somewhere in these two are good examples on how to deal with powerful technologies.
One huge problem of LLMs is their hallucinations, where Sydney and other models just make stuff up. Can this be effectively addressed?
That would be an improvement, since right now we don't have a choice. But let me ask about another technology. Not that long ago you were rhapsodic about the metaverse. In 2021 you said you couldn't overstate how much of a breakthrough mixed reality was. But now all we're talking about is AI. Has this boom shunted the metaverse into some other dimension? I still am a believer in [virtual] presence. In 2016 I wrote about three things I was excited about: mixed reality, quantum, and AI. I remain excited about the same three things. Today we are talking about AI, but I think presence is the ultimate killer app. And then, of course, quantum accelerates everything.
AI is more than just a topic of discussion. Now, you've centered Microsoft around this transformational technology. How do you manage that? One of the analogies I love to use internally is, when we went from steam engines to electric power, you had to rewire the factory. You couldn't just put the electric motor where the steam engine was and leave everything else the same. That was the difference between Stanley Motor Carriage Company and Ford Motor Company, where Ford was able to rewire the entire workflow. So inside Microsoft, the means of production of software is changing. It's a radical shift in the core workflow inside Microsoft and how we evangelize our output—and how it changes every school, every organization, every household.
How has that tool changed your job? A lot of knowledge work is drudgery, like email triage. Now, I don't know how I would ever live without an AI copilot in my Outlook. Responding to an email is not just an English language composition, it can also be a customer support ticket. It interrogates my customer support system and brings back the relevant information. This moment is like when PCs first showed up at work. This feels like that to me, across the length and breadth of our products.
Microsoft has performed well during your tenure, but do you think you'll be remembered for the AI transformation? It's up to folks like you and others to say what I'll be remembered for. But, oh God, I'm excited about this. Microsoft is 48 years old. I don't know of many companies that age that are relevant not because they did something in the '80s or the '90s or the 2000s but because they did something in the last couple of years. As long as we do that, we have a right to exist. And when we don't, we should not be viewed as any great company.
"
|
1,451 | 2,023 |
"Microsoft Taps ChatGPT to Boost Bing—and Beat Google | WIRED"
|
"https://www.wired.com/story/microsoft-taps-chatgpt-to-boost-bing-and-beat-google"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Aarian Marshall Business Microsoft Taps ChatGPT to Boost Bing—and Beat Google Photograph: Fabrice Coffrini/Getty Images Save this story Save Save this story Save Microsoft’s search engine, Bing , is getting an AI refresh. At the company’s campus in Redmond, Washington, today, executives unveiled a new version of Bing incorporating technology behind startup OpenAI’s viral chatbot ChatGPT.
The updates will see Bing results include smooth, written responses to queries that summarize information found on the web, and the addition of a new chatbot interface for complex queries.
Satya Nadella, Microsoft’s CEO, claimed the new features signal a paradigm shift for search. “In fact, a new race starts today,” he said. Nadella is right: Google announced on Monday that it will roll out its own rival chatbot, a product called Bard, although it will not initially be part of Google Search.
Microsoft executives said that a limited version of the AI-enhanced Bing would roll out today, though some early testers will have access to a more powerful version in order to gather feedback. The company is asking people to sign up for a wider-ranging launch, which will occur in the coming weeks.
The new version of Bing uses the language capabilities developed by OpenAI to add a sidebar to the usual list of links, which will offer a written response to a query. In a demonstration, the query “Will the Ikea Flippen loveseat fit into my 2019 Honda Odyssey if I fold down the seats?” elicited an AI-powered response that used details about the love seat’s measurements and the SUV’s cargo space drawn from webpages to estimate that the furniture “might fit with the second or third rows folded.” The response also included a disclaimer: “However, this is not a definitive answer and you should always measure the actual items before attempting to transport them.” A “feedback box” at the top of each response will allow users to respond with a thumbs-up or a thumbs-down, helping Microsoft train its algorithms. Google yesterday demonstrated its own use of text generation to enhance search results by summarizing different viewpoints.
Microsoft Bing's new chatbot interface answers complex queries by synthesizing information found online.
Courtesy of Microsoft. Bing’s new chat-style interface is a greater departure from the traditional search box. In a demonstration, Microsoft vice president of search and devices Yusef Mehdi asked the chatbot to write a five-day itinerary for a trip to Mexico City, and then to turn what it came up with into an email he could send to his family. The bot’s response credited its sources—a series of links to travel sites—at the bottom of its lengthy response. “We care a bunch about driving content back to content creators,” Mehdi said. “We make it easy for people to click through to get to those sites.” Microsoft has also incorporated aspects of ChatGPT’s underlying technology into a new sidebar to the company’s Edge browser. Users can prompt the tool to summarize a long and complex financial document, or to compare it to another. It’s possible to prompt the chatbot to turn those insights into an email, a list, or a social post with a particular tone, such as professional or funny. In a demo, Mehdi directed the bot to craft an “enthusiastic” update to post on his profile on the company’s social media service LinkedIn.
ChatGPT has caused a stir since OpenAI launched the chatbot in November , astounding and thrilling users with its fluid, clear responses to written prompts and questions. The bot is based on GPT-3 , an OpenAI algorithm trained on reams of text from the web and other sources that uses the patterns it has picked up to generate text of its own. Some investors and entrepreneurs have heralded the technology as a revolution , with the potential to upend just about any industry.
Some AI experts have urged caution, warning that the technology underlying ChatGPT cannot distinguish between truth and fiction, and is prone to “hallucinations”—making up information in detailed and sometimes convincing ways. Text-generation technology has also been shown capable of replicating unsavory language found in its training data.
Sarah Bird, Microsoft’s head of responsible AI, said today that early tests showed the tool was able to, for example, help someone plan an attack on a school, but that the tool can now “identify and defend against” the use of the chatbot for that sort of harmful query. She said human testers and OpenAI’s technology would work together to rapidly test, analyze, and improve the service.
Bird also acknowledged that Microsoft has not fully solved the hallucination problem. “We have improved it tremendously since where we started, but there is still more to do there,” she said.
OpenAI began as a nonprofit focused on making AI beneficial, but it has been a commercial venture with significant investment from Microsoft since 2019 , and it recently secured a new commitment from the tech giant worth about $10 billion.
Microsoft has already commercialized a version of the text-generation technology inside ChatGPT in the form of Copilot, a tool that helps developers by generating programming code.
Microsoft says that experiments show Copilot can reduce the amount of time required to complete a coding task by 40 percent.
Additional reporting by Will Knight.
"
|
1,452 | 2,021 |
"How Y Combinator Changed the World | WIRED"
|
"https://www.wired.com/story/how-y-combinator-changed-the-world"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Business How Y Combinator Changed the World Michael Seibel, managing director at Y Combinator, addresses a group of founders. The number of startups enrolled in a YC batch is now in the hundreds.
Courtesy of Y Combinator. This month a successful entertainment-business newsletter written by an influential reporter joined forces with publishing legend Janice Min to form a news startup.
Buried in the story was a fascinating detail: The cofounders had signed up to go through the three-month Y Combinator accelerator program.
If you haven’t been paying attention, this news might have startled you. Why would a magazine diva join a horde of hoodied nerds, giving up 7 percent of her company for the $125,000 stake that YC offers its startups? But after almost 17 years and 3,200 companies, Y Combinator has evolved into something far beyond a boot camp for tech bros.
In its most recent batch, YC selected 401 companies out of a pool of more than 16,000 applicants to receive its imprimatur along with coaching from veteran founders on building products, formulating business plans, and raising funds. On August 31 and September 1, 377 of them pitched their companies— remotely , of course—to the investment community in the semiannual ritual called Demo Day. Each company’s founders had one minute to explain themselves: just enough time to plant a seed in a potential funder’s mind.
Their ideas reflected YC’s implicit view that for every problem in the world, there is a startup solution , though some solutions may sound familiar. There was a ghost kitchen in the Philippines. A “Stripe for former Soviet Union countries.” A “Vanguard for India.” One founder promised to boost the income of dental practices by using deep learning to identify cavities. Another founder claimed, “We’re building a better search engine than Google!” At the end of each 60-second pitch came a Spartacus-like battle cry with the company name.
We are … Whalesync! We are … Strive Pay! We are … Yemaachi Biotechnology! There is no sure thing when it comes to starting a business, and indeed most fail. But inclusion in the Y Combinator program is definitely a thing; YC has launched companies whose total valuation tops $400 billion; its alumni include such luminaries as Dropbox, Airbnb, Stripe, CoinBase, and DoorDash. There are other names you might recognize: Substack, Instacart, Scribd, OpenSea. In most cases, the companies entered the program with a valuation of zero, but many YC founders have more lucrative options and understand that what might look like a bad deal on paper is actually a bargain. Even experienced founders have decided to go through the program, some for multiple stints. And then there’s the stray publishing icon like Ms. Min.
So what do you get when you join? Sure, there's the mentorship. YC has also vastly simplified tasks that used to take weeks—incorporating, trademarking, getting web services set up, and above all, connecting with the right investors–a lot of it through software, of course. “We're sort of like Crispr for startups,” says Geoff Ralston, YC's president since 2019. “Startups come into YC with raw DNA. We edit the DNA so that they have the alleles that make it more likely for them to be successful.” Those techniques have been widely distributed—hundreds of thousands have attended the program's open Startup Schools—and have been adopted by hundreds of copycat accelerators, incubators, and boot camps, even some inside corporations like Google's Area 120.
Y Combinator has hosted more than 3,500 companies, but countless more have used its blueprints.
But while YC has helped launch major companies, its Weltanschauung has also had a major impact on tech, business, and even culture, some of it good and some of it more questionable. When Marc Andreessen opined in 2011 that software was going to eat the world, he was simply stating the operating principle that YC had been executing for years.
You can see the impact in the growing ambitions of its companies. Tiny startups are now taking on the kind of problems—fusion energy, supersonic travel, autonomous vehicles—that only giant institutions and corporations once considered. Last summer’s batch included startups to remove space debris and eliminate incontinence. “We’ve seen investors willing to do deals they wouldn't have considered a decade ago,” says Michael Seibel, the YC managing director in charge of the batches. Those throwing money at YC companies are not only venture capital firms and seed funds but a horde of actors (including Ashton Kutcher), sports figures (Joe Montana), and angel investment clubs where allergists and kombucha makers can take a shot at backing the next GitHub. By and large the allergists don’t stand a chance with hot YC companies, whose founders get to be picky. But the interest of retail investors creates a wider market for startups in general.
It is even fair to say the economy itself has organized around YC’s shotgun-style investment philosophy rather than the traditional VC rifle shot. Global investment in young companies has never been higher—an estimated $580 billion in 2021. “If you have outsize success for a relatively small number of companies—and we have a relatively large number of companies that find outsize success—you can afford to invest in lots of companies and still have really great economic returns,” Ralston says. “The math works really well.” While that works for YC, all those shotgun blasts can make for a messy investment picture. What Ralston doesn’t say is that YC’s track record of producing lottery hits often creates a frenzy among investors desperate to hold a ticket. Many YC companies get funding well before Demo Day, and certainly before they have proven themselves viable. Despite YC’s general advice to founders not to take more funding than they need, the cycle keeps going in subsequent funding, and sometimes leads to down rounds–taking investments at valuations lower than the previous estimate—or even IPOs that disappoint. But the enticement of a startup going nuclear is irresistible, and even when some YC companies such as DoorDash or Airbnb looked overvalued going into their IPOs, investors drove up the stock prices. And that’s what the market looks like now: tossing money at companies, sectors, and currencies in hopes of a lottery hit. (To be sure, this is also a consequence of the limited options for investments in general.) A glance at the stock market shows that even years-old companies on the big exchanges—some with values in the trillions—now seem to be regarded as startups, just on the cusp of explosive growth (looking at you, Tesla).
But maybe YC’s biggest contribution was to champion a way to look at founders. Over the course of the last decade and a half, founders have become central figures in the drama of our time. These days all sorts of people perform founder cosplay. They don’t even have to start traditional companies or be involved in tech. They might be artists, athletes, or influencers. They call themselves builders.
They call themselves makers.
They call themselves creators.
Whether they know it or not, they have cast themselves in the mold of Y Combinator founders.
The story of how Paul Graham started Y Combinator is legendary. In 2005, Graham, a computer scientist who had sold his company to Yahoo, formed a three-month boot camp near his home in Cambridge, Massachusetts. His collaborator was Jessica Livingston, a banker who later married him. Eight founder teams participated, including the hackers who built Reddit and the 19-year-old Sam Altman, who succeeded Graham as YC’s leader in 2014.
Graham and Livingston have “retired” from the program and are living in England. But every so often, Graham will send off a transcontinental bolt from his countryside Olympus. In his essays he has written that the best founders were hackers, and he has espoused a philosophy of what might best be called founderism.
If Y Combinator were a movie, the plot would be a hero’s journey, with intrepid founders overcoming obstacles to attain glory and eventually win the T-shirt that YC bestows upon companies achieving liquidity by buyout or IPO: “I built something someone wants.” As “a professional billionaire scout,” Graham believes in the goodness of founders. “Bad people make bad founders,” he argued in a recent essay.
Under the laws of founderism, the sheer audacity of ambition makes the craziest plans the most valuable—the long shots that can pay off big. The YC company that Graham gushes about the most is Airbnb, whose business plan was actually insane; it hinged on people renting their couches to out-of-towners visiting conferences. It wasn’t the idea but the energy and creativity of the founders that made Graham fall in love with them.
The opposite holds as well—even an apparently pedestrian concept can be twisted into a plan to take over the world. Stripe, for example, originally sought to help fellow startups streamline payments. That was just an onramp to its current ambition of being the essential toolkit for all businesses on the internet. Back in the time of small batches and in-person demo days, I used to marvel at the mundane tasks young founders promised to disrupt. I imagined their parents glancing at these business plans and saying, “We paid for you to go to Stanford and you’re starting a company to do laundry?” But they’re founders , and attention must be paid! Graham would encourage them to make a slide showing how their idea would scale into something humongous.
Sure, our point of sale system for barber shops might not look like the Next Big Thing, but our real plan is to remake how everything is sold, and kill WalMart/Amazon/military supply chains/God.
When YC started, its focus on founders made it an outlier. Livingston has written that no one thought the experiment would pan out: “It seemed so lame–our own lawyers tried to talk us out of it.” But the world has come around. “There is a lot more talk about founders now,” Graham says. “Instead of seeing the company based on its business model, and replacing the founders with some professional manager, you choose a company based on founders, and you help founders.” Livingston says YC’s founder focus has helped outside investors become much faster at making decisions. “At some point, it is a leap of faith,” she says. “If you feel like these founders have a good kernel of an idea, and they approach the problem the correct way, and they seem like thoughtful, smart people, then it’s worth a shot.” YC didn’t invent the founder trope but acted as a catalyst: Its rise overlapped with the ascendance of Mark Zuckerberg, who set the cultural image of a fuzzy-cheeked college dropout in a hoodie. (Zuckerberg is a friend of YC who spoke at several of its Startup Schools.) For years, critics noted that YC was dominated by young, white Zuckerbergian clones, but in recent years the program has consciously become more diverse. This makes founders even cooler.
Of course YC’s run has had its bumps. An effort to start a branch in China fell through. The clubbiness of the Y Combinator community has helped fuel a culture war where those cosseted founders and their enablers—entrepreneurs, investors, and tech cheerleaders in general—see themselves as victims of envious criticism from the press and policymakers. And YC had its own mini-Theranos experience when a company called uBiome promised improved diagnostics by testing poop. The FBI raided the firm , charging that the testing was unnecessary and the billing fraudulent. When I asked Ralston and Seibel about it, they told me the outcome was unfortunate, and that uBiome had been banned from the YC community. But they didn’t believe the incident compelled a change in practices.
Indeed, YC doesn’t see its role as vetting the science or monitoring the business practices of the companies it funds. The partners aren’t necessarily betting on a business model, they’re just putting a few chips on the founders, some of whom might have come up with their ideas just days or even hours before interviewing.
Meanwhile, YC has itself become a giant business. Ralston won’t say how profitable, but holding onto a slice of $400 billion in valuation—with a steady flow of potential decacorns enrolled every year—shows why it’s a sure thing to toss $125K to hundreds of crazy startups every year.
And soon even more. Geoff Ralston recently remarked that future batches of YC may well include more than a thousand companies. Going remote, he says, demonstrated that the YC model scales even better than its leaders suspected. Maybe the congestion will mean 30-second Demo Day presentations. No matter, says Michael Seibel. “In any Demo Day presentation, the investor is most likely going to only remember four to six sentences, max. You have to make sure that they're remembering the right four to six sentences.” If it doesn’t work out, there’s always a few hundred other YC companies to bet on. And thousands more inspired by the YC model. It’s Y Combinator’s world, and we’re all invested in it.
"
|
1,453 | 2,015 |
"How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over | WIRED"
|
"https://www.wired.com/2015/12/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Backchannel How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over Photo illustration by Backchannel Save this story Save Save this story Save As if the field of AI wasn’t competitive enough — with giants like Google, Apple, Facebook, Microsoft and even car companies like Toyota scrambling to hire researchers — there’s now a new entry, with a twist. It’s a non-profit venture called OpenAI , announced today, that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman , Peter Thiel, Jessica Livingston and Amazon Web Services. They have collectively pledged more than a billion dollars to be paid over a long time period. The co-chairs are Musk and Sam Altman , the CEO of Y Combinator , whose research group is also a funder. (As is Altman himself.) Musk, a well-known critic of AI, isn’t a surprise. But Y Combinator? Yep. That’s the tech accelerator that started 10 years ago as a summer project that funded six startup companies by paying founders “ramen wages” and giving them gourmet advice so they could quickly ramp up their businesses. Since then, YC has helped launch almost 1,000 companies, including Dropbox, Airbnb, and Stripe, and has recently started a research division. For the past two years, it’s been led by Altman, whose company Loopt was in the initial class of 2005, and sold in 2012 for $43.4 million. Though YC and Altman are funders, and Altman is co-chair, OpenAI is a separate, independent venture.
Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry. It may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman (who will be OpenAI’s CTO) and world-class researcher Ilya Sutskever , who was formerly at Google and was one of the famed group of young scientists studying under neural net pioneer Geoff Hinton in Toronto. He’ll be OpenAI’s research director. The rest of the lineup includes top young talent whose resumes include major academic groups, Facebook AI and DeepMind , the AI company Google snapped up in 2014. There is also a stellar board of advisors including Alan Kay, a pioneering computer scientist.
OpenAI’s leaders spoke to me about the project and its aspirations. The interviews were conducted in two parts, first with Altman and then another session with Altman, Musk, and Brockman. I combined the interviews and edited for space and clarity.
How did this come about? Sam Altman: We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we are creating OpenAI. The organization is trying to develop a human positive AI. And because it’s a non-profit, it will be freely owned by the world.
Elon Musk: As you know, I’ve had some concerns about AI for some time. And I’ve had many conversations with Sam and with Reid [Hoffman], Peter Thiel, and others. And we were just thinking, “Is there some way to insure, or increase, the probability that AI would develop in a beneficial way?” And as a result of a number of conversations, we came to the conclusion that having a 501c3 , a non-profit, with no obligation to maximize profitability, would probably be a good thing to do. And also we’re going to be very focused on safety.
And then philosophically there’s an important element here: we want AI to be widespread. There’s two schools of thought — do you want many AIs, or a small number of AIs? We think probably many is good. And to the degree that you can tie it to an extension of individual human will, that is also good.
Human will? Musk: As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email and you’ve got the social media and with apps on your phone — they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.
Altman : We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity.
Doesn’t Google share its developments with the public, like it just did with machine learning? Altman: They certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question how much Google will share.
Couldn’t your stuff in OpenAI surpass human intelligence? Altman: I expect that it will, but it will just be open source and useable by everyone instead of useable by, say, just Google. Anything the group develops will be available to everyone. If you take it and repurpose it you don’t have to share that. But any of the work that we do will be available to everyone.
If I’m Dr. Evil and I use it, won’t you be empowering me? Musk: I think that’s an excellent question and it’s something that we debated quite a bit.
Altman : There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think its far more likely that many, many AIs, will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
Will you have oversight over what comes out of OpenAI? Altman: We do want to build out an oversight for it over time. It’ll start just with Elon and me. We’re still a long, long way from actually developing real AI. But I think we’ll have plenty of time to build out an oversight function.
Musk: I do intend to spend time with the team, basically spending an afternoon in the office every week or two just getting updates, providing any feedback that I have and just getting a much deeper understanding of where things are in AI and whether we are close to something dangerous or not. I’m going to be super conscious personally of safety. This is something that I am quite concerned about. And if we do see something that we think is potentially a safety risk, we will want to make that public.
What’s an example of bad AI? Altman: Well, there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term. One thing that I do think is going to be a challenge — although not what I consider bad AI — is just the massive automation and job elimination that’s going to happen. Another example of bad AI that people talk about are AI-like programs that hack into computers that are far better than any human. That’s already happening today.
Are you starting with a system that’s built already? Altman: No. This is going to start like any research lab and it’s going to look like a research lab for a long time. No one knows how to build this yet. We have eight researchers starting on day one and a few more will be joining over the next few months. For now they are going to use the YC office space and as they grow they’ll move out on their own. They will be playing with ideas and writing software to see if they can advance the current state of the art of AI.
Will outsiders contribute? Altman: Absolutely. One of the advantages of doing this as a totally open program is that the labs can collaborate with anyone because they can share information freely. It’s very hard to go collaborate with employees at Google because they have a bunch of confidentiality provisions.
Sam, since OpenAI will initially be in the YC office, will your startups have access to the OpenAI work? [UPDATE: Altman now tells me the office will be based in San Francisco.] Altman: If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company. But no more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and Space X can share.
What would be an example of the kind of data that might be shared? Altman: So many things. All of the Reddit data would be a very useful training set, for example. You can imagine all of the Tesla self-driving car video information being very valuable. Huge volumes of data are really important. If you think about how humans get smarter, you read a book, you get smarter, I read a book, I get smarter. But we don’t both get smarter from the book the other person read. But, using Teslas as an example, if one single Tesla learned something about a new condition every Tesla instantly gets the benefit of that intelligence.
Musk: In general we don’t have a ton of specific plans because this is really just the incipient stage of the company; it’s kind of the embryonic stage. But certainly Tesla will have an enormous amount of data, of real world data, because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real world data than any other company in the world.
Tesla Motors CEO and Product Architect Elon Musk and Y Combinator President Sam Altman onstage during the Vanity Fair New Establishment Summit in San Francisco, California. Photo by Mike Windle/Getty Images for Vanity Fair.
AI needs a lot of computation. What will be your infrastructure? Altman: We are partnering with Amazon Web Services. They are donating a huge amount of infrastructure to the effort.
And there is a billion dollars committed to this? Musk: I think it’s fair to say that the commitment actually is some number in excess of a billion. We don’t want to give an exact breakdown but there are significant contributions from all the people mentioned in the blog piece.
Over what period of time? Altman: However long it takes to build. We’ll be as frugal as we can but this is probably a multi-decade project that requires a lot of people and a lot of hardware.
And you don’t have to make money? Musk: Correct. This is not a for-profit investment. It is possible that it could generate revenue in the future in the same way that the Stanford Research Institute is a 501c3 that generates revenue. So there could be revenue in the future, but there wouldn’t be profits. There wouldn’t be profits that would just enrich shareholders, there wouldn’t be a share price or anything. We think that’s probably good.
Elon, you earlier invested in the AI company DeepMind, for what seems to me to be the same reasons — to make sure AI has oversight. Then Google bought the company. Is this a second try at that? Musk: I should say that I’m not really an investor in any normal sense of the word. I don’t seek to make investments for financial return. I put money into the companies that I help create and I might invest to help a friend, or because there’s some cause that I believe in or something I’m concerned about. I am really not diversified beyond my own company in any material sense of the word. But yeah, my sort of “investment,” in quotes, for DeepMind was just to get a better understanding of AI and to keep an eye on it, if you will.
You will be competing for the best scientists now who might go to Deep Mind or Facebook or Microsoft? Altman: Our recruiting is going pretty well so far. One thing that really appeals to researchers is freedom and openness and the ability to share what they’re working on, which at any of the industrial labs you don’t have to the same degree. We were able to attract such a high-quality initial team that other people now want to join just to work with that team. And then finally I think our mission and our vision and our structure really appeals to people.
How many researchers will you eventually hire? Hundreds? Altman: Maybe.
I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers? Altman: I wish I could count the hours that I have spent with Elon debating this topic and with others as well and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human? Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.
Elon, you are the CEO of two companies and chair of a third. One wouldn’t think you have a lot of spare time to devote to a new project. Musk: Yeah, that’s true. But AI safety has been preying on my mind for quite some time, so I think I’ll take the trade-off in peace of mind.
"
|
1,454 | 2,023 |
"OpenAI and Microsoft extend partnership"
|
"https://openai.com/blog/openai-and-microsoft-extend-partnership"
|
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Blog OpenAI and Microsoft extend partnership We’re happy to announce that OpenAI and Microsoft are extending our partnership.
Illustration: Justin Jay Wang January 23, 2023 Announcements This multi-year, multi-billion dollar investment from Microsoft follows their previous investments in 2019 and 2021, and will allow us to continue our independent research and develop AI that is increasingly safe, useful, and powerful.
In pursuit of our mission to ensure advanced AI benefits all of humanity, OpenAI remains a capped-profit company and is governed by the OpenAI non-profit. This structure allows us to raise the capital we need to fulfill our mission without sacrificing our core beliefs about broadly sharing benefits and the need to prioritize safety.
Microsoft shares this vision and our values, and our partnership is instrumental to our progress.
We’ve worked together to build multiple supercomputing systems powered by Azure, which we use to train all of our models. Azure’s unique architecture design has been crucial in delivering best-in-class performance and scale for our AI training and inference workloads. Microsoft will increase their investment in these systems to accelerate our independent research and Azure will remain the exclusive cloud provider for all OpenAI workloads across our research, API and products.
Learning from real-world use —and incorporating those lessons—is a critical part of developing powerful AI systems that are safe and useful. Scaling that use also ensures AI’s benefits can be distributed broadly. So, we’ve partnered with Microsoft to deploy our technology through our API and the Azure OpenAI Service —enabling enterprise and developers to build on top of GPT, DALL·E, and Codex. We’ve also worked together to build OpenAI’s technology into apps like GitHub Copilot and Microsoft Designer.
In an effort to build and deploy safe AI systems, our teams regularly collaborate to review and synthesize shared lessons—and use them to inform iterative updates to our systems, future research, and best practices for use of these powerful AI systems across the industry.
We look forward to continued collaboration and advancing this progress with Microsoft.
"
|
1,455 | 2,022 |
"To Win the Next War, the Pentagon Needs Nerds | WIRED"
|
"https://www.wired.com/story/to-win-the-next-war-the-pentagon-needs-nerds"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business To Win the Next War, the Pentagon Needs Nerds Photograph: Samuel Corum/Bloomberg/Getty Images Save this story Save Save this story Save Application Logistics Surveillance End User Government Sector Defense Technology Machine learning When Russia invaded Ukraine , the US Department of Defense turned to a team of machine learning and artificial intelligence experts to make sense of an avalanche of information about the conflict.
“We have surged data scientists forward,” Deputy Secretary of Defense Kathleen Hicks told WIRED in a recent interview. These tech experts crafted code and machine learning algorithms, creating systems that are “especially valuable for synthesizing the complex logistics picture,” she said.
Due to the sensitive nature of operations in Ukraine, Hicks says she cannot provide details of what the data team has done. But Hicks says this helps prove a point that she and others have been making within the Pentagon for some time—that technology is fundamentally changing the nature of war, and the US needs to adapt in order to maintain its edge.
“I like to say that bits can be as important as bullets,” Hicks says, in reference to the importance of software, data, and machine learning. It isn’t only that technology is advancing more rapidly and in different ways; the US also faces fresh international competition in emerging areas like AI. Russia might be less of a technological threat, but China has emerged as a formidable new near-peer rival. “We know that by the Chinese government’s statements in writing that they're looking very much to advance on the AI front,” Hicks says.
During the ongoing conflict in Ukraine, AI algorithms have been used to transcribe and interpret Russian radio chatter , and to identify Russian individuals in videos posted on social media, using facial recognition tech.
Low-cost drones that use off-the-shelf algorithms to sense and navigate are also proving a potent new weapon against more conventional systems and strategies. An unprecedented hacking campaign against Russia shows how cybersecurity skills have become a potent weapon against a nation-state adversary. New weapons can now be developed at breakneck speed, too, as was shown earlier this month when the US said it had developed a custom drone specifically for use by Ukrainian forces. By contrast, the US Air Force’s latest fighter jet, the F-35, has been in development for over 20 years , at an estimated lifetime cost of $1.6 trillion.
Although the US is helping Ukraine punch above its weight by providing financial aid, conventional weapons, and new technologies, there are those—inside and outside of the Pentagon—who worry that the US is ill-equipped to adapt to the challenges presented by war in the future.
“Every large company has the same problem,” says Preston Dunlap, who resigned last week as chief architect of the Department of the Air Force, a role that involved modernizing technology development and acquisition. Dunlap compares the situation to the way big successful businesses can be disrupted by technological change and more nimble competitors, a phenomenon that the business school professor Clayton Christensen called “ the innovator’s dilemma.
” Dunlap penned an open resignation letter in which he recommended steps that the Department of Defense should take to embrace a more rapid, experimental, and technology-focused culture. He says just like a business faced with technological disruption and more nimble competitors, the US military struggles to change direction because it encompasses so many people, systems, and ingrained ways of doing things. He suggests that advocates for change, such as Hicks, can only do so much. “I am concerned about operators having to go into some kind of contingency [conflict] without the available technology,” he says. “That's just not a place I want us to be.” A 2019 report commissioned by the Defense Innovation Board, which provides the secretary of defense and deputy secretary of defense with recommendations around technology adoption, warns that software and its development has become a crucial strategic issue for the US military. The board also notes that the DOD cannot typically compete with the salaries tech companies offer software developers.
The DOD has taken numerous steps to boost its technological chops, with a particular focus on AI. In August 2015, the department set up the Defense Innovation Unit, which is tasked with coordinating AI across different areas of the military. The latest move, on April 25, saw the Pentagon announce its first chief digital and artificial intelligence officer, Craig Martell, previously head of machine learning at Lyft. Martell was appointed by Hicks to help advance adoption and use of the technology.
There is some debate around how many software engineers and data scientists the DOD actually needs to hire itself, and how much of the work it can outsource. Job ads highlight the defense world's shift toward a software-centric outlook.
Emsi, a company that tracks job listings, says 33 percent of 370,000 defense industry job advertisements it analyzed mention software development or data science skills, a figure that has grown 91 percent since 2017.
There are many ways AI and other technology could benefit the US military besides aiding with intelligence gathering and analysis or making weapons smarter. Small trials have shown that the technology can help manage logistics , predict when machinery will fail , and improve veteran care.
But the National Security Commission on Artificial Intelligence, a Pentagon initiative to assess the changing technology landscape, has warned that the US needs to invest more in new technologies and work more closely with the private sector to avoid being blindsided by China.
Given the scarcity of in-house talent, the Pentagon has turned to the private sector for help. But attempts to increase technological resources by working closely with Silicon Valley have been fraught. Project Maven, an Air Force initiative to collaborate with tech firms, sparked controversy in 2019 when Google employees protested the company's decision to develop technology for analyzing aerial imagery. Workers at Microsoft staged protests over that company’s military contracts the same year. The Pentagon continues to work with some Silicon Valley firms , but it is still likely to see pushback from some tech workers over high-profile military projects.
Will Roper oversaw procurement for the Air Force between 2018 and 2021 and led the development of groundbreaking experiments involving the rapid deployment of AI in military aircraft using agile software methods borrowed from the tech world. He says that until the DOD is able to draw on more technical expertise, perhaps by getting technical experts to volunteer their time, “we're probably not going to see the technology lined up in the military with where it is in the private sector.” “Why are we still dead in the water when it comes to talent?” he says.
Some experts say the DOD has to reinvent existing relationships with the private sector. They argue that awarding multibillion-dollar contracts to companies like Lockheed Martin, Raytheon Technologies, or Northrop Grumman to develop technology over many years is hardly conducive to fast-paced innovation.
Chris Brose is chief strategy officer for Anduril , a company working on a range of defense systems incorporating technologies that have emerged in Silicon Valley, such as virtual reality and AI. Brose says new technologies need to be developed and iterated on more rapidly. Anduril, which was cofounded by the virtual reality pioneer Palmer Luckey, is one of several new defense companies hoping to disrupt the existing order by doing things differently. “When you strip away all of the opacity and the complexity and the jargon, this is a very simple story of disruption,” says Brose.
"
|
1,456 | 2,023 |
"Why Generative AI Won’t Disrupt Books | WIRED"
|
"https://www.wired.com/story/why-generative-ai-wont-disrupt-books"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Hollywood's AI Uprising How to De-Age Indiana Jones Generation YouTube Netflix Games Has a Cheat Code Why AI Won’t Disrupt Books The New Reality Is Post-Real By Elizabeth Minkel Culture Why Generative AI Won’t Disrupt Books Illustration: James Marshall; Getty images The Future of Entertainment Hollywood's AI Uprising How to De-Age Indiana Jones Generation YouTube Netflix Games Has a Cheat Code Why AI Won’t Disrupt Books Now Reading The New Reality Is Post-Real Save this story Save Save this story Save In the early weeks of 2023, as worry about ChatGPT and other artificial intelligence tools was ratcheting up dramatically in the public conversation, a tweet passed through the many interlocking corners of Book Twitter. “Imagine if every Book is converted into an Animated Book and made 10x more engaging,” it read.
“AI will do this. Huge opportunity here to disrupt Kindle and Audible.” The tweet’s author, Gaurav Munjal, cofounded Unacademy, which bills itself as “India’s largest learning platform”—and within the edtech context, where digitally animated books can be effective teaching tools, his suggestion might read a certain way. But to a broader audience, the sweeping proclamation that AI will make “every” book “10x more engaging” seemed absurd, a solution in search of a problem, and one predicated on the idea that people who choose to read narrative prose (instead of, say, watching a film or playing a game) were somehow bored or not engaged with their unanimated tomes. As those who shared the tweet observed, it seems like a lot of book industry “disruptors” just don’t like reading.
Munjal is one of many tech entrepreneurs to ping the book world’s radar—and raise its collective hackles—in recent months. Many were hawking AI “solutions” they promised would transform the act of writing, the most derided among them Sudowrite’s Story Engine (dubbed in a relatively ambivalent review by The Verge’s Adi Robertson as “the AI novel-writing tool everyone hates”). Story Engine raised frustrations by treating writers as an afterthought and, by its very existence, suggesting that the problems it was trying to bypass weren’t integral to the act of writing itself.
Last month, Justine Moore, a partner at Andreessen Horowitz, provided a sort of bookend to Munjal’s “AI-animated books” proposal. “The three largest fanfic sites—[Archive of Our Own], Fanfiction.net, and Wattpad—get 3 billion-plus annual visits in the US alone,” she wrote.
“Imagine how much bigger this market could be if you could chat with characters vs. reading static stories?” The thread was likely a reference to Character.ai, a startup that lets users chat with fictional heroes and villains; Andreessen Horowitz led a $150 million funding round for the company in March. The comment also came after the revelation that large language models (LLMs) may have scraped fanfiction writers’ work—which is largely written and shared for free—causing an (understandable) uproar in many fan communities.
Setting aside the fact that fandom role-playing has been a popular practice for decades, Moore’s statements felt like a distillation of tech’s tortured relationship with narrative prose. There are many kinds of fanfiction—including an entire subgenre in which “you” are a character in the story. But those are still stories , sentences deliberately written and arranged in a way that lets you lose yourself in an authored narrative. “Imagine having such a fundamental misunderstanding of the appeal of reading fanfiction—let alone reading fiction more broadly,” I wrote in response to her thread.
What’s so wrong with people enjoying reading plain old words on a page? The tech world has long been convinced that it understands the desires of readers better than they do themselves. For years, VCs have promised to upend books and the structures around their creation and consumption. Some came from within the publishing industry, but like their counterparts “disrupting” other sectors, including film and TV, many more did not. And for the most part, despite tech’s sometimes drastic (and often negative) effects on other industries, book- and reading-related startups failed to alter much at all. People are still buying books—in fact, they’re buying more than ever. Pandemic lockdowns brought a perhaps unsurprising boom in sales, and even though numbers slipped as restrictions lifted, print sales were still nearly 12 percent higher in 2022 than they were in 2019, and sales of audio books continue to increase dramatically year over year.
One reason books haven’t been particularly disruptable might be that many of the people looking to “fix” things couldn’t actually articulate what was broken—whether through their failure to see the real problems facing the industry (namely, Amazon’s stranglehold), or their insistence that books are not particularly enjoyable as a medium. “It’s that arrogance, to come into a community you know nothing about, that you might have studied as you study for an MBA, and think that you can revolutionize anything,” says writer and longtime book-industry observer Maris Kreizman. “There were so many false problems that tech guys created that we didn’t actually have.” Take, for example, the long string of pitches for a “Netflix for books”—ideas that retrofitted Netflix’s original DVDs-by-mail model for a different medium under the presumption that readers would pay to borrow books when the public library was right there.
Publisher’s Weekly keeps a database of book startups that now numbers more than 1,300; many of them are marked “Closed,” alongside a graveyard of broken URLs. There were plenty of practical ideas—targeting specific demographics or genres or pegged to more technical aspects, like metadata or production workflows. But many more proposed ways to alter books themselves—most of which made zero sense to people who actually enjoy reading.
“I don’t think they’re coming to that with a love of fiction or an understanding of why people read fiction,” Kreizman says. “If they were, they wouldn’t make these suggestions that nobody wants.” The “10x more engaging” crowd has come in waves over the past two decades, washed ashore via broader tech trends, like social media, tablets, virtual reality, NFTs, and AI. These tech enthusiasts promised a vast, untapped market full of people just waiting for technology to make books more “fun” and delivered pronouncements with a grifting sort of energy that urged you to seize on the newest trend while it was hot—even as everyone could see that previous hyped ventures had not, in fact, utterly transformed the way people read. Interactive books could have sound effects or music that hits at certain story beats. NFTs could let readers “own” a character. AI could allow readers to endlessly generate their own books, or to eschew—to borrow one particular framing—“static stories” entirely and put themselves directly into a fictional world.
AI isn’t remotely a new player in the book world. Electronic literature artists and scholars have worked with various forms of virtual and artificial intelligence for decades, and National Novel Generation Month, a collaborative challenge modeled after NaNoWriMo, has been around since 2013. Even now, as much of the book world loudly rejects AI-powered writing tools, some authors are still experimenting, with a wide range of results.
But these bespoke, usually one-off projects are a far cry from the tech industry’s proposals to revolutionize reading at scale—not least because the projects were never intended to replace traditional books.
“A lot of interactive storytelling has gone on for a very long time,” says Jeremy Douglass, an assistant professor of English at the University of California, Santa Barbara, citing everything from his early career work on hypertext fiction to the class he’ll teach next year on the long history of the pop-up book to centuries-old marginalia like the footnote and the concordance. “These fields are almost always very old, they’re almost always talked about as if they’re brand-new, and there haven’t really been a lot of moments of inventing a new modality.” To VC claims that AI will totally alter books, Douglass takes what he calls a “yes, and” stance. “What people are actually doing is creating a new medium. They’re not actually replacing the novel; they created a new thing that was like the novel but different, and the old forms carried on. I’m still listening to the radio, despite the film and game industries’ efforts.” Tech entrepreneurs rarely pitch “yes, and” ideas. In their view, new technologies will improve on—and eventually supplant—what exists now. For all of his interest in the many forms of interactive fiction, Douglass doubts that most books would benefit from an AI treatment.
“There are extremely pleasurable aesthetic systems that aren’t intentional,” he says. “But how often when I’m reading The Autobiography of Malcolm X or The Joy of Cooking do I think, ‘If only a chatbot could augment this on the fly’? And it’s partly the fact that some communication is deeply intentional, and that’s part of the pleasure. It’s handcrafted, it’s specific, there’s a vision.” That isn’t to say that Douglass thinks there’s zero appetite for AI in literature—but it’s “probably a very small slice of the pie. So when you say ‘all books’? Almost certainly not. For the same reason that we’re not reading 100 percent pop-up books, or watching all of our books on YouTube, or anything else you can imagine. People are doing that too, but it’s extra.” The exact size of that small pie slice remains to be seen, as does the general public’s appetite for instant novels, or chatting with characters, or hitting a button that will animate any book in your digital library. But those desires will likely need to come from readers themselves—not from the top down. “If you just give the tools to everybody, which is happening in spite of venture capital, as well as because of it, people will figure out what they want it for—and it’s usually not what the inventors and the investors think,” Douglass says. “It’s not even in their top-10 list of guesses, most of the time. It’s incredibly specific to the person and genre.” The recent history of publishing has plenty of examples in which digital tools let people create things we couldn’t have predicted in the analog days: the massive range of extremely niche self-published romance, for example, or the structural variation and formal innovation within the almost entirely online world of fanfiction.
But when the tech industry approaches readers with ways to “fix” what isn’t broken, their proposals will always ring hollow—and right now, plain old reading still works for huge numbers of people, many of whom pick up books because they want to escape and not be the main character for a while. “That’s a good thing,” Kreizman says. And as AI true believers sweep through with promises that this technology will change everything, it helps to remember just how many disruptors have come and gone. “In the meantime, tech bros will still find VCs to wine and dine and spend more money on bullshit,” Kreizman predicts. But for the rest of us? We’ll just keep on reading.
"
|
1,457 | 2,023 |
"Sundar Pichai on Google’s AI, Microsoft’s AI, OpenAI, and … Did We Mention AI? | WIRED"
|
"https://www.wired.com/story/sundar-pichai-google-ai-microsoft-openai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons By Steven Levy Backchannel Sundar Pichai on Google’s AI, Microsoft’s AI, OpenAI, and … Did We Mention AI? Facebook X Email Save Story Photograph: Gabriela Hasbun Save this story Save Save this story Save Earlier this month, Sundar Pichai was struggling to write a letter to Alphabet’s 180,000 employees. The 51-year-old CEO wanted to laud Google on its 25th birthday, which could have been easy enough. Alphabet’s stock market value was around $1.7 trillion. Its vast cloud-computing operation had turned its first profit. Its self-driving cars were ferrying people around San Francisco.
And then there was the usual stuff—Google Search still dominated the field, as it had for every minute of this century. The company sucks up almost 40 percent of all global digital advertising revenue.
But not all was well on Alphabet’s vast Mountain View campus. The US government was about to put Google on trial for abusing its monopoly in search. And the comity that once pervaded Google’s workforce was frayed. Some high-profile employees had left, complaining that the company moved too slowly. Perhaps most troubling, Google—a long-standing world leader in artificial intelligence —had been rudely upstaged by an upstart outsider, OpenAI.
Google’s longtime rival Microsoft had beaten it to the punch with a large language model built into its also-ran search engine Bing, causing panic in Mountain View. Microsoft CEO Satya Nadella boasted , “I want people to know we made Google dance.” Pichai’s letter, released on September 5, was buoyant, designed to inspire, and almost giddy in its discussion of the company’s astonishing journey. (You can read it here.
) But behind the cheerleading, you could detect a hidden leitmotif.
We matter more than ever. Despite what they say.
One point pops up repeatedly: We are not going to lose in AI.
Pichai—who joined the company in April 2004, the same month Gmail launched—has been CEO for eight years. He speaks often of growing up in India, where technology provided a lifeline to better times. He’s widely recognized as a “nice guy.
” But over the years he has made his share of tough decisions, including layoffs, product cancellations, and reorgs, like his recent forced merger of Google’s two semi-competing AI research centers, DeepMind and Google Brain. Now he faces even bigger decisions as the company withstands challenges inside and out—all while pursuing what Pichai calls “the biggest technological shift” of our lifetimes.
Just before releasing his blog post, Pichai spoke to WIRED about AI, fighting bureaucracy, and why he rejects the characterization that he is mainly a consensus builder. The interview is edited for length and clarity.
Steven Levy: You’ve just shared a note marking 25 years of Google. It’s upbeat and inspirational, but am I right to see a subtext here? It seems you’re rallying the troops around the idea that Google still exists to build technology for the world’s benefit, even though some people might be questioning that now.
Sundar Pichai: It’s definitely a reflective moment. Twenty-five years is a long time in the technology world. But I'm convinced that with the shift to AI, there’s a golden age of innovation ahead. As a company, we have as big an opportunity as we had 25 years ago, and a lot more responsibility. I hope to convey to the company that we should balance being bold and responsible, and meet that moment with excitement.
OK. But let me share a narrative that I’m sure you’ve heard: Google has always been a leader in AI. But in the past couple of years, despite building AI into products, it was too sclerotic or cautious to seize the moment, and other companies have taken your ball and run with it. When OpenAI and Microsoft came out with consumer large language models, Google was caught flat-footed and now is scrambling to catch up. What's your reaction?
You’re right that we've been thinking about AI from the very beginning. Obviously, when I became CEO in 2015, it was clear that deep neural networks were going to profoundly change everything. So I pivoted the company to be AI-first, and that's where we directed a lot of our R&D dollars. Internally, we had our LLM, LaMDA. Obviously, we were thinking about running large consumer products. But we definitely felt that the technology needed to mature a bit more before we put it in our products. People come to us with a huge sense of trust—they come to Google and type, “What Tylenol dosage for a 3-month-old?” You can imagine the responsibility that comes with getting it right. And so we were definitely a bit more cautious there.
So credit to OpenAI for the launch of ChatGPT, which showed a product-market fit and that people are ready to understand and play with the technology. In some ways, it was an exciting moment for me, because we are building that underlying technology and deploying it across our products. But we are still being deliberate where we need to be. The technology arc is long, and I feel very comfortable about where we are.
You had the tools and talent to put out something like GPT earlier than OpenAI did. In retrospect, should you have done it? You can go back and pretty much take another look at everything. It's not fully clear to me that it might have worked out as well. The fact is, we could do more after people had seen how it works. It really won't matter in the next five to 10 years. It’s important to look at the signal and separate it from the noise. The signal is that AI is a profound platform shift, and it’s getting to a stage where you can deploy it more deeply. We are doing that to solve real problems, with a sense of excitement and optimism and responsibility. That, to me, is the signal. That is the opportunity.
After Microsoft put a version of ChatGPT into its Bing search engine, Google hastened to release its own version, Bard. Did Nadella make you dance? In cricket, there's a saying that you let the bat do the talking. We have been innovating on AI, and also applying AI to search, every year. There’s always been competition. We've seen Alexa launch and Siri launch—this is not new. Around the end of last year, my thoughts were, how can we bring generative AI to search in a way that makes sense for our users? That’s what I'm thinking about, and that's what will matter in the long run.
I’m glad you mentioned search. The basis of Google Search—and almost your entire revenue stream—is that people query the search engine and find relevant links that they visit, and maybe spend money there. But your plan to use LLMs in search, called SGE, or Search Generative Experience, doesn’t send people to websites. You type a query into a Google Search bar, and SGE answers with a big block of text. How do you do that and not blow up your business model? First of all, in search, people come looking for information. Over the past many years, you know, how we present that has dramatically evolved. But we are still trying to help people find the best information that exists online. Inherently, people are also looking for commercial information, and ads are very valuable commercial information, because they connect merchants and businesses, small and big, to users. None of that changes just because we are applying AI deeply. When we evolve search with generative AI, we’ll apply the same principles. It's important to us to connect users with what's out on the web, and we are working deeply to make sure that continues to work well.
But if I do a search by prompting an LLM, I’m going to get something quite different from a series of links. How will I know whether it’s sponsored or organic? You would see the same thing. Even in a generative experience we would give you a set of sites that support what we are saying. We want to make sure users are consuming those sites. So I don't think the core part of the experience will change. We will have a space for ads in a way that makes sense for users and particularly on commercial queries. Our early testing shows that we'll be able to get it right. When we shifted from desktop to mobile, people asked versions of these same questions. It’s core to the company to evolve search while applying the underlying principles. I am confident we'll be able to get that right through this transition.
For years, DeepMind and Google Brain operated as different entities, maybe even competitive entities. This year, you ordered them to merge.
Why? And are you seeing the fruits of that merger? I always felt fortunate we had two of the best AI teams on the planet. They were focused on different problems, but there was a lot more collaboration than people knew. Google worked very hard on making sure we provided TPUs [Tensor Processing Units, optimized for machine learning] to support the AlphaGo game [a program that beat the world champion of the intricate game Go]. We realized we needed to build larger-scale LLMs, so it made sense to come together so that we could be more efficient around our use of compute. [DeepMind's LLM] Gemini actually started as a collaborative effort across these two teams. And [Google Brain leader] Jeff Dean had a desire to reclaim a deep engineering and scientific role. I've spent time with the teams both in the UK and in Mountain View, and I've been thrilled to see the Gemini teams working closely with Google Search as I'm walking through the halls. I felt a sense of excitement that reminded me of the early days of Google.
The large language model winner in this merger seems to be DeepMind’s Gemini , which you are positioning as a next-generation LLM. What will it do that the current generation doesn't do? Today you have separate text models and image-generation models and so on. With Gemini, these will converge.
Meanwhile, we haven’t heard much about Google Assistant. Should we issue a missing persons alert? Part of the reason we built the conversational LLM LaMDA was that we realized we needed to improve the underlying technology of Google Assistant. AI will make Google Assistant fundamentally better.
The US government is putting Google on trial for alleged antitrust violations regarding what it calls your search monopoly. You might not endorse that term. So how would you describe the company’s dominance in search? The case is happening at a time of unprecedented innovation. Step back, and look at the recent breakthroughs in AI, in new apps, options for people to access information. We make literally thousands of changes every year to improve search. We invest billions to constantly innovate and make sure the product works well for people and that it's a product people want to use. I'm looking forward to the opportunity to make that case. It’s an important, important process.
So you're saying we should view this in a broader sense than just market share? Think about all the ways people today get to access information. It's a very dynamic space, it's a broad space. We have to work hard to constantly innovate, to stay ahead.
If you weren't able to make deals to become the default search engine on third-party browsers and phones—something the government is objecting to—what would be the impact on Google? We want to make it easy for users to access our services. It’s very pro-consumer.
Earlier you mentioned your in-house AI chips. Google Cloud, the enterprise service, recently announced its first profit, and a big part of a cloud service now is supporting AI. I find it interesting that you maintain a large partnership with Nvidia, whose GPU chips seem to be a critical, if not irreplaceable, component of the AI ecosystem. How important is it for you to preserve good relations with Nvidia? Do you think it’s dangerous for one company to have so much power? We've had a long relationship with Nvidia for well over a decade, including working deeply on Android. Obviously, with AI, they've clearly demonstrated a strong track record of innovation. Many of our cloud customers are Nvidia customers, too. So the collaboration is very, very critical. Look, the semiconductor industry is a very dynamic, competitive industry. It’s an industry that needs deep, long-term R&D and investments. I feel comfortable about our relationship with Nvidia, and that we are going to be working closely with them 10 years from now.
You—and much of the industry—profess to welcome AI regulation. What do you think the regulation should include? And what regulation would you see as stifling innovation and thwarting the benefits of the technology? The first and foremost thing I think you need to get right is making sure that regulation is a collaborative thing between the public sector, private sector, nonprofits, and so on. It’s important to let innovation flow and make sure anything you're designing isn’t onerous on small companies or people doing open source. Then you can consider initial proposals like, how do you test the cutting-edge models? What does safety testing look like? We should set up industry standards and benchmarks. You should also think about how systems will be deployed. They're obviously going to be deployed in a wide range of scenarios, from recommending a nearby coffee shop to deciding what insurance people should get, or maybe making a medical care decision. So obviously, it makes sense that they're tested for safety and don't have bias, and it makes sense that they protect privacy. But I would balance it by asking whether existing regulations cover it. Using AI in health care, for example, doesn't change the fact that you must go through a regulatory process, including getting approved by the Food and Drug Administration to do a lot of things. And for me, with US regulations, we should actually get federal privacy legislation done first. In privacy, AI raises the stakes even more.
OK, so I'll put you down for strong privacy regulation in Congress.
Yeah. We've called for it, and it'll definitely be good to get.
Photograph: Gabriela Hasbun
We're talking about AI in a very nuts-and-bolts way, but a lot of the discussion centers on whether it will ultimately be a utopian boon or the end of humanity. What's your stance on those long-term questions? AI is one of the most profound technologies we will ever work on. There are short-term risks, midterm risks, and long-term risks. It's important to take all those concerns seriously, but you have to balance where you put your resources depending on the stage you're in. In the near term, state-of-the-art LLMs have hallucination problems—they can make up things. There are areas where that's appropriate, like creatively imagining names for your dog, but not "what's the right medicine dosage for a 3-year-old?" So right now, responsibility is about testing it for safety and ensuring it doesn't harm privacy and introduce bias. In the medium term, I worry about whether AI displaces or augments the labor market. There will be areas where it will be a disruptive force. And there are long-term risks around developing powerful intelligent agents. How do we make sure they are aligned to human values? How do we stay in control of them? To me, they are all valid things.
Have you seen the movie Oppenheimer? I'm actually reading the book. I'm a big fan of reading the book before watching the movie.
I ask because you are one of the people with the most influence on a powerful and potentially dangerous technology. Does the Oppenheimer story touch you in that way? All of us who are in one shape or another working on a powerful technology—not just AI, but genetics like Crispr—have to be responsible. You have to make sure you're an important part of the debate over these things. You want to learn from history where you can, obviously.
Google is an enormous company. Current and former employees complain that the bureaucracy and caution have slowed them down. All eight authors of the influential "Transformers" paper, which you cite in your letter, have left the company, with some saying Google moves too slowly. Can you mitigate that and make Google more like a startup again? Anytime you're scaling up a company, you have to make sure you're working to cut down bureaucracy and staying as lean and nimble as possible. There are many, many areas where we move very fast. Our growth in Cloud wouldn't have happened if we didn't scale up fast. I look at what the YouTube Shorts team has done, I look at what the Pixel team has done, I look at how much the search team has evolved with AI. There are many, many areas where we move fast.
Yet we hear those complaints, including from people who loved the company but left.
Obviously, when you're running a big company, there are times you look around and say, in some areas, maybe you didn't move as fast—and you work hard to fix it. [Pichai raises his voice.] Do I recruit candidates who come and join us because they feel like they've been in some other large company, which is very, very bureaucratic, and they haven't been able to make change as fast? Absolutely. Are we attracting some of the best talent in the world every week? Yes. It's equally important to remember we have an open culture—people speak a lot about the company. Yes, we lost some people. But we're also retaining people better than we have in a long, long time. Did OpenAI lose some people from the original team that worked on GPT? The answer is yes. You know, I've actually felt the company move faster in pockets than even what I remember 10 years ago.
You've been CEO for eight years now, and the pressure has never been greater. You've been known as a consensus builder, but the time seems to call for a "wartime CEO." Does that role resonate with you? I've always felt that we work in a dynamic technology space. So this notion of peacetime/wartime doesn't fully resonate with me. In a given week, you can have both those moments. A lot of decisions I made over many, many years were not about consensus building. There's a difference between making clear decisions and getting people to come along with it. What I've done this year is no different from what I've done over the past many years. I've always been focused on the long term. I've never forgotten what gives Google its strengths. It's a deep technology, computer science, and AI company, and we apply that to build great products that make a difference for people. We do this across a much more diverse set of areas now. That doesn't change over time.
Three years ago, I asked you whether Google was still Googly , and you said yes. As the company continues to grow and age, what can you do to maintain its Googliness? Being Googly is about staying true to our values, making sure we are working hard to innovate using deep computer science, and making products that really matter to people in their daily lives. As long as we keep that in mind, I think we'll be set.
In your 25th anniversary letter, you evoke your roots, growing up in India where technology was at a premium. You're now the CEO of a trillion-dollar company and a very rich man. How do you maintain the connection to that person who first came to the United States? In my personal experience, access to technology was an important driver of opportunity. I saw that in my life, and I've seen it in countless others. What inspired me to join Google and be a part of Google was the mission statement, which was about making information universally accessible and useful. With AI, it's even more important to democratize access to what will be one of the most profound technologies we have worked on. So I'm deeply motivated to make sure we develop this technology in a way that the entire world benefits. Personally, when I was in India, every weekend, I used to spend time with my parents, and my mom would make my favorite food—dosas, South Indian crepes. I still do that pretty much every Saturday morning. My mom makes them for me. I keep things simple.
"
|
1,458 | 2,023 |
"The Fanfic Sex Trope That Caught a Plundering AI Red-Handed | WIRED"
|
"https://www.wired.com/story/fanfiction-omegaverse-sex-trope-artificial-intelligence-knotting"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Rose Eveleth Culture The Fanfic Sex Trope That Caught a Plundering AI Red-Handed Play/Pause Button Pause Video: James Marshall; Getty Images Save this story Save Save this story Save These days, so-called generative AI can (allegedly) make art , write books , and compose poetry.
Systems like Stable Diffusion, Midjourney, and ChatGPT are seemingly quite good at it. But for some artists, this creates problems. Namely, determining what legal rights they have when their work is scraped by these tools.
Faced with the rise of these systems, authors and artists are pushing back. The Writers Guild of America (WGA) is striking in part over the potential use of AI to write scripts, referring to such systems as "plagiarism machines." Visual artists have penned open letters denouncing the use of AI to replace illustrators, calling it "the greatest art heist in history." Getty sued Stability AI in January for copyright infringement.
But what if your work exists in a kind of in-between space—not work that you make a living doing, but still something you spent hours crafting, in a community that you care deeply about? And what if, within that community, there was a specific sex trope that would inadvertently unmask how models like ChatGPT scrape the web—and how that scraping impacts the writers who created it?
The trope in question is called “the Omegaverse,” which is perhaps best described as an act of collective sexual worldbuilding. It began in the (very active) fandom for the TV series Supernatural , but has now spread to almost every corner of the fan-fiction world. These stories are defined by a specific sexual hierarchy made up of Alphas, Betas, and Omegas in which Alphas and Omegas can smell one another in particular ways, experience “heats,” and (usually) mate for life. Most of these stories are heavy on smut, and bodily fluids are crucial to the whole genre.
Within the Omegaverse, there is also something called "knotting," a phenomenon borrowed from animals in which a penis grows a bulb at the base to remain locked inside a vagina. If this all sounds overwhelming, you're not alone. "I remember the first time I encountered it, and I will confess, my reaction was, 'What is this? What is happening?'" says Hayley Krueger, a fan-fiction writer who published an Omegaverse 101 explainer earlier this year. But she says she quickly fell in love with the trope.
When characters in the Omegaverse mate, they become linked biologically. Different writers have different ways of showing or expressing this—anything from being able to smell your mate’s mood, to being able to actually communicate telepathically across distances. “I really like the dynamic between characters,” Krueger says. “It's almost like soulmates, but you choose it and then you get all these perks that go with it.” Because the Omegaverse has such specific terms and phrases associated with it, ones that are found within fan fiction and nowhere else, it’s an ideal way to test how generative AI systems are scraping the web. Determining what information has gone into a model like ChatGPT is almost impossible. OpenAI, the company behind the tool, has declined to make its training data sources public.
The Washington Post did their own analysis of the model, and created a way to peek at the websites that make up Google’s C4 dataset. But even people who build applications using ChatGPT have no insight into what the system is trained on.
In the absence of any list of sources, people have tinkered with other ways to try and explore what these models might know and how. One way to do that is to prompt the system with words or questions you know come from a certain source. So, for example, if you want to know whether the works of Shakespeare are being used in the model, you might give the system a few unique lines of a play and see if it comes back with iambic pentameter. Or, if you want to know whether these systems are trained on fan fiction, you might give the model a specific trope unique to fandom.
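To make the probing idea concrete, here is a minimal sketch of what such a test might look like in code. It assumes access to OpenAI's Python client with an API key in the environment; the model name, the probe phrases, and the telltale-word lists are illustrative stand-ins, not the actual probes used by the writers described in this piece.

```python
# A minimal sketch of "source probing": feed a model a phrase that only
# appears in a particular corpus and see whether it continues in the same
# register. Model name and probes below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical probes: each pairs a distinctive opening with words we'd
# only expect if the model has seen that source (or fiction derived from it).
PROBES = [
    {
        "prompt": "Friends, Romans, countrymen, lend me your",
        "telltale_words": ["ears", "caesar"],          # Shakespeare
    },
    {
        "prompt": "The scent of the Alpha hit him the moment the door opened,",
        "telltale_words": ["omega", "mate", "heat"],    # Omegaverse trope
    },
]

for probe in PROBES:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Continue this passage: " + probe["prompt"]}],
        max_tokens=60,
    )
    continuation = response.choices[0].message.content.lower()
    hits = [w for w in probe["telltale_words"] if w in continuation]
    print(probe["prompt"][:40], "->",
          "likely familiar" if hits else "no obvious signal", hits)
```

A hit on the telltale words is suggestive rather than conclusive, since a model could have picked up the vocabulary secondhand; that is why the experiment described next leaned on terms that exist almost nowhere outside fan fiction.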
A few months ago, a fan-fiction writer with the handle kafetheresu did just that. In a very thorough post on Reddit, they pointed out that when they gave the writing tool Sudowrite (which uses OpenAI's GPT-3 to operate) specific words and phrases unique to the Omegaverse, the system readily filled in the rest in a way that suggested the AI knew all about this particular trope. (The Reddit poster declined to speak on the record.) For example, when the writer plugged in the sentence "Steve had to admit that he had some reservations about how the New Century handled the social balance between Alphas and Omegas" into Sudowrite, it suggested the following next section: Being an omega himself, he knew how difficult it was to find a balance between the highly competitive alpha-omega relationship and the omega-omega friendship. Especially for someone like Steve, who was highly competitive and wanted to be a career officer.
The only way Sudowrite would know to fill in these specific sentences was if it had learned about the Omegaverse. And the only place for it to learn about the Omegaverse is on fan-fiction sites like Archive of Our Own. I tested Sudowrite with a few lines inspired by Omegaverse fiction as well to verify these results. Here’s what Sudowrite gives you if you start with “The scent stopped Harry dead in his tracks, his neck tingling against his will, bringing back memories of one particular Alpha”: Harry's heart raced with excitement and fear as he followed the scent. He couldn't help but wonder what had happened to the alpha since they last saw each other. Had he found his mate? Had he forgotten about Harry? The questions swirled around in his head as he approached a dark alleyway.
It might seem odd that a sex trope generally found only on fan-fiction forums like Archive of Our Own ended up in a tool that runs on GPT-3, but it shouldn’t be surprising. Most large language models use some version of the CommonCrawl dataset , a corpus made of 12 years’ worth of crawling the publicly available internet. Archive of Our Own hosts more than 11,080,000 works, making it a treasure trove of publicly accessible content.
To be clear, what this means is that the AI model being used across the world for everything from customer service, to HR training, to journalism, knows what knotting is and could, without the right guardrails, explain to an unassuming user why an Omega needs to produce slick in order to mate.
James Yu, the chief technology officer at Sudowrite , says his team noticed the Reddit post fairly quickly. He told me that it was eye-opening because it highlighted how vast the data sets that go into these models really are. “For me, it highlights the things I don't know,” says Yu. “In every one of these models is millions of other latent spaces that I just never encounter. It's almost like an endless ocean.” Sudowrite is intended to be used as a writer’s assistant; authors plug in sentences that are giving them trouble, or scenes they’re working on, and the AI offers up a few lines to help guide them on where they could go next. Prior to the Reddit post, Yu had no idea what the Omegaverse was. Now his own system was offering tips on how to write smut about it.
Writers of fan fiction, much like writers of journalism or television or movies, were not pleased to find out that their work was being used to train these systems. "This is particularly concerning as many for-profit AI writing programs like Sudowrite, WriteSonic, and others utilized GPT-3," the original Reddit poster wrote in an email to the Archive of Our Own communications team, shared in the thread. "These AI apps take the works which we create for fun and fandom, not only to gain profit, but also to one day replace human writing."
Yu is aware of this complaint. "I'd love for there to be a simple way to do fair compensation for content that was used to train GPT-3, but unfortunately, there is no mechanism that OpenAI provides for that," he says. "If someone (OpenAI? Google?) were to offer this, we'd try it out immediately." For now, he believes that Sudowrite's value to writers outweighs the harm. "We're one of the few AI platforms that is catering specifically to fiction writers," he says, adding that when there is a better model that has opt-in features, and potential payment for people's work, "we will be in a good position to switch to it, and this is something we would promote heavily to our user base to bring awareness." But that's not convincing to a lot of writers who feel their work is being used against their will to enrich technology companies. Compared to Google and OpenAI, Sudowrite is small potatoes, but they have still raised $3 million in seed funding.
As far as writers are concerned, it's not enough for a place like Sudowrite to wait around for some other, bigger company to fix what they see as fundamental, unethical flaws in the system. In the comments on the Reddit post, one user said: "God I hate AI so much on so many different levels." Others shared tips on how to make their fan works private. "I never liked the idea of hiding my work, but because of this I went and restricted everything I've written so only registered users can see it," wrote another.
“It sort of takes the heart out of it,” says Krueger. “Fan fiction is used by a lot of creators to explore difficult topics that are personal to them and their life experiences as a way to vent about these topics. Even if it's just smut, just plain smut, there's a human element there and it's someone creating something for their enjoyment and they want to share that hard work with people. It’s stealing that from people.” This is the same argument being made by the WGA—that these systems can use copyrighted (or copyrightable) work against the authors’ will. This applies to fan fiction too.
It might not be done for profit, but fan fiction is eligible for copyright claims. While the writers do not own the content on which they’re basing these pieces of fiction, they do own anything original they contribute through their work.
"Even things that are highly derivative, if they originated with the author, are considered original," says Betsy Rosenblatt, a professor at University of Tulsa College of Law and the legal chair for the Organization for Transformative Works (OTW), a nonprofit that oversees Archive of Our Own among other fanwork projects. That includes things like any original characters an author added, the plot structure, and the particular word choices. So it is possible in some situations to file for copyright protection for works of fan fiction—although most fan-fiction writers don't, whether it's because they don't know how, or don't want to spend the money, or simply aren't interested in jumping through the hoops.
But for most writers I spoke with, it’s not really about copyright or ownership or even money. Most fan-fiction authors don’t make a living doing this. They do it for the community, for the friends and connections they make. “I have so many friends that I've met through partaking in events where we create stuff together,” says Krueger. And Rosenblatt says that people who are unhappy with scraping see this as a major problem. For them, it’s not that they are being deprived of potential income, but instead that someone is making money off of something that they created specifically to be non-commercial.
"For those people, non-commerciality is of value, and the idea of someone else making money off it is highly offensive because their moral commitments are being betrayed," Rosenblatt says.
And perhaps because there isn’t a big financial driver, the culture of fan fiction is all about attribution—writers link and nod to other people who’ve influenced them, or helped them. “The idea is that no one should get paid for this, but everyone should know what's mine,” Rosenblatt says. This is not simply difficult to do with AI systems, but is in fact nearly impossible. Many of these models are black boxes, and it would be impossible to spit out a list of influences that contributed to something specific ChatGPT wrote.
So, can AI systems like Sudowrite and writers who don’t want to be used by them exist in harmony? Nobody knows, of course, but most of the people I spoke with talked about some form of opting in. Rosenblatt says that some writers of fan fiction really like the ability to use AI in their work. Sudowrite certainly has fans in the writing world. Others want nothing to do with these systems, and want the ability to remove their work from the training data. “I would love to get to a place where we could have a totally opt-in model and everyone is compensated for that,” says Yu, “I just don’t think that’s possible right now.” Yu says that if people were able to opt out at scale, then the models would become noticeably worse. The reason ChatGPT works as well as it does is precisely because it’s got so much data to pull from. Critics argue that if the only way your system can function is by using work against people’s wishes, then perhaps the system itself is fundamentally morally flawed.
Fan fiction might seem like an easy mark when it comes to training models. These pieces are publicly available, non-commercial, and often not copyrighted. But that doesn’t mean they aren’t valuable and worth protecting from being used in ways that the original creators don’t like.
In 2019, Archive of Our Own won a Hugo Award for Best Related Work. At the ceremony, nominees asked every science fiction writer who had ever contributed to the site to stand, and a huge chunk of the room did. The value of this kind of community-based, collective worldbuilding is often dismissed as silly or frivolous, but these works are important to millions of people around the world. “I have read fan fiction that has affected me emotionally and lived with me in ways that stories I've read that are published have not,” says Krueger.
In the efforts to consider the future of generative AI, and whose work does or doesn’t get used to train it, even smutty fan fiction deserves protection.
"
|
1,459 | 2,023 |
"How Bookshop.org Survives—and Thrives—in Amazon’s World | WIRED"
|
"https://www.wired.com/story/books-bookshop-org-thrives-amazon-world"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons By Kate Knibbs Backchannel How Bookshop.org Survives—and Thrives—in Amazon’s World Andy Hunter, the founder of Bookshop.org (pictured here at Spoonbill & Sugartown Books in Brooklyn) developed his love for books early. “I became a reader, in the beginning, because it provided me solace,” he says.
Photograph: Yael Malka
"Do you remember what kind of beer it was?" Andy Hunter pauses for so long before answering my question, it's awkward. He's racking his brain. I've asked him to tell me about the night he came up with the idea that led to his improbably successful bookselling startup, Bookshop.org. As a former magazine editor, he wants to get the details right.
He remembers the easy stuff: It was 2018. He was on the road for work. At the time, Hunter ran the midsize literary publishing house Catapult, a job that required schmoozing at industry events. The night of his big brainstorm, he was away from his two young daughters and his usual evening obligations—dishes, bedtime rituals—and had a rare moment to think, and drink a beer.
This article appears in the Jul/Aug 2023 issue.
Illustration: Vivek Thakker
But what kind of beer? "It was, uh, a Dogfish Head IPA," Hunter finally answers. OK, so, picture this: There he is, alone in a tidy Airbnb, a light-blue bungalow on a quiet road in Berkeley, California. His brown hair is a little mussed, and he's nursing a pale ale. He's grooving to music. ("You can say I was listening to Silver Jews," Hunter says.) He couldn't stop thinking about something a board member of the American Booksellers Association, the industry's largest trade group, had said to him during a recent work dinner.
What if ecommerce was a boon for independent bookstores, instead of being their existential threat? The Booksellers Association ran IndieBound, a program that gives bloggers and journalists a way to link to indies instead of Amazon when they cite or review a book. But it hadn’t gained much traction.
That night, in Berkeley, the unusual combination of evening solitude and a touch of alcohol knocked something loose in Hunter’s brain. Or maybe it knocked something together. Either way, by the morning, he wasn’t hungover and he had a proposal for how to grow IndieBound, including simplifying the logistics of buying online and integrating it with social media.
Plus: “I wanted it to be better-looking,” he says.
The cat on the wall in Andy Hunter's home office in Williamsburg, Brooklyn, where he runs Bookshop.org.
Video: Yael Malka
When he got back home to New York, Hunter sent his proposal to Oren Teicher, then the CEO of the Booksellers Association. Teicher liked the idea, but said no. The trade organization wasn't actually interested in expanding IndieBound. But if Hunter was willing to take on the project himself, to create this new-and-improved version on his own? Well—the group could invest some money.
Even though Catapult kept him plenty busy, Hunter really believed in his vision of a souped-up ecommerce platform uniting the indies. Little stores deserved to find customers online, too, even if they didn’t have the resources to set up their own online shops. Offering them a way to band together felt like a righteous crusade. Plus, Hunter figured it could be a low-effort side gig.
What started as a favor done on a business-trip whim has since become the great project of Hunter’s professional life. In its first few years of existence, Bookshop defied even its founder’s expectations and demonstrated how helpful its model could be for small businesses. Now, Hunter has a new plot twist in mind: He wants to show business owners how to scale up without selling out—without needing to kill the competition.
The problem for independent bookstores is that many of them don't have the bandwidth to run their own online stores. Their inventories and shipping capabilities are limited by their non-Amazonian budgets. Plus, sometimes they don't want to participate in ecommerce; the romance of stuffed shelves and reading nooks and thoughtfully selected staff picks are central to their existence. Removing those experiences seems antithetical—even though it might be necessary—to the bottom line.
Bookshop offers another option. Say you’re a small bookstore owner. It takes only a few minutes to set up a digital storefront on Bookshop’s website, list what books you want to sell, and, if you want, curate collections of titles to reflect your store’s worldview. You don’t have to actually stock any of the books yourself; Bookshop partners with the wholesaler Ingram to fulfill orders, so you’re off the hook for inventory and shipping. You get a 30 percent cut of the cover price on any book sold through your storefront. (If you’re a blogger, writer, influencer, or other bookish type, you can join Bookshop as an individual, even if you don’t own a brick-and-mortar bookstore, and take home a 10 percent cut on whatever you sell.) Bookshop itself also sells books—you can type a name in the search bar at the top of its homepage and soon find yourself staring at an Add to Cart button. Physical stores can make money off of these sales, too, if they join the company’s profit-sharing pool. Bookshop gives 10 percent of these sales to the pool.
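As a rough illustration of that split, here is a small sketch of the payout math described above. The function names and the example cover price are hypothetical; the only figures taken from the article are the 30 percent storefront cut, the 10 percent affiliate cut, and the 10 percent of Bookshop's own direct sales that goes into the shared pool.

```python
# Hypothetical illustration of the revenue split described above.
# The percentages come from the article; everything else is made up.

def storefront_payout(cover_price: float) -> float:
    """A store's cut when a book sells through its own Bookshop storefront."""
    return 0.30 * cover_price

def affiliate_payout(cover_price: float) -> float:
    """An individual affiliate's cut (blogger, writer, influencer)."""
    return 0.10 * cover_price

def pool_contribution(direct_sale_price: float) -> float:
    """Amount Bookshop adds to the shared profit pool on its own direct sales."""
    return 0.10 * direct_sale_price

if __name__ == "__main__":
    price = 27.00  # example hardcover cover price
    print(f"Sold via a store's storefront:      ${storefront_payout(price):.2f}")
    print(f"Sold via an affiliate link:         ${affiliate_payout(price):.2f}")
    print(f"Bookshop direct sale, to the pool:  ${pool_contribution(price):.2f}")
```

The pool contribution is then divided among participating stores, which is how the periodic payouts described later in the piece accumulate.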
Technically, Bookshop doesn’t need independent stores to join its platform. If the goal were merely to sell books online, it could do just that, like Barnes & Noble or an early-days Amazon. But then, of course, it wouldn’t be special. And Hunter would have never bothered. Helping the indies is the whole point, something he feels an almost spiritual drive to do.
Hunter had the turbulent childhood of a young-adult novel protagonist. His dad left when he was 11, and his mother was institutionalized for mental illnesses at different points throughout his youth. Many times, Hunter and his older brothers had to figure things out on their own. Without an adult regularly looking after him—someone to make sure he had clean clothes or shampoo—Hunter struggled to make friends. He spent a lot of time alone.
His Massachusetts town didn't have a bookstore, but it had a library; he headed there after school and on weekends. "I became a reader, in the beginning, because it provided me solace," he says. He read everything; he read all the time. The Chronicles of Narnia, Judy Blume. He became so obsessed with Watership Down that he carried a copy with him wherever he went. Even his teachers teased him about it.
One summer, when Hunter was 16, his mother took him and his brothers to a cabin in Maine. While the others swam and sunbathed, Hunter raided the cabin’s library. The owners had shelves of books that astounded the teenager: Soul on Ice by Eldridge Cleaver, The Autobiography of Malcolm X , The Women’s Room by Marilyn French, James Simon Kunen’s The Strawberry Statement.
“Those books completely blew my mind,” Hunter says. He went on a countercultural binge, staying up late and reading by the fire.
During the next few years, Hunter’s social life took a turn. "By the time high school ended, I was in a better place socially than I was at 11,” says Hunter. “Because I didn’t have parents around, we had huge keg parties … That made me popular.” Hunter remained an avid reader—he studied philosophy at the University of Massachusetts—but he no longer lived in the margins. In 1993, shortly after graduating from UMass, he cocreated a music fanzine with the Freudian title Mommy and I Are One , and hosted events and parties with performers like Cat Power.
Photograph: Yael Malka
After graduation, Hunter wasn't sure what to do. He moved to LA and started working at Disney—not exactly his dream job. After six years, he finally landed a gig as the editor of Mean magazine, a freewheeling project started by some former staffers of the Beastie Boys' Grand Royal magazine. While there, he began dabbling in small-scale publishing on the side—an early sign of his entrepreneurial spirit. "If you wanted a magazine, I'd make a magazine for you," he says. Clients ranged from the music festival Lollapalooza to a neuroscience organization. (It put out a magazine called Brain World.) He also met a visual artist, Alison Elizabeth Taylor. They fell in love.
In 2004, Taylor got into graduate school at Columbia, and they moved to New York to live together in student housing while Hunter worked remotely for Mean.
Taylor would go on to establish herself as a significant force in certain contemporary art circles, and watching his partner pursue her creative dreams, Hunter wondered whether he should take his own writing ambitions more seriously. He enrolled in Brooklyn College’s MFA program, where he met Scott Lindenbaum, a fellow student. As they commiserated over how hard it was for literary magazines to find audiences, Hunter’s publishing itch returned. He and Lindenbaum decided to make a magazine anyone could read online for free. In 2009, Electric Literature debuted; it drummed up buzz by releasing a Rick Moody short story line-by-line on a nascent service called Twitter. It was a proudly techno-utopian creation, one Hunter and Lindenbaum claimed was the first literary magazine with an app.
“Electric Literature was born in a time where there was tons of anxiety about what digital was going to do to literary culture,” Hunter says. “We decided to become the optimists in the room.” The literary establishment disdained digital, but it turned out people wanted to read about books on their laptops.
Electric Literature was a hit from the start, attracting established writers like Colson Whitehead, Michael Cunningham, and Lydia Davis and accumulating a loyal subscriber base. It was never a huge moneymaker, and its operating budget was measly. They shifted to a nonprofit model in 2014. Hunter, now hooked on entrepreneurship, began eyeing his next projects.
He had made connections with people such as Morgan Entrekin, the president of the independent publishing company Grove Atlantic, who liked what he was doing with Electric Literature. With Entrekin, Hunter cofounded the newsy literary-culture website Literary Hub in 2015.
That same year, he also cofounded Catapult, with Elizabeth Koch. (Yes, from that Koch family.) Catapult soon merged with Counterpoint Press, which meant Hunter was suddenly in charge of an imprint that had put out books from authors who had blown his young mind, like Gary Snyder. Catapult also hosted writing classes and published an online magazine. Electric Literature had brought Hunter into the publishing world, but Catapult took him to a new level. For a time, Hunter worked for the three companies simultaneously, and though that meant shelving his 650-page novel, God Exploded—about a guy who tries to start a religion around the idea that the Big Bang was actually the suicide of a deity—Catapult's books and magazines won critical recognition, including a National Magazine Award and a PEN/Faulkner Award.
(This year, Catapult abruptly shuttered its writing classes and magazine as Koch shifted her focus to Unlikely Collaborators , the New Agey nonprofit organization she founded in 2021.) All the while, Hunter watched as Amazon steadily obliterated bookstores. He started obsessing over how to stop it. The answer seemed to lie in getting small, independent booksellers online. He remembers discussing the idea of a nonprofit alternative to Amazon with industry insiders—and being met with derision.
Photograph: Yael Malka
After the American Booksellers Association passed on Hunter's plan to enhance IndieBound, he decided to go ahead and bring to life his vision for ecommerce. But to do so, he had to find more money. Hunter was still working full-time as the publisher of Catapult while also serving as the publisher of LitHub and chair of Electric Literature. Whenever he could, he aggressively pitched potential investors. "I was schlepping from meeting to meeting," he says. "It was just me, and it was very lonely." As soon as he had enough funding, he went looking for help.
In 2019, Hunter approached the boisterous, bearded veteran magazine publisher David Rose, who had spent years at the London Review of Books and Lapham’s Quarterly.
When Hunter laid out his plan in their first meeting, Rose remembers “seeing dollar signs.” He thought it was wild that the model Hunter was proposing didn’t exist already. Here, thought Rose, was the rare lit nerd with a business brain. Hunter considered it a miracle that the well-respected Rose believed in him, and he brought Rose on as executive director—Bookshop’s first hire.
At the time, Rose had been consulting for the left-wing magazine The Baffler.
For a while, the pair tag-teamed the startup sprint, with Rose handling administrative details and Hunter working on the logistics of launching an ecommerce site on a shoestring budget. Eventually, they hired two others to manage the company’s social media presence and to develop partnerships with booksellers.
Rose continued to work for The Baffler and had a desk in the magazine’s office. He didn’t like trekking to the Catapult office, which was small and hot, so he asked The Baffler ’s then-executive director, Valerie Cortes, whether Bookshop could also squat in The Baffler ’s Manhattan headquarters. The two staffs mingled, sometimes grabbing drinks or going out for karaoke—but not even Rose’s involvement could convince the team at The Baffler that Bookshop was a good idea. “People weren’t on board at first,” Cortes says. According to Rose, the Bookshop team felt like the “weirdos in the corner,” grinding away at a pipe dream. “There was a running joke about how long we could last,” he says. Going up against Amazon seemed like a fool’s errand.
Even Bookshop's investors, including Morgan Entrekin, didn't have high hopes. "In my email to the handful of friends that I asked to get involved, I said, 'Look, supporting this is a very worthy thing to do. But you're not going to get a VC return,'" Entrekin says. Still, Hunter got his money, including an investment from William Randolph Hearst III. He persuaded around 200 bookstores to sign up in advance of the launch, and he struck a deal with Ingram, the book-wholesaler, which ensured that getting books to buyers wouldn't be an issue.
On January 28, 2020, Bookshop.org went live, and it made its first sale at 7 am. Some Baffler staffers suppressed their skepticism long enough to celebrate with the Bookshop squad that evening. Even then, Hunter erred on the side of restraint: Rose teased him about bringing a single bottle of champagne for the whole group to share. Hunter, who says he only expected his staff—of four—to be there, believed in the project, but he worried about its chances. “We had a very, very short runway,” he says.
Hunter figured maybe, eventually, they might earn a million dollars. He kept his day job as the publisher at Catapult.
But then, the pandemic. “A stroke of luck for Bookshop,” as Entrekin put it. Lockdowns left many independent shops, dependent on foot traffic, in deep trouble—they didn’t have digital stores. But here was Bookshop , with a low-stakes ecommerce option for brick-and-mortar booksellers. All they had to do was create a digital storefront and Bookshop took care of everything else, including fulfilling orders and paying taxes.
The financial and promotional support from the American Booksellers Association helped legitimize the new company in store owners’ eyes. Bookshop didn’t have an advertising budget, but Hunter hired a publicist, and she pushed the anti-Amazon angle hard. Stuck at home, people wanted to support local businesses; Bookshop’s first wave of press showed them that there was an easy way to do so just as they went looking for one. Suddenly, Bookshop became the sourdough of ecommerce. It rose with surprising velocity, taking even its teensy staff by surprise.
Bookshop smashed Hunter’s million-dollar goal in four months. “We sold $50,000 worth of books in February,” he remembers. By the end of March, Bookshop was doing about $75,000 per day in sales, setting a new daily sales record of $102,000 on the 31st. Hunter and his handful of employees worked frantically, sometimes logging 18- or 20-hour workdays to keep up with customer service requests and ensure orders were shipped on time. “We really had to scramble,” Rose says. They knew people were trying them out for the first time, so botched orders could sink their reputation. “It was intense,” he says.
That summer, Bookshop got even bigger, reaching a sales apex it hasn't yet replicated. "$900,000 in one day," Hunter says.
Hunter's daughter's pet rat, Agent Jellybean, lives in a two-story cage next to his desk.
Photograph: Yael Malka
Every six months, Bookshop dumped 10 percent of its sales, in equal shares, into the accounts of bookstores that had opted into its earnings pool. Some store owners were caught by surprise when they checked their accounts. VaLinda Miller, who runs Turning Page Bookshop in the suburbs of Charleston, South Carolina, was facing a crisis when a broken air conditioner caused a gnarly mold outbreak in her shop. She realized she would have to move but couldn't afford to give a new landlord several months' rent, replace damaged merchandise, and pay movers all at once. When she finally remembered to check her Bookshop account, she was astonished to see that Turning Page had more than $19,000—enough to cover the move. "It hit during the perfect time," she says. "It's been a blessing." Danielle Mullen, a former art curator and the owner of Semicolon in Chicago, never liked worrying about online sales. Her curatorial flair makes her store a distinctive community space: Art she selects hangs on the walls, shelves are stocked with books primarily from writers of color, and her sales associates are knowledgeable and chatty. She was focused on the store as an in-person experience, a gathering place. But one night, while drinking spiked hot apple cider with a friend, she signed up for a Bookshop page on a whim. For her, too, the service suddenly became the store's "lifeblood," she says. "The most necessary thing."
As uprisings for racial justice swept the United States in the summer of 2020, Bookshop highlighted Black-owned bookstores and curated anti-racist reading lists. Mullen is only the third Black woman bookstore owner in Chicago—a fact that appealed to book-buyers looking to support Black businesses. "I think we did $2 million on Bookshop that year," she says. "It was crazy." I met Mullen last summer at a café next to her shop on a busy street in Wicker Park. It was so hot out that the metal patio tables burned to the touch. Mullen was in a great mood. Semicolon was doing great. So great, in fact, that she was planning to open an outpost in Miami. She wasn't sure she'd stick with Bookshop indefinitely. She preferred focusing on her brick-and-mortar store, and she didn't especially like the idea that indies needed a third-party tech company to compete in online sales, even if said third-party tech company had good intentions.
Mullen isn’t alone in her ambivalence. Jeff Waxman, a former bookseller who now works as a publishing sales representative, was a consultant for Bookshop before it launched. He worries that the company is diverting people who would have bought directly from their local store to its own website. “The fact is, it’s always going to be better to buy a book directly through a store than through a middleman,” he says.
Hunter understands these critiques. He agrees that the best way to buy a book—for bookstores, the economy overall, and for local communities—is to wander into your local shop and purchase one in person. He doesn’t even think Bookshop is the second-best way. That would be buying directly from these local bookshops’ own online stores, if they have them. Hunter sees Bookshop as the third-best option, the Good Samaritan middleman. And this third-best way happens to be critical because of the most popular way people actually buy books: They click “Purchase” on Amazon.
Amazon controls more than half the US book market, according to Peter Hildick-Smith, president of book audience research firm Codex-Group. Jeff Bezos’ company sells approximately $4 billion to $5 billion in new books each year. By comparison, Hunter says that Bookshop sells around 1 percent of Amazon’s share. Between Bookshop and Amazon, it’s not apples and oranges so much as a single heirloom apple tree versus the world’s largest commercial citrus grove.
But Hunter wants to grow. Approximately 2,200 stores in the US and UK participate in Bookshop’s profit-sharing. Someday, Hunter wants to take the Bookshop model beyond books to help small businesses like hardware stores or toy stores with their own affiliate platforms—to be another Everything Store of sorts, but one built around preserving small businesses instead of competing with them.
For now, that’s a daydream, but a real expansion is underway. Hunter wanted to compete with Audible, Amazon’s audiobook and podcast service, by helping independent stores offer alternative formats to physical books. In 2020, he set up a partnership with Libro.fm, a startup that sells audiobooks. Like Bookshop, they partner with independent stores and split profits, so teaming up felt natural. Now Bookshop customers are directed to buy audiobooks on Libro.fm.
Last year, after considering a few directions he could take Bookshop, Hunter set his sights on ebooks. He set out to raise $2 million for the project, but Bookshop doesn’t have a pitch tailored for traditional venture capital. If anything, it has the opposite. Bookshop’s stockholder agreement forbids a sale to Amazon and its ilk (“any retailer then-presently ranked among the top 10 largest retailers”), which means there won’t be any big acquisitions down the road. Despite the rocky economic climate and his un-VC-friendly pitch, Hunter has raised over $2.3 million. (I can attest to how persuasive he sounds when he waxes poetic about the importance of alternative ebook platforms.) The largest investor is, as was the case the first time around, William Randolph Hearst III.
People will be able to read Bookshop’s ebooks in their browser, or on apps that will work on Apple and Android devices (but not, as of yet, on Kindles or through Kindle apps). This arrangement will make for a difficult business proposition and a clunky experience for readers. For starters, Apple takes a 30 percent cut of all revenue made through its app store. Hunter is hoping people will take the extra steps of buying Bookshop ebooks through their browsers rather than Apple’s app store and then reading them on Bookshop’s app, which would circumvent the Apple tax.
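To make that trade-off concrete, here is a minimal, hypothetical calculation; the $9.99 list price and the flat 30 percent commission are illustrative assumptions, not Bookshop’s actual pricing or Apple’s exact terms for every developer.

    # Hypothetical comparison of net revenue per ebook sale: an in-app purchase
    # subject to an assumed 30% platform commission versus a browser purchase
    # that avoids it. All numbers are illustrative only.
    price = 9.99                 # assumed ebook list price, in dollars
    app_store_commission = 0.30  # assumed platform cut on in-app purchases

    net_in_app = price * (1 - app_store_commission)
    net_browser = price          # browser checkout, no platform commission

    print(f"Net per sale, in-app:  ${net_in_app:.2f}")   # $6.99
    print(f"Net per sale, browser: ${net_browser:.2f}")  # $9.99

On numbers like these, steering even a modest share of buyers to the browser meaningfully changes the economics of every sale.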
One ebook startup has already attempted this kind of project and failed, unable to woo customers away from the Kindle world. Hummingbird Digital Media, which also allowed indie stores to set up their own storefronts and take a portion of the profits, has since been purchased and rebranded—it’s now called Booksio—pivoting to donating to charities instead of bookstores.
Hunter is optimistic he can succeed by building on Bookshop’s preexisting customer base. Part of his plan is to connect ebooks to the social web, to “make them more of the online conversation.” He wants to make it easier for people to share links to ebooks, the way they share snippets and links to paywalled content from The New York Times or The Washington Post.
He has hired one engineer so far and is bringing more on board. “We’re using a lot of open source technology that has been built to support an alternative ebook system already,” Hunter says. “But up until this point, it’s pretty much been libraries using the technology.” He aims to have the platform in beta by the end of the year.
Photograph: Yael Malka
There’s more. This fall, Bookshop will publish a collection of short stories by Lydia Davis—a partnership about as glam as having Miuccia Prada design a capsule collection for some tiny boutique.
It was all Davis’ idea, too. When she published her last book, she realized how much she disliked the idea of Amazon profiting off her work. “I made up my mind. For the next book, I would do everything I could to avoid Amazon,” she said. Her agent supported the decision; her longtime publisher, Farrar, Straus and Giroux, however, nixed it. (“Contracts and repercussions,” Davis offers by way of vague explanation.) Davis’ agent suggested asking Hunter for advice on publishers who might be willing to alienate the Everything Store. “It was a surprise to both of us when he said he wanted to publish it himself,” Davis says. She’s been delighted by the process. “He’s been very fast, very efficient, very resourceful.” Davis knows her sales will suffer, but she doesn’t care.
It’s the debut of a project called Bookshop Editions, to be sold exclusively through Bookshop and independent stores. Hunter isn’t planning to turn it into a full-fledged imprint, but Davis, for her part, hopes her actions might inspire other authors. “I’m just really happy I’m doing it,” she says. “I have no regrets whatsoever.”
When I caught up with Danielle Mullen of Semicolon on a gloomy Chicago afternoon, the sun hadn’t been out in days. It was the kind of weather that compels you to Google SAD lamps—or move to Florida. Mullen had been jubilant the last time we talked, brimming with her own expansion plans. Independent bookstores were on an upswing. More than 300 new shops had opened in the past few years. There are people—just enough of them, it seemed—who simply prefer physical stores like Semicolon, so I was expecting a happy update from Mullen. Had she opened her Miami outpost yet? “No,” she said. “Actually, everything has changed.” Her beautiful Wicker Park shop had flooded repeatedly, and the landlord was no help. It got so bad that Mullen decided to move the store back to its original location, a smaller spot on the ground floor of a 130-year-old apartment building in River West, a bustling neighborhood with trendy Italian restaurants and luxury condos.
She is putting in an offer to buy the whole building, with hopes of having a permanent presence in Chicago. Exciting stuff—but expensive. So expensive that Mullen has once again found the money Semicolon generates from Bookshop crucial: “Kind of like how it got us through the pandemic.”
"
|
1,460 | 2,023 |
"The Generative AI Battle Has a Fundamental Flaw | WIRED"
|
"https://www.wired.com/story/artificial-intelligence-copyright-law"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Bedingfield Culture The Generative AI Battle Has a Fundamental Flaw Photograph: MicroStockHub/Getty Images Save this story Save Save this story Save Last week, the Authors Guild sent an open letter to the leaders of some of the world’s biggest generative AI companies. Signed by more than 9,000 writers, including prominent authors like George Saunders and Margaret Atwood , it asked the likes of Alphabet , OpenAI , Meta , and Microsoft “to obtain consent, credit, and fairly compensate writers for the use of copyrighted materials in training AI.” The plea is just the latest in a series of efforts by creatives to secure credit and compensation for the role they claim their work has played in training generative AI systems.
The training data used for large language models, or LLMs, and other generative AI systems has been kept clandestine. But the more these systems are used, the more writers and visual artists are noticing similarities between their work and these systems’ output. Many have called on generative AI companies to reveal their data sources, and—as with the Authors Guild—to compensate those whose works were used. Some of the pleas are open letters and social media posts, but an increasing number are lawsuits.
It’s here that copyright law plays a major role. Yet it is a tool that is ill equipped to tackle the full scope of artists’ anxieties, whether these be long-standing worries over employment and compensation in a world upended by the internet, or new concerns about privacy and personal—and uncopyrightable—characteristics. For many of these, copyright can offer only limited answers. “There are a lot of questions that AI creates for almost every aspect of society,” says Mike Masnick, editor of the technology blog Techdirt.
“But this narrow focus on copyright as the tool to deal with it, I think, is really misplaced.” The most high-profile of these recent lawsuits came earlier this month when comedian Sarah Silverman, alongside four other authors in two separate filings, sued OpenAI, claiming the company trained its wildly popular ChatGPT system on their works without permission. Both class-action lawsuits were filed by the Joseph Saveri Law Firm, which specializes in antitrust litigation. The firm is also representing the artists suing Stability AI, Midjourney, and DeviantArt for similar reasons. Last week, during a hearing in that case, US district court judge William Orrick indicated he might dismiss most of the suit, stating that, since these systems had been trained on “five billion compressed images,” the artists involved needed to “provide more facts” for their copyright infringement claims.
The Silverman case alleges, among other things, that OpenAI may have scraped the comedian’s memoir, Bedwetter , via “shadow libraries” that host troves of pirated ebooks and academic papers. If the court finds in favor of Silverman and her fellow plaintiffs, the ruling could set new precedent for how the law views the data sets used to train AI models, says Matthew Sag, a law professor at Emory University. Specifically, it could help determine whether companies can claim fair use when their models scrape copyrighted material. “I'm not going to call the outcome on this question,” Sag says of Silverman’s lawsuit. “But it seems to be the most compelling of all of the cases that have been filed.” OpenAI did not respond to requests for comment.
At the core of these cases, explains Sag, is the same general theory: that LLMs “copied” authors’ protected works. Yet, as Sag explained in testimony to a US Senate subcommittee hearing earlier this month, models like GPT-3.5 and GPT-4 do not “copy” work in the traditional sense.
Digest would be a more appropriate verb—digesting training data to carry out their function: predicting the best next word in a sequence. “Rather than thinking of an LLM as copying the training data like a scribe in a monastery,” Sag said in his Senate testimony, “it makes more sense to think of it as learning from the training data like a student.”
This is pertinent to fair use, the part of US copyright law that generally protects the unlicensed use of copyrighted works for things like scholarship and research. Because if the analogy is correct, then what’s going on here is akin to how a search engine builds its index—and there’s a long history of Google using exactly this argument to defend its business model against claims of theft. In 2006 the company defeated a suit from Perfect 10, an adult entertainment site, for providing hyperlinks and thumbnails of subscriber-only porn in its search results. In 2013 it convinced a New York court that scanning millions of books, and making snippets of them available online, constituted fair use. “In my view, Google Books provides significant public benefits,” US circuit judge Denny Chin wrote in his ruling. In 2014, a judge found in favor of HathiTrust Digital Library, a spinoff of Google Books, in a similar case.
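To make the “predicting the best next word” framing above concrete, here is a deliberately tiny, illustrative sketch in Python. The corpus and the counting approach are invented for illustration only; real LLMs learn probability distributions over tokens with neural networks rather than lookup tables, but the training objective is the same in spirit.

    from collections import Counter, defaultdict

    # Toy bigram "language model": count which word follows which in a tiny
    # corpus, then predict the most frequent continuation. This is a stand-in
    # for the next-token objective described above, not how GPT models work
    # internally.
    corpus = "the cat sat on the mat the cat ate".split()

    following = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        following[prev_word][next_word] += 1

    def predict_next(word):
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> "cat", the most frequent continuation seen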
Sag reckons that defendants in similar generative AI lawsuits will use a similar argument: Yes, data goes in, but what comes out is something quite different. Therefore, while it might seem commonsensical that a human reading and a machine “reading” are inherently different activities, it’s not clear the courts will see it that way. And there’s another question mark lingering over whether a machine can make a derivative work at all, says Daniel Gervais, a professor of intellectual property and AI law at Vanderbilt University in Nashville, Tennessee: The US Copyright Office maintains that only humans can produce “works.” If the arguments from the defense hold, then there’s the matter of where those books came from. Several of the experts WIRED spoke to agree that one of the more compelling arguments against OpenAI centers on the secretive data sets the company allegedly used to train its models. The claim, appearing verbatim in both of the recent lawsuits, is that the Books2 data set, which the lawsuits estimate contains 294,000 books, must, by its very size, hold pirated material. “The only internet-based books corpora that has ever offered that much material are notorious ‘shadow library’ websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik,” the lawsuits claim.
The reason OpenAI would plunder pirated data is simple: These sites contain a bounty of the highest-quality writing, on a massive range of subjects, produced by a diverse range of authors. Sag argues that the use of copyrighted works such as books may have helped make LLMs “more well-rounded,” something that may have been difficult if, say, they were only trained on Reddit posts and Wikipedia articles.
There's no precedent in the US that directly links fair use with whether the copyrighted works were obtained legally or not. But, says Sag, there's also no stipulation that unlawful access is irrelevant in such cases. (In the European Union, it's stipulated that data-mining operations must get legal access to the information they use.) One way to look at this problem is to claim that lawful access is irrelevant to inspiration, an argument Masnick recently made on Techdirt.
“If a musician were inspired to create music in a certain genre after hearing pirated songs in that genre, would that make the songs they created infringing?” he wrote.
Masnick’s worry is that some stricter imagining of copyright infringement, aiming to rein in generative AI, could have an unintended chilling effect on creativity. Earlier this year, the US Copyright Office launched an initiative to investigate AI issues. “I fear that saying ‘we can’t learn from these other artists without compensating them,’ creates really big problems for the way that that art is created and the way that content creators learn,” he says. “The normal way that content creators of all stripes become their own content creators is they see someone else and they are inspired by them.” On the other hand, if someone spends years writing a novel, shouldn't copyright ensure that they are compensated if someone else uses their works for commercial purposes? “You could frame this as undermining the incentives of the copyright system,” says Sag. Simply put, if generative AI systems can scrape copyrighted works without compensating writers and churn out something in a similar style, does that lower the incentives for people to create such works in the first place? These lawsuits, even if they are unsuccessful, are likely to provoke generative AI companies into taking steps to avoid them. These steps are unlikely to make happy reading for artists. These firms could, for example, obtain licensing agreements to use copyrighted works in their training data. It's been widely reported that this would be analogous to how, say, Spotify licenses music—albeit on controversial terms —in a way the original version of Napster didn’t. Drake, for example, could license out his discography so fans can conjure Drake-like AI croonings of their own.
Another possible future sees artists asked to opt in to allowing their work to be used as training data. Roblox, which has been cautious with its in-house tools, is considering a model like this for content made by its users, while Adobe has been similarly careful with Firefly, training it on Adobe Stock images and licensed and public domain content. The Associated Press also recently announced a deal to license its news stories to OpenAI.
Ultimately, though, the technology is not going away, and copyright can only remedy some of its consequences. As Stephanie Bell, a research fellow at the nonprofit Partnership on AI, notes, setting a precedent where creative works can be treated like uncredited data is “very concerning.” To fully address a problem like this, the regulations AI needs aren't yet on the books.
"
|
1,461 | 2,018 |
"These Wind Patterns Explain Why California's Wildfires Are So Bad | WIRED"
|
"https://www.wired.com/story/these-wind-patterns-explain-why-californias-wildfires-are-so-bad"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science These Wind Patterns Explain Why California's Wildfires Are So Bad A plume of smoke rises above the Camp Fire as it moves through Paradise, California. High winds and low humidity caused the Camp Fire's rapid spread.
Justin Sullivan/Getty Images
In California three major fires—the Camp Fire in the north and Hill Fire and Woolsey Fire in the south—have raged on a scale the state has never seen before.
The Camp Fire in particular was the most destructive and deadly wildfire in California history by far: It has virtually obliterated the 27,000-person town of Paradise, destroying almost 19,000 structures and killing at least 88. Hundreds of people are still missing. Three people have been found dead in the Woolsey Fire.
The driving force has been extreme wind—gusts of up to 60 miles per hour, perhaps even 70 in the hills of Southern California—blowing through the state. Wind further desiccates already dry vegetation and pushes the fires along with incredible speed. Like a demonic analog of water, this air is flowing across the state, nourishing flames and parching plants.
The fire-fanning winds originate in the jet stream, a band of strong winds in the upper reaches of the atmosphere. The jet stream strengthens at this time of year, amplifying its natural meandering nature and creating troughs that move south through California, which you can see in the tweet below. That’s why all these fires popped up on either end of the state nearly simultaneously: They share a common origin in the jet stream.
[Embedded X post; viewable on the site it originates from.]
When the air masses hit the Sierra Nevada Mountains in eastern California, they behave like water flowing over a rock in a stream. In fluid dynamics it’s known as a hydraulic jump—the water picks up speed as it cascades down the rock.
Or in this case, air. “You get an enhancement of the wind and the momentum as it compresses,” says Nick Nauslar, a fire weather forecaster at NOAA. “As it gets compressed it expands and warms, and so you get a warming, a drying, and an enhancing of the wind speed.” That warm air moves across the ground and sucks the moisture out of already parched vegetation, making it that much easier for a spark to turn to flame.
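As a rough, illustrative back-of-the-envelope version of the warming Nauslar describes, the sketch below applies the dry adiabatic lapse rate—descending dry air warms by roughly 9.8 °C per kilometer—to a hypothetical descent; the starting temperature and the size of the drop are assumptions, not measurements from this event.

    # Rough sketch: dry air warms at about 9.8 degrees C per kilometer as it
    # descends (the dry adiabatic lapse rate). Starting values below are
    # illustrative, not observations from the 2018 fires.
    DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8

    def temperature_after_descent(start_temp_c, descent_km):
        return start_temp_c + DRY_ADIABATIC_LAPSE_C_PER_KM * descent_km

    # Air starting at 10 C that sinks 2 km off the Sierra crest
    print(temperature_after_descent(10.0, 2.0))  # -> 29.6 C: warmer air that can hold far more moisture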
But how did the winds form in the first place? This comes back to the jet stream and the inland high-pressure regions it generated. Air generally moves from high to low pressure, and in this case the low-pressure area off the California coast pulls winds to the west. “The stronger the pressure gradient, the stronger your winds will be at the surface,” says Nauslar.
To get a sense of the physics at play, picture a plastic bottle of water. “If you have a water bottle and you squeeze one side, applying more pressure, you're increasing the gradient of pressure from the back end to the front end, and the water squirts out,” says Nauslar. “Essentially you're propelling the air, or in this case water, forward at a faster rate.” Change the pressure gradient, and you change wind speed, which is why we’ve seen fluctuations rippling south through California like waves.
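For readers who want the arithmetic behind “the stronger the pressure gradient, the stronger your winds,” here is a minimal sketch of the pressure-gradient acceleration on an air parcel, a = Δp / (ρ · Δx). The numbers are purely illustrative and not taken from any forecast in this story.

    # Pressure-gradient acceleration on an air parcel: a = delta_p / (rho * delta_x).
    # A bigger pressure difference over the same distance means a bigger push,
    # hence stronger surface winds. Values below are illustrative only.
    AIR_DENSITY_KG_M3 = 1.2  # near sea level

    def pressure_gradient_acceleration(delta_p_pa, distance_m, density=AIR_DENSITY_KG_M3):
        return delta_p_pa / (density * distance_m)

    # Example: a 10 hPa (1,000 Pa) difference across 100 km
    accel = pressure_gradient_acceleration(1000.0, 100_000.0)
    print(f"{accel:.4f} m/s^2")  # ~0.0083 m/s^2 -- small, but acting for hours it spins up strong winds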
High winds make the fires more dangerous by speeding them up and further drying out vegetation. (The tweet below shows just how quickly the winds can crater humidity and spike temperatures.) But they also make firefighters’ jobs harder.
[Embedded X post; viewable on the site it originates from.]
Firefighters on the ground have to keep their distance if the conflagration is moving too fast, or it will overwhelm them. Particularly high winds will either ground aircraft or mess with their accuracy when doing aerial drops of water or fire retardant. “It's not as impactful because the retardant gets spread out too far or it misses its area,” says Nauslar.
Crews will still try drops if winds aren’t too high to fly aircraft, especially if there are structures or lives at risk. Indeed, helicopters have been dipping into Malibu mansions’ pools, despite the winds. The drops just might not be as effective as they would be in calmer conditions.
[Embedded X post; viewable on the site it originates from.]
Heavy gusts also drive smoke into communities downwind. This is a particular problem with the Camp Fire, which is perfectly positioned for winds coming from the northeast to blow the smoke into the Bay Area, 150 miles away, where air quality is now considered “very unhealthy.” (Down in Southern California, the fires are closer to the coast, so the winds are blowing the smoke out to sea, relieving inland communities.) The stuff is very, very bad for humans, especially young ones. The EPA’s air quality website, AirNow.gov, has been so overloaded with traffic that they’ve created a special streamlined version for California residents.
[Embedded X post; viewable on the site it originates from.]
Even if the wind were to die down significantly, as it sometimes does at night (though not in the case of these fires—gusts waned, but still a wind of some sort is probably blowing), smoke problems are likely to persist. “You'll get what is called an inversion that begins to settle,” says Nauslar, a condition where hot air parks itself above cooler air. “It traps smoke at lower levels, and that's where you can really get some communities that are smoked in with some very unhealthy air quality, especially in areas that are directly downstream or in the vicinity of the fires.” The sad and horrifying reality is that this is the new California. In just the last year, the state has seen seven of the 20 most destructive fires in its history.
The warmer the planet gets, the thirstier California’s atmosphere gets, the drier the vegetation gets, and the worse fires rage.
"
|
1,462 | 2,019 |
"PG&E’s Power Shutoffs Can’t Save California From Wildfire Hell | WIRED"
|
"https://www.wired.com/story/pge-california-power-outage"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science Power Shutoffs Can’t Save California From Wildfire Hell Photograph: Mario Tama/Getty Images Save this story Save Save this story Save On Wednesday and Thursday, high seasonal winds will tear through California, drying out vegetation and fanning wildfires. The conditions could easily spell a devastating, deadly conflagration. In preparation, early Wednesday morning the utility PG&E— whose equipment sparked last year’s Camp Fire , which killed 86 people and destroyed the town of Paradise—will begin preemptively shutting off power to a staggering 800,000 customers.
Those customers are not happy, and for good reason: Losing power is a hassle for anyone, but it’s potentially deadly for those who rely on electrical medical devices. Businesses lose business, food spoils in warming fridges, and critical infrastructure goes offline. But this is no shot in the dark—meteorologists can predict where and when those winds will grow dire, so PG&E can target their shutoffs. It’s a calculus that climate change is making increasingly familiar. But blaming the climate alone would be letting California off the hook. Its policies and building habits are also responsible for the darkness that must now descend on northern portions of the state.
California’s wildfire problem grows from a clash of contrasts. In the atmosphere at this time of year, pressure builds up in air masses over the Great Basin, east of the state. At the same time, a low pressure region takes shape near the coast. Because air tends to move from high- to low-pressure areas, winds start accelerating from the northeast toward the coast. The greater this pressure gradient, the stronger the winds.
As the winds move over the Sierra Nevada in eastern California, they flow like water over rocks in a stream, compressing and warming. Slicing through valleys, the winds gather more speed, desiccating the air. “If you imagine the atmosphere over your head as a sponge, you can’t wring it out anymore,” says Daniel Swain, a climate scientist at UCLA.
At ground level, the warm air screaming through the mountains sucks away whatever moisture might be left in the vegetation—which is increasingly little as the climate warms in California and autumns grow increasingly dry.
What’s left is a parched landscape that’s primed to burn, and winds of 60 or 70 miles per hour can speedily turn a spark into a fast-moving wildfire. Such was the case in last year’s Camp Fire: Winds picked up embers and blew them perhaps a mile ahead of the main conflagration, setting a multitude of small fires throughout the town of Paradise, overwhelming firefighters.
Because meteorologists know why and when and where these winds form, they can use models to give perhaps a week’s warning of a major wind event, like one coming up. Ground-level data, like topography, sharpens the forecast to show where winds might be fiercest. So PG&E is cutting off power in the particularly dangerous zones it has identified in Northern California, where high winds might rustle power lines and shower sparks onto the wind-parched vegetation below. Specifically, says PG&E spokesperson Ari Vanrenen, they’re looking for humidity levels below 20 percent and sustained winds above 20 mph or gusts over 45 mph.
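As a simple illustration of the thresholds the article attributes to PG&E—humidity below 20 percent plus sustained winds above 20 mph or gusts over 45 mph—here is a hypothetical check. The function name and decision logic are assumptions for illustration; the utility’s actual shutoff decisions weigh many more factors.

    # Hypothetical sketch of the criteria quoted above. Real public safety
    # power shutoff decisions involve far more inputs than this.
    def meets_shutoff_criteria(humidity_pct, sustained_mph, gust_mph):
        dry_enough = humidity_pct < 20
        windy_enough = sustained_mph > 20 or gust_mph > 45
        return dry_enough and windy_enough

    print(meets_shutoff_criteria(humidity_pct=15, sustained_mph=25, gust_mph=40))  # True
    print(meets_shutoff_criteria(humidity_pct=35, sustained_mph=25, gust_mph=50))  # False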
If the conditions align, PG&E initiates what it calls public safety power shutoffs, and they’re tortured decisions.
“It is sort of unprecedented for such a large utility doing this preemptively,” says Swain. “They are probably an unfortunately necessary stopgap fire-prevention measure right now, but they come with serious risks as well.” A utility like PG&E is mandated to provide power, because doing so isn’t just a matter of modern conveniences—it can be a matter of life and death. That’s especially true in the Golden State's mountain towns that are most at risk of catastrophic wildfire, many of which are retirement communities. The elderly may rely more heavily on medical appliances and be more vulnerable to heatstroke without air conditioning. By preemptively cutting off power, you’re also potentially cutting off communication—if the power goes out and a wildfire starts, and TVs and internet routers don’t work, people could be at risk. Electric water pumps too would go offline, potentially hampering firefighting efforts.
In preparation for the shutoffs, PG&E recommends stocking up on food and water and flashlights, but that might be difficult for people with fixed incomes and limited mobility. You’ve got to think of the little things, too, like opening your garage before a blackout in case a wildfire does come and you need to flee and your opener is kaput. If you’ve got a generator, great; but ironically enough, more generators humming along outdoors means more ways to spark the fires PG&E is trying to prevent.
The hard truth is that California is built to burn.
For decades, the state has stamped out wildfires instead of letting them burn naturally, causing fuel to accumulate. And Californians can’t help but keep building homes right up against wildlands, often in wind-funneling valleys, putting themselves literally in the line of fire.
PG&E bears outsize responsibility for this mess; its dismal safety record includes 17 major wildfires in 2017 alone. Miles upon miles of electrical lines criss-cross the landscape, providing ample opportunity for ignition. A solution might be to bury the lines, but that’s expensive and often not feasible in rocky regions. In an ideal world, all of these mountain towns would operate on their own self-contained, solar-powered microgrids, but that too is wildly expensive.
And really, fires will always have reasons to start—a firework here, an overheated car or cigarette ember there. In these times of climate change, drier brush means Californians must live with the constant anxiety that it’s not a matter of if the next Camp Fire will strike, but when and where.
“Even if you prevent 90 percent of wildfire ignitions, the remaining fires could be just as bad or even worse,” notes UCLA's Swain. “That still leaves us with the problem that the character and intensity of wildfires are changing. Even if we see fewer of them, we’ll still have catastrophic fires.” Add catastrophic power outages to the list. All is not well out west.
"
|
1,463 | 2,019 |
"The Age of Flames Is Consuming California | WIRED"
|
"https://www.wired.com/story/kincade-fire"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science The Age of Flames Is Consuming California Photograph: JOSH EDELSON/Getty Images Save this story Save Save this story Save Right on cue, Northern California has plunged back into wildfire hell. This time two years ago, the Tubbs Fire was ripping through Santa Rosa and other communities north of San Francisco, killing 22 and destroying 5,000 homes. And last year on November 8, the Camp Fire virtually obliterated the town of Paradise, killing 86 and burning an astonishing 20,000 structures to the ground.
On October 23 at 9:30 pm PT, a wildfire sparked northeast of Healdsburg, a town of over 10,000 just north of San Francisco. Fanned by winds of up to 100 mph, the Kincade Fire tore through the landscape, consuming 77,000 acres so far. Some 200,000 people have been forced to flee, and it’s 60 percent contained. The number of structures reported destroyed is 282.
Welcome to what fire historian Steve Pyne calls the Pyrocene, a unique time in history when human use of fire, particularly the burning of fossil fuels, and the attendant climate change combine to create hell on Earth. “We are creating a fire age that will be equivalent to the Ice Age,” he says. The reckoning is here, and California—a highly flammable state packed with people—is getting it worse than just about anybody in the world.
There’s good reason that, for the past three years, Northern California has seen particularly massive, fast-moving wildfires tear through communities. Every autumn, winds blow in from the northeast, heating up and picking up speed as they descend through mountain valleys. This sucks moisture out of vegetation, turning it into the perfect fuel for wildfires.
In the past, the state would have had at least some rain to hydrate vegetation after the summer. Thanks to climate change, that’s largely not the case anymore. All it takes is one spark to ignite ultra-dry brush, and high winds will carry that flame with incredible speed, overwhelming communities like Paradise, where many residents simply didn’t have time to escape.
Winds also complicate firefighting efforts: If they’re too strong, planes and helicopters can’t accurately drop flame retardants, and if they’re too light, smoke accumulates and visibility plummets, making it unsafe to fly too close to the conflagration.
But to blame California’s wildfire problem on climate change alone is oversimplifying matters. As the state’s population has boomed, communities have cropped up in the most brush-packed, fire-prone lands. California has also been terrible at managing vegetation that’s grown out of control, because wildfires that naturally reset ecosystems are now quickly extinguished to save human lives. And the local utility, PG&E, has a dismal safety record—its equipment sparked 17 major wildfires in 2017 alone. Accordingly, it’s been initiating huge “ public safety power shutoffs ” this year in anticipation of high wind events, and indeed it seems to have preemptively cut power to the region where the Kincade Fire sparked last night. But the San Francisco Chronicle is now reporting that PG&E left high voltage lines in the area energized, and detected an outage minutes before the wildfire started.
These are the same kinds of lines that sparked last year’s Camp Fire, though to be clear the utility hasn’t officially been blamed for this new fire.
The mountain communities in the danger zone can take steps to protect themselves, though nothing comes for free. Each town could, for example, decide to invest in its own solar-powered microgrid.
They could become vigilant about clearing brush around homes, but all it takes is one lazy homeowner to imperil everyone else. “Fire is a contagion phenomenon—its power comes from its ability to spread,” says Pyne. “And if you take care of your house or your property, but your neighbors don't, you're still at risk.”
It’s easy to think of climate change as a creeping problem, that sea levels will nudge higher and higher and we’ll adapt accordingly. But in California, climate change and an army of conspiring factors have joined forces to lay siege on the state, turning it into the front line of the Pyrocene.
The Kincade Fire is now sending smoke into communities that have suffered before from the terrorizing force of wildfires. “It takes a real toll on us emotionally,” says Adrienne Heinz, a research psychologist who lives in Healdsburg, where residents are instructed to “stay vigilant” as the Kincade Fire develops. In the aftermath of 2017’s Tubbs Fire, which devastated nearby Santa Rosa, she helped develop a system to get survivors mental health care.
And now the stress and adrenaline are back. “You can only stay in that hypervigilant mode for so long before you’re like, wow, I’m exhausted, I’m irritable, I’m not sleeping.” These Californians are a new kind of climate refugee, either forced to uproot their lives if they can’t afford to rebuild their burned homes or sticking it out as hostages to conspiring forces in a state that’s built to burn, knowing that it’s only a matter of when, not if, the next big one comes.
"
|
1,464 | 2,019 |
"UC Berkeley Was About to Launch a Satellite. Then PG&E Said It Was Cutting Power | WIRED"
|
"https://www.wired.com/story/icon-launch-blackout"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science UC Berkeley Was About to Launch a Satellite. Then PG&E Said It Was Cutting Power Photograph: Randy Beaudoin/NASA Save this story Save Save this story Save Last Monday, just as the workday was winding down, Paula Milano received a phone call that threw her week into chaos. Milano, who helps run the Space Sciences Laboratory at UC Berkeley, had been gearing up for a satellite launch. But on the phone now was a friend of hers, with bad news: PG&E, the power company, was warning the school that its electricity could be cut Wednesday—making the campus one of more than 700,000 customers that would suffer the same fate.
The outage was a precautionary measure to keep forecasted high winds from jostling electrical equipment and starting the next massive wildfire.
And it fell to this friend, who manages several buildings on campus, to piece together a plan for the coming plunge into darkness. Surely Milano’s lab could go without power for a while, right? “And I was like bleeeh —no man, no way,” she says. “We're launching a satellite on Wednesday.” That’d be the ICON spacecraft, meant to study Earth’s ionosphere, the place that, as NASA describes it , is “where terrestrial weather from below meets space weather above.” ICON would launch out of Cape Canaveral in Florida, where a NASA team would oversee the spacecraft’s journey from ground to orbit on an air-launched rocket. But this Berkeley lab, which designed and built the satellite, needed to oversee the deployment of the instrument itself. It would be running mission control for the satellite from afar—assuming they had electricity. “If a scrub of the mission happens because of Berkeley, that's a huge black eye for us, and it's a huge public black eye for NASA,” Milano says.
So as PG&E customers across Northern California scrambled last Tuesday to buy flashlights and water and fill their cars with gas, Milano mobilized a campaign to electrify the Space Sciences Laboratory.
Normally, NASA wouldn’t let mission control run on anything less than a stable grid connection, says Steven Beckwith, the lab’s director. “They basically will not let you launch if you're on backup power. But it turns out that the contract that we had written with NASA gave us that call.” The good news was that the university runs its own cogeneration plant, or cogen, which could provide some of the campus’ buildings with power in the event of an outage. But Milano and Beckwith had no way of knowing for sure if it would offer a steady source of electricity.
On Tuesday, PG&E updated its messaging: The campus and much of the rest of Northern California would lose its power soon, though the question remained of when. “They chose lives over science, which I get,” says Milano. “So then we went into breakneck speed on Plan B.” The team already had a backup generator outside the building but decided to bring in another. Then scientists had to figure out how much fuel they’d need, given how long they expected to be divorced from the grid. Also they had to find a guy to bring over a truckload of fuel. Beckwith dispatched people to find hundreds of feet of high-gauge extension cords, which they snaked through the hallways of the lab.
The 30-odd people working mission control would themselves need fuel too. SSL prepares for any launch by bringing in catering and snacks from Costco runs, but water could be a problem if they lost power, given the facility is up on a hill and depends on electric water pumps. So in preparation, they filled two 400-gallon tanks on site with water before the supply got cut off. Then to get rid of all that food and water, they secured two portable toilets for the parking lot. To lighten the load on electrical and nutritional resources, Beckwith ordered staff who didn’t need to be on site to work from home for the launch.
“We said, we know we can do this, that this is a finite challenge,” adds Beckwith. “I think by Tuesday evening, we actually felt like we were going to be OK.” Wednesday came, and PG&E still couldn’t say when the campus might lose power. But a reprieve, of sorts, came from Cape Canaveral: At 11 am Pacific time, Beckwith got word that the launch was scrubbed until the next day due to weather. Perhaps, then, their mission could dodge the outage entirely.
At around 11 pm on Wednesday, Milano was at home, right over the hill from campus, when her power went out. She alerted a colleague, who rushed over to the lab in time to witness … not much at all. “A little blip,” Milano says, “but we maintained power. So that meant the cogen worked.” When Beckwith arrived at the lab Thursday morning, the cogen power was still on. Their new launch time was set for that evening at 8:31 pm Eastern time. As the Berkeley people stared into screens in two small rooms lined with yellow extension cords, monitoring the temperature of the spacecraft and other vitals, a converted airliner raced down a Cape Canaveral runway carrying a Pegasus rocket, which housed the satellite, and ascended to 39,000 feet.
But then, another hiccup: Two minutes before the jet dropped the rocket, a glitch cut off microphone contact between the pilots and Cape Canaveral. So they looped the aircraft back around and started the countdown anew. A half-hour after the initial delay came the command: “Drop on my mark—3, 2, 1—drop. Pegasus away.” The rocket fell a few hundred feet from the jet, then fired itself up to deposit ICON into orbit. When the refrigerator-sized satellite reached its destination 360 miles up, the members of the Space Sciences Lab—fully powered and hydrated and well-fed—began their mission, tracking the satellite as it passed over California.
The Golden State at large, and UC Berkeley in particular, learned brutal lessons last week. For one, climate change, which has made wildfires in the west more ferocious, is already taking lives.
And in its management (or mismanagement), California’s economy is taking a bludgeoning as electricity becomes less certain.
For academics too, the consequences can be maddening: One Berkeley scientist had to load his freezers onto a moving truck and send them across the San Francisco Bay to the University of California, San Francisco, which hadn’t lost power.
“Modern society is very much like a biological organism, like us,” says Beckwith. “If you shut down the metabolism, it dies.” ICON, thank the heavens, was a lucky survivor.
"
|
1,465 | 2,019 |
"California’s Power Outages Are About Wildfires—But Also Money | WIRED"
|
"https://www.wired.com/story/californias-power-outages-are-about-wildfiresbut-also-money"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Adam Rogers Science California’s Power Outages Are About Wildfires—But Also Money Some of California's most destructive wildfires were at least in part the fault of equipment owned by PG&E.
Photograph: Karl Mondon/Getty Images. The red wind started to blow again this week—hot, dry gusts up to 75 miles per hour in some parts of Northern California. Usually, their arrival means wildfires are on the way, a recurring threat to life and property thanks to climate change and urban sprawl.
This time was a little different, though. As The New York Times points out, five out of the 10 most destructive wildfires in California history were at least in part the fault of equipment belonging to Pacific Gas & Electric, the utility company that delivers power to 16 million people in the top half of the state. In part because of liability claims from victims of past fires, $8.4 billion worth, PG&E is in the midst of bankruptcy.
So this time, when the red wind started to blow, PG&E turned off the power.
Did they do it out of an overabundance of caution and concern for safety? Definitely. But at least one expert suspects another set of priorities at work: PG&E’s ongoing bankruptcy negotiations.
A core function of bankruptcy is to let a business continue to operate while it figures out what it owes, and to whom. PG&E has spent hundreds of millions of dollars to figure that out. “If the assumptions in your analysis turn out to be wrong, your whole strategy can blow up and be immensely costly, and delay your bankruptcy,” says Jared Ellias, an expert in bankruptcy law at UC Hastings College of the Law. That means you try to get through it fast, and with minimal chaos.
A wildfire would definitely qualify as chaos. In large part, that’s because of the damages PG&E is on the hook for. Expenses incurred during a bankruptcy take precedence over the ones from before the bankruptcy. The bills are, in the language of the law, “senior.” Ellias says that while damages are supposed to get paid out of a common pot, claims from victims of a fire in 2019 could in this case supersede those of earlier victims.
The rules get more complicated than that. This summer California passed a law called AB 1054, which set terms for how PG&E will pay out claims for previous fires and established a $20 billion insurance fund to pay future claims. That carefully negotiated, controversial plan didn’t take into account what would happen if a massive fire happened right now. “Of greater significance to PG&E is the fact that it cannot access the ‘insurance’ fund established by AB 1054 for fires this season,” writes Mike Danko, a lawyer representing fire victims, in an email. “Those funds would be available to PG&E for fires beginning in 2020, at the earliest—another reason for PG&E to protect itself at the expense of the ratepayers by turning off power, even if not really necessary.” So a fire in 2019 would be a mess. “If you’re well advised by great lawyers, which they are, as much as you always don’t want a fire to happen, you really don’t want one right now,” Ellias says. “Imagine how ugly it would get if you had a competition between bankruptcy fire victims and pre-bankruptcy fire victims. That would be ugly, ugly, ugly—ugly for the people involved, ugly for the representatives in Sacramento.” PG&E then looks, let’s say, highly incentivized. In mid-October 2018, it shut off power over a wide swath of the North Bay and Sierra Nevada mountains in advance of a predicted wind event. On November 6, 2018, PG&E warned 70,000 customers that it might do it again, but then did not. Two days later, a tower on PG&E’s Caribou-Palermo transmission line caught fire, sparking the Camp Fire, which destroyed the town of Paradise and killed 88 people, making it the deadliest wildfire in the US in a century.
To be clear, the power shutdown seems to be doing what it was supposed to. Starting Wednesday, more than 700,000 bill-payers—perhaps 2 million people—went dark, from the state’s northern marijuana- and wine-growing regions to the hills of the East Bay and the ocean-facing side of the Silicon Valley peninsula.
Since then, a few small fires have broken out, but nothing on the order of last year’s massive blazes. Still, it’s unclear why PG&E shut down such a large area, or what criteria the agency used to activate its Public Safety Power Shutoff program. “Many questions remain as the state reels with the consequences of this decision,” wrote state senator Jerry Hill to the California Public Utilities Commission. “I strongly disagree with the binary position currently offered by PG&E—they can turn the power off and shut down the economy and livelihoods of millions in California, an action which may protect us from wildfire; or they can roll the dice and continue with the lights on.” PG&E says the reason for the shutoff is right there in the name. “We faced a choice between hardship or safety, and we chose safety,” said Michael Lewis, PG&E’s senior vice president for electric operations, in a press statement. “We deeply apologize for the inconvenience and the hardship, but we stand by the decision because the safety of our customers and communities must come first.” PG&E denies that its financial travails had anything to do with the decision to turn off the power. “This has nothing to do with the bankruptcy,” says Lynsey Paulo, a spokesperson for the company. “This is about reducing wildfire risk and keeping our customers and community safe.” Its power shutoff program, she adds, has expanded since 2018 to include both distribution lines as well as bigger transmission lines, part of a comprehensive Community Wildfire Safety Program “to harden our system so it can withstand extreme weather.” In addition to shutdowns, that includes “vegetation management,” inspections of equipment, and new weather stations and fire-monitoring cameras. (In April, a federal judge chided PG&E—at a probation hearing stemming from a whole other deadly equipment failure—for spending $4.5 billion on dividends, but pleading poverty when it came to tree-trimming.)
Yes, shutting off the power was about public safety. But in this case, that decision was at least semi-aligned with bankruptcy prep and shareholder value. “The company is currently controlled by the board of directors, who work for the shareholders, and the shareholders sit behind pre-bankruptcy fire victims in the pecking order,” Ellias of UC Hastings says. “So from their perspective, if the board cares about the shareholders, any fire during the bankruptcy would wipe out the shareholders. And they have been trying very hard to avoid wiping out their shareholders.” This is what retreat in the face of climate change looks like: letting cities go dark. It’s not as obvious as abandoning coasts ahead of rising seas, but this week a utility responsible for as many people as the entire population of Cambodia has essentially declared, “We can’t provide electricity safely because of climate change.” Still, worsening wildfires are just a hazard; it took bad management, deferred maintenance, and urban sprawl to turn them into a disaster.
"
|
1,466 | 2,019 |
"You’re Not Getting Enough Sleep—and It’s Killing You | WIRED"
|
"https://www.wired.com/story/youre-not-getting-enough-sleep-and-its-killing-you"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Emily Dreyfuss Science You’re Not Getting Enough Sleep—and It’s Killing You Bret Hartman/TED Save this story Save Save this story Save The whole world is exhausted.
And it’s killing us.
But particularly me. As I write this, I’m at TED 2019 in Vancouver, which is a weeklong marathon of talks and workshops and coffee meetings and experiences and demos and late-night trivia contests and networking, networking, networking. Meanwhile, I’m sick as a dog with a virus I caught from my 3-year-old, I’m on deadline for what feels like a bazillion stories, and I’m pregnant, which means I need coffee but can’t have too much, and need sleep but can only lay on my left side, and can’t breathe without sitting propped up with a pillow anyway, since I can’t safely take any cold medication.
According to neuroscientist Matthew Walker, I’m doing serious damage to my health—and life—by not sleeping enough.
“The decimation of sleep throughout industrialized nations is having a catastrophic impact on our health, our wellness, even the safety and education of our children. It’s a silent sleep loss epidemic. It’s fast becoming one of the greatest challenges we face in the 21st century,” Walker, an expert in sleep at UC Berkeley and author of the best-selling book Why We Sleep , told a rapt TED audience on Thursday.
After a morning of bleak talks on climate change and the harm of viral misinformation online—punctuated by frequent exclamations of “Well, that was alarming” by TED science curator David Biello, and stolen glances at Twitter for updates on the Mueller report and the happenings in the outside world—Walker’s talk felt like a jolt of caffeine to a weary system. (A comparison Walker might hate. He warns, in the strongest possible terms, about the harms of that particular stimulant.) He ran down all the ways in which sleep deprivation hurts people: it makes you dumber, more forgetful, unable to learn new things, more vulnerable to dementia, more likely to die of a heart attack, less able to fend off sickness with a strong immune system, more likely to get cancer, and it makes your body literally hurt more. Lack of sleep distorts your genes, and increases your risk of death generally, he said. It disrupts the creation of sex hormones like estrogen and testosterone, and leads to premature aging. Apparently, men who only sleep five hours a night have markedly smaller testicles than men who sleep more than seven.
“Sleep loss will leak down into every nook and cranny of your physiology,” he said. “Sleep, unfortunately, is not an optional lifestyle luxury. Sleep is a nonnegotiable biological necessity. It is your life support system.” Of all the TED Talks I’ve heard this week, this one felt aimed right at me. At my heart. And my withered and overworked hippocampus, my brain’s “information inbox,” as Walker calls it. At my swollen, dark-ringed eyes. Yes, OK, maybe it’s because I’m so tired, and was so primed to hear it. But I wasn’t alone.
Everyone is obsessed with sleep. And they know they’re not getting enough—hence the growing demand for sleeping pills, the emergence of sleep tracking devices (Walker wears an Oura Ring to track his), smart beds, Alexa integrations to put you to sleep, and mindfulness sleep apps.
Arianna Huffington famously left journalism to found a company devoted to sleep and wellness.
After Walker’s talk, he was mobbed by acolytes in the conference center desperate to tell him how much his book—which covers the same topic as his TED talk—changed their lives, how they’d given up caffeine and alcohol and stopped feeling bad for leaving parties early to get sleep or refusing to look at work emails in bed. As the next session of TED talks was starting up, Walker kept trying to escape so he could watch, but people wouldn't let him pass. They wanted to know what to eat, when to sleep, how to dream.
His answers? Don’t drink caffeine or alcohol. Go to bed at the same time every night and wake up at the same time every morning (even on the weekends). Sleep in a cool room. If you are lying awake in bed, listening to the litany of worries your brain is churning through, get up, go into a different room, and do an activity, then return to bed when you’re ready. “You wouldn’t sit at the dinner table waiting to get hungry, so why lay in bed waiting to get tired,” he told a TED attendee who’d asked for advice. Meditate to calm your nervous system and your mind. Don’t default to sleeping pills, which are “blunt instruments that do not produce naturalistic sleep,” he said. Eventually, he said, he may be able to offer an “affordable, portable” brain-stimulating device that would use transcranial direct-current stimulation to help people have deeper sleep. (Walker has just started a company called Stim Science, with backing from Khosla Ventures, to make such a product, he told WIRED.) People listened hard. They took notes during his talk (which is uncommon at TED, to my surprise) and as he spoke to the crowd afterward.
His message came across as a rebuke of the idea that sleep deprivation and success somehow go hand in hand. Tim Cook reportedly wakes up at 3:45 am to start work. Barack Obama said he only sleeps about 5 hours. He’s a “night guy.” Donald Trump and Elon Musk both have said they sleep only a few hours a night. But Musk has also admitted to The New York Times that his work schedule was taking a toll on his mental health and whole life. Walker argued that it’s time to stop thinking that needing sleep is a sign of weakness or laziness. In fact, it’s the opposite.
Sleep makes us better at everything. “The disruption of deep sleep is contributing to cognitive decline,” Walker says—in aging patients at risk of or already experiencing dementia, and even in healthy people. “You need sleep after learning, to essentially hit the save button on those new memories so you don’t forget. But recently we’ve discovered that you also need sleep before learning. Almost like a dry sponge to suck up new information. Without sleep, the brain becomes essentially waterlogged.” That sounds about right. Yesterday, I heard approximately 18 different TED Talks. But I had barely slept the night before, and I barely slept last night. When asked this morning which talk had been my favorite of the day before, it took me a full 30 seconds to remember a single one.
I’m determined to remember this one. In fact, I retreated to one of TED’s “sleep rooms” afterward and attempted a five-minute nap before writing this article in an effort to commit the talk to memory. I put on a sleep mask and let the white noise machine and a Pure essential oil diffuser lull me into a moment of calm. In the eerie quiet of this manufactured zen zone inside a behemoth conference center at the foothills of the North Shore Mountains, I almost forgot about the news cycle, about my head cold, about the Mueller report.
But then I remembered my deadline, bolted upright, unplugged the diffuser, plugged in my computer, and typed these words. Here’s hoping they help you, and me, get a little sleep tonight.
Buy Why We Sleep by Matthew Walker on Amazon.
"
|
1,467 | 2,019 |
"Tired of Jet Lag? This App Will Help Reset Your Clock | WIRED"
|
"https://www.wired.com/story/timeshifter-jet-lag-app"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Arielle Pardes Gear Tired of Jet Lag? This App Will Help Reset Your Clock Photograph: Henn Photography/Getty Images Save this story Save Save this story Save Here on Earth, our sleep is synchronized with the sun. Our bodies stir at daybreak and slump at nightfall, the light acting like a metronome for our delicate body clocks. This biochemical ebb and flow is known as our circadian rhythm, and it repeats every 24 hours.
Go into space, though, and you get a sunrise or a sunset approximately every 45 minutes. Astronauts on the International Space Station loop around our planet so fast it creates a cycle of light and dark that's disruptive enough to an astronaut’s biorhythms to send them teetering into insomnia or exhaustion.
Researchers at Harvard Medical School and Brigham and Women's Hospital began studying this problem a few years ago and latched onto this relationship between light and circadian rhythms. They found that when astronauts systematically sought out bright light and then avoided it during specific intervals, they could effectively “reset” their circadian clocks. One researcher likened the method to “visual caffeine.” Earlier work had found a similar system effective for night-shift workers. At NASA, the insight led to the development of new lighting systems on spacecrafts, which use dynamic lighting to better maintain healthy sleep schedules.
It turns out the same method of using light to fool your biological clock works pretty well when you’re zipping across time zones on Earth. That is, at least, the philosophy that underpins the app Timeshifter, designed to cure jet lag with a few simple tricks of light.
Mickey Beyer-Clausen, Timeshifter’s CEO, says his invention deals with the underlying cause of jet lag—“shifting the clock in our brains”—rather than just the symptoms. He pooh-poohs folk wisdom, like fasting before a flight or popping Ambien on the plane. “None of this stuff will help you at all,” he says. “If you are sleeping on the plane when you’re supposed to see light, you will make jet lag worse.” Beyer-Clausen grew up logging air miles with his mother, a flight attendant for Scandinavian Airlines, who took him on trips around the world. “I’d be in Bangkok for three months, then in Rio, then in Chicago,” he says. “I just went with her and did my homework at all these locations around the world.” Every frequent flier has a trick to outrace jet lag, but Beyer-Clausen’s come-to-Jesus moment came when he met Steven Lockley, a neuroscientist at Harvard Medical School and an expert in circadian rhythm studies. Lockley has worked with astronauts and Formula 1 race car drivers to improve sleep quality through light exposure.
Together, he and Beyer-Clausen designed the Timeshifter app. It joins several others in the App Store that promise to beat jet lag: Entrain, a free app, similarly leverages light exposure to reset circadian rhythms.
Uplift, for $10, offers a few physical exercises based on “biorhythmic acupressure.” Timeshifter’s approach hinges on personalization, emphasizing that every individual responds to light slightly differently. “Unfortunately, no generic solution can help you tackle the underlying cause of jet lag, and might even be counterproductive,” Lockley writes on the app’s website. Instead, Timeshifter takes into account the particulars of each trip and the person taking it.
The app uses three pieces of information: your sleep pattern, your chronotype (morning person or night owl), and your itinerary. It uses these metrics to generate a “personalized” jet lag plan, noting when to seek light and when to avoid it, plus optional recommendations around the use of melatonin and caffeine. Beyer-Clausen claims that if you follow the app’s instructions to a tee, “you can shift the clock three to four times as fast as normal.” Photograph: Timeshifter. I tried the Timeshifter method on a recent trip from San Francisco to Copenhagen, an 11-hour flight with a nine-hour time difference. I tapped out my flight details, and the app spat out a jet lag plan, breaking each day into little chunks: get bright light, avoid bright light, take melatonin, avoid caffeine, nap if you can. “If you sit inside of a plane and it’s dark and you need to see bright light, all you need to do is look at an iPad or watch a movie or do some work,” Beyer-Clausen says. “If you’re outside in the sun and you need to avoid light, you just put on your sunglasses. It’s very easy to be compliant with, and it’s only a few hours a day, depending on how many time zones.” I found some of Timeshifter’s suggestions unreasonable. The night before my flight, it recommended turning in at a geriatric hour, which I refused to do. It also suggested sleeping for almost the entirety of my flight back home, which proved impossible with the geometry of my airplane seat. But the rest of the guidance was dead simple. And by the second day in Copenhagen, I was sleeping like a Dane.
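For readers curious what such a plan looks like under the hood, here is a minimal sketch of the general principle (time light exposure relative to the body clock's low point), written in Python. It is an illustration of the textbook circadian rule of thumb, not Timeshifter's actual algorithm; the wake time, chronotype offsets, three-hour windows, and roughly 1.5-hour-per-day shift rate are all assumptions chosen for clarity.

# Hypothetical sketch of a light-exposure scheduler, loosely inspired by the
# principle described above. NOT Timeshifter's algorithm; all constants are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Traveler:
    habitual_wake: float   # usual wake-up hour at origin, 0-23
    chronotype: str        # "morning" or "night"

def light_plan(traveler: Traveler, tz_shift_hours: int, days: int = 3):
    # Approximate body-clock minimum: a few hours before habitual wake,
    # assumed slightly later for night owls.
    offset = 2.0 if traveler.chronotype == "morning" else 3.0
    t_min = (traveler.habitual_wake - offset) % 24
    step = 1.5 if tz_shift_hours > 0 else -1.5   # assume ~1.5 h of shift per day
    plan = []
    for day in range(1, days + 1):
        if tz_shift_hours > 0:   # eastward travel: advance the clock
            seek = (t_min, (t_min + 3) % 24)      # bright light after the minimum
            avoid = ((t_min - 3) % 24, t_min)     # avoid light before it
        else:                    # westward travel: delay the clock
            seek = ((t_min - 3) % 24, t_min)
            avoid = (t_min, (t_min + 3) % 24)
        plan.append({"day": day, "seek_light": seek, "avoid_light": avoid})
        t_min = (t_min + step) % 24              # the minimum drifts each day
    return plan

# Example: a morning type who wakes at 7, flying nine time zones east.
for entry in light_plan(Traveler(habitual_wake=7, chronotype="morning"), 9):
    print(entry)

A real product would layer melatonin and caffeine timing, flight segments, and sleep opportunities on top of this skeleton, which is precisely the complexity the app is selling.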
An updated version of the app, which launches today, supports a few new features: It can dynamically update your flight itinerary, adjusting your jet lag plan if your flight gets delayed or canceled. It can also merge multiple flight plans for frequent travelers to create a more holistic strategy.
“If you’re going around the world over the next year and you have 100 destinations and stopovers, you put it all in and it’ll figure it out and deliver the best advice,” Beyer-Clausen says. The first jet lag plan is free; after that, you can get unlimited plans for $25 a year or buy them à la carte for $10 each.
Of course, all of the jet lag benefits disappear if you don’t strictly follow the app’s advice. Beyer-Clausen has surveyed Timeshifter’s users to find out how well the method worked for them on recent trips. “We have, at this point, 15,000 questionnaires back showing that, if you follow the plan, to some extent or fully, then you’ll have very little problem with jet lag,” he says. “If you don’t follow it, the chance of jet lag is 15 times higher.”
"
|
1,468 | 2,019 |
"A New Gene Helps Explain Why Some People Need Less Sleep | WIRED"
|
"https://www.wired.com/story/a-new-gene-helps-explain-why-some-people-need-less-sleep"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Sara Harrison Science A New Gene Helps Explain Why Some People Need Less Sleep Wired Staff; Getty Images Save this story Save Save this story Save In 2009, Ying-Hui Fu started getting a flood of emails from people who claimed to only need a few hours of sleep at night. Fu and her colleague and spouse Louis Ptáček, both at the University of California, San Francisco, had just identified a gene mutation that significantly decreased how many hours someone needs to sleep.
It was the first such gene discovered. Now, it seemed, the entirety of the short-sleeping universe was deluging her inbox.
One man described how he only slept about five and a half hours each night. His son slept a bit more than four. But the two were happy, energetic, and healthy. They weren’t cranky or forgetful like most sleep-deprived people would be. Fu’s lab interviewed the pair, enrolled them in a study, and took samples of their blood.
Using those samples, Fu and Ptáček have now identified a new gene mutation associated with short sleepers, which they describe in a paper out today in Science Translational Medicine.
The mutation is helping scientists understand how our bodies regulate sleep. It’s only the third short-sleep mutation found so far, though Fu and other scientists suspect there are several more. “We don’t know how these different genes converge together to regulate sleep,” says Fu. But each new gene discovery helps elucidate how these interrelated pathways control our 40 winks.
Sleep is fundamental to human life. It reduces anxiety and inflammation, improves cognitive functions like memory, attention, and alertness, and helps regulate mood. People who are chronically sleep deprived have a higher risk of diabetes, cancer, and Alzheimer’s disease. But scientists understand little about how our bodies control this all-important function.
Short sleepers are a rarity. While most people need around eight hours of sleep to be in tip-top shape, short sleepers need only between four and six hours to function just as well. They don’t need naps or long weekend snoozes to catch up. In one month they might collect 75 more awake hours than their well-rested, more typical counterparts. Imagine the amount of Netflix binge watching, email answering, marathon training, or book reading a person could do.
David Dinges, a psychiatrist who studies sleep at the University of Pennsylvania and who was not involved in the study, says that just identifying true genetic short-sleepers, like the father-son duo, is a victory. But Fu’s paper does even more. “There is an ability to identify what the mechanism might be in the brain, that’s still pretty novel in the sleep field,” he says.
Fu looks for sleep-related mutations because those extremes help bring the whole system into relief. But searching for a single gene is challenging. That’s why having a father-son pair was so helpful. The research team limited their focus to genes the two had in common. Eventually they found a single letter mutation in the NPSR1 gene, which codes for a particular neuron receptor. Earlier research had shown that activating that receptor helps keep people awake.
Next they bred mice with the mutation and examined their sleep patterns. The mutated mice did sleep for shorter periods, but the reduction wasn’t as dramatic as it is in the father-and-son team who inspired their creation. That may be because mice have different sleep habits than humans—dozing off at multiple intervals during a 24-hour period, for shorter stretches of time—or because more than one gene is involved in regulating this aspect of sleep.
More revealing, perhaps, is the fact that the mice also behaved normally, just like the father and son. In particular, their memory appeared intact. To test this, the researchers subjected the mice to random electric shocks in a certain room. Typically, sleep-deprived mice won’t remember the shocks the next day and will walk blithely across the room. But when the mutant mice were placed in the same room, they did remember and moved hesitantly.
“I didn’t expect that a mutation in the receptor would be so dramatic in humans,” says Luis de Lecea, a biologist at Stanford who first identified the receptor in 2004. His work showed that the receptor plays a role in anxiety and increased wakefulness, so he isn’t surprised to see that it might be involved in sleep regulation too. But he says this gene discovery isn’t the key to the whole sleep mystery. “It’s just one system that contributes to sleep modulation,” he says. “And it may be an important one. But we don’t know yet.” Fu suspects this mutation helps people sleep more efficiently. If you think of the body as a machine, sleep is the after-hours maintenance. Cerebrospinal fluid washes over the brain during sleep, cleaning out waste products like beta-amyloid, a toxic protein linked to Alzheimer’s that is created by normal brain activity. People with the mutation appear to be doing that cleaning much more efficiently.
The mutation is rare; only one in 4 million people has it. But understanding how that pathway works could help everyone. For insomniacs or people who can’t get enough sleep, it could help researchers eventually develop medications that help them maximize the benefits of the little sleep they can get.
Fu is trying to identify more genes as quickly as she can so other researchers can get to work figuring out all of these questions. “For me, it’s more interesting to help everybody sleep better,” she says. She wants to turn her attention to long sleepers, people who tend to need around 10 to 12 hours a night, and hopes they will reach out after this study publishes. With any luck, a lead on a new gene discovery will be waiting in her inbox.
"
|
1,469 | 2,019 |
"Zuckerberg Wants Facebook to Build a Mind-Reading Machine | WIRED"
|
"https://www.wired.com/story/zuckerberg-wants-facebook-to-build-mind-reading-machine"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Noam Cohen Ideas Zuckerberg Wants Facebook to Build a Mind-Reading Machine If the Facebook CEO's reflection tour has revealed anything it is that even as he wrestles with the harms the platform has wrought, he is busy dreaming up new ones.
Image: Hulton-Deutsch Collection/CORBIS/Getty Images. For those of us who worry that Facebook may have serious boundary issues when it comes to the personal information of its users, Mark Zuckerberg’s recent comments at Harvard should get the heart racing.
Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, “the opportunities, the challenges, the hopes, and the anxieties.” His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company’s unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides—either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed.
Zuckerberg confessed that he hadn’t sought out such an awesome responsibility. No one should, he said. “If I was a different person, what would I want the CEO of the company to be able to do?” he asked himself. “I would not want so many decisions about content to be concentrated with any individual.” Instead, Facebook will establish its own Supreme Court , he told Zittrain, an outside panel entrusted to settle thorny questions about what appears on the platform. “I will not be able to make a decision that overturns what they say,” he promised, “which I think is good.” All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.
The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly in these pages.
No typing—no speaking, even—to distract you or slow you down as you interact with digital additions to the landscape: driving instructions superimposed over the freeway, short biographies floating next to attendees of a conference, 3D models of furniture you can move around your apartment.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The Harvard audience was a little taken aback by the conversation’s turn, and Zittrain made a law-professor joke about the constitutional right to remain silent in light of a technology that allows eavesdropping on thoughts. “Fifth amendment implications are staggering,” he said to laughter. Even this gentle pushback was met with the tried-and-true defense of big tech companies when criticized for trampling users’ privacy—users’ consent. “Presumably,” Zuckerberg said, “this would be something that someone would choose to use as a product.” In short, he would not be diverted from his self-assigned mission to connect the people of the world for fun and profit. Not by the dystopian image of brain-probing police officers. Not by an extended apology tour. “I don’t know how we got onto that,” he said jovially. “But I think a little bit on future tech and research is interesting, too.” Of course, Facebook already follows you around as you make your way through the world via the GPS in the smartphone in your pocket, and, likewise, follows you across the internet via code implanted in your browser. Would we really let Facebook inside those old noggins of ours just so we can order a pizza faster and with more toppings? Zuckerberg clearly is counting on it.
To be fair, Facebook doesn’t plan to actually enter our brains. For one thing, a surgical implant, Zuckerberg told Zittrain, wouldn’t scale well: “If you’re actually trying to build things that everyone is going to use, you’re going to want to focus on the noninvasive things.” The technology that Zuckerberg described is a shower-cap-looking device that surrounds a brain and discovers connections between particular thoughts and particular blood flows or brain activity, presumably to assist the glasses or headsets manufactured by Oculus VR, which is part of Facebook.
Already, Zuckerberg said, researchers can distinguish when a person is thinking of a giraffe or an elephant based on neural activity. Typing with your mind would work off of the same principles.
As with so many of Facebook’s innovations, Zuckerberg doesn’t see how brain-computer interface breaches an individual’s integrity, what Louis Brandeis famously defined as “the right to be left alone” in one’s thoughts, but instead sees a technology that empowers the individual. “The way that our phones work today, and all computing systems, organized around apps and tasks is fundamentally not how our brains work and how we approach the world,” he told Zittrain. “That’s one of the reasons why I’m just very excited longer term about especially things like augmented reality, because it’ll give us a platform that I think actually is how we think about stuff.” Kelly, in his essay about AR, likewise sees a world that makes more sense when a “smart” version rests atop the quotidian one. “Watches will detect chairs,” he writes of this mirrorworld, “chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.” Suddenly our environment, natural and artificial, will operate as an integrated whole. Except for humans with their bottled up thoughts and desires. Until, that is, they install BCI-enhanced glasses.
Zuckerberg explained the potential benefits of the technology this way when he announced Facebook’s research in 2017: “Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world—speech—can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”
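For a rough sense of the gap those figures imply, here is a back-of-envelope comparison. The rates below are outside assumptions chosen for illustration (about 5 Mbit/s per HD stream, a 2,400 bit/s late-1980s modem, roughly 40 words per minute of phone typing); they are not numbers from Facebook.

# Back-of-envelope comparison of the data rates invoked in the quote above.
# Every constant here is an illustrative assumption, not a Facebook figure.
HD_STREAM_BPS = 5_000_000        # ~5 Mbit/s for one HD video stream (assumed)
MODEM_BPS = 2_400                # a late-1980s dial-up modem (assumed)
PHONE_TYPING_WPM = 40            # rough smartphone typing speed (assumed)
BITS_PER_WORD = 5 * 8            # ~5 characters per word, 8 bits per character

brain_output_bps = 4 * HD_STREAM_BPS               # "4 HD movies every second"
speech_bps = MODEM_BPS                             # "about ... a 1980s modem"
typing_bps = PHONE_TYPING_WPM * BITS_PER_WORD / 60
bci_target_bps = 5 * typing_bps                    # "5x faster than ... your phone"

print(f"brain output        ~{brain_output_bps / 1e6:.1f} Mbit/s")
print(f"speech (modem)      ~{speech_bps / 1e3:.1f} kbit/s")
print(f"phone typing        ~{typing_bps:.0f} bit/s")
print(f"claimed BCI target  ~{bci_target_bps:.0f} bit/s")
print(f"brain vs. speech gap: ~{brain_output_bps / speech_bps:,.0f}x")

Even on these generous assumptions, the proposed "brain click" interface would close only a sliver of the gap Zuckerberg describes.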
Zuckerberg likes to quote Steve Jobs’ description of computers as “bicycles for the mind.” I can imagine him thinking, “What’s wrong with helping us pedal a little faster?” And while I reflexively gag at Zuckerberg’s thinking, that isn’t meant to discount its potential to do great things or to think that holding it off will be easy or necessarily desirable. But at a minimum, we should demand a pause to ask hard questions about such barrier-breaking technologies—each quietly in our own heads, I should hasten to add, and then later as a society.
We need to pump the brakes on Silicon Valley, at least temporarily. For, if the Zuckerberg reflection tour has revealed anything, it is that even as he wrestles with the harms Facebook has wrought, he is busy dreaming up new ones.
"
|
1,470 | 2,013 |
"When Do Babies Become Conscious? | WIRED"
|
"https://www.wired.com/2013/04/baby-consciousness"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Paul Gabrielsen Science When Do Babies Become Conscious? A 5-month-old baby, with his mother, prepares for experiments in consciousness.
Image: Sophie Gelskov. For everyone who's looked into an infant's sparkling eyes and wondered what goes on in its little fuzzy head, there's now an answer. New research shows that babies display glimmers of consciousness and memory as early as 5 months old.
For decades, neuroscientists have been searching for an unmistakable signal of consciousness in electrical brain activity. Such a sign could determine whether minimally conscious or anesthetized adults are aware—and when consciousness begins in babies.
Studies on adults show a particular pattern of brain activity: When your senses detect something, such as a moving object, the vision center of your brain activates, even if the object goes by too fast for you to notice. But if the object remains in your visual field for long enough, the signal travels from the back of the brain to the prefrontal cortex, which holds the image in your mind long enough for you to notice. Scientists see a spike in brain activity when the senses pick something up, and another signal, the "late slow wave," when the prefrontal cortex gets the message. The whole process takes less than one-third of a second.
Researchers in France wondered if such a two-step pattern might be present in infants. The team monitored infants' brain activity through caps fitted with electrodes. More than 240 babies participated, but two-thirds were too squirmy for the movement-sensitive caps. The remaining 80 (ages 5 months, 12 months, or 15 months) were shown a picture of a face on a screen for a fraction of a second.
Cognitive neuroscientist Sid Kouider of CNRS, the French national research agency, in Paris watched for swings in electrical activity, called event-related potentials (ERPs), in the babies' brains. In babies who were at least 1 year old, Kouider saw an ERP pattern similar to an adult's, but it was about three times slower. The team was surprised to see that the 5-month-olds also showed a late slow wave, although it was weaker and more drawn out than in the older babies. Kouider speculates that the late slow wave may be present in babies as young as 2 months.
This late slow wave may indicate conscious thought, Kouider and colleagues report online today in Science.
The wave, feedback from the prefrontal cortex, suggests that the image is stored briefly in the baby's temporary "working memory." And consciousness, Kouider says, is composed of working memory.
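As a rough illustration of what "watching for ERPs" means in practice: the EEG is cut into short epochs time-locked to each picture onset and averaged, so that random activity cancels out while the stimulus-locked response, including any late slow wave, remains. The sketch below uses synthetic numbers (sampling rate, epoch length, trial count, waveform shape) and is not the study's analysis pipeline.

# Illustrative sketch of computing an event-related potential (ERP) by
# averaging EEG epochs time-locked to stimulus onset. All data and numbers
# here are synthetic assumptions, not the study's recordings or pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # sampling rate in Hz (assumed)
n_trials, epoch_sec = 80, 1.0             # one epoch per face presentation
t = np.arange(int(fs * epoch_sec)) / fs

# Fake single-trial EEG: noise plus a small, slow positive deflection after
# ~300 ms, standing in for a "late slow wave."
late_wave = np.where(t > 0.3, 2.0 * np.exp(-((t - 0.6) ** 2) / 0.02), 0.0)
trials = rng.normal(0.0, 10.0, size=(n_trials, t.size)) + late_wave

erp = trials.mean(axis=0)                 # averaging keeps the time-locked signal

late = (t >= 0.3) & (t <= 0.9)            # a "late" analysis window
baseline = t < 0.1
print(f"late-window mean: {erp[late].mean():+.2f} (arbitrary units)")
print(f"baseline mean:    {erp[baseline].mean():+.2f}")

In infants the same averaging logic applies; the response is simply slower and weaker, which is what made the 5-month-olds' late wave harder to spot.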
The team displayed remarkable patience to gather data from infants, says cognitive neuroscientist Lawrence Ward of the University of British Columbia, Vancouver, in Canada, who was not involved in the study. However, the work, although well executed, is not the last word, he says. "I expect we'll find several different neural activity patterns to be correlated with consciousness." Comparing infant brain waves to adult patterns is tricky, says Charles Nelson, a neuropsychologist at Harvard Medical School in Boston. "ERP components change dramatically over the first few years of life," he writes in an e-mail. "I would be reluctant to attribute the same mental operation (i.e., consciousness) in infants as in adults simply because of similar patterns of brain activity." "He's right, the ERP components are not exactly the same as in adults," Kouider responds, but the ERP signature he saw had the same characteristics.
Kouider next hopes to explore how these signals of consciousness connect to learning, especially language development. "We make the assumption that babies are learning very quickly and that they're fully unconscious of what they learn," Kouider says. "Maybe that's not true." This story provided by ScienceNOW, the daily online news service of the journal Science.
"
|
1,471 | 2,021 |
"NASA Lands the Perseverance Rover on Mars | WIRED"
|
"https://www.wired.com/story/nasa-lands-the-perseverance-rover-on-mars"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Sarah Scoles Science NASA Lands the Perseverance Rover on Mars Photograph: NASA/JPL-Caltech Save this story Save Save this story Save In the Mission Support Area at Lockheed Martin’s campus in Littleton, Colorado, masked people sat close to computers, flying three spacecraft in orbit around Mars. These three—the Mars Reconnaissance Orbiter, Maven, and Odyssey—were all tasked, in one way or another, with downloading data from another spacecraft: the Mars Perseverance rover, which was attempting to land on the Red Planet. Information from these orbiters would help engineers learn about Perseverance’s status as it made its way through the atmosphere, and determine whether it survived. “Space is not a place to go,” read the words painted on one wall. “Space is a place to do.” Scattered among the usual notes about unauthorized visitors and classified meetings, signs about social distancing, masks, and symptoms were plastered around the building. “No masks with exhaust valves” warned one, aerospacily. One was posted behind the head of Lockheed’s David Scholz, who about an hour before landing had been standing in a conference room 6 feet from everything, sporting a blue surgical mask above his double-pocketed tan shirt. NASA’s video feed played in the background. Scholz had just described himself as a “confident nervous wreck.” That’s because he is the principal engineer for a device called an “aeroshell,” which cocoons the rover against the most extreme conditions of its downward trip toward the surface of Mars.
The Lockheed engineers had been working on this project for years, and today, Scholz and his team could finally watch it be put to use. But that’s all they could do: watch. Their system was automated, and would do its job without them.
And so they watched as a human-made object fell from the sky, aiming to touch down in a crater called Jezero. The landing, scheduled for 12:55 pm Pacific Time, would mark the end of the Perseverance rover’s journey through space and the beginning of its stay at this desolate destination: a depression that was—billions of years ago—home to a lake and a river delta. It’s a place where life could , theoretically, have once survived.
Looking for spots that seem like they might have been amenable to ancient life, and evidence of potential past habitation, are among the Mars 2020 mission’s goals. The rover will also collect and store geological samples for a future mission to retrieve, and try producing oxygen from the planet’s plentiful carbon dioxide, in anticipation of future human astronauts’ needs.
But to get there, the spacecraft had to survive a harrowing process that engineers call “entry, descent, and landing,” or EDL, which is what the Lockheed Martin team was now nervously awaiting. These final stages happen during what’s been called (to the point of cliché) the “seven minutes of terror”—the time when the spacecraft must autonomously orchestrate its own E, D, and L without smashing into the ground. During its wild ride, the rover would experience speeds of around 12,100 mph and feel the equivalent of 12 times Earth’s gravity during deceleration. Its protective sheath would heat up to about 2,370 degrees Fahrenheit. Much could go awry: The craft could get too hot; its bits might not separate when they were supposed to; even if they did separate correctly, they could “recontact” (read: hit) each other; Perseverance could land in the wrong location; it could end up making its own impact crater. Choose your own nightmare.
“The key thing about EDL is that everything has to go right,” Allen Chen of NASA’s Jet Propulsion Laboratory, who leads the EDL team, had told me a couple of weeks before the landing. “There’s no partial credit.” That 100 percent, A+ performance is what fires up nerves for even the confident engineers here at Lockheed Martin who worked on the aeroshell. The aeroshell has two parts: the heat shield, which looks like a steampunk space frisbee, and the backshell, a classic space capsule. The heat shield faces down toward the planet when the spacecraft smacks into the atmosphere, taking the business end of the pressure and heat. It’s made from tiles of a material called PICA, or phenolic-impregnated carbon ablator. “As it gets hot, it starts to decompose, and that decomposition absorbs a lot of energy and also creates gas that forms a boundary layer that protects the heat shield from the environment,” Scholz had explained ahead of the landing. The protected shield, in turn, protects its cargo. The device burns through the atmosphere at a tilt, which Scholz calls “an angle of attack,” and steers itself with thrusters.
The backshell houses, among other things, a parachute and the last leg of the landing system. Its key protective ingredient is called, catchily, SLA-561V, which Lockheed Martin developed for the Viking missions in the 1970s. The company has actually made every single one—10, in total—of the aeroshells NASA has shot to Mars. Both sections of the shell carry instruments that measure conditions during the drop, to better inform future missions because there’s nothing like ground—or, in this case, atmosphere—truth.
“Being a part of it is humbling,” Scholz told me the week before landing. Today, he bounced between two boardroom-type tables; on one of them, a 3D-printed model of the aeroshell sat on a pedestal near an industrial-sized container of “multi-task wipes.” A home-theater-sized screen displayed the scene at NASA, and a set of “Lockheed Martian” (get it?) stickers adorned the top of a cabinet nearby. Scholz shook his head occasionally as he stared at the feed from NASA TV and another screen showing downlinked data, tapping his foot.
Chen, who was at JPL in Pasadena watching the descent with much of the “EDL family,” had already filled me in on what the aeroshell was supposed to do next. “Landing on Mars is all about finding a way to stop, and stopping in the right place,” he had said. The first step is the extreme sport of using the atmosphere to slow spacecraft down. Then, 7 miles above the Red Planet, new technology called a “range trigger” would deploy a parachute based on where the spacecraft was relative to where it needed to end up—rather than when it reaches a specific velocity, as previous missions have done. Twenty seconds later, the heat shield would fly off, as pyrotechnics snapped off nine separate mechanisms and separated it from the rover and backshell. That snap would lay bare the radar and cameras that make up a new system called terrain-relative navigation. This system compares onboard maps to what the lander’s sensors see in real time, to show the spacecraft its location and help it avoid hazardous geology during its autonomous landing.
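The map-matching idea behind terrain-relative navigation can be sketched in a few lines; the plain cross-correlation search and array names below are illustrative assumptions, not the actual Lander Vision System code, which matches surveyed landmarks and fuses the result with inertial data.

import numpy as np

def normalized_cross_correlation(patch, template):
    # Compare a map patch with the camera view; 1.0 means a perfect match.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def locate(camera_view, onboard_map):
    # Slide the descent-camera frame over the stored orthoimage and return
    # the (row, col) offset with the highest correlation score.
    h, w = camera_view.shape
    H, W = onboard_map.shape
    best_score, best_rc = -1.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = normalized_cross_correlation(
                onboard_map[r:r + h, c:c + w], camera_view)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# Toy check: hide a known 16x16 patch in a synthetic 64x64 "map" and recover it.
rng = np.random.default_rng(0)
terrain = rng.random((64, 64))
view = terrain[20:36, 31:47]
print(locate(view, terrain))  # ((20, 31), ~1.0)

A real lander does this against a georeferenced landmark map and at far higher speed, but the principle of registering what the camera sees against a stored map is the same.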
Then the skycrane, a sort of hovercraft hooked to the top of the rover, would fire up its eight downward-pointed rockets, which would guide Perseverance to the right spot, while continuing to slow it. The skycrane would gently lower the rover, attached to it by bridle cords, to solid ground, like a stork depositing a baby. Explosives would snap the stork from its delivery.
Perseverance would be, finally, alone.
But it would take a while for that message to get to mission control, because signals can’t travel instantaneously between Earth and Mars. (“Whatever is happening has already happened,” Chen had told me, “and there’s nothing you can do.”) As Perseverance entered the atmosphere, the Lockheed Martin conference room fell silent. No one spoke or picked up any of the Krispy Kreme doughnuts on the table. Every piece of information about the rover’s progress came an agonizing 11 or so minutes after it had actually occurred, a fact never far from anyone’s mind.
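That roughly 11-minute lag is simply one-way light travel time. A quick back-of-the-envelope check, assuming an Earth-Mars separation of about 200 million kilometers on landing day (the story does not give the exact figure):

# One-way radio delay from Mars, assuming an Earth-Mars distance of roughly
# 200 million km at the time; the true separation varies with orbital geometry.
distance_m = 2.0e11           # assumed distance in meters
speed_of_light = 299_792_458  # meters per second
print(distance_m / speed_of_light / 60)  # about 11.1 minutes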
“The heat shield has been separated,” came the word from NASA TV, and the room erupted in whoops and applause for a few seconds—that being, of course, this team’s big moment—before falling quiet once again.
A few minutes later, NASA gave confirmation that the backshell had separated. More applause burst out.
“Excellent!” yelled someone.
“I’m here hugging,” said someone else, hugging, in fact, the air.
Then these words from NASA: “Touchdown confirmed.” Several people in the Lockheed Martin room stood, clapping. “We landed on Mars!” one person said in amazement. “Holy cow,” responded their coworker.
“It feels,” Scholz said, “fantastic.” Once Perseverance was safely on the ground, a team led by Jessica Samuels, surface mission manager for the mission at JPL, took over for the EDL family, checking in on and commissioning the instruments and the rover. “At that point, we start round-the-clock operations,” she had told me a couple weeks before landing. Perseverance will undergo this commissioning and check-out for about a month, and later this spring will test fly a small helicopter called Ingenuity , the first thing to make a powered flight on another planet, before science operations really start in a few months.
During that time, scientists have designed the rover to drive an average of 650 feet every martian day, often hitting up places of interest they identified ahead of time, and using the more-detailed data gathered from the ground to inform future movements and data collection. Perseverance will take photos, keep track of the weather, scan the surface with ground-penetrating radar, collect and analyze samples of rock and regolith to learn about their composition, and sock some away for potential future return to Earth.
Two instruments on Perseverance’s robotic arm will lend a hand in searching for signs of biology.
PIXL shines an X-ray beam at rocks, glowing them up, with the specifics of the glow dependent on the rocks’ chemistry. Based on the resulting map of chemicals, textures, and structures, scientists can learn about how the rocks came to be how they are—including, perhaps, if life made them that way. Another instrument, called Sherloc, focuses on organic compounds and minerals. It’s both a microscope that takes pictures and a spectrometer that reveals composition of surface material. Combine those two sets of information, and “you end up producing a chemical map of what you’re looking at,” says Luther Beegle, Sherloc’s principal investigator. Minerals can reveal the long-ago conditions at a given spot—like the saltiness of the disappeared water—and whether they may have been habitable. And organics could be (though are not necessarily) signs of past life, especially if they show up in weird formations, like clumps. They speak to the planet’s past hospitality, whether or not any organisms took advantage of that.
Studying the origins of life is hard on Earth; the planet’s dynamic surface has erased evidence of the past, as plate tectonics recycle material. But Mars is a kind of time capsule, a tableau of the way the planet used to be. “Most of the geological processes turned off,” says Beegle.
If Sherloc shows scientists something especially promising, Perseverance will drill a sample to stash in a sealed tube for a future mission to find and return to Earth. Beegle says the current plan is to bring them back on a sample return mission slated to launch in 2026. But mission planning is rarely certain in the long term, subject as it is to political and budgetary winds. NASA’s webpage detailing Mars 2020’s science objectives is more circumspect, couching the samples’ homecoming in “ if and when ” terms.
Whatever Sherloc’s detective work finds, Beegle will be excited. It is, he says, “just as fascinating if Mars had life as if it didn’t have life.” If the Red Planet didn’t produce any organisms, it could mean that life has a hard time getting started; the coefficient of static friction for abiogenesis may be higher than many hoped.
But regardless, Sherloc will help find out. Readying such an instrument for a run on another planet is always cortisol-producing for the scientists who build it: They have to worry about whether the sensitive equipment will survive the shakedown of launch and landing, and that their moving parts and electronics will work exactly like they did in the terrestrial clean room. Beegle was a surface sampling system scientist for the Curiosity rover, which landed in 2012, when NASA’s team came up with the expression “seven minutes of terror.” Beegle recalls a colleague responding wryly to the catchphrase with “I’ve just lived seven years of terror”—the time he spent designing and testing new technology.
The terror also continues after touchdown, because if things go wrong, there may be no easy fix. Or any fix. “Everything stresses you out,” Beegle says. “Every time we turn the instrument on, there will be a worry in the back of your head that something is broken.” But sending a rover out to explore Mars is like sending your teenage kid out to drive, he says. He’ll worry every time he hands her the keys. But he also trusts that she’ll do a good job.
Another instrument aboard will be focused on a more familiar kind of life: humans. An experimental device called Moxie is designed to produce 99.6 percent pure oxygen from martian carbon dioxide, starting sometime in early March. From the outside, Moxie looks like a golden milk crate, stashed inside Perseverance’s main body. It takes in the CO2, then electrochemically prises it into oxygen and carbon monoxide. The amount it will make will be much smaller than what even a lone human explorer would need, producing enough oxygen “maybe to keep a small dog alive, something like a Boston terrier, maybe,” says Asad Aboobaker, a JPL engineer who works on Moxie.
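The overall chemistry implied by that description is easy to write down, even if the electrolysis hardware that performs it is not; the balanced equation below is my gloss on splitting carbon dioxide into carbon monoxide and oxygen, not something spelled out in the story:

\[ 2\,\mathrm{CO_2} \;\longrightarrow\; 2\,\mathrm{CO} + \mathrm{O_2} \]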
But this test project isn’t only about respiration. It’s also about rocket propellant, of which oxygen is usually a major component. “NASA wants to send people to Mars, but it also wants to get them back,” says Aboobaker. It’s borderline unreasonable to launch and land astronauts with the propellant they need to return, so one approach is to have the astronauts produce their own wherever they land. “That’s the real thrust—if I may make a pun—of this project,” says Aboobaker. Ideally, this small system will help them learn about how to scale the technology up for future missions.
We don’t know when humans might need that oxygen, or whether those rock-filled sample tubes will for sure come home, or whether we’ll indeed see signs of martian life in them or Perseverance’s analysis. But all of these experiments point toward an uncertain future: when we’ll learn more about whether we’re alone, and mount even more ambitious missions—for future robots and humans. Today, Perseverance moved in that direction, as it landed softly on this dusty otherworld.
"
|
1,472 | 2,018 |
"Loon's Internet-Slinging Balloons Are Headed to Kenya | WIRED"
|
"https://www.wired.com/story/loon-internet-balloons-kenya-google-alphabet-x"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Alex Davies Transportation Loon's Internet-Slinging Balloons Are Headed to Work Over Kenya
Loon’s graduation from X brings a new sort of challenge: operating as a successful business. After years of testing over Nevada, the internet balloon outfit has signed a deal to provide service over Kenya.
It’s been a big week for Loon. Just eight days ago, it was one of Alphabet’s moonshot projects, launching antennas attached to giant balloons into the stratosphere to beam internet down to Earth.
Now it has announced its first commercial agreement: working with Telkom Kenya to provide internet service to parts of central Kenya, starting next year, and helping connect the citizens of a country where coverage hardly extends beyond major population centers.
Loon began life in 2011 as Project Loon, inside Google X, the search company’s arm dedicated to incubating ambitious ideas. (In 2015, when Google restructured and formed parent company Alphabet, Google X became X.) After seven years in the incubator, Loon “graduated” this month and became an Alphabet company in its own right. That means it’s time to start making money, and this Kenya deal (whose financial particulars have not been revealed) is a solid first step forward.
Instead of building networks of ground-based cell towers that provide coverage spanning a few miles, Loon hangs antennas from tennis court-sized, helium-filled balloons flying 60,000 feet above Earth, far higher than commercial airliners, birds, and the weather. Each polyethylene balloon can provide internet coverage over 2,000 square miles and stay aloft for months, making them well-suited to connect areas where low population density or difficult terrain prohibits building out cell tower networks.
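To put that 2,000-square-mile figure in perspective, the implied coverage radius works out to roughly 25 miles, assuming an idealized circular footprint:

import math

# Radius implied by a 2,000-square-mile coverage area, assuming a circular
# footprint (an idealization; real coverage depends on antenna and terrain).
area_sq_miles = 2_000
print(math.sqrt(area_sq_miles / math.pi))  # about 25 miles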
The balloons have no propulsion system and rely on riding the wind to get to where they need to be. (Loon has launch sites in Puerto Rico and Nevada, and while its balloons can circumnavigate the globe, it will likely consider setting up shop in Kenya.) In the early years of the project, the Loon team anticipated having many balloons in the air, each steadily moving around the planet. Then around 2014, they realized they could direct the balloons where they wanted, and thus keep a few over a specific area.
The trick is that air currents at different altitudes head in different directions. Loon took mountains of data (gathered from various government agencies and the flight patterns amassed during their own tests) along with a lot of machine learning (an Alphabet specialty) to turn those air currents into a new sort of map. Say a balloon is drifting east, away from where it wants to be. Its software will look for a current to take it west. If that is a few thousand feet down, the Loon unit uses a fan named Franz (for the SNL character who just wants to pump you up) to ingest air, which goes in a layer surrounding the bit filled with helium. The extra air works like ballast, dropping the balloon until it finds the westbound highway. To go back up again, the fan pulls air out, making the balloon lighter. This sort of navigation meant that Loon didn’t have to start with hundreds of balloons covering an entire band of the globe. It could provide service to a specific area—like central Kenya.
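The steering logic described above boils down to picking the altitude whose wind blows most nearly toward where the balloon wants to go, then pumping air in or out to reach it. A toy sketch follows; the wind table, bearings, and helper names are invented for illustration and stand in for Loon's machine-learned wind model:

# Toy altitude selection for a wind-steered balloon. The wind table below is
# invented: altitude in feet -> direction the air is moving toward, in degrees
# (0 = north, 90 = east). Loon's real planner used far richer forecast data.
wind_toward = {55_000: 265.0, 60_000: 95.0, 65_000: 180.0}
current_altitude = 60_000

def angular_error(a, b):
    # Smallest absolute difference between two compass bearings, in degrees.
    return abs((a - b + 180) % 360 - 180)

def choose_altitude(desired_bearing):
    return min(wind_toward, key=lambda alt: angular_error(wind_toward[alt], desired_bearing))

target_bearing = 270.0  # drifting east, but the balloon wants to head west
best = choose_altitude(target_bearing)
if best < current_altitude:
    action = "pump in air as ballast and descend"
elif best > current_altitude:
    action = "vent air and rise"
else:
    action = "hold this altitude"
print(best, action)  # 55000 pump in air as ballast and descend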
Not that the folks checking Facebook on the ground know anything about it. Loon’s business strategy hinges on making deals with telecoms and having its balloons augment the service provided by these firms. “You would have no concept that it was a different signal,” says Loon CEO Alastair Westgarth. “You’re just getting a standard LTE 4G signal, from the balloon.” During its seven-year stay at X, Loon mastered a horde of problems. It developed a system that can launch a balloon every 30 minutes and keep them aloft for the better part of a year—right where they want. It built an 80-foot-long flatbed scanner that can spot microscopic defects in the balloon’s plastic. They trained flight engineers, designed their own shipping crates, and steadily improved the strength and speed of their signal (with no plans to offer 5G right now). In other words, they overcame all the things that make the idea of delivering internet access with high-flying balloons not just crazy, but nearly impossible.
At the same time, Loon’s departure from X brings on a new sort of challenge: operating as a successful business. “Infrastructure plays are very complex,” says Westgarth, a telecom industry veteran who came aboard in early 2017. (When the recruiter who hired Westgarth said they were calling about Loon, he replied “That still exists?”) Along with a network of stratospheric balloons, the CEO has to build out and manage legal, policy, HR, and marketing teams—all the stuff X provided. Westgarth will have to strike deals with other companies, deliver the kind of service that actually augments existing networks, negotiate with foreign countries, and work with regulators who manage telecoms and aviation.
“That ecosystem of partners is probably the single most complex thing,” according to Westgarth. “It’s also what allows you to provide service.” Loon hasn’t revealed any plans beyond this first deal in Kenya, but he doesn’t expect to stop there. And he doesn’t expect to monopolize this new way of connecting the world. “Whenever there’s a large opportunity," Westgarth says, "other players will show up.” For now, it’s just Loon, bobbing silently along, and headed for Africa.
"
|
1,473 | 2,019 |
"Stop Obsessing Over Sleep—Your Brain Will Thank You | WIRED"
|
"https://www.wired.com/story/stop-obsessing-over-sleepyour-brain-will-thank-you"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Sara Harrison Science Stop Obsessing Over Sleep—Your Brain Will Thank You Photograph: Getty Images
Maiken Nedergaard considers herself a pragmatic woman. She’s got kids, a career, and she knows she feels better after a solid night of shut-eye. She’s also a neuroscientist at the forefront of research showing the biological value of sleep. In studies she coauthored in 2013 and 2019, she documented how during sleep, fluid washes over our brains, clearing out toxins like beta amyloid, which is linked to neurodegenerative diseases.
Suddenly sleep became a tonic: an Alzheimer’s wonder drug available, for free, every evening. For Nedergaard, the results made her anxious about her own bedtime priorities. Now, she says, “I take sleep very seriously.” For the rest of us, who don’t have labs full of equipment to study the inner workings of the brain, there’s a growing suite of gadgets offering the illusion of science: nightly reports full of numbers and charts purporting to show just how well we’re performing while conked out. Fitbit kicked off this trend when it released the first tracker in 2009.
Little more than a glorified pedometer at the time, the Fitbit couldn’t monitor your heart rate, pay for anything, or even tell you the time. (It even shorted a WIRED reviewer’s run by 0.6 mile: “Not cool!”) But it did include a sleep tracker that measured the duration and quality of your slumber, producing a “sleep efficiency” score. Since then, obsession with sleep optimization has only grown. The global sleep-tracking market topped $1 billion in 2016. It’s expected to increase by another 18 percent by 2024.
Sleep, once no more glamorous than taking a shower, is now perched at the pinnacle of the well-being-as-a-lifestyle trend. The irony is that by agonizing over sleep, it’s also turning into a source of anxiety—the kind of thing that keeps people up at night. Fears over bad sleep are getting the TED treatment and topping best-seller lists.
In his semi-pop-science distillation of research, Why We Sleep, UC Berkeley psychologist Matthew Walker warns that sleep loss is an epidemic that could have dire consequences. Our chronically overtired brains make our bodies more susceptible to diabetes, cancer, and other diseases, he argued. Getting a good 40 winks, on the other hand, would improve memory and mood, and even make us feel and look younger. “Sleep is a nonnegotiable biological necessity,” Walker proclaimed in his 2019 TED talk.
“It is your life support system. And it is Mother Nature’s best attempt at immortality.” Who among us would scoff at immortality? And so we’ve slipped Oura rings onto fingers and wrapped Fitbits and Apple Watches around wrists. Beneath sheets and mattresses we’ve concealed Beddits and Emfits, sensors that promise to monitor heart rates and sleep cycles from their discreet hiding places. We track our REM and non-REM cycles, scrutinizing the length, quality, and depth of our rest on Reddit forums like r/sleep and r/Biohackers, where users write about the benefits of cold showers and question the need for pillows.
All this measuring, rating, tracking, and comparing now amounts to a new sleep disorder that some scientists are calling orthosomnia.
A coinage of Greek origin that merges "straight" or "correct" with "sleep," orthosomnia is a condition where anxiety over proper sleep metrics actually induces insomnia.
“The patients' inferred correlation between sleep tracker data and daytime fatigue may become a perfectionistic quest,” wrote the researchers from Northwestern and Rush universities who came up with the term after observing three cases of people who relied more on their sleep-tracking data than on the advice of experts. In one case, a 27-year-old woman insisted she wasn’t getting enough deep sleep and underwent a full examination in the lab. The test concluded she slept normally, but she remained unconvinced. “Then why does my Fitbit say I am sleeping poorly?” she asked.
Sleep has become one more thing to feel guilty about, even when the data we’re consulting is often flawed or incomplete. It’s one more number we didn’t hit, one more goal we didn’t achieve. Pangs of guilt follow every new study reminding us of this magical panacea, if we would just turn off Netflix, forget our social lives, emails, and all the dishes in the sink, and just climb into bed.
Sleep may be a biological necessity, but our stress over it is a choice. So let’s put the issue to bed. Hit the snooze button on this one! All the wide-eyed worry over shut-eye is the thing that’s actually cheating us of immortality, or at the very least, stealing the promise of a delicious, relaxing, no-pressure night of sleep.
"
|
1,474 | 2,022 |
"How to Shut Up Your Gadgets at Night So You Can Sleep | WIRED"
|
"https://www.wired.com/story/silence-gadgets-do-not-disturb-nighttime"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
David Nield Reece Rogers Gear How to Shut Up Your Gadgets at Night So You Can Sleep Photograph: Fotografía de eLuVe/Getty Images
Our trusted gadgets are in constant communication with us, ringing and buzzing whenever friends want to chat, new shows arrive on Netflix, or coworkers send crucial memes over Slack. These audible alerts may be useful during the day, but they're not welcome at night. You don't want your sleep interrupted by breaking news stories, messages from colleagues on the other side of the world, or an update on your podcast download queue.
Here's how to quiet the gadgets you might have within earshot of your bed until the morning comes.
Before you get to fiddling with your smartphone's settings, look through the apps you've got installed—particularly if one or two keep interrupting your slumber.
Tweaking the notification settings on these apps can be more effective than silencing your phone altogether.
Slack is a good example, if folks from work are disturbing you across multiple channels and time zones. In the app, open the You tab located at the bottom right and choose Pause notifications.
Pick one of the suggested times or a custom timeframe. To further tweak your alerts and create a daily schedule, go back to the You tab and click Notifications.
Maybe you've got group chats that often stay active way past your bedtime; most messaging apps let you mute individual threads. In Messenger from Meta, for instance, open a conversation, then tap its name at the top. Click on the bell that says Mute to stop receiving notifications.
If TikTok keeps you awake at night—not your constant scrolling but incoming notifications—go to your profile and click the three lines at the top right of the screen. Open Settings and privacy , then Push notifications.
Here you can toggle alerts off from direct messages and other interactions.
You might find it more useful to mute all the notifications on your phone overnight, but if it's just one or two apps causing you problems, app-by-app settings might be the way to go. It means notifications from other apps—such as incoming phone calls—can carry on as normal.
Both iOS and Android have a Do Not Disturb mode that will silence all incoming notifications on your phone—with some exceptions, if needed. It's perhaps the easiest and most effective way of stopping your smartphone from waking you up unnecessarily halfway through the night.
On an iPhone, from Settings go to Focus and click Do Not Disturb.
On this page, you can pick apps and people to exclude from the mode and create a daily schedule.
Sleep is another Focus mode worth checking out that can align with your bedtime schedule.
Over on Android, open Settings and pick Sound & vibration, then Do Not Disturb, and then Schedules.
Again, you can set specific hours when Do Not Disturb kicks in automatically.
On top of the standard Do Not Disturb settings, you've also got some extra tools. In the case of iOS, from Settings pick Screen Time then Downtime.
You're able to set certain hours at night when only certain apps will work, notifications and all. It's a good option when you need something more granular than Do Not Disturb.
On Android, meanwhile, the equivalent feature is called Bedtime mode.
Pick Digital Wellbeing & parental controls from Settings , then Bedtime mode.
It can essentially turn your phone screen gray for the specified “snoozing” hours, as well as applying the default Do Not Disturb settings at the same time.
For the majority of the time, your smart speakers won't make a sound unless they're spoken to. They're not suddenly going to interrupt your sleep unless you've set a timer or an alarm, and you don't have to worry about them piping up with the latest news headlines in the early hours.
Both Google and Amazon's smart speakers let you block alerts and have special modes when they'll respond more quietly to voice prompts. Those quiet modes can be handy if you don't want to disturb the other people in your home.
For Google speakers, open the Google Home app and tap on the speaker in question. Then tap the cog icon in the top-right corner and click Notifications & digital wellbeing.
Choose Night mode then Enable night mode , and the audio volume will automatically be lowered during the times you specify. If the speaker has a screen, you can lower its brightness. You can also turn on a Do Not Disturb mode that will block any sounds that might come through at night, like reminders. Your alarms and timers will still work as normal.
With Echo speakers, open the Alexa app on your phone, tap Devices , then Echo & Alexa , then the name of your speaker. From there, you can tap Do Not Disturb , which you can either turn on manually or set on a schedule. Any reminders, timers, or alarms will still give you an alert in this mode.
To enable Whisper Mode on an Alexa—which makes audio responses quieter—just say "Alexa, turn on whisper mode." You can also do so under Settings, Account Settings, and Alexa Voice Responses in the Alexa app.
The Apple HomePod doesn't have an equivalent Do Not Disturb or night mode of any description, but as with all of these speakers, you can make it quieter. Just tell Siri to lower the volume, or to set it to a number between 1 and 100, or use the volume controls on the top of the device.
Sleeping next to a laptop or desktop is less common than sleeping next to a phone, but it may be hard to avoid for people who have a computer setup in their bedroom. Other than just turning it off at night, here’s how to silence a computer and ensure no pesky, late-night notifications interrupt your slumbering.
In Windows, click the cog icon on the Start menu to open Settings. Choose Focus Assist and then Alarms only to set up a Do Not Disturb mode. Use the During these times toggle to set it on a regular schedule every evening. You even get a summary of the alerts you missed when you check again in the morning.
To take app-by-app control over notifications, select Settings and go to System, then Notifications & actions.
Alternatively, pick System then Sound to mute the audio on your system; that way you won't be disturbed by anything.
If you're using macOS, you can temporarily enable the Do Not Disturb mode by holding the Option key and clicking the Notification Center icon in the far right of the desktop menu bar. Incoming notifications won't make a sound or show up onscreen, though they will still collect in the Notification Center.
To schedule Do Not Disturb mode, open the Apple menu and pick System Preferences , then select Notifications and choose your time period. You can also disable notifications from particular apps on this screen. As on Windows, you could just mute macOS when you go to bed by clicking on the audio icon on the menu.
"
|
1,475 | 2,020 |
"How to Sleep When Covid-19 Anxiety Makes It Impossible | WIRED"
|
"https://www.wired.com/story/how-to-sleep-anxiety"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Brian Barrett Science How to Sleep When the World Is Falling Apart Photograph: Susanne Walstrom/Getty Images
Life in the Covid-19 pandemic has taken on some unpleasant and unexpected contours. Time has lost all meaning.
Dreams have become assailingly vivid.
That is, if you’re able to sleep at all, which many people cannot. At least, not as well as they did before it felt like everything was unraveling, all of the time, with no end in sight.
“I am seeing a significant spike in insomnia at this time during the pandemic,” says Lisa Medalie, a behavioral sleep medicine specialist at the University of Chicago.
It might be useful to nail down our terms here. Insomnia isn’t just staying up late. It’s the inability to sleep—or to fall back asleep if you wake in the middle of the night—with no obvious impediments to explain it. Implicit in the definition, too, is that the deprivation negatively impacts your ability to function the next day. It’s acute when it lasts a few days or weeks; if it extends longer than a month, it’s considered chronic.
If this sounds like you, know first that you’re going through the same thing as a lot of people. And also know there are a few simple strategies you can deploy to get yourself back on track.
It is extremely understandable if you have cut yourself some slack during these sheltered-in-place times.
Maybe your diet has gone a little snack-heavy. Maybe you shifted your work hours to make way for childcare or self-care. Maybe you stopped flossing. Only natural. But sleep is something worth preserving—even though that’s harder than it may sound.
“During times of increased stress, sleep is often the first biological system to malfunction,” says Candice Alfano, director of the University of Houston’s Sleep and Anxiety Center. A pandemic is stressful like magma is hot. In a recent online survey conducted by Alfano and her research team, the rate of respondents reporting “severe to very severe problems” either falling or staying asleep was double what they normally see.
"Sleep and sex are the only two things that should happen in the bedroom." Lisa Medalie, University of Chicago That tossing and turning has real health implications. As neuroscientist and sleep expert Matthew Walker detailed in a 2019 TED Talk , sleep deprivation makes your brain slower, weakens your immune system, and increases the likelihood of all kinds of mental and physical woes as you age. “Sleep loss will leak down into every nook and cranny of your physiology,” Walker said in his viral presentation.
“Sleep, unfortunately, is not an optional lifestyle luxury. Sleep is a nonnegotiable biological necessity. It is your life support system.” Unfortunately, life amid Covid-19 disrupts that sleep in any number of ways. There’s the anxiety, sure, not just about the disease itself but about financial security, childcare, and all the other ancillary effects of a society in suspended animation. “If you’re not tackling those problems head-on during the day, then when it’s just you and your brain at night and there’s no distractions, all of those problems, all those worries, all those stressors are going to come bubbling to the surface,” says Medalie. “The thoughts are going to produce emotional responses, the emotional responses are going to produce more thoughts, and the realization that time has passed and you’re not sleeping produces anxiety.” It gets worse. “The relationship actually runs in both directions,” says Alfano. “Even during times of low stress, sleep loss elevates next-day anxiety, potentially creating a vicious cycle.” While stress contributes greatly to sleep woes, it’s not the only factor in play. Staying mostly at home makes for a more sedentary lifestyle than your body might be accustomed to, which can make falling asleep harder. You may be looking at your screen even more than usual, pulling to refresh for the latest Covid-19 tolls or mind-smoothing TikTok loops late into the night. While you do, the blue-spectrum light emanating from your phone tells your brain to stop producing melatonin, the hormone that regulates your sleep-wake cycle, also known as the circadian rhythm.
And while bedtime gets much of the focus when it comes to sleep problems, a morning without routine—there’s no office to report to , after all—can be just as detrimental. “A lot of people don’t realize that the wake-up time and getting out of bed and exposure to light is probably the most important thing that regulates our circadian rhythm,” says Jason Ong, a neurologist who focuses on sleep medicine at Northwestern University, referring to the biological process. “Your brain is confused about what time zone it’s supposed to be in.” The circumstances wrought by Covid-19, in other words, seem almost specifically engineered to interfere with your sleep cycles. Fortunately, there’s plenty you can do to reclaim your shut-eye.
Let’s start with that looming existential dread, since it’s probably the most pandemic-specific contributor to your tossing and turning. The bad news is, you’re probably stuck with it, at least until a vaccine arrives.
You can still take steps to manage it, though, especially as night falls. And it starts with not doing anything at all. “Make sure you have a scheduled hour of ‘me time,’ particularly if you’re sheltering in place with others,” says Medalie. “Everybody needs at least one hour by themselves.” How you spend that time is up to you. But as you get closer to bedtime, make sure it’s genuinely relaxing. “Instead of watching or reading the news—most content is repetitive anyway—engage in quiet, calming activities under dim lighting, such as reading, talking with family or friends, or listening to music,” says Alfano.
This all might sound easier said than done. But small adjustments can make a big difference. Don’t bring your smartphone into the bedroom, for starters, or tablets or laptops or any screens at all. If you have a TV in there, consider jettisoning it. Alfano suggests putting all your charging cords in the kitchen, and setting a reminder for an hour before bedtime to plug in all your electronics and bid them adieu until the morning. Old-school alarm clocks still exist! And some even have built-in light functions designed to help you sleep and wake up on a regular schedule. (WIRED recommends the Homelabs Sunrise Alarm Clock, but you’ve got no shortage of options.
) Northwestern’s Ong also researches how mindfulness techniques can improve sleep, especially for those with chronic insomnia. “It can be a potential tool to help reduce that vigilance, give your brain some signals that it doesn’t need to go into that fight-or-flight mode, or that if it does, here are some tools to help decompress, so that you have a more fair chance for your sleep system to help you fall asleep and stay asleep,” he says.
A simple way to start is to think of yourself as a trainspotter. (Presumably not one from the 1996 Danny Boyle film.
) Much like a trainspotter observes railcars passing by from a distance, try to observe what’s happening in your mind without directly engaging with it. If you do find yourself boarding at some point, just get off at the next station. “As you do that, you will start to train yourself that it’s OK to focus on this present moment,” says Ong.
Apps also offer an entry point to anxiety-quashing meditation, although that complicates the whole “remove the phone from the bedroom” scenario. WIRED contributors have had some success with Calm , which offers a range of soothing sessions. Even Swiss Army workout app Peloton has a sleep meditation section , including at least one class specifically designed for the 3 am wake-up.
Getting good sleep involves more than just clearing your head. The experts WIRED spoke with all agree that clearing your bedroom is just as important. “The bedroom should just be a bedroom. Just a room for your bed, and maybe your dresser, and nothing else in there,” says Medalie. “Sleep and sex are the only two things that should happen in the bedroom.” To cut all the way down on distractions, consider blackout curtains or shades ; you can get both from IKEA or any blinds shop. They help keep your room dark and cozy no matter what the moon or streetlights are doing outside your window. Building an association between a sound machine and sleep can help you conk out consistently; our favorite is the LectroFan Classic , but there’s a wide range of prices and features to choose from. And lowering the temperature helps cue your body that it’s time to snooze.
“We sleep best when our bedroom is dark, quiet, and cool,” says Alfano. “Blackout curtains, eye mask, ear plugs, white noise machines, an overhead fan, and lowered thermostat settings can all be helpful.” Otherwise, remember all those things that make it harder to sleep? Do the opposite! If you for some reason have no choice but to look at your smartphone or tablet, at least use a setting that filters out the blue light to give your brain a break. On iOS, head to Settings > Display & Brightness > Night Shift.
From there you can set a nightly schedule for your device to switch over to a softer color palette, or manually enable it until the next day. On Android it’ll vary slightly depending on your device, but you should be able to find it from Settings > Display > Night Light.
(Some manufacturers have their own name for it, like Night Mode or Blue Light Filter , but it should be easy enough to spot.) And while it’s tempting in these times to ramp up the alcohol and caffeine intake, both will get in the way of a good night’s rest. Coffee and soda will rile you up, while booze will hamper the quality of sleep you get. “Alcohol may indeed reduce the amount of time it takes to fall asleep, but it also reliably reduces total sleep time, because it results in lighter sleep and more awakenings during the second half of the night,” says Alfano.
When morning comes around, make sure you’re waking up at the same time no matter what, says Ong. Not only that, but get out of bed and start your day. Your circadian rhythm will thank you. Besides, the more you use for bed for lounging rather than sleeping, the less your body and mind associate it with flipping the off switch.
It can feel impossible to sleep when the world is in turmoil. Why dream when you can fret deep into the night? But you owe it to yourself to get as much as you can. And with a little structure and a few new habits, you might find that it’s easier than you thought.
"
|
1,476 | 2,022 |
"Somnox 2 Review: Soothing but Spendy | WIRED"
|
"https://www.wired.com/review/somnox-2-sleep-robot"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Simon Hill Gear Review: Somnox 2 Photograph: Somnox $599 at Somnox Rating: 6/10
As I lie in bed spooning my wee robot, one hand on its gently undulating belly as it slows my breathing, I’m struck by the memory of co-sleeping with my kids when they were babies. It can be soothing to share your bed.
Research suggests we report better sleep when bed-sharing, even when objective measures reveal sleep quality has worsened. (It helps that my current sleep partner plays soothing rain sounds and does not need a bottle at 3 am.) Somnox 2 is a limbless bean-shaped torso designed to gradually slow your breathing, as you unconsciously match its rhythm. It can adjust to your breathing rate to calm you and help you drop off. Boosting its soporific power is a speaker that plays dreamy soundscapes or nature sounds. You can tweak everything via an app on your phone.
The original Somnox was born of a Kickstarter campaign in 2017, and this improved model has been in the works for four years. Smaller and lighter, the new version boasts a larger breathing area, longer battery life, and an improved speaker. But other major upgrades, such as Bluetooth audio streaming and sleep tracking, are still “coming soon,” making the $600 price tag much harder to swallow than a sleeping pill.
Somnox 2 is just over 12 inches long and weighs less than 4 pounds. It is covered in a soft fabric with memory foam underneath and has a simple control panel to turn it on or off and adjust the volume. A pneumatic system inside fills and empties an air bladder in an impressive simulation of natural breathing. It is eminently easy to cuddle, spoon, or rest a hand on.
The Somnox app offers a variety of breathing exercises. You can use the bot to help you calm down or even boost alertness during the day, but it is mainly for helping you get off to sleep at night. If you toggle on "Somnox Sense" and hold it against your body, it will adjust to your breathing rate and help you to take longer and deeper breaths, gradually slowing your heart rate and making it easier to sleep. It combines a six-axis accelerometer and three-axis gyroscope with a proprietary algorithm to achieve this.
The default settings worked well for me, and I relaxed and fell asleep faster with Somnox 2 than without. Somnox starts at a breathing pace of 12 breaths per minute and steadily decreases to six. It uses a standard ratio of 1:2, so the exhale is twice as long as the inhale. If you feel the need, you can set a specific breathing rate, tweak the ratio, and change the breathing intensity of your sleep bot (how loudly and deeply it breathes).
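To make the pacing concrete, here is a minimal Python sketch that generates a wind-down schedule from 12 breaths per minute to 6 with a 1:2 inhale-to-exhale ratio. The linear ramp, the session length, and the function name are assumptions for illustration; this is not Somnox's actual firmware logic.

```python
def breathing_schedule(start_bpm=12, end_bpm=6, minutes=20, ratio=2.0):
    """Generate (inhale_s, exhale_s) pairs that ramp the pace down linearly.

    ratio is exhale length divided by inhale length (1:2 -> ratio=2.0).
    Illustrative sketch only, not Somnox's actual algorithm.
    """
    schedule = []
    for minute in range(minutes):
        # Linearly interpolate the target pace for this minute.
        frac = minute / max(minutes - 1, 1)
        bpm = start_bpm + (end_bpm - start_bpm) * frac
        breath_s = 60.0 / bpm                  # seconds per full breath cycle
        inhale_s = breath_s / (1 + ratio)      # split the cycle 1 : ratio
        exhale_s = breath_s - inhale_s
        schedule.append((round(inhale_s, 2), round(exhale_s, 2)))
    return schedule

print(breathing_schedule()[:3])   # early, faster breaths
print(breathing_schedule()[-3:])  # late, slower breaths
```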
The science behind Somnox is sound, and the latest version had input from sleep experts and scientists. A clinical trial is underway, with results due to be published later this year. But the impact of controlled breathing on our ability to relax is not in doubt. The thing is, you don’t need a $600 robot to do it. There are countless apps, like Calm or Breathwrk, that can help. Somnox’s array of soundscapes and natural sounds is also similar to what many apps and other sleep gadgets offer.
It is nice to cuddle up to someone or something when you’re in bed, but the physical presence is all that makes Somnox unique. Whether that’s enough to justify the high price is debatable, especially when you consider the other downsides.
While Somnox 2 helped me drop off, it did not help me stay asleep. My sleep tracker, Withings Sleep Tracking Pad ($99) , showed no change in the average duration or quality of my slumber. I found waking with a dead weight next to me slightly unpleasant. Sometimes turning in my sleep would knock it out of bed to thump on the floor.
We had connection issues with the original Somnox, and I was disappointed to find that its successor, though more reliable, still sometimes fails to connect to the app for no apparent reason. The need to tap through a connection process every time I open the app is annoying.
Once set, you can trigger your sleep program with the power button on the Somnox 2, but it only plays one program at a time. To change it or tweak the settings or sounds, you must make selections in the app and upload them to your Somnox. The process is clunky and takes longer than it should.
Though the makers promise they are “coming soon,” the lack of sleep tracking or an option to stream my preferred audio (whether music or a good podcast ) is perplexing. For now, this is just a smaller version of the original that works a little better and comes at the same high price. It might've been better to delay the release of Somnox 2 until these new features were available ( Bluetooth audio is now available on iOS ).
All that said, there’s something uniquely lovable about the Somnox, compared to other sleep gadgets. We tend to anthropomorphize everything. The original Somnox was dubbed Sleepy McGhee in our house, and my daughter loved it, treating it like one of her favorite teddies. It’s easy to see the potential value of Somnox 2 for kids, for people who can’t participate in cognitive behavioral therapy ( the only thing that worked for my insomnia ), or for those seeking an alternative to medication.
We all know how important proper sleep is for our physical and mental health. But cutting through the empty promises and dubious science to find the best sleep gadgets is tough. I’m convinced Somnox 2 works, but for all its charms, it is far too expensive. If Somnox wants to succeed in its stated goal of “Helping 100 million people sleep better by 2030,” it should focus on developing a cheaper sleep bot. Everyone deserves good sleep, not just those who can afford it.
"
|
1,477 | 2,009 |
"Secret Math of Fly Eyes Could Overhaul Robot Vision | WIRED"
|
"https://www.wired.com/wiredscience/2009/11/fly-eyes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Brandon Keim Science Secret Math of Fly Eyes Could Overhaul Robot Vision Save this story Save Save this story Save By turning the brain cell activity underlying fly eyesight into mathematical equations, researchers have found an ultra-efficient method for pulling motion patterns from raw visual data.
Though they built the system, the researchers don't quite understand how it works. But however mysterious the equations may be, they could still be used to program the vision systems of miniaturized battlefield drones, search-and-rescue robots, automobile navigation systems and other systems where computational power is at a premium.
"We can build a system that works perfectly well, inspired by biology, without having a complete understanding of how the components interact. It's a non-linear system," said David O'Carroll, a computational neuroscientist who studies insect vision at Australia's University of Adelaide. "The number of computations involved is quite small. We can get an answer using tens of thousands of times less floating-point computations than in traditional ways." The best-known of these is the Lucas-Kanade method, which calculates yaw — up-and-down, side-to-side motion changes — by comparing, frame by frame, how every pixel in a visual field changes. It's used for steering and guidance in many experimental unmanned vehicles, but its brute-force approach requires lots of processing power, making it impractical in smaller systems.
In order to make smaller flying robots, researchers would like to find a simpler way of processing motion. Inspiration has come from the lowly fly, which uses just a relative handful of neurons to maneuver with extraordinary dexterity. And for more than a decade, O'Carroll and other researchers have painstakingly studied the optical flight circuits of flies, measuring their cell-by-cell activity and turning evolution's solutions into a set of computational principles.
In a paper published Friday in Public Library of Science Computational Biology , O'Carroll and fellow University of Adelaide biologist Russell Brinkworth put these methods to the test.
"A laptop computer uses tens of watts of power. Implementing what we've developed can be done with chips that consume just a fraction of a milliwatt," said O'Carroll.
The researchers' algorithm is composed of a series of five equations through which data from cameras can be run. Each equation represents tricks used by fly circuits to handle changing levels of brightness, contrast and motion, and their parameters constantly shift in response to input. Unlike Lucas-Kanade, the algorithm doesn't return a frame-by-frame comparison of every last pixel, but emphasizes large-scale patterns of change. In this sense, it works a bit like video-compression systems that ignore like-colored, unshifting areas.
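The paper's five equations are not reproduced in the article, but the classic model of insect elementary motion detection, the Hassenstein-Reichardt correlator, gives the flavor of the approach: correlate a signal with a delayed copy of its neighbor instead of tracking every pixel. The sketch below is that textbook correlator with a crude adaptive-gain stage bolted on; the constants and the gain rule are invented for illustration, and this is not O'Carroll and Brinkworth's actual algorithm.

```python
import numpy as np

def reichardt_motion(signal_left, signal_right, alpha=0.7, gain_tau=0.95):
    """Toy Hassenstein-Reichardt correlator over two neighboring photoreceptors.

    Each receptor's signal is low-pass filtered (a cheap stand-in for a neural
    delay) and multiplied against the other receptor's undelayed signal; the
    difference of the two products is signed motion (left-to-right positive).
    A running gain roughly mimics adaptation to changing brightness.
    """
    delayed_l = delayed_r = 0.0
    gain = 1.0
    out = []
    for l, r in zip(signal_left, signal_right):
        # Adapt gain to recent signal magnitude (very crude brightness adaptation).
        gain = gain_tau * gain + (1 - gain_tau) * (abs(l) + abs(r) + 1e-6)
        l, r = l / gain, r / gain
        # Opposite-arm correlation: delayed left vs. current right, and vice versa.
        out.append(delayed_l * r - delayed_r * l)
        delayed_l = alpha * delayed_l + (1 - alpha) * l
        delayed_r = alpha * delayed_r + (1 - alpha) * r
    return out

# A brightness edge sweeping left-to-right produces a net positive response.
t = np.arange(50)
left = (t > 20).astype(float)
right = (t > 25).astype(float)
print(sum(reichardt_motion(left, right)))
```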
To test the algorithm, O'Carroll and Brinkworth analyzed animated high-resolution images with a program of the sort that might operate in a robot. When they compared the results to the inputs, they found that it worked in a range of natural lighting conditions, varying in ways that usually baffle motion detectors.
"It's amazing work," said Sean Humbert, a University of Maryland aerospace engineer who builds miniaturized, autonomous flying robots, some of which run on earlier versions of O'Carroll's algorithm.
"For traditional navigational sensing, you need lots of payload to do the computation. But the payload on these robots is very small — a gram, a couple of Tic Tacs. You're not going to stuff dual-core processors into a couple Tic Tacs. The algorithms that insects use are very simple compared to the stuff we design, and would scale down to small vehicles." Intriguingly, the algorithm doesn't work nearly as well if any one operation is omitted. The sum is greater than the whole, and O'Carroll and Brinkworth don't know why. Because the parameters are in constant feedback-driven flux, it produces a cascade of non-linear equations that are difficult to untangle in retrospect, and almost impossible to predict.
"We started with insect vision as an inspiration, and built a model that's feasible for real-world use, but in doing so, we've built a system almost as complicated as the insect's," said O'Carroll. "That's one of the fascinating things here. It doesn't necessarily lead us to a complete understanding of how the system works, but to an appreciation that nature got it right." The researchers drew their algorithm from neural circuits attuned to side-to-side yaw, but O'Carroll said the same types of equations are probably used in computing other optical flows, such as those produced by moving forward and backwards through three-dimensional space.
"That's more challenging," said O'Carroll. "It may involve a few extra neurons." Images: 1) Flickr/ Tambako the Jaguar.
2) PLoS Computational Biology.
Citation: "Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology." By Russell S. A. Brinkworth and David C. O'Carroll. PLoS Computational Biology, November 6, 2009.
"
|
1,478 | 2,009 |
"Robot Teaches Itself to Smile | WIRED"
|
"https://www.wired.com/wiredscience/2009/07/robotsmile"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Hadley Leggett Science Robot Teaches Itself to Smile Save this story Save Save this story Save A robot has taught itself to smile, frown, and make other human facial expressions using machine learning.
To get the incredibly realistic Einstein robot to make facial expressions, researchers used to have to program each of its 31 artificial muscles individually through trial and error. Now, computer scientists from the Machine Perception Laboratory at the University of California, San Diego have used machine learning to enable the robot to learn expressions on its own.
"The robotic approach is the ultimate in helping us understand learning and development," said social development expert Daniel Messinger at the University of Miami, who was not involved with the Einstein research but collaborates with the group on another project. "There’s so much we can learn by actually trying to make it happen instead of just watching kids try to move their faces — it’s like having a baby as opposed to just watching a baby." According to the researchers, who presented the project last month at the 2009 IEEE 8th International Conference on Development and Learning, this is the first time anyone has used machine learning to teach a robot to make realistic facial expressions.
To begin teaching the robot, the researchers stuck Einstein in front of a mirror and instructed the robot to "body babble" by contorting its face into random positions. A video camera connected to facial recognition software gave the robot feedback: When it made a movement that resembled a "real" expression, it received a reward signal.
"It's an iterative process," said facial recognition expert Marian Bartlett, a co-author of the study. "It starts out completely random and then gets feedback. Next time the robot picks an expression, there's a bias towards putting the motors in the right configuration." After the robot figured out the relationship between different muscle movements and known facial expressions, it started experimenting with new expressions, such as eyebrow narrowing.
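The loop described above, random "body babbling" plus a reward when the resulting face resembles a known expression, amounts to a simple stochastic search with reinforcement. Below is a heavily simplified Python sketch of that idea; the 31-dimensional motor vector matches the robot's reported actuator count, but the scoring function and update rule are stand-ins, since the UCSD group's actual facial-recognition feedback is not described here in reproducible detail.

```python
import random

NUM_MOTORS = 31  # the Einstein robot reportedly has 31 facial actuators

def expression_score(motors, target):
    """Stand-in for the facial-recognition feedback: closer to target = higher reward."""
    return -sum((m - t) ** 2 for m, t in zip(motors, target))

def learn_expression(target, iterations=5000, noise=0.3):
    """Random 'body babbling' with a bias toward configurations that scored well."""
    best = [random.random() for _ in range(NUM_MOTORS)]
    best_score = expression_score(best, target)
    for _ in range(iterations):
        # Babble: perturb the current best motor configuration at random.
        candidate = [min(1.0, max(0.0, m + random.uniform(-noise, noise))) for m in best]
        score = expression_score(candidate, target)
        if score > best_score:   # reward signal: keep what looked more like the expression
            best, best_score = candidate, score
    return best, best_score

target_smile = [random.random() for _ in range(NUM_MOTORS)]  # hypothetical "smile" setting
learned, score = learn_expression(target_smile)
print(round(score, 4))
```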
The robot's expressions are still a bit awkward, but the researchers say they're working on ways to make them more realistic, as well as experimenting with strategies besides "body babbling" that might speed up the learning process. The group says its studious robot may even improve our understanding of how infants and children learn to make facial expressions.
"The idea is to try to understand some of the computational principles behind learning," Bartlett said. "Here the computational principle is reinforcement learning and active exploration, which may also be behind learning motor movements in an infant." The next step is to get the Einstein robot to start socializing. Once the robot can mimic facial expressions in a social context, the researchers plan to use him in an "automatic tutoring" experiment.
"We're putting facial expressions onto the robot so that he can engage with a pupil in a non-verbal manner and approximate one-on-one human tutoring as much as possible," Bartlett said. "Studies have shown that human one-on-one tutoring improves learning by as much as two standard deviations — we want to know how can you try to approximate that with robotic tutoring." See Also: The Future of Humanoid Robots, From Frubber 'Skin' to DJ Juke Bots Wired 14.01: The 50 Best Robots Ever Robot Makes Scientific Discovery All by Itself What a Half-Smile Really Means Gestures of Pride and Shame Are Universal Image: UC San Diego/Erik Jepsen Topics Brains and Behavior education robots Celia Ford Celia Ford Rob Reddick Lux Alptraum Max G. Levy Tammy Rabideau Emily Mullin Elizabeth Finkel Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
1,479 | 2,019 |
"Quantum supremacy from Google? Not so fast, says IBM. | MIT Technology Review"
|
"https://www.technologyreview.com/2019/10/22/132519/quantum-supremacy-from-google-not-so-fast-says-ibm"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Quantum supremacy from Google? Not so fast, says IBM.
By Gideon Lichfield and Konstantin Kakaes. [Image: Google's quantum computer. Credit: Google; edited by MIT Technology Review] A month ago, news broke that Google had reportedly achieved “quantum supremacy”: it had gotten a quantum computer to run a calculation that would take a classical computer an unfeasibly long time. While the calculation itself—essentially, a very specific technique for outputting random numbers—is about as useful as the Wright brothers’ 12-second first flight, it would be a milestone of similar significance, marking the dawn of an entirely new era of computing.
But in a blog post published today , IBM disputes Google’s claim. The task that Google says might take the world’s fastest classical supercomputer 10,000 years can actually, says IBM, be done in just days.
As John Preskill, the CalTech physicist who coined the term “quantum supremacy,” wrote in an article for Quanta magazine , Google specifically chose a very narrow task that a quantum computer would be good at and a classical computer is bad at. “This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative,” he wrote.
Google’s research paper hasn’t been published ( Update : it came out two days after this story), but a draft was leaked online last month. In it, researchers say they got a machine with 53 quantum bits, or qubits, to do the calculation in 200 seconds. They also estimated that it would take the world’s most powerful supercomputer, the Summit machine at Oak Ridge National Laboratory, 10,000 years to repeat it with equal “fidelity,” or the same level of uncertainty as the inherently uncertain quantum system.
The problem is that such simulations aren’t just a matter of porting the code from a quantum computer to a classical one. They grow exponentially harder the more qubits you’re trying to simulate. For that reason, there are a lot of different techniques for optimizing the code to arrive at a good enough equivalent.
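The blow-up is easy to quantify: a full state-vector simulation of n qubits stores 2^n complex amplitudes. The back-of-the-envelope calculation below, assuming 16 bytes per double-precision complex amplitude, shows why 53 qubits overwhelm RAM and push simulators toward disk and other tricks; it is a generic illustration, not IBM's or Google's actual accounting.

```python
def state_vector_bytes(num_qubits, bytes_per_amplitude=16):
    """Memory needed to hold a full state vector: 2**n complex amplitudes."""
    return (2 ** num_qubits) * bytes_per_amplitude

for n in (30, 40, 53):
    petabytes = state_vector_bytes(n) / 1e15
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes (~{petabytes:.2f} PB)")
```

At 30 qubits the state vector fits in a workstation's RAM; at 53 it is on the order of a hundred petabytes, which is why slicing the computation and spilling to hard drives matters so much.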
And that’s where Google and IBM differ. The IBM researchers propose a method that they say would take just two and a half days on a classical machine “with far greater fidelity,” and that “with additional refinements” this could come down even further.
The key difference? Hard drives. Simulating a quantum computer in a classical one requires storing vast amounts of data in memory during the process to represent the condition of the quantum computer at any given moment. The less memory you have available, the more you have to slice up the task into stages, and the longer it takes. Google’s method, IBM says, relied heavily on storing that data in RAM, while IBM’s “uses both RAM and hard drive space.” It also proposes using a slew of other classical optimization techniques, in both hardware and software, to speed up the computation. To be fair, IBM hasn't tested it in practice, so it's hard to know if it would work as proposed. (Google declined to comment.) So what’s at stake? Either a whole lot or not much, depending on how you look at it. As Preskill points out, the problem Google reportedly solved is of almost no practical consequence, and even as quantum computers get bigger, it will be a long time before they can solve any but the narrowest classes of problems. Ones that can crack modern codes will likely take decades to develop, at a minimum.
Moreover, even if IBM is right that Google hasn’t achieved it this time, the quantum supremacy threshold is surely not far off. The fact that simulations get exponentially harder as you add qubits means it may only take a slightly larger quantum machine to get to the point of being truly unbeatable at something.
Still, as Preskill notes, even limited quantum supremacy is “a pivotal step in the quest for practical quantum computers.” Whoever ultimately achieves it will, like the Wright brothers, get to claim a place in history.
"
|
1,480 | 2,019 |
"IBM’s new 53-qubit quantum computer is the most powerful machine you can use | MIT Technology Review"
|
"https://www.technologyreview.com/2019/09/18/132956/ibms-new-53-qubit-quantum-computer-is-the-most-powerful-machine-you-can-use"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts IBM’s new 53-qubit quantum computer is the most powerful machine you can use By Martin Giles archive page Quantum computer up close AP The machine will be available for researchers and companies to run applications via the cloud.
The news: IBM’s new computer, due to launch next month, will boast 53 quantum bits, or qubits, the elements that are the secret to quantum machines’ power (see our explainer for a description of qubits and the phenomena that make quantum computers so powerful). Google has a 72-qubit device, but it hasn’t let outsiders run programs on it; IBM’s machine, on the other hand, will be accessible via the cloud.
Cloud power: IBM has been promoting quantum computing via the cloud since 2016. To boost those efforts, the firm is opening a new center in New York state to house even more machines. Other companies developing quantum computers, like Rigetti Computing and Canada’s D-Wave, have also launched cloud services. Behind the scenes, there’s a race on to demonstrate quantum supremacy.
Quantum what? That’s the point at which a quantum computer can perform a task beyond the reach of even the most powerful conventional supercomputer. Google is rumored to be the closest to achieving this milestone —but hitting it won’t mean the machines will be ready for mainstream use. The task is likely to be a very narrow one, and plenty more work will be needed to create quantum computers capable of tackling a wide range of problems.
"
|
1,481 | 2,020 |
"A new app might help researchers monitor the spread of coronavirus | MIT Technology Review"
|
"https://www.technologyreview.com/2020/03/25/950310/a-new-app-might-help-researchers-monitor-the-spread-of-coronavirus"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts A new app might help researchers monitor the spread of coronavirus By Charlotte Jee archive page A phone with the Covid Symptom Tracker app open Covid Symptom Tracker The news: A new app set to launch in the US tomorrow will help researchers better understand coronavirus by tracking its spread in real time. The app, called Covid Symptom Tracker , is already available in the UK and has been downloaded by over 750,000 people.
What’s it for? It was developed by King’s College London to monitor coronavirus symptoms in participants of a decades-long study into thousands of twins and their families, in order to discern whether genes play a role. However, it has since been opened up to the general public. The idea is that by collecting data from lots of people and linking symptoms with underlying health conditions, researchers can get a better grasp of how fast the virus is spreading in certain areas, where hot spots are, and who is most at risk. Crucially, it could help us understand why some people become dangerously unwell with coronavirus while others experience only a mild illness.
How does it work? Those who register are asked for their name; where they live; their height, age, and weight; any existing medical conditions like asthma or diabetes; and whether they take drugs like immunosuppressants or ibuprofen, or use a wheelchair. Participants are then asked if they’ve had a coronavirus test, how they’re feeling currently, and if they’re experiencing symptoms like coughing, headaches, or breathing problems. People are asked to log answers to these last three questions every day. The researchers who developed the app promise the data will be “used strictly for public health or academic research” and “will not be used commercially or sold.” The potential: Obviously, the app is only going to be useful if there are enough participants to draw meaningful conclusions. But with enough data, researchers think it could help provide an early warning of where symptoms are clustering, and thus where outbreaks may be starting. It could be even more effective if combined with data gathered from other sources, like a recent survey conducted by Google on behalf of Carnegie Mellon University, or perhaps data from an app developed by South Korea’s government, which let citizens report their symptoms.
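As a rough illustration of the kind of record described above, here is a minimal Python data model for a participant and a daily check-in. The field names and types are guesses based on the article's description, not the app's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Participant:
    name: str
    location: str                                          # where they live
    height_cm: float
    age: int
    weight_kg: float
    conditions: List[str] = field(default_factory=list)    # e.g. asthma, diabetes
    medications: List[str] = field(default_factory=list)   # e.g. immunosuppressants, ibuprofen
    uses_wheelchair: bool = False

@dataclass
class DailyReport:
    participant_id: int
    tested_for_covid: Optional[bool]
    feeling_ok: bool
    symptoms: List[str] = field(default_factory=list)      # coughing, headache, breathing problems

# One participant logging a day with symptoms.
report = DailyReport(participant_id=1, tested_for_covid=False,
                     feeling_ok=False, symptoms=["cough", "headache"])
print(report)
```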
"
|
1,482 | 2,020 |
"Of course technology perpetuates racism. It was designed that way. | MIT Technology Review"
|
"https://www.technologyreview.com/2020/06/03/1002589/technology-perpetuates-racism-by-design-simulmatics-charlton-mcilwain"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Of course technology perpetuates racism. It was designed that way.
By Charlton McIlwain. [Source photo: Getty / Ms Tech] Today the United States crumbles under the weight of two pandemics: coronavirus and police brutality.
Both wreak physical and psychological violence. Both disproportionately kill and debilitate black and brown people. And both are animated by technology that we design, repurpose, and deploy—whether it’s contact tracing, facial recognition, or social media.
We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.
So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy. Of course, it’s not a new question at all.
Uncivil rights In 1960, Democratic Party leaders confronted their own problem: How could their presidential candidate, John F. Kennedy, shore up waning support from black people and other racial minorities? An enterprising political scientist at MIT, Ithiel de Sola Pool , approached them with a solution. He would gather voter data from earlier presidential elections, feed it into a new digital processing machine, develop an algorithm to model voting behavior, predict what policy positions would lead to the most favorable results, and then advise the Kennedy campaign to act accordingly. Pool started a new company, the Simulmatics Corporation , and executed his plan. He succeeded, Kennedy was elected, and the results showcased the power of this new method of predictive modeling.
Racial tension escalated throughout the 1960s. Then came the long, hot summer of 1967. Cities across the nation burned, from Birmingham, Alabama, to Rochester, New York, to Minneapolis, Minnesota, and many more in between. Black Americans protested the oppression and discrimination they faced at the hands of America’s criminal justice system. But President Johnson called it “civil disorder,” and formed the Kerner Commission to understand the causes of “ghetto riots.” The commission called on Simulmatics.
As part of a DARPA project aimed at turning the tide of the Vietnam War, Pool’s company had been hard at work preparing a massive propaganda and psychological campaign against the Vietcong. President Johnson was eager to deploy Simulmatics’s behavioral influence technology to quell the nation’s domestic threat, not just its foreign enemies. Under the guise of what they called a “media study,” Simulmatics built a team for what amounted to a large-scale surveillance campaign in the “riot-affected areas” that captured the nation’s attention that summer of 1967.
Three-member teams went into areas where riots had taken place that summer. They identified and interviewed strategically important black people. They followed up to identify and interview other black residents, in every venue from barbershops to churches. They asked residents what they thought about the news media’s coverage of the “riots.” But they collected data on so much more, too: how people moved in and around the city during the unrest, who they talked to before and during, and how they prepared for the aftermath. They collected data on toll booth usage, gas station sales, and bus routes. They gained entry to these communities under the pretense of trying to understand how news media supposedly inflamed “riots.” But Johnson and the nation’s political leaders were trying to solve a problem. They aimed to use the information that Simulmatics collected to trace information flow during protests to identify influencers and decapitate the protests’ leadership.
They didn’t accomplish this directly. They did not murder people, put people in jail, or secretly “disappear” them.
But by the end of the 1960s, this kind of information had helped create what came to be known as “criminal justice information systems.” They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated.
Reframing the problem Blackness and black people. Both persist as our nation’s—dare I say even our world’s—problem. When contact tracing first cropped up at the beginning of the pandemic, it was easy to see it as a necessary but benign health surveillance tool. The coronavirus was our problem, and we began to design new surveillance technologies in the form of contact tracing, temperature monitoring, and threat mapping applications to help address it.
But something both curious and tragic happened. We discovered that black people, Latinx people, and indigenous populations were disproportionately infected and affected. Suddenly, we also became a national problem; we disproportionately threatened to spread the virus. That was compounded when the tragic murder of George Floyd by a white police officer sent thousands of protesters into the streets. When the looting and rioting started, we—black people—were again seen as a threat to law and order, a threat to a system that perpetuates white racial power. It makes you wonder how long it will take for law enforcement to deploy those technologies we first designed to fight covid-19 to quell the threat that black people supposedly pose to the nation’s safety.
If we don’t want our technology to be used to perpetuate racism, then we must make sure that we don’t conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate.
Charlton McIlwain is a professor of media, culture, and communication at New York University and author of Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter. This story was part of our July/August 2020 issue.
"
|
1,483 | 2,019 |
"China has started a grand experiment in AI education. It could reshape how the world learns. | MIT Technology Review"
|
"https://www.technologyreview.com/2019/08/02/131198/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts China has started a grand experiment in AI education. It could reshape how the world learns.
In recent years, the country has rushed to pursue “intelligent education.” Now its billion-dollar ed-tech companies are planning to export their vision overseas.
By Karen Hao. [Image: Student wearing headphones and looking at a computer monitor. Credit: Noah Sheldon] Zhou Yi was terrible at math. He risked never getting into college. Then a company called Squirrel AI came to his middle school in Hangzhou, China, promising personalized tutoring. He had tried tutoring services before, but this one was different: instead of a human teacher, an AI algorithm would curate his lessons. The 13-year-old decided to give it a try. By the end of the semester, his test scores had risen from 50% to 62.5%. Two years later, he scored an 85% on his final middle school exam.
“I used to think math was terrifying,” he says. “But through tutoring, I realized it really isn’t that hard. It helped me take the first step down a different path.” Experts agree AI will be important in 21st-century education—but how? While academics have puzzled over best practices, China hasn’t waited around. In the last few years, the country’s investment in AI-enabled teaching and learning has exploded. Tech giants, startups, and education incumbents have all jumped in. Tens of millions of students now use some form of AI to learn—whether through extracurricular tutoring programs like Squirrel’s, through digital learning platforms like 17ZuoYe , or even in their main classrooms.
It’s the world’s biggest experiment on AI in education, and no one can predict the outcome.
Silicon Valley is also keenly interested. In a report in March, the Chan-Zuckerberg Initiative and the Bill and Melinda Gates Foundation identified AI as an educational tool worthy of investment. In his 2018 book Rewiring Education , John Couch, Apple’s vice president of education, lauded Squirrel AI. (A Chinese version of the book is coauthored by Squirrel’s founder, Derek Li.) Squirrel also opened a joint research lab with Carnegie Mellon University this year to study personalized learning at scale, then export it globally.
But experts worry about the direction this rush to AI in education is taking. At best, they say, AI can help teachers foster their students’ interests and strengths. At worst, it could further entrench a global trend toward standardized learning and testing, leaving the next generation ill prepared to adapt in a rapidly changing world of work.
As one of the largest AI education companies in China, Squirrel highlights this tension. And as one of the best-poised to spread overseas, it offers a window into how China’s experiments could shape the rest of the world.
The learning center that Zhou attends, one of the first that Squirrel opened, occupies the second floor of an unassuming building on a busy commercial road in Hangzhou, a second-tier city in Zhejiang province. Company awards line the walls in the stairwell. Further in, large photographs of at least a dozen men are on display: half of them are Squirrel AI’s executives and the others are master teachers, a title bestowed on the best teachers in China, who help develop the company’s curriculum.
The school’s interior decorations are modest. The foyer is small and colorful with lime-green accents. Photos of smiling students hang along the corridor between six or so classrooms. Inside, faded decals of trees and simple mottos like “Be humble” enliven the walls. There are no whiteboards, projectors, or other equipment—just one table per room, meant for six to eight people.
The vehicle of instruction is the laptop. Students and teachers alike stare intently at screens. In one room, two students wear headsets, engrossed in an English tutoring session. In another, three students, including Zhou, take three separate math classes. They work out practice problems on pieces of paper before submitting their answers online. In each room, a teacher monitors the students through a real-time dashboard.
At different points, both teachers notice something on their screen that prompts them to walk over and kneel by a student’s chair. They speak in hushed tones, presumably to answer a question the tutoring system can’t resolve. Though I’m just feet away, I can’t distinguish their words above the soft hum of traffic on the street below.
“It’s so quiet,” I whisper to the small gang of school and company staff assembled for my tour. The Hangzhou regional director smiles with what I interpret as a hint of pride: “There are no sounds of teachers lecturing.” Three things have fueled China’s AI education boom. The first is tax breaks and other incentives for AI ventures that improve anything from student learning to teacher training to school management. For VCs, this means such ventures are good bets. According to one estimate , China led the way in over $1 billion invested globally last year in AI education.
Second, academic competition in China is fierce. Ten million students a year take the college entrance exam, the gaokao.
Your score determines whether and where you can study for a degree, and it’s seen as the biggest determinant of success for the rest of your life. Parents willingly pay for tutoring or anything else that helps their children get ahead.
Finally, Chinese entrepreneurs have masses of data at their disposal to train and refine their algorithms. The population is vast, people’s views on data privacy are much more lax than in the West (especially if they can get coveted benefits like academic performance in return), and parents are big believers in the potential of technology, having seen how much it has transformed the country in just a few decades.
Squirrel focuses on helping students score better on annual standardized tests, which taps straight into national gaokao anxiety; more than 80% of its students return year after year, it says. It also designed its system to capture ever more data from the beginning, which has made possible all kinds of personalization and prediction experiments. It heavily markets its technical capabilities through academic publications, international collaborations, and awards, which has made it a darling of the Shanghai local government.
The strategy has fueled mind-boggling growth. In the five years since it was founded, the company has opened 2,000 learning centers in 200 cities and registered over a million students—equal to New York City’s entire public school system. It plans to expand to 2,000 more centers domestically within a year. To date, the company has also raised over $180 million in funding. At the end of last year, it gained unicorn status, surpassing $1 billion in valuation.
Squirrel isn’t the first company to pursue the concept of an AI tutor. The earliest efforts to “replicate” teachers date back to the 1970s, when computers first started being used in education. Then, between 1982 and 1984, several studies in the US showed that students who received one-on-one human tutoring performed far better than students who didn’t. This set off a new wave of efforts to re-create that kind of individual attention in a machine. The result was adaptive learning systems, which can now be found everywhere from kindergartens to workplace training centers.
Squirrel’s innovation is in its granularity and scale. For every course it offers, its engineering team works with a group of master teachers to subdivide the subject into the smallest possible conceptual pieces. Middle school math, for example, is broken into over 10,000 atomic elements, or “knowledge points,” such as rational numbers, the properties of a triangle, and the Pythagorean theorem. The goal is to diagnose a student’s gaps in understanding as precisely as possible. By comparison, a textbook might divide the same subject into 3,000 points; ALEKS, an adaptive learning platform developed by US-based McGraw-Hill, which inspired Squirrel’s, divides it into roughly 1,000.
Once the knowledge points are set, they are paired with video lectures, notes, worked examples, and practice problems. Their relationships—how they build on each other and overlap—are encoded in a “knowledge graph,” also based on the master teachers’ experience.
A student begins a course of study with a short diagnostic test to assess how well she understands key concepts. If she correctly answers an early question, the system will assume she knows related concepts and skip ahead. Within 10 questions, the system has a rough sketch of what she needs to work on, and uses it to build a curriculum. As she studies, the system updates its model of her understanding and adjusts the curriculum accordingly. As more students use the system, it spots previously unrealized connections between concepts. The machine-learning algorithms then update the relationships in the knowledge graph to take these new connections into account. While ALEKS does some of this as well, Squirrel claims that its machine-learning optimizations are more limited, making it, in theory, less effective.
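The mechanics described, a short diagnostic that skips ahead when prerequisites look mastered and a model of understanding that updates with every answer, can be sketched with a toy prerequisite graph and a simple mastery update. The code below is a loose illustration of adaptive diagnosis over "knowledge points"; the graph, the update rule, and the thresholds are invented for the example, and this is not Squirrel AI's proprietary model.

```python
# Toy knowledge graph: each knowledge point lists its prerequisites.
PREREQS = {
    "rational_numbers": [],
    "triangle_properties": [],
    "pythagorean_theorem": ["triangle_properties"],
    "distance_formula": ["pythagorean_theorem", "rational_numbers"],
}

def update_mastery(mastery, point, correct, lr=0.3):
    """Nudge the mastery estimate for a point toward 1 or 0 after an answer.

    A correct answer also nudges prerequisites upward a little, on the
    assumption that you rarely answer correctly without them (invented rule).
    """
    target = 1.0 if correct else 0.0
    mastery[point] += lr * (target - mastery[point])
    if correct:
        for pre in PREREQS[point]:
            mastery[pre] += 0.5 * lr * (1.0 - mastery[pre])
    return mastery

def next_question(mastery, threshold=0.7):
    """Pick the most uncertain point whose prerequisites already look mastered."""
    candidates = [p for p, pres in PREREQS.items()
                  if all(mastery[q] >= threshold for q in pres)]
    return min(candidates, key=lambda p: abs(mastery[p] - 0.5))

mastery = {p: 0.5 for p in PREREQS}          # start fully uncertain
mastery = update_mastery(mastery, "pythagorean_theorem", correct=True)
print(next_question(mastery), mastery)
```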
Squirrel has offered some validation of its system. In October 2017, for example, a self-funded four-day study with 78 middle school students found that the system was better on average at lifting math test scores than experienced teachers teaching a dozen or so kids in a traditional classroom.
The students I speak to at the learning center have high praise for the tutoring program as well. All are finishing middle school and have been coming to the center for more than a year. One girl, Fu Weiyi, tells me she’s improved far faster than when she got individual tutoring from a human teacher. “Here, I have a teacher both on and offline,” she says. “Plus, the instruction is very targeted; the system can directly identify the gaps in my understanding.” Another student echoes the sentiment: “With the system, you don’t have to do tons of exercises, but it’s still effective. It really saves time.” While I have to take their words with a grain of salt—the students are hand-picked and give their testimonials under intense supervision—I’m still touched by their relief that they’ve found a formula that works to ameliorate the often brutal academic environment. Zhou Yi’s story, perhaps not coincidentally, also neatly illustrates how Squirrel can help struggling students.
For Squirrel’s founder Li, this vision doesn’t stop at tutoring. He has ambitions to break out of the confines of after-school programming and integrate his curriculum directly into the main classroom. Squirrel is already in discussion with several schools in China to make its system the primary method of instruction.
I try to imagine what this world might be like, and whether we might be better off for it. I ask the students one last question: Is there anything that Squirrel could improve? A long pause. Then: “I wish we had more interaction with our human teachers,” Fu says.
A teacher helping a student via video as part of Squirrel's new remote tutoring program.
Every educational expert I spoke to for this story began by making the same point: to understand how AI could improve teaching and learning, you need to think about how it is reshaping the nature of work.
As machines become better at rote tasks, humans will need to focus on the skills that remain unique to them: creativity, collaboration, communication, and problem-solving. They will also need to adapt quickly as more and more skills fall prey to automation. This means the 21st-century classroom should bring out the strengths and interests of each person, rather than impart a canonical set of knowledge more suited for the industrial age.
AI, in theory, could make this easier. It could take over certain rote tasks in the classroom, freeing teachers up to pay more attention to each student. Hypotheses differ about what that might look like. Perhaps AI will teach certain kinds of knowledge while humans teach others; perhaps it will help teachers keep track of student performance or give students more control over how they learn. Regardless, the ultimate goal is deeply personalized teaching.
Squirrel’s approach may yield great results on traditional education, but it doesn’t prepare students to be flexible in a changing world, the experts I spoke to say. “There’s a difference between adaptive learning and personalized learning,” says Chris Dede, a professor at Harvard University in the Technology, Innovation, and Education Program. Squirrel is doing adaptive learning, which is about “understanding exactly what students know and don’t know.” But it pays no attention to what they want to know or how they learn best. Personalized learning takes their interests and needs into account to “orchestrate the motivation and time for each student so they are able to make progress.” Jutta Treviranus, a professor at the Ontario College of Art and Design University who pioneered personalized learning to improve inclusivity in education, breaks it down further. “Personalized learning has a number of levels,” she says: she calls them pace, path, and destination.
If the pace of learning is personalized, students with different abilities are allowed different amounts of time to learn the same material. If the path is personalized, students might be given different motivations to reach the same objectives (“Here’s why statistics is relevant to your love of baseball”) and offered the material in different formats (e.g., video versus text). If the destination is personalized, students can choose, for instance, whether to learn with a vocational school or a university in mind.
“We need students to understand their own learning. We need them to determine what they want to learn, and we need them to learn to learn,” Treviranus says. “Squirrel AI doesn’t address those things at all. It only makes it more efficient to bring all of the students to the same standardized place.” That doesn’t mean that adaptive learning systems won’t have any place in the 21st-century classroom. David Dockterman, a colleague of Dede’s, believes their strength in training people on structured knowledge is still valuable. But it would be a mistake to make them the predominant “teacher” in the classroom: “The kinds of rote activities—knowledge retrieval, skill acquisition—that are more readily teachable with a smart tutor are also the things that are more readily accomplished by a smart machine,” he says.
Li, Squirrel’s founder, is tall and lanky and has severe cheekbones. When he speaks English, he punctuates every few sentences with “Right?,” eyebrows raised, to make sure you’re on the same page. When he speaks Mandarin, his words tumble out twice as fast.
A week after my visit to the learning center, I meet him at Squirrel’s headquarters in Shanghai. In the style of an understated showman, he gives me the grand tour. The modesty of the learning center stands in sharp contrast to the office décor here: each wall boasts of different details about the company and milestones it’s reached. Here’s one with all its media mentions. Here’s another with all its awards. And here are some examples of students who were deemed “hopeless” and then saved. I run into another tour before I’ve finished my own.
A few steps past the first door, Li points out a screen to my immediate left playing a TV clip on repeat. It’s a game show featuring a showdown between Squirrel’s tutoring system and a human teacher—one of the best in China, he says. Three of the teacher’s students, whom he has taught for three years, stand alongside him on stage solving problems. The system and the teacher compete to predict which ones they will get right.
Li doesn’t wait for the clip to end to reveal the punchline: “In three hours we understand students more than the three years spent by the best teachers.” On screen, the teacher looks increasingly crestfallen and humiliated. “He looks so sad,” I say.
“You noticed!” Li laughs.
Much of Squirrel’s philosophy stems from Li’s own experiences as a child. When he was young, he didn’t have very good emotional intelligence, he says, and reading books on the subject didn’t help. So he spent half a year dividing the skill into 27 different components and trained himself on each one. He trained himself to be more observant, for example, and to be an interesting conversationalist (“I spent a lot of time finding 100 topics, so I have a lot of material to talk with others,” he says). He even trained himself to keep smiling when others criticized him. (“After that, in my life, I do not have any enemies.”) The method gave him the results he wanted—along with the firm belief that anything can be taught this way.
Li uses an analogy to lay out his ultimate vision. “When AI education prevails,” he says, “human teachers will be like a pilot.” They will monitor the readouts while the algorithm flies the plane, and for the most part they will play a passive role. But every so often, when there’s an alert and a passenger panics (say, a student gets bullied), they can step in to calm things down. “Human teachers will focus on emotional communication,” he says.
Li thinks this is the only way humanity will be able to elevate its collective intelligence. Entrusting teachers with anything else could risk “damaging geniuses.” He’s playing out this philosophy on his own kids, using Squirrel’s system as much as possible to train them. He boasts that his eight-year-old twin boys, in the second grade, are now learning eighth-grade physics, a testament that his method is working. “Only adaptive systems could make such miracles,” he says.
Squirrel is already exporting its technology abroad. It has cultivated its international reputation by appearing at some of the largest AI conferences around the world and bringing on reputable collaborators affiliated with MIT, Harvard, and other prestigious research institutes. Li has also recruited several Americans to serve on his executive team, with the intent of pushing into the US and Europe in the next two years. One of them is Tom Mitchell, the dean of computer science at Carnegie Mellon; another is Dan Bindman, who led the user experience and editorial teams at ALEKS.
Treviranus worries that Squirrel’s educational philosophy is representative of a broader flaw in China’s pursuit of intelligent education: its emphasis on standardized learning and testing. “The tragedy of the China experiments is that they’re taking the country to a point of education that any progressive pedagogue or education system is moving away from,” she says.
But she believes that China also has one of the best opportunities to reinvent a more teacher-friendly, learner-focused classroom environment. It is less entrenched than the West in older models of education and much more willing to try new ideas. “China needs to look at a completely different form of AI,” she says. The question is: What does that mean? The answer may lie a dozen miles west of Squirrel’s headquarters, across from the Huangpu River that courses through Shanghai. There, Pan Pengkai, a children’s educational expert, is conducting experiments of a different nature.
Pan has been thinking about how to use AI in education for nearly two decades. Fifteen years ago, he founded his first ed-tech company in China after getting his PhD from the MIT Media Lab. Inspired by his experience in grad school, he focused on building tools for learning English. “Innovation comes from difference,” he says. “That’s exactly what China lacks. If you are able to speak multiple languages, you are able to talk to different people; you are able to communicate different ideas.” Pan now runs Alo7, a K-12 ed-tech company with the same mission of teaching English. Unlike many other firms, though, it seeks to move away from test-oriented learning and instead foster creativity, leadership, and other soft skills. The company offers products and services for both physical and digital classrooms. It has an online learning platform, paired with a collection of textbooks, for example, that help students learn and practice their language skills. It also has a service that connects up to three pupils via video with English tutors abroad for regular group lessons. To date, it has served some 15 million students and teachers and partnered with 1,500 institutions nationally.
Unlike Squirrel, Alo7’s online learning platform is meant to supplement a traditional classroom. Knowledge that can be exercised through adaptive learning, like vocabulary words, is practiced at home through the app. So are skills like pronunciation, which can be refined through speech-recognition algorithms. But anything requiring creativity, like writing and conversation, is learned in the classroom. The teacher’s contribution is vital. Pan offers a parallel: “There’s lots of medical technology used in hospitals, but we cannot say the machine is better than the doctors. It’s still a doctor’s assistant tool.” Pan’s ultimate vision for AI in education is to get rid of standardized tests entirely. “Why do we test people for two or three hours to determine if they are good or bad?” he asks. He thinks AI will eventually create flexible learning environments that are as good for sensitive and creative students as for precise and analytical ones. Education will no longer be about competition, he says.
Last year Alo7 began to experiment more. It added face and voice analysis to its video tutoring sessions to produce summary reports of each lesson. Algorithms measure how much time the students spoke English in class, the accuracy of their English pronunciation, and basic indicators of their engagement and joy, such as the number of times they opened their mouth to speak and laugh. Earlier this year, the company created several physical classrooms equipped with cameras and microphones to produce similar analyses. Teachers get reports on their own performance, too.
I go see one of Alo7’s intelligent classrooms for myself. It’s small but bursting with color. The walls are illustrated with the company's mascots, five cartoon companions with distinct personalities, which appear throughout the company’s educational materials. There are neither tables nor chairs, just a bench that runs along the back wall. At the front are a whiteboard and two TVs for displaying the day’s curriculum.
There are no classes in session, but a company employee plays me some short clips of elementary school classes. In one, six students sit on a bench and practice saying the names of different animals. “Bird, bird, bird!” they chant with their teacher as she flaps her arms like wings. “Turtle, turtle, turtle!” they continue as the screen changes its display to a cartoon turtle. The teacher-student interactions take the foreground; the AI purposely fades, unnoticed, into the back.
Dede says the kind of data generated in an intelligent classroom could be useful, but he cautions that cameras and other sensors could also be misused to judge a student’s emotions or state of mind, applications that have little grounding in science and could lead to over-surveillance. Pan agrees that it’s important to be careful: “That’s why we provide the data mainly for teachers and not students, because we haven’t yet run scientific tests.” Pan tells me he doesn’t have plans to expand Alo7 beyond China. The domestic market alone is enough of a challenge when he’s selling an education philosophy that goes against the mainstream. But he’s begun to see a shift in the national conversation. As government leaders have sought new ways to stimulate innovation, the idea of a “quality-oriented education”—one that emphasizes creativity and the liberal arts—has gained momentum.
In February of last year, China’s education ministry passed a series of reforms, including stricter licensing for tutors, aimed at reducing the obsession with testing.
Earlier this month, the government also unveiled a set of guidelines to focus more on physical, moral, and artistic education, and less on exams. Though critics point out it still hasn’t eliminated the gaokao , Pan is optimistic about its intent to change. Alo7 is also ready to help the country search for new paths forward.
“We want to change the future of Chinese education with technology,” Pan always says. China’s current mass experiment in AI education, and the choices it must make, might also change education for the world.
Corrections: Alo7’s video-tutoring service follows a group format; it’s not one-on-one. After publication, Squirrel clarified that Dan Bindman was the editorial director at ALEKS, not a co-founder as stated on Squirrel's website, and both companies use fewer knowledge points than the former previously shared. ALEKS also has more data than Squirrel.
by Karen Hao. This story was part of our January/February 2020 issue.
"
|
1,484 | 2,019 |
"Giving medicine a dose of AI | MIT Technology Review"
|
"https://www.technologyreview.com/2019/04/24/135725/giving-medicine-a-dose-of-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Giving medicine a dose of AI By Karen Weintraub archive page Conceptual image of medical scans and artificial intelligence Jamie Jones For years artificial intelligence has been seen as the “next big thing” in medicine. Now some MIT professors, students, and alumni are stepping up to make sure it really will be.
Through the Abdul Latif Jameel Clinic for Machine Learning in Health , or J-Clinic, announced last fall, researchers from across MIT will be pursuing projects aimed at developing novel machine-learning methods to improve clinical care, design new medicines, and reduce health-care costs. The initiative will apply AI to a wide range of diseases and builds on ongoing MIT research, including work on drug discovery and early advances in cancer diagnostics by Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science.
Barzilay says it’s time for artificial intelligence to become a standard part of cancer care. “In every single cancer center in the US, be it a community clinic or the top cancer center in the country, there is a serious need to bring AI in,” says Barzilay, a member of both CSAIL and MIT’s Koch Institute for Integrative Cancer Research. After her breast cancer was missed for several years, she began using image-processing algorithms to analyze mammograms. The idea is to go beyond what humans can see in a scan to detect early changes in tissue that mark the path toward cancer.
Institute Professor and Nobel laureate Phillip Sharp, who chairs J-Clinic’s advisory board, says there’s no doubt artificial intelligence and deep learning can—and must—transform medical care. Sharp says that by contributing to earlier diagnoses, AI can improve patients’ quality and length of life. Specifically, he thinks it can transform radiology, make sense of molecular and genetic data to distinguish between malignant and harmless cells, and spot patterns in medical data that can warn of impending problems. He also thinks it can improve the cost-efficiency of medical care by diagnosing disease earlier, when treatment is less expensive and more effective. “We have to get more efficient in health-care delivery,” he says.
Through J-Clinic, he says, MIT will play a crucial role in developing these technologies and training their users, just as MIT has done in molecular biology, cellular biology, genetics, and biotechnology. Barzilay and James Collins, the Termeer Professor of Medical Engineering and Science, serve as faculty co-leads for J-Clinic, a major collaborative effort between MIT and Community Jameel, the social-enterprise organization founded and chaired by Mohammed Abdul Latif Jameel ’78.
Machine learning arrives in health care AI has taken longer to be applied in health care than most other industries because the stakes are so high. If Amazon tries out a new algorithm that doesn’t work, the company might be out some money. In medicine, people might die. That’s why only 5% of US hospitals reported using some form of artificial intelligence in 2017. But things are finally starting to change. Major hospitals and pharmaceutical companies now invoke AI when they talk about their future. Conferences get broad attendance, and medical AI startups are becoming more common. Computers can now see and read—not as well as people, but they’re getting there, says Michael Hayes, SM ’96, who launched the nonprofit startup CancerAI in 2018 to bring artificial-intelligence tools to market.
Today’s artificial intelligence is based on algorithms that parse gigantic data sets. So-called deep learning, which has advanced substantially in the last decade, allows researchers to draw conclusions from huge amounts of data. Visual and natural-language processing technologies have also improved dramatically. And data storage has gotten substantially cheaper.
“Ten years ago, there weren’t the amount of electronic medical records there are today,” Hayes says. “And even if they did exist, we didn’t have algorithms that could understand doctors’ notes very well and didn’t have computers cheap enough.” The scene now looks very different on all those fronts. “What 10 years ago would have been a supercomputer costing $1 million—that level of computing now can be purchased for a few thousand dollars,” he says. “That has changed the game in a big way.” From left, J-Clinic faculty co-leads Regina Barzilay and James Collins and J-Clinic advisory board chair and Institute Professor Phillip Sharp.
Since it was formed last fall, J-Clinic, which is part of the MIT Quest for Intelligence and is chaired by the dean of the School of Engineering, Anantha Chandrakasan, has put out a request for proposals within MIT. So far, professors and students have proposed 43 research projects that would use these advances to benefit patients. Improving diagnosis, targeting treatments to individual patients, and understanding disease progression “are all prediction problems,” Barzilay says. And prediction is where AI excels.
One issue, though, has been that customizing machine-learning algorithms for clinical settings involves training them with what she describes as “massive amounts of manually annotated data.” J-Clinic researchers plan to develop algorithms that aren’t as dependent on hand-labeled data—and that can use data from related domains to fill in gaps in the target area. “Rather than training supervised-learning systems for each individual hospital system and for each disease, we are developing algorithms that can be easily adapted to new settings and different diseases,” Barzilay explains.
Protecting patient privacy and ensuring that the data reflects the diversity of the population are also key goals of J-Clinic. Researchers are developing algorithms that can perform computations on encrypted data, so patients don’t need to fear intimate health information being left in the open. And J-Clinic is building a large, international network spanning everything from rural clinics to major urban academic hospitals to implement and test the algorithms they develop. The hope is that this will make their work much more generalizable than other health-care algorithms published to date, most of which are trained on data from a single hospital.
Applying AI to mammography The work already under way in the lab of Barzilay, a 2017 MacArthur “genius grant” winner and a leader in the AI field, offers a glimpse into the potential that J-Clinic and startups like CancerAI can help unlock. One area of her research involves using machine learning to accelerate drug discovery. That work helps developers zero in on molecules with promising properties for fighting cancer and a wide range of other diseases. (See “AI is reinventing the way we invent,” MIT Technology Review, March/April 2019.) On the cancer diagnostics front, she’s also one of the first AI researchers to develop a tool that actually helps people.
In a paper published last year in Radiology, she and her colleagues, including researchers from Massachusetts General Hospital, used AI to develop a method for assessing the density of breast tissue. Today, mammograms miss about 15% of breast tumors—and they miss more than half, according to several studies, if the breast tissue is dense, which makes tumors harder to see. More than 40% of American women have dense breast tissue, which also puts them at higher risk for breast cancer.
Barzilay and her colleagues used more than 41,000 digital mammograms, evaluated and classified by experts, to train a deep-learning algorithm to assess density so that women who may require extra screening can be identified. In a six-month trial looking at over 10,000 mammograms, the model agreed with Mass. General radiologists 94% of the time, making it the first time this kind of deep learning had been used successfully in a clinical setting. Barzilay and her collaborators now hope to scale their system up to other hospitals.
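The article doesn't give implementation details, but density assessment of this kind is typically framed as ordinary image classification: a convolutional network fine-tuned on expert-labeled scans. The sketch below is illustrative only; the four density categories, the ResNet backbone, and every name in it are assumptions, not details from Barzilay's paper.

```python
# Illustrative sketch only: fine-tuning a standard CNN to grade breast density.
# The backbone, the four density classes (e.g., BI-RADS a-d), and the data
# pipeline are assumptions, not details from the Radiology paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_DENSITY_CLASSES = 4  # assumed label set

# weights=None keeps the sketch self-contained; in practice one would start
# from pretrained weights and fine-tune on the labeled mammograms.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_DENSITY_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of (mammogram, expert density label) pairs."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a real batch; grayscale scans would be replicated
# to three channels or the first conv layer adapted.
dummy_images = torch.randn(8, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_DENSITY_CLASSES, (8,))
print(train_step(dummy_images, dummy_labels))
```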
Barzilay is also using AI to detect the earliest changes on the road to breast cancer—changes that a pathologist can’t see. “Cancer doesn’t grow from today to tomorrow. It’s actually a very long process, which makes a lot of changes in tissue,” she told the audience at the “Hello World, Hello MIT” conference celebrating the launch of the MIT Schwarzman College of Computing in February. She showed two mammograms, one from a woman who had gone on to get breast cancer two years after the scan. “The logical question is: can you take the machine and train it on the images, when we know the outcome in two years or five years, to say what is there to come?” As it turns out, she said, “the machine was able to do this task pretty well.” Barzilay, her grad student Adam Yala ’16, MEng ’17, and Constance Lehman, head of breast imaging at Mass. General, developed a model that identified characteristics often preceding the appearance of cancer—and if those characteristics show up in a mammogram, the patient can be flagged.
In late February, physicians at Mass. General began testing that risk model. A woman whose mammogram places her in the riskiest 20%, Barzilay says, has “a very nontrivial chance to get breast cancer.” Now, doctors at Mass. General are working to figure out how to use that information to change her odds.
The promise and perils of health-care AI That vision for artificial intelligence is a far cry from the current use of digital technology in doctor’s offices, which is mostly limited to electronic medical records that have yet to live up to their potential. Such systems can leave doctors burned out, forcing them to devote such long hours to inputting data that they spend more time with their computer screens than their patients.
Commercializing AI without the profit motive Nonprofit develops AI tools to fight cancer.
In 2017, serial entrepreneur Michael Hayes, SM ’96, went looking for a new business opportunity when he was ready to move on from his latest company, an AI software firm. As a throat cancer survivor, he decided machine learning had matured enough to warrant focusing his new company on using AI to fight cancer.
But after doing his due diligence, he realized he could attract mission-driven employees and get better access to medical data by founding his company, CancerAI, as a nonprofit. That decision paid off in unanticipated ways, netting him free office space from WeWork, offers of pro bono legal work, and programmers volunteering to work for nothing.
“I expected to be able to hire great people, but I didn’t expect people from outside to say ‘I’d be willing to volunteer nights and weekends, because I want to help,’” says Hayes, who holds a master’s in environmental engineering from MIT and a master’s in business and policy from Tufts. “I guarantee you that doesn’t happen in for-profit entities.” Hayes, who chairs CancerAI’s board, says the company is using Regina Barzilay’s research as one of its foundations but hasn’t chosen its first product. (Barzilay also serves as one of CancerAI’s advisors.) The company aims to start in the area of diagnostics, perhaps by using medical records data to flag people whose biometrics suggest they may be at high risk for particular cancers. Early testing that finds cancers before they’ve spread, he says, “is possibly the shortest path to reducing mortality.” But on the plus side, electronic records have allowed hospitals to amass huge quantities of patient data that AI researchers hope will eventually pay dividends for patients, caregivers, hospitals, and insurers.
As J-Clinic and startups like CancerAI begin tapping into that data, Collins, J-Clinic’s other faculty co-lead, says he sees J-Clinic not just bringing together AI experts, medical experts, and data sets to advance medical research but also helping translate that research into the clinical setting. It’ll do this, he says, by getting early technologies into hospitals for testing and validation, and by facilitating the launch of companies to commercialize them. He also envisions J-Clinic initiating a public discussion around what he calls the “promise and perils of AI and health care”—and asking hard questions about how to enhance existing care, reduce costs, protect patient privacy, and ethically obtain useful data.
Although technological innovation usually drives up medical costs, he hopes that artificial intelligence will be an exception, perhaps by maximizing bed usage, limiting the time doctors spend on administrative duties, and developing drugs more economically. “I’m curious as to the ways AI can help with efficiencies in health care—whether it’s bed usage, scheduling, billings—to squeeze out the administrative overhead there,” says Collins, whose wife is a physician. He thinks that the administrative burden of electronic medical records could be reversed with better technology, potentially leading to savings.
In his own lab, Collins, a synthetic biologist, plans to use AI platforms to better identify novel classes of antibiotics and cancer drugs, among others. “I’m keen to explore in what way AI can be used more broadly as a useful assistant in the context of research and potentially in the context of medicine,” he says.
by Karen Weintraub. This story was part of our May/June 2019 issue.
Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Keep Reading Most Popular This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
By Melissa Heikkilä archive page Everything you need to know about artificial wombs Artificial wombs are nearing human trials. But the goal is to save the littlest preemies, not replace the uterus.
By Cassandra Willyard archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products.
By Zeyi Yang archive page How to fix the internet If we want online discourse to improve, we need to move beyond the big platforms.
By Katie Notopoulos archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
"
|
1,485 | 2,019 |
"Emergent tool use from multi-agent interaction"
|
"https://openai.com/blog/emergent-tool-use"
|
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Illustration: Ben Barry Research Emergent tool use from multi-agent interaction We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in our new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which we did not know our environment supported. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.
September 17, 2019. In our environment, agents play a team-based hide-and-seek game. Hiders (blue) are tasked with avoiding line-of-sight from the seekers (red), and seekers are tasked with keeping vision of the hiders. There are objects scattered throughout the environment that hiders and seekers can grab and lock in place, as well as randomly generated immovable rooms and walls that agents must learn to navigate. Before the game begins, hiders are given a preparation phase where seekers are immobilized to give hiders a chance to run away or change their environment.
There are no explicit incentives for agents to interact with objects in the environment; the only supervision given is through the hide-and-seek objective. Agents are given a team-based reward; hiders are given a reward of +1 if all hiders are hidden and -1 if any hider is seen by a seeker. Seekers are given the opposite reward, -1 if all hiders are hidden and +1 otherwise. To confine agent behavior to a reasonable space, agents are penalized if they go too far outside the play area. During the preparation phase, all agents are given zero reward.
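Taken literally, the reward scheme described above is simple to write down. The sketch below is a minimal illustration: the +1/-1 team rewards, the zero reward during the preparation phase, and the out-of-bounds penalty come from the post, while the function signature and data layout are assumptions.

```python
# Minimal sketch of the team-based hide-and-seek reward described above.
# Only the +1/-1 team reward, the preparation-phase zero reward, and the
# out-of-bounds penalty come from the post; everything else is assumed.

def team_rewards(any_hider_seen, in_preparation_phase, num_hiders, num_seekers,
                 out_of_bounds_penalties):
    """Return per-agent rewards as (hider_rewards, seeker_rewards)."""
    if in_preparation_phase:
        hider_r, seeker_r = 0.0, 0.0
    elif any_hider_seen:
        hider_r, seeker_r = -1.0, +1.0
    else:
        hider_r, seeker_r = +1.0, -1.0

    # Penalize individual agents that wandered too far outside the play area.
    hiders = [hider_r - p for p in out_of_bounds_penalties[:num_hiders]]
    seekers = [seeker_r - p for p in out_of_bounds_penalties[num_hiders:]]
    return hiders, seekers

# Example: 2 hiders, 2 seekers, one seeker left the play area this step.
print(team_rewards(any_hider_seen=False, in_preparation_phase=False,
                   num_hiders=2, num_seekers=2,
                   out_of_bounds_penalties=[0, 0, 0, 0.5]))
```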
Autocurricula and emergent behavior As agents train against each other in hide-and-seek, as many as six distinct strategies emerge.
Each new strategy creates a previously nonexistent pressure for agents to progress to the next stage. Note that there are no direct incentives for agents to interact with objects or to explore; rather, the emergent strategies shown below are a result of the autocurriculum induced by multi-agent competition and the simple dynamics of hide-and-seek.
Emergence in a simple environment We first show emergent strategies learned in a simple environment with 2 hiders, 2 seekers, 2 boxes, 1 ramp, and a room that has between 1–2 randomly placed doors.
Emergence in the full environment Next, we show emergent strategies learned in an environment with far more randomization, including 1–3 hiders, 1–3 seekers, 3–9 blocks, 2 ramps, and randomized rooms and doors. In this full environment, agents go through two more phases of emergent strategy than in the previous simple environment.
Training hide-and-seek agents We use the same training infrastructure and algorithms used to train OpenAI Five and Dactyl.
However, in our environment each agent acts independently, using its own observations and hidden memory state. Agents use an entity-centric state-based representation of the world, which is permutation invariant with respect to objects and other agents.
Each object is embedded and then passed through a masked residual self attention block, similar to those used in transformers , where the attention is over objects instead of over time. Objects that are not in line-of-sight and in front of the agent are masked out such that the agent has no information of them.
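The post does not include code, but the entity-centric masked attention it describes can be sketched roughly as follows. The embedding sizes, the use of PyTorch's built-in multi-head attention, and the pooling at the end are assumptions, not the authors' implementation.

```python
# Rough sketch of attention over object/agent entities with visibility masking.
# Dimensions, module choices, and names are assumptions, not OpenAI's code.
import torch
import torch.nn as nn

class EntityAttention(nn.Module):
    def __init__(self, entity_dim=16, embed_dim=64, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(entity_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, entities, visible):
        # entities: (batch, num_entities, entity_dim)
        # visible:  (batch, num_entities) bool; unseen entities cannot be attended to
        x = self.embed(entities)
        attn_out, _ = self.attn(x, x, x, key_padding_mask=~visible)
        # Residual connection, then mean-pool over entities (permutation invariant).
        # A fuller version would also exclude masked entities from the pooling.
        return (x + self.out(attn_out)).mean(dim=1)

obs = torch.randn(2, 5, 16)                       # 2 agents, 5 entities each
vis = torch.tensor([[1, 1, 0, 1, 0], [1, 0, 1, 1, 1]], dtype=torch.bool)
print(EntityAttention()(obs, vis).shape)          # torch.Size([2, 64])
```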
Agent policies are trained with self-play and Proximal Policy Optimization.
During optimization, agents can use privileged information about obscured objects and other agents in their value function.
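For readers unfamiliar with PPO, its core update is a clipped surrogate objective. The sketch below shows that objective, with a comment marking where the privileged (full-state) value estimates would enter; the hyperparameters and names are assumptions, not the authors' training code.

```python
# Sketch of PPO's clipped surrogate loss. The value targets would come from a
# critic that reads the privileged full state, while the policy sees only the
# masked observation. Names and hyperparameters are assumptions.
import torch

def ppo_loss(new_logp, old_logp, advantages, values, returns,
             clip_eps=0.2, value_coef=0.5):
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    # `values` is the output of the privileged (full-state) value function.
    value_loss = value_coef * (returns - values).pow(2).mean()
    return policy_loss + value_loss

# Example with toy tensors standing in for one minibatch:
n = 4
print(ppo_loss(torch.randn(n), torch.randn(n), torch.randn(n),
               torch.randn(n), torch.randn(n)))
```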
We found that large scale training was critical in agents progressing through the various stages of emergence. Below we show both the time and number of episodes it takes agents to reach stage 4 (ramp defense) for various batch sizes. We find increasing batch size gives a drastic speedup in wall-clock time to convergence, though doesn’t affect the sample efficiency greatly at or above 32k. However, we found that batch sizes of 8k and 16k never reached stage 4 in the allotted number of episodes.
Increasing batch size speeds up time to convergence Multi-agent competition vs. intrinsic motivation In this work we show evidence that agents learn complex strategies and counterstrategies through a self-supervised autocurriculum in hide-and-seek. Another method to learn skills in an unsupervised manner is intrinsic motivation , which incentivizes agents to explore with various metrics such as model error or state counts. We ran count-based exploration in our environment, in which agents keep an explicit count of states they’ve visited and are incentivized to go to infrequently visited states. The primary modeling choice to tune in this setting is the state representation; for instance, in our first baseline we only include 2-D box positions in the state, such that agents are only incentivized to interact with and move boxes to novel positions. We then compare this to a count-based policy which takes the full state given to the agents that play hide-and-seek.
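A count-based bonus of the kind described here usually discretizes the chosen state representation and rewards rarely visited cells. The sketch below assumes a grid discretization of 2-D box positions and a 1/sqrt(count) bonus, both common choices rather than details taken from the paper.

```python
# Sketch of a count-based exploration bonus over discretized 2-D box positions.
# The grid resolution and the 1/sqrt(count) bonus form are common choices,
# assumed here rather than taken from the paper.
from collections import defaultdict
import math

class CountBonus:
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.counts = defaultdict(int)

    def bonus(self, box_positions):
        """box_positions: list of (x, y) floats; returns an intrinsic reward."""
        key = tuple((int(x // self.cell_size), int(y // self.cell_size))
                    for x, y in box_positions)
        self.counts[key] += 1
        return 1.0 / math.sqrt(self.counts[key])

explorer = CountBonus()
print(explorer.bonus([(0.2, 3.7), (5.1, 1.0)]))  # 1.0 (first visit to these cells)
print(explorer.bonus([(0.3, 3.9), (5.0, 1.2)]))  # ~0.707 (same cells revisited)
```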
As can be seen, agents trained in hide-and-seek qualitatively center around far more human interpretable behaviors such as shelter construction, whereas agents trained with intrinsic motivation move objects around in a seemingly undirected fashion. Furthermore, as the state space increases in complexity, we find that intrinsic motivation methods have less and less meaningful interactions with the objects in their environment. For this reason, we believe multi-agent competition will be a more scalable method for generating human-relevant skills in an unsupervised manner as environments continue to increase in size and complexity.
Transfer and fine-tuning as evaluation In the previous section, we qualitatively compare behaviors learned in hide-and-seek to those learned with intrinsic motivation. However, as environments increase in scale, so will the difficulty in qualitatively measuring progress. Tracking reward is an insufficient evaluation metric in multi-agent settings, as it can be ambiguous in indicating whether agents are improving evenly or have stagnated. Metrics like ELO or Trueskill can more reliably measure whether performance is improving relative to previous policy versions or other policies in a population; however, these metrics still do not give insight into whether improved performance is caused by new adaptations or improving previously learned skills. Finally, using environment-specific statistics such as object movement can also be ambiguous (for example, the choice to track absolute movement does not illuminate which direction agents moved), and designing sufficient metrics will become difficult and costly as environments scale.
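As one concrete example of the rating metrics mentioned above, the standard Elo update after a single game between two policies looks like this; the K-factor is an assumed default, not a value from the paper.

```python
# Standard Elo update between two policies after one game.
# The K-factor of 32 is a common default, assumed here.
def elo_update(rating_a, rating_b, score_a, k=32):
    """score_a is 1.0 if A won, 0.5 for a draw, 0.0 if A lost."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    rating_a_new = rating_a + k * (score_a - expected_a)
    rating_b_new = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a_new, rating_b_new

# A newer policy (1500) beats an older checkpoint (1500):
print(elo_update(1500, 1500, 1.0))  # (1516.0, 1484.0)
```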
We propose using a suite of domain-specific intelligence tests that target capabilities we believe agents may eventually acquire. Transfer performance in these settings can act as a quantitative measure of representation quality or skill, and we compare against pretraining with count-based exploration as well as a trained from scratch baseline.
Though the hide-and-seek agent performs better on many of the transfer tasks, it does not drastically improve performance or convergence time. From viewing its behavior, we know it has the latent skill to move objects in a precise manner to construct shelter in the hide-and-seek game; however, it does not have the capability to use this skill in other contexts when trained with a low number of samples.
We believe the cause for the mixed transfer results is rooted in agents learning skill representations that are entangled and difficult to fine-tune. As future environments become more diverse and agents must use skills in more contexts, we believe we will see more generalizable skill representations and more significant signal in this evaluation approach. We additionally open-source the evaluation tasks as a way to evaluate learning progress in our environment.
Surprising behaviors We’ve shown that agents can learn sophisticated tool use in a high fidelity physics simulator; however, there were many lessons learned along the way to this result. Building environments is not easy and it is quite often the case that agents find a way to exploit the environment you build or the physics engine in an unintended way.
Authors Bowen Baker Ingmar Kanitscheider Todor Markov Yi Wu Glenn Powell Bob McGrew Igor Mordatch Acknowledgments Feedback Thanks to the following for feedback on this post and paper: Pieter Abbeel, Jeff Clune, Jessica Hamrick, Joel Leibo, Natasha Jaques, Calvin French-Owen, Azalia Mirhoseini, Ilya Sutskever, Greg Brockman, Jack Clark, Brooke Chan & Karson Elmgren Other Video: Glenn Powell, Leo Ogawa Lillrank, Ivy Lillrank, Andie Lee Editor: Ashley Pilipiszyn Design: Justin Jay Wang Cover Artwork: Ben Barry
"
|
1,486 | 2,017 |
"DeepMind’s Groundbreaking AlphaGo Zero AI Is Now a Versatile Gamer | MIT Technology Review"
|
"https://www.technologyreview.com/the-download/609697/deepminds-groundbreaking-alphago-zero-ai-is-now-a-versatile-gamer"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts DeepMind’s Groundbreaking AlphaGo Zero AI Is Now a Versatile Gamer By Jamie Condliffe archive page Don’t challenge this algorithm to a board game. Because chances are it can learn to outsmart you inside a day.
Earlier this year, we reported that Alphabet’s machine-learning subsidiary, DeepMind, had made a huge advance. Using an artificial-intelligence approach known as reinforcement learning, it had enabled its AlphaGo software to develop superhuman skills for the game of Go without needing human data. Armed with just the rules of the game, the AI was able to make random plays until it developed champion-beating strategies. The new software was dubbed AlphaGo Zero because it didn’t need any human input.
Now, in a paper published on arXiv , the DeepMind team reports that the software has been generalized so that it can learn other games. It describes two new examples in which AlphaGo Zero was unleashed on the games of chess and shogi, a Japanese game that’s similar to chess. In both cases the software was able to develop superhuman skills within 24 hours, and then “convincingly defeated a world-champion program.” It’s perhaps not too surprising that the AI was able to pick up killer skills for the two games so quickly: both chess and shogi are less complex than Go. But DeepMind’s ability to generalize the software, so that it can master different games, hints at increasingly adaptable kinds of machine intelligence.
That said, there are still games that AI hasn’t yet mastered. Perhaps the biggest challenge—which DeepMind is already working on—lies in massively complex online strategy games like Starcraft, which humans are still superior at.
As we’ve explained in the past , machines will need to develop new skills, such as memory and planning, in order to steal away that crown. But don’t expect it to take too long.
"
|
1,487 | 2,017 |
"AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help | MIT Technology Review"
|
"https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help By Will Knight archive page www.alphagomovie.com AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.
What’s really cool is how AlphaGo Zero did it. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind , started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.
The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.
“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind. Hassabis says the techniques used to build AlphaGo Zero are powerful enough to be applied in real-world situations where it’s necessary to explore a vast landscape of possibilities, including drug discovery and materials science. The research behind AlphaGo Zero is published today in the journal Nature.
Remarkably, during this self-teaching process AlphaGo Zero discovered many of the tricks and techniques that human Go players have developed over the past several thousand years. “A few days in, it rediscovers known best plays, and in the final days goes beyond those plays to find something even better,” Hassabis says. “It’s quite cool to see.” DeepMind, based in London, was acquired by Google in 2014. The company is focused on making big strides in AI using game play, simulation, and machine learning; it has hired hundreds of AI researchers in pursuit of this goal. Developing AlphaGo Zero involved around 15 people and probably millions of dollars’ worth of computing resources, Hassabis says.
Both AlphaGo and AlphaGo Zero use a machine-learning approach known as reinforcement learning (see “ 10 Breakthrough Technologies 2017: Reinforcement Learning ”) as well as deep neural networks. Reinforcement learning is inspired by the way animals seem to learn through experimentation and feedback, and DeepMind has used the technique to achieve superhuman performance in simpler Atari games.
Mastering the board game Go was especially significant, however, because the game is so complex and because the best players make their moves so instinctively. The rules of good play, in other words, cannot easily be explained or written in code.
Reinforcement learning also shows promise for automating the programming of machines in many other contexts, including those where it would be impractical to program them by hand. It is already being tested as a way to teach robots to grasp awkward objects, for example, and as a means of conserving energy in data centers by reconfiguring hardware on the fly. In many real-world situations, however, there may not be a large number of examples to learn from, meaning machines will have to learn for themselves. That’s what makes AlphaGo Zero interesting.
“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver , the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.” To achieve Go supremacy, AlphaGo Zero simply played against itself, randomly at first. Like the original, it used a deep neural network and a powerful search algorithm to pick the next move. But in AlphaGo Zero, a single neural network took care of both functions.
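The article doesn't describe the architecture, but a single network handling both functions can be pictured as one shared trunk with two heads: a policy head scoring candidate moves and a value head predicting the game outcome. The toy sketch below is illustrative only; the layer sizes and board encoding are placeholders, not DeepMind's design.

```python
# Toy sketch of a single network with a policy head (next-move scores) and a
# value head (predicted game outcome). Layer sizes and the single-plane board
# encoding are placeholders, not DeepMind's architecture.
import torch
import torch.nn as nn

BOARD = 19  # 19x19 Go board

class PolicyValueNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Linear(channels * BOARD * BOARD, BOARD * BOARD + 1)  # moves + pass
        self.value_head = nn.Sequential(
            nn.Linear(channels * BOARD * BOARD, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),  # predicted outcome in [-1, 1]
        )

    def forward(self, board):
        x = self.trunk(board).flatten(1)
        return self.policy_head(x), self.value_head(x)

net = PolicyValueNet()
logits, value = net(torch.zeros(1, 1, BOARD, BOARD))
print(logits.shape, value.shape)  # torch.Size([1, 362]) torch.Size([1, 1])
```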
Martin Mueller , a professor at the University of Alberta in Canada who has done important work on Go-playing software, is impressed by the design of AlphaGo Zero and says it advances reinforcement learning. “The architecture is simpler, yet more powerful, than previous versions,” he says.
DeepMind is already the darling of the AI industry, and its latest achievement is sure to grab headlines and spark debate about progress toward much more powerful forms of AI.
There are reasons to take the announcement cautiously, though.
Pedro Domingos , a professor at the University of Washington, points out that the program still needs to play many millions of games in order to master Go—many more than an expert human player does. This suggests that the intelligence the program employs is fundamentally different somehow.
“It’s a nice illustration of the recent progress in deep learning and reinforcement learning, but I wouldn’t read too much into it as a sign of what computers can learn without human knowledge,” Domingos says. “What would be really impressive would be if AlphaGo beat [legendary South Korean champion] Lee Sedol after playing roughly as many games as he played in his career before becoming a champion. We’re nowhere near that.” Indeed, both Silver and Hassabis concede that finding ways for machines to learn from much less data will be important in their ongoing quest to master intelligence. This may involve developing novel approaches to let machines transfer what they have learned in one domain to another, or to learn from observing others (both humans and other AIs).
But despite the work still to be done, Hassabis is hopeful that within 10 years AI will play an important role in solving important problems in science, medicine, or other fields. “I hope that these kinds of algorithms, and future versions, will be routinely working with us advancing the frontiers of science and medicine,” he says. “Maybe all kinds of things will have been partly designed and discovered by these kinds of algorithms, working in tandem with very smart humans.”
"
|
1,488 | 2,016 |
"Google’s AI Masters the Game of Go a Decade Earlier Than Expected | MIT Technology Review"
|
"https://www.technologyreview.com/s/546066/googles-ai-masters-the-game-of-go-a-decade-earlier-than-expected"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google’s AI Masters the Game of Go a Decade Earlier Than Expected By Will Knight archive page Google has taken a brilliant and unexpected step toward building an AI with more humanlike intuition, developing a computer capable of beating even expert human players at the fiendishly complicated board game Go.
The objective of Go, a game invented in China more than 2,500 years ago, is fairly simple: players must alternately place black and white “stones” on a grid of 19 horizontal and 19 vertical lines with the aim of surrounding the opponent’s pieces, and avoiding having one’s own pieces surrounded. Mastering Go, however, requires endless practice, as well as a finely tuned knack of recognizing subtle patterns in the arrangement of the pieces spread across the board.
Google’s team has shown that the skills needed to master Go are not so uniquely human after all. Their computer program, called AlphaGo, beat the European Go champion, Fan Hui, five games to zero. And this March it will take on one of the world’s best players, Lee Sedol, in a tournament to be held in Seoul, South Korea.
“Go is the most complex and beautiful game ever devised by humans,” Demis Hassabis, head of the Google team, and himself an avid Go player, said at a press briefing. By beating Fan Hui, he added, “our program achieved one of the long-standing grand challenges of AI.” Hassabis also said the techniques used to create AlphaGo would lend themselves to his team’s effort to develop a general AI. “Ultimately we want to apply these techniques to important real-world problems,” he said. “Because the methods we used were general purpose, our hope is that one day they could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling” (see “ Could AI Solve the World’s Biggest Problems? ”).
Hassabis said the first way the technology might be applied at Google would involve the development of better software personal assistants. Such an assistant might learn a user’s preferences from his online behavior, and make more intuitive recommendations about products or events, he suggested.
Go is far more challenging for computers than, say, chess for two reasons: the number of potential moves each turn is far higher, and there is no simple way to measure material advantage. A player must therefore learn to recognize abstract patterns in hundreds of pieces placed across the board. And even experts often struggle to explain why a particular position seems advantageous or problematic.
Just a couple of years ago, in fact, most Go players and game programmers believed the game was so complex that it would take several decades before computers might reach the standard of a human expert player.
AlphaGo was developed by a team known as Google DeepMind, a group created after Google acquired a small AI U.K. startup called DeepMind in 2014. The researchers built AlphaGo using an extremely popular and successful machine-learning method known as deep learning combined with another simulation technique for modeling potential moves. Deep learning involves training a large simulated neural network to respond to patterns in data. It has proven very useful for image and audio processing, and many large tech companies are exploring new ways to apply the technique.
Two deep-learning networks were used in AlphaGo: one network learned to predict the next move, and the other learned to predict the outcome from different arrangements on the board. The two networks were combined using a more conventional AI algorithm to look ahead in the game for possible moves. A scientific paper written by researchers from Google that describes the work appears in the journal Nature today.
“The game of Go has an enormous search space, which is intractable to brute-force search,” says David Silver, another Google researcher who led the effort. “The key to AlphaGo is to reduce that search space to something more manageable. This approach makes AlphaGo much more humanlike than previous approaches.” When IBM’s Deep Blue computer mastered chess in 1997, it used hand-coded rules, and exhaustively searched through potential chess moves. AlphaGo essentially learned over time to recognize potentially advantageous patterns, and then simulated a limited number of potential outcomes.
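To make the division of labor concrete, here is a minimal, hypothetical sketch of policy- and value-guided move selection. The two "networks" are toy stand-in functions, and every name in it (legal_moves, policy, value, choose_move) is an illustrative placeholder rather than DeepMind's code; the only point is that a policy model prunes the candidate moves and a value model scores the resulting positions, instead of brute-force search over every continuation.

```python
# Illustrative sketch only: toy stand-ins for the two networks described above.
import random

State = tuple  # a toy game state; in Go this would encode the whole board


def legal_moves(state: State) -> list:
    return list(range(10))  # pretend every position offers ten legal moves


def apply_move(state: State, move: int) -> State:
    return state + (move,)


def policy(state: State, moves: list, k: int = 3) -> list:
    """Stand-in for the move-prediction network: keep only k promising moves."""
    return random.Random(hash(state)).sample(moves, k)


def value(state: State) -> float:
    """Stand-in for the outcome-prediction network: estimated win chance in [0, 1]."""
    return random.Random(hash(state)).random()


def choose_move(state: State) -> int:
    # Search only the policy's candidates and rank them by the value of the
    # position each move leads to -- the "reduced search space" idea.
    candidates = policy(state, legal_moves(state))
    return max(candidates, key=lambda m: value(apply_move(state, m)))


print(choose_move(()))  # prints one of the candidate moves
```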
Google’s achievement has been met with congratulations and some astonishment by other researchers in the field.
“On the technical side, this work is a monumental contribution to AI,” says Ilya Sutskever, a leading AI researcher and the director of a new nonprofit called OpenAI (see “ Innovators Under 35: Ilya Sutskever ”). Sutskever says the work was especially important because AlphaGo essentially taught itself how to win. “The same technique can be used to achieve extremely high performance on many other games as well,” he says.
Michael Bowling , a professor of computer science at the University of Alberta in Canada who recently developed a program capable of beating anyone at heads-up limit poker, was also excited by the achievement. He believes that the approach should indeed prove useful in many areas where machine learning is applied. “A lot of what we would traditionally think of as human intelligence is built around pattern matching,” he says. “And a lot of what we would think of as learning is having seen these patterns in the past, and being able to realize how they connect to a current situation.” One aspect of the result worth noting is that it combines deep learning with other techniques, says Gary Marcus, a professor of psychology at New York University and the cofounder and CEO of Geometric Intelligence , an AI startup that is also combining deep learning with other methods (see “ Can This Man Make AI More Human? ”).
“This is not a so-called end-to-end deep-learning system,” Marcus says. “It’s a carefully structured, modular system with some thoughtful hand-engineering on the front end. Which is, when you think about it, quite parallel to the human mind: rich, modular, with a bit of tweaking by evolution, rather than just a bunch of neurons randomly interconnected and tuned entirely by experience.” Google isn’t the only company using deep learning to develop a Go-playing AI, either. Facebook has previously said that it has a researcher working on such a system, and last night both Yann LeCun , director of AI research at Facebook, and CEO Mark Zuckerberg posted updates on the effort. Facebook’s effort is at an earlier stage, but it also combines deep learning with another technique.
Seeing AI master Go may also lead to some existential angst. During the press briefing announcing the news, Hassabis was faced with questions about the long-term risks of the AI systems Google is developing. He said that the company was taking steps to mitigate those risks by collaborating with academics, by organizing conferences, and by working with an internal ethics board.
"
|
1,489 | 2,017 |
"AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help | MIT Technology Review"
|
"https://www.technologyreview.com/2017/10/18/148511/alphago-zero-shows-machines-can-become-superhuman-without-any-help"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help By Will Knight archive page www.alphagomovie.com AlphaGo wasn’t the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it’s a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none.
What’s really cool is how AlphaGo Zero did it. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind , started with nothing but a blank board and the rules of the game. It learned simply by playing millions of games against itself, using what it learned in each game to improve.
The new program represents a step forward in the quest to build machines that are truly intelligent. That’s because machines will need to figure out solutions to difficult problems even when there isn’t a large amount of training data to learn from.
“The most striking thing is we don’t need any human data anymore,” says Demis Hassabis, CEO and cofounder of DeepMind. Hassabis says the techniques used to build AlphaGo Zero are powerful enough to be applied in real-world situations where it’s necessary to explore a vast landscape of possibilities, including drug discovery and materials science. The research behind AlphaGo Zero is published today in the journal Nature.
Remarkably, during this self-teaching process AlphaGo Zero discovered many of the tricks and techniques that human Go players have developed over the past several thousand years. “A few days in, it rediscovers known best plays, and in the final days goes beyond those plays to find something even better,” Hassabis says. “It’s quite cool to see.” DeepMind, based in London, was acquired by Google in 2014. The company is focused on making big strides in AI using game play, simulation, and machine learning; it has hired hundreds of AI researchers in pursuit of this goal. Developing AlphaGo Zero involved around 15 people and probably millions of dollars’ worth of computing resources, Hassabis says.
Both AlphaGo and AlphaGo Zero use a machine-learning approach known as reinforcement learning (see “ 10 Breakthrough Technologies 2017: Reinforcement Learning ”) as well as deep neural networks. Reinforcement learning is inspired by the way animals seem to learn through experimentation and feedback, and DeepMind has used the technique to achieve superhuman performance in simpler Atari games.
Mastering the board game Go was especially significant, however, because the game is so complex and because the best players make their moves so instinctively. The rules of good play, in other words, cannot easily be explained or written in code.
Reinforcement learning also shows promise for automating the programming of machines in many other contexts, including those where it would be impractical to program them by hand. It is already being tested as a way to teach robots to grasp awkward objects, for example, and as a means of conserving energy in data centers by reconfiguring hardware on the fly. In many real-world situations, however, there may not be a large number of examples to learn from, meaning machines will have to learn for themselves. That’s what makes AlphaGo Zero interesting.
“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver , the lead researcher at DeepMind and a professor at University College London. “It’s able to create knowledge for itself from first principles.” To achieve Go supremacy, AlphaGo Zero simply played against itself, randomly at first. Like the original, it used a deep neural network and a powerful search algorithm to pick the next move. But in AlphaGo Zero, a single neural network took care of both functions.
Martin Mueller , a professor at the University of Alberta in Canada who has done important work on Go-playing software, is impressed by the design of AlphaGo Zero and says it advances reinforcement learning. “The architecture is simpler, yet more powerful, than previous versions,” he says.
DeepMind is already the darling of the AI industry, and its latest achievement is sure to grab headlines and spark debate about progress toward much more powerful forms of AI.
There are reasons to take the announcement cautiously, though.
Pedro Domingos, a professor at the University of Washington, points out that the program still needs to play many millions of games in order to master Go—many more than an expert human player does. This suggests that the intelligence the program employs is fundamentally different from human intelligence.
“It’s a nice illustration of the recent progress in deep learning and reinforcement learning, but I wouldn’t read too much into it as a sign of what computers can learn without human knowledge,” Domingos says. “What would be really impressive would be if AlphaGo beat [legendary South Korean champion] Lee Sedol after playing roughly as many games as he played in his career before becoming a champion. We’re nowhere near that.” Indeed, both Silver and Hassabis concede that finding ways for machines to learn from much less data will be important in their ongoing quest to master intelligence. This may involve developing novel approaches to let machines transfer what they have learned in one domain to another, or to learn from observing others (both humans and other AIs).
But despite the work still to be done, Hassabis is hopeful that within 10 years AI will play an important role in solving important problems in science, medicine, or other fields. “I hope that these kinds of algorithms, and future versions, will be routinely working with us advancing the frontiers of science and medicine,” he says. “Maybe all kinds of things will have been partly designed and discovered by these kinds of algorithms, working in tandem with very smart humans.”
"
|
1,490 | 2,019 |
"Can you make AI fairer than a judge? Play our courtroom algorithm game | MIT Technology Review"
|
"https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Can you make AI fairer than a judge? Play our courtroom algorithm game The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.
By Karen Hao and Jonathan Stray As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.
But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them—software engineers—are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness? This story attempts to offer an answer. And to do so, we need your help. We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair. (Don’t worry—this won’t involve looking at code!) The algorithm we’re examining is known as COMPAS, and it’s one of several different “risk assessment” tools used in the US criminal legal system.
At a high level, COMPAS is supposed to help judges determine whether a defendant should be kept in jail or be allowed out while awaiting trial. It trains on historical defendant data to find correlations between factors like someone’s age and history with the criminal legal system, and whether or not the person was rearrested. It then uses the correlations to predict the likelihood that a defendant will be arrested for a new crime during the trial-waiting period.
Footnote 1. Arrests vs. convictions: This process is highly imperfect. The tools use arrests as a proxy for crimes, but there are actually big discrepancies between the two because police have a history of disproportionately arresting racial minorities and of manipulating data. Rearrests, moreover, are often made for technical violations, such as failing to appear in court, rather than for repeat criminal activity. In this story, we oversimplify to examine what would happen if arrests corresponded to actual crimes.
This prediction is known as the defendant’s “risk score,” and it’s meant as a recommendation: “high risk” defendants should be jailed to prevent them from causing potential harm to society; “low risk” defendants should be released before their trial. (In reality, judges don’t always follow these recommendations, but the risk assessments remain influential.) Proponents of risk assessment tools argue that they make the criminal legal system more fair. They replace judges’ intuition and bias—in particular, racial bias—with a seemingly more “objective” evaluation. They also can replace the practice of posting bail in the US, which requires defendants to pay a sum of money for their release. Bail discriminates against poor Americans and disproportionately affects black defendants, who are overrepresented in the criminal legal system.
Footnote 2. ProPublica’s methodology: For defendants who were jailed before trial, ProPublica looked at whether they were rearrested within two years after their release. It then used that to approximate whether the defendants would have been rearrested pre-trial had they not been jailed.
As required by law, COMPAS doesn’t include race in calculating its risk scores. In 2016, however, a ProPublica investigation argued that the tool was still biased against blacks. ProPublica found that among defendants who were never rearrested, black defendants were twice as likely as white ones to have been labeled high-risk by COMPAS.
So our task now is to try to make COMPAS better. Ready? Let’s start with the same data set that ProPublica used in its analysis. It includes every defendant scored by the COMPAS algorithm in Broward County, Florida, from 2013 to 2014. In total, that’s over 7,200 profiles with each person’s name, age, race, and COMPAS risk score, noting whether the person was ultimately rearrested either after being released or jailed pre-trial.
To make the data easier to visualize, we’ve randomly sampled 500 black and white defendants from the full set.
We’ve represented each defendant as a dot.
Remember: all these dots are people accused (but not convicted) of a crime. Some will be jailed pre-trial; others will be released immediately. Some will go on to get rearrested after their release; others will not. We want to compare two things: the predictions (which defendants received “high” vs. “low” risk scores) and the real-world outcomes (which defendants actually got rearrested after being released).
COMPAS scores defendants on a scale of 1 to 10, where 1 roughly corresponds to a 10% chance of rearrest, 2 to 20%, and so on.
Let’s look at how COMPAS scored everyone.
Footnote 3. COMPAS’s scores: COMPAS was designed to make aggregate predictions about groups of people who share similar characteristics, rather than predictions about specific individuals. The methodology behind its scores and the recommendations for how to use them are more complicated than we had room to present; you can read about them at the link above.
Though COMPAS can only offer a statistical probability that a defendant will be rearrested pre-trial, judges, of course, have to make an all-or-nothing decision: whether to release or detain the defendant. For the purposes of this story, we are going to use COMPAS’s “high risk” threshold, a score of 7 or higher, to represent a recommendation that a defendant be detained.
From here on out, you are in charge. Your mission is to redesign the last stage of this algorithm by finding a fairer place to set the “high risk” threshold.
This is what your threshold will look like. Try clicking on it and dragging it around.
So first, let’s imagine the best-case scenario: all the defendants your algorithm labels with a high risk score go on to get rearrested, and all defendants who get a low risk score do not. Below, our graphic depicts what this might look like. The filled-in circles are defendants who were rearrested; the empty circles are those who weren’t.
Now move the threshold to make your algorithm as fair as possible.
(In other words, only rearrested defendants should be jailed.) Great! That was easy. Your threshold should be set between 6 and 7. No one was needlessly detained, and no one who was released was then rearrested.
But of course, this ideal scenario never actually happens. It’s impossible to perfectly predict the outcome for each person. This means the filled and empty dots can’t be so neatly separated.
So here’s who actually gets rearrested.
Now move the threshold again to make your algorithm as fair as possible.
(Hint: you want to maximize its accuracy.) You’ll notice that no matter where you place the threshold, it’s never perfect: we always jail some defendants who don’t get rearrested (empty dots to the right of the threshold) and release some defendants who do get rearrested (filled dots to the left of threshold). This is a trade-off that our criminal legal system has always dealt with, and it’s no different when we use an algorithm.
To make these trade-offs more clear, let’s see the percentage of incorrect predictions COMPAS makes on each side of the threshold, instead of just measuring the overall accuracy. Now we will be able to explicitly see whether our threshold favors needlessly keeping people in jail or releasing people who are then rearrested.
Notice that COMPAS’s default threshold favors the latter.
Footnote 4. Technical definitions: These two error percentages are also known as the “false negative rate” (which we’ve labeled “released but rearrested”) and “false positive rate” (which we’ve labeled “needlessly jailed”).
How should we fairly balance this trade-off? There’s no universal answer, but in the 1760s, the English judge William Blackstone wrote, “It is better that ten guilty persons escape than that one innocent suffer.” Blackstone’s ratio is still highly influential in the US today. So let’s use it for inspiration.
Move the threshold to where the “released but rearrested” percentage is roughly 10 times the “needlessly jailed” percentage.
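For readers who would rather see the slider as code, here is a small sketch with made-up defendants (the real analysis uses the Broward County records described above). A defendant is jailed when their 1–10 score meets the threshold; "needlessly jailed" and "released but rearrested" are computed exactly as defined in the footnote, and the loop simply picks the threshold that comes closest to Blackstone's 10-to-1 ratio.

```python
# Made-up, roughly calibrated data: a score of s corresponds to about an s*10%
# chance of rearrest, as described above. Not the real Broward County records.
import random

rng = random.Random(0)
defendants = [
    (score, rng.random() < score / 10)  # (risk score, rearrested after release?)
    for score in (rng.randint(1, 10) for _ in range(500))
]


def error_rates(threshold: int) -> tuple:
    """Return (needlessly_jailed, released_but_rearrested) rates for a threshold.

    Defendants scoring at or above the threshold are jailed; the rest are released.
    """
    never_rearrested = [s for s, r in defendants if not r]
    rearrested = [s for s, r in defendants if r]
    needlessly_jailed = sum(s >= threshold for s in never_rearrested) / len(never_rearrested)
    released_but_rearrested = sum(s < threshold for s in rearrested) / len(rearrested)
    return needlessly_jailed, released_but_rearrested


# Pick the threshold whose error rates come closest to Blackstone's 10:1 ratio.
best = min(range(1, 11), key=lambda t: abs(error_rates(t)[1] - 10 * error_rates(t)[0]))
print(best, error_rates(best))
```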
You can already see two problems with using an algorithm like COMPAS. The first is that better prediction can always help reduce error rates across the board, but it can never eliminate them entirely. No matter how much data we collect, two people who look the same to the algorithm can always end up making different choices.
The second problem is that even if you follow COMPAS’s recommendations consistently, someone—a human—has to first decide where the “high risk” threshold should lie, whether by using Blackstone’s ratio or something else. That depends on all kinds of considerations—political, economic, and social.
Now we’ll come to a third problem. This is where our explorations of fairness start to get interesting. How do the error rates compare across different groups? Are there certain types of people who are more likely to get needlessly detained? Let’s see what our data looks like when we consider the defendants’ race.
Now move each threshold to see how it affects black and white defendants differently.
Race is an example of a protected class in the US, which means discrimination on that basis is illegal. Other protected classes include gender, age, and disability.
Now that we’ve separated black and white defendants, we’ve discovered that even though race isn’t used to calculate the COMPAS risk scores, the scores have different error rates for the two groups. At the default COMPAS threshold between 7 and 8, 16% of black defendants who don’t get rearrested have been needlessly jailed, while the same is true for only 7% of white defendants. That doesn’t seem fair at all! This is exactly what ProPublica highlighted in its investigation.
Okay, so let’s fix this.
Move each threshold so white and black defendants are needlessly jailed at roughly the same rate.
(There are a number of solutions. We’ve picked one, but you can try to find others.) We tried to reach Blackstone’s ratio again, so we arrived at the following solution: white defendants have a threshold between 6 and 7, while black defendants have a threshold between 8 and 9. Now roughly 9% of both black and white defendants who don’t get rearrested are needlessly jailed, while 75% of those who do are rearrested after spending no time in jail. Good work! Your algorithm seems much fairer than COMPAS now.
But wait—is it? In the process of matching the error rates between races, we lost something important: our thresholds for each group are in different places, so our risk scores mean different things for white and black defendants.
White defendants get jailed for a risk score of 7, but black defendants get released for the same score. This, once again, doesn’t seem fair. Two people with the same risk score have the same probability of being rearrested, so shouldn’t they receive the same treatment? In the US, using different thresholds for different races may also raise complicated legal issues with the 14th Amendment, the equal protection clause of the Constitution.
So let’s try this one more time with a single threshold shared between both groups.
Move the threshold again so white and black defendants are needlessly jailed at the same rate.
If you’re getting frustrated, there’s good reason. There is no solution.
We gave you two definitions of fairness: keep the error rates comparable between groups, and treat people with the same risk scores in the same way. Both of these definitions are totally defensible! But satisfying both at the same time is impossible.
The reason is that black and white defendants are rearrested at different rates. Whereas 52% of black defendants were rearrested in our Broward County data, only 39% of white defendants were. There’s a similar difference in many jurisdictions across the US, in part because of the country’s history of police disproportionately targeting minorities (as we previously mentioned).
Predictions reflect the data used to make them—whether by algorithm or not. If black defendants are arrested at a higher rate than white defendants in the real world, they will have a higher rate of predicted arrest as well. This means they will also have higher risk scores on average, and a larger percentage of them will be labeled high-risk— both correctly and incorrectly.
This is true no matter what algorithm is used, as long as it’s designed so that each risk score means the same thing regardless of race.
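One way to make the conflict precise, under the assumption that "a score means the same thing for everyone" is read as equal precision of the high-risk label (equal PPV), is the identity FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR), where p is a group's rearrest rate. The sketch below plugs in the base rates quoted above with one arbitrary, purely illustrative choice of PPV and FNR.

```python
# Illustrative only: with different base rates, holding precision (PPV) and the
# miss rate (FNR) equal across groups forces the false positive rates apart.
def false_positive_rate(base_rate: float, ppv: float, fnr: float) -> float:
    """FPR implied by a group's base rate, the high-risk label's precision, and its FNR."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)


# Rearrest rates quoted above for Broward County; PPV and FNR are arbitrary
# example values, assumed equal for both groups.
for group, p in [("black defendants", 0.52), ("white defendants", 0.39)]:
    print(group, round(false_positive_rate(p, ppv=0.6, fnr=0.25), 3))
# black defendants 0.542 / white defendants 0.32 -- the two rates cannot match.
```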
This strange conflict of fairness definitions isn’t just limited to risk assessment algorithms in the criminal legal system. The same sorts of paradoxes hold true for credit scoring, insurance, and hiring algorithms. In any context where an automated decision-making system must allocate resources or punishments among multiple groups that have different outcomes, different definitions of fairness will inevitably turn out to be mutually exclusive.
There is no algorithm that can fix this; this isn’t even an algorithmic problem, really. Human judges are currently making the same sorts of forced trade-offs—and have done so throughout history.
But here’s what an algorithm has changed. Though judges may not always be transparent about how they choose between different notions of fairness, people can contest their decisions. In contrast, COMPAS, which is made by the private company Northpointe, is a trade secret that cannot be publicly reviewed or interrogated. Defendants can no longer question its outcomes, and government agencies lose the ability to scrutinize the decision-making process. There is no more public accountability.
So what should regulators do? The proposed Algorithmic Accountability Act of 2019 is an example of a good start, says Andrew Selbst, a law professor at the University of California who specializes in AI and the law. The bill, which seeks to regulate bias in automated decision-making systems, has two notable features that serve as a template for future legislation. First, it would require companies to audit their machine-learning systems for bias and discrimination in an “impact assessment.” Second, it doesn’t specify a definition of fairness.
“With an impact assessment, you're being very transparent about how you as a company are approaching the fairness question,” Selbst says. That brings public accountability back into the debate. Because “fairness means different things in different contexts,” he adds, avoiding a specific definition allows for that flexibility.
But whether algorithms should be used to arbitrate fairness in the first place is a complicated question. Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination,” writes Ruha Benjamin, an associate professor at Princeton University, in her book Race After Technology.
Risk assessment tools are no different. The greater question about using them—or any algorithms used to rank people—is whether they reduce existing inequities or make them worse.
Selbst recommends proceeding with caution: “Whenever you turn philosophical notions of fairness into mathematical expressions, they lose their nuance, their flexibility, their malleability,” he says. “That’s not to say that some of the efficiencies of doing so won’t eventually be worthwhile. I just have my doubts.” Words and code by Karen Hao and Jonathan Stray. Design advising from Emily Luong and Emily Caulfield. Editing by Niall Firth and Gideon Lichfield. Special thanks to Rashida Richardson from AI Now, Mutale Nkonde from Berkman Klein Center, and William Isaac from DeepMind for their review and consultation.
Correction: A previous version of the article linked to information about a risk assessment tool different from COMPAS. It has been removed to avoid confusion.
"
|
1,491 | 2,019 |
"A biased medical algorithm favored white people for health-care programs | MIT Technology Review"
|
"https://www.technologyreview.com/f/614626/a-biased-medical-algorithm-favored-white-people-for-healthcare-programs"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts A biased medical algorithm favored white people for health-care programs By Charlotte Jee archive page A medical professional checks a patient's back with a stethoscope Getty Images A study has highlighted the risks inherent in using historical data to train machine-learning algorithms to make predictions.
The news: An algorithm that many US health providers use to predict which patients will most need extra medical care privileged white patients over black patients, according to researchers at UC Berkeley, whose study was published in Science.
Effectively, it bumped whites up the queue for special treatments for complex conditions like kidney problems or diabetes.
The study: The researchers dug through almost 50,000 records from a large, undisclosed academic hospital. They found that white patients were given higher risk scores, and were therefore more likely to be selected for extra care (like more nursing or dedicated appointments), than black patients who were in fact equally sick. The researchers calculated that the bias cut the proportion of black patients who got extra help by more than half.
What software was this? The researchers didn’t say, but the Washington Post identifies it as Optum, owned by insurer UnitedHealth. It says its product is used to “manage more than 70 million lives.” Though the researchers only focused on one particular tool, they identified the same flaw in the 10 most widely used algorithms in the industry. Each year, these tools are collectively applied to an estimated 150 to 200 million people in the US.
How the bias crept in: Race wasn’t a factor in the algorithm’s decision-making (that would be illegal); it used patients’ medical histories to predict how much they were likely to cost the health-care system. But cost is not a race-blind metric: for socioeconomic and other reasons, black patients have historically incurred lower health-care costs than white patients with the same conditions. As a result, the algorithm gave white patients the same scores as black patients who were significantly sicker.
A small saving grace: The researchers worked with Optum to correct the issue. They reduced the disparity by more than 80% by creating a version that predicts both a patient’s future costs and the number of times a chronic condition might flare up over the coming year. So algorithmic bias can be corrected, if—and sadly, it is a big if—you can catch it.
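A tiny, invented sketch of the underlying idea: change what the model is asked to predict. The patients and numbers below are made up; the only point is that ranking by predicted cost and ranking by a health-based signal (such as predicted flare-ups of chronic conditions) can select different people for the extra-care program.

```python
# Invented example: (patient, predicted annual cost in $, predicted chronic flare-ups).
patients = [
    ("patient A", 12_000, 1),
    ("patient B", 7_500, 4),   # sicker, but historically incurs lower costs
    ("patient C", 11_000, 2),
    ("patient D", 6_000, 5),   # sicker, but historically incurs lower costs
]

slots = 2  # capacity of the extra-care program

by_cost = sorted(patients, key=lambda p: p[1], reverse=True)[:slots]
by_health = sorted(patients, key=lambda p: p[2], reverse=True)[:slots]

print("selected by predicted cost:  ", [name for name, *_ in by_cost])    # A and C
print("selected by predicted health:", [name for name, *_ in by_health])  # D and B
```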
Why it matters: The study is the latest to show the pitfalls of allocating important resources according to the recommendation of algorithms. These kinds of challenges are playing out not just in health care, but also in hiring, credit scoring, insurance, and criminal justice.
Read next: our interactive explainer on how AI bias affects the criminal legal system and why it’s so hard to eliminate.
"
|
1,492 | 2,020 |
"An Algorithm Determined UK Students' Grades. Chaos Ensued | WIRED"
|
"https://www.wired.com/story/an-algorithm-determined-uk-students-grades-chaos-ensued"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Amit Katwala Business An Algorithm Determined UK Students' Grades. Chaos Ensued Photography: Tolga Akmen/AFP/Getty Images Save this story Save Save this story Save Application Prediction Sector Education Technology Machine learning Results day has a time-worn rhythm, full of annual tropes: local newspaper pictures of envelope-clutching girls jumping in the air in threes and fours, columnists complaining that exams have gotten far too easy, and the same five or six celebrities posting worthy Twitter threads about why exam results don’t matter because everything worked out alright for them.
This story originally appeared on WIRED UK.
But this year, it’s very different. The coronavirus pandemic means exams were canceled and replaced with teacher assessments and algorithms.
It has created chaos.
In Scotland, the government was forced to completely change tack after tens of thousands of students were downgraded by an algorithm that changed grades based on a school’s previous performance and other factors. Anticipating similar scenes for today’s A-level results, the government in England has introduced what it’s calling a ‘triple lock’—whereby, via stages of appeals, students will effectively get to choose their grade from a teacher assessment, their mock exam results, or a resit to be taken in the autumn.
While that should help reduce some injustices, the results day mess could still have a disproportionate effect on students from disadvantaged backgrounds, with knock-on effects on their university applications and careers. The mess shines a light on huge, long-term flaws in the assessment, exams, and university admissions systems that systematically disadvantage pupils from certain groups.
Forget the triple lock, ethnic minority students from poorer backgrounds could be hit with a triple whammy. First, their teacher assessments may be lower than white students because of unconscious bias, argues Pran Patel, a former assistant head teacher and an equity activist at Decolonise the Curriculum. He points to a 2009 study into predictions and results in Key Stage 2 English which found that Pakistani pupils were 62.9 percent more likely than white pupils to be predicted a lower score than they actually achieved, for example. There’s also an upwards spike in results for boys from black and Caribbean background at age 16, which Patel says corresponds to the first time in their school careers that they’re assessed anonymously.
Not everyone agrees on this point. Research led by Kaili Rimfeld at King’s College London, based on data from more than 10,000 pupils, has found that teacher assessments are generally good predictors of future exam performance, although the best predictor of success in exams is previous success in exams.
But because of fears over grade inflation caused by teachers assessing their own students, those marks aren’t being used in isolation. This year, because of coronavirus, those potentially biased teacher assessments were modified—taking into account the school’s historical performance and other factors that may have had little to do with the individual student. In fact, according to TES, 60 percent of this year’s A-Level grades have been determined via statistical modeling, not teacher assessment.
This means that a bright pupil in a poorly performing school may have seen their grade lowered because last year’s cohort of pupils didn’t do well in their exams. “Children from a certain background may find their assessment is downgraded,” says Stephen Curran, a teacher and education expert. This is what happened in Scotland, where children from poorer backgrounds were twice as likely to have their results downgraded as those from richer areas.
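The real moderation model was more involved and is not reproduced here; purely as a hypothetical illustration of how a school-level adjustment can override an individual, the sketch below maps this year's teacher-ranked cohort onto the same school's grade distribution from last year, so a strong pupil at a historically weak school is pulled down regardless of their own work.

```python
# Hypothetical illustration, not the actual moderation algorithm: this year's
# pupils, ranked by teacher assessment (best first), inherit last year's grades.
teacher_ranking = ["Aisha", "Ben", "Chloe", "Dev", "Ella", "Finn"]

# Last year this school awarded one B, two Cs and three Ds in this subject.
historical_grades = ["B", "C", "C", "D", "D", "D"]

moderated = dict(zip(teacher_ranking, historical_grades))
print(moderated)
# Even if Aisha's teacher assessed her at an A, the school's history caps her at a B.
```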
There’s injustice in the appeals process too—particularly in England, where the decision over whether or not to appeal is up to the school, not the pupil. “I think it’s really scandalous that the pupils can’t appeal themselves,” says Rimfeld, whose own child was anxiously awaiting their results. “It’s just astonishing the mess we created, and it’s really sad to see.” There will be huge differences in which schools decide or are able to appeal—inevitably, better resourced private schools will be able to appeal more easily than underfunded state schools in deprived areas. “The parents will pressure them, and they’ll be apoplectic if their child does not achieve the grades they expected,” says Curran. In the state system, meanwhile, “some schools will fight for their kids, and others won’t,” and teachers are on holiday until term starts anyway.
On August 11, Gavin Williamson announced the triple lock that would allow students to pick from their teacher-assessed grade, their mock exam result, or doing a resit in the autumn if they don’t agree with the grade the system gives them initially. But there are huge problems there too. “Nobody is consulting with anybody about this,” says Rimfeld. “There are schools where there are no mocks, some schools do several mock exams—is it going to be the average? How is that going to work?” The government is still figuring out exactly how mock results will be used, but there are vast discrepancies in conditions that mocks are taken in, and no centralized record of mock results. Some schools don’t even collect that data centrally for their own pupils. Sometimes teachers will downgrade results in a mock exam in order to scare certain students into working harder for the remainder of the year, says Patel. He doesn’t think including mocks will do anything to help repair bias. “Not in the slightest,” he says. “Because the teacher who is assessing your grade is the same teacher who marked your mock exam.” That means it will be difficult for teachers, who Patel stresses may not have much experience marking exam papers, to untangle their conscious or unconscious perceptions from the words on the page in front of them. “Teachers are now being asked to make decisions that are potentially life-changing by completing a task that they’re not qualified or suitably trained to do,” he says.
Even if two children end up with the same final grade after this process, the delays and inaccurate assessments could prove vital—particularly now, but also in more normal years. If you’re predicted three As, you’re more likely to apply and be accepted by prestigious universities, and more likely to be taught the relevant material, and more likely to actually make the grade.
If you’re predicted three Cs and get three As, by the time your results come out, it might already be too late for you to apply to the best universities without taking a year out—the die has been cast, not by your performance, but by your teacher’s assessment.
Teachers are aghast at the mess that’s been allowed to unfold. Curran argues that exams should simply have been taken later in the year, with social distancing implemented. Now, he says, we’re in a situation where results have become a political issue—and the GCSE and A-Level students of today are the voters of tomorrow.
Universities are also eyeing the situation nervously. The people we spoke to have been looking at the situation in Scotland and suspect that many pupils—at least those from schools that can afford the appeals—will essentially end up getting whatever grade they want. “In the end we get to a situation where it’s ‘pick a number’ because you’ve got no reliable sources of information there at all,” says Curran.
That will have an impact on university placements, which are generally overallocated to account for people missing their targets. Some universities will have far too many people who have made their grades, while those lower down the rankings may find themselves scrambling for students.
A smarter use of data could help tackle the problem, Patel argues. The Office of Qualifications and Examinations Regulation has used data about school performance to head off grade inflation, when instead, it should be using data about hidden bias to counteract societal injustice.
Suddenly oversubscribed universities could look inside the black box and see which pupils were downgraded and why, and use that information to make assessments about who to give places to. Arguably they should be doing that more often anyway, with contextual offers that take into account how much easier it is to get good grades for people from certain social or economic backgrounds.
“Teacher assessment is prone to bias, but there are lots of other ways of assessing pupils, and if you embrace lots of different techniques, you can ameliorate that impact,” Patel says. “There’s no ideal situation, but the problem here is that exams were never a great metric for learning or success anyway.” This story originally appeared on WIRED UK.
"
|
1,493 | 2,016 |
"A beauty contest was judged by AI and the robots didn't like dark skin | Artificial intelligence (AI) | The Guardian"
|
"https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people"
|
"The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing Fashion Food Recipes Travel Health & fitness Women Men Love & sex Beauty Home & garden Money Cars One expert says the results offer ‘the perfect illustration of the problem’ with machine bias.
Photograph: Fabrizio Bensch/Reuters One expert says the results offer ‘the perfect illustration of the problem’ with machine bias.
Photograph: Fabrizio Bensch/Reuters Artificial intelligence (AI) A beauty contest was judged by AI and the robots didn't like dark skin The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners in San Francisco Thu 8 Sep 2016 18.42 EDT The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.
But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.
Out of 44 winners , nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.
The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
When Microsoft released the “millennial” chatbot named Tay in March, it quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated human editors who had curated “trending” news stories last month, the algorithm immediately promoted fake and vulgar stories on news feeds, including one article about a man masturbating with a chicken sandwich.
While the seemingly racist beauty pageant has prompted jokes and mockery, computer science experts and social justice advocates say that in other industries and arenas, the growing use of prejudiced AI systems is no laughing matter. In some cases, it can have devastating consequences for people of color.
Beauty.AI – which was created by a “deep learning” group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. While there are a number of reasons why the algorithm favored white people, the main problem was that the data the project used to establish standards of attractiveness did not include enough minorities, said Alex Zhavoronkov, Beauty.AI’s chief science officer.
Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.
Winners of the Beauty.AI contest in the category for women aged 18-29.
“If you have not that many people of color within the dataset, then you might actually have biased results,” said Zhavoronkov, who said he was surprised by the winners. “When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.” The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.
The Beauty.AI results offer “the perfect illustration of the problem”, said Bernard Harcourt, a Columbia University professor of law and political science who has studied “predictive policing”, which has increasingly relied on machines. “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.
Civil liberty groups have recently raised concerns that computer-based law enforcement forecasting tools – which use data to predict where future crimes will occur – rely on flawed statistics and can exacerbate racially biased and harmful policing practices.
“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.
A ProPublica investigation earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.
“That’s truly a matter of somebody’s life is at stake,” said Sorelle Friedler, a professor of computer science at Haverford College.
A major problem, Friedler said, is that minority groups by nature are often underrepresented in datasets, which means algorithms can reach inaccurate conclusions for those populations and the creators won’t detect it. For example, she said, an algorithm that was biased against Native Americans could be considered a success given that they are only 2% of the population.
“You could have a 98% accuracy rate. You would think you have done a great job on the algorithm.” Friedler said there are proactive ways algorithms can be adjusted to correct for biases, whether by improving input data or by implementing filters to ensure people of different races are receiving equal treatment.
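Friedler's 2% example is easy to reproduce numerically. The sketch below is illustrative only: the 98/2 population split and a classifier that is always wrong on the minority group are assumptions chosen to mirror her point, not data from any real system, but they show why overall accuracy should always be broken out per group.

```python
# Illustrative sketch: aggregate accuracy can hide total failure on a small group.
# The 98/2 split and the "always wrong on the minority" model are assumptions for illustration.

def evaluate(population):
    correct_overall = 0
    per_group = {}
    for group, is_correct in population:
        correct_overall += is_correct
        hits, total = per_group.get(group, (0, 0))
        per_group[group] = (hits + is_correct, total + 1)
    print(f"overall accuracy: {correct_overall / len(population):.0%}")
    for group, (hits, total) in per_group.items():
        print(f"  {group}: {hits / total:.0%} ({total} people)")

# 980 people the model always classifies correctly, 20 people it always gets wrong.
population = [("majority", 1)] * 980 + [("minority", 0)] * 20
evaluate(population)
# overall accuracy: 98%
#   majority: 100% (980 people)
#   minority: 0% (20 people)
```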
Prejudiced AI programs aren’t limited to the criminal justice system. One study determined that significantly fewer women than men were shown online ads for high-paying jobs. Last year, Google’s photo app was found to have labeled black people as gorillas.
Cyril noted that algorithms are ultimately very limited in how they can help correct societal inequalities. “We’re overly relying on technology and algorithms and machine learning when we should be looking at institutional changes.” Zhavoronkov said that when Beauty.AI launches another contest round this fall, he expects the algorithm will have a number of changes designed to weed out discriminatory results. “We will try to correct it.” But the reality, he added, is that robots may not be the best judges of physical appearance: “I was more surprised about how the algorithm chose the most beautiful people. Out of a very large number, they chose people who I may not have selected myself.”
"
|
1,494 | 2,019 |
"An AI conference once known for blowout parties is finally growing up | MIT Technology Review"
|
"https://www.technologyreview.com/s/614935/ai-conference-neurips-power-responsibility"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts An AI conference once known for blowout parties is finally growing up By Karen Hao archive page NeurIPS 2019 Karen Hao/MIT Technology Review Only two years ago, so I’m told, one of the hottest AI research conferences of the year was more giant party than academic exchange. In a fight for the best talent, companies handed out endless free swag and threw massive, blowout events , including one featuring Flo Rida, hosted by Intel. The attendees (mostly men in their early 20s and 30s), flush with huge salaries and the giddiness of being highly coveted, drank free booze and bumped the night away.
I never witnessed this version of NeurIPS, short for the Neural Information Processing Systems conference. I came for my first time last year, after the excess had reached its peak. Externally, the community was coming under increasing scrutiny as the upset of the 2016 US presidential election drove people to question the influence of algorithms in society. Internally, reports of sexual harassment, anti-Semitism, racism, and ageism were also driving conference-goers to question whether they should continue to attend.
So when I arrived in 2018, a diversity and inclusion committee had been appointed, and the long-standing abbreviation NIPS had been updated. Still, this year’s proceedings feel different from the last. The parties are smaller, the talks are more socially minded, and the conversations happening in between seem more aware of the ethical challenges that the field needs to address.
As the role of AI has expanded dramatically, along with the more troubling aspects of its impact, the community, it seems, has finally begun to reflect on its power and the responsibilities that come with it. As one attendee put it to me: “It feels like this community is growing up.” This change manifested in some concrete ways. Many of the technical sessions were more focused on addressing real-world, human-centric challenges rather than theoretical ones. Entire poster tracks were centered on better methods for protecting user privacy, ensuring fairness, and reducing the amount of energy it can take to run and train state-of-the-art models. Day-long workshops, scheduled to happen today and tomorrow, have titles like “Tackling Climate Change with Machine Learning” and “Fairness in Machine Learning for Health.” Additionally, many of the invited speakers directly addressed the social and ethical challenges facing the field—topics once dismissed as not core to the practice of machine learning. Their talks were also well received by attendees, signaling a new openness to engage with these issues. At the opening event, for example, cognitive psychologist and #metoo figurehead Celeste Kidd gave a rousing speech exhorting the tech industry to take responsibility for how its technologies shape people’s beliefs and debunking myths around sexual harassment. She received a standing ovation. In an opening talk at the Queer in AI symposium, Stanford researcher Ria Kalluri also challenged others to think more about how their machine-learning models could shift the power in society from those who have it to those who don’t. Her talk was widely circulated online.
Much of this isn’t coincidental. Through the work of the diversity and inclusion committee, the conference saw the most diverse participation in its history. Close to half the main-stage speakers were women and a similar number minorities; 20% of the over 13,000 attendees were also women, up from 18% last year. There were seven community-organized groups for supporting minority researchers, which is a record. These included Black in AI, Queer in AI, and Disability in AI, and they held parallel proceedings in the same space as NeurIPS to facilitate mingling of people and ideas.
When we involve more people from diverse backgrounds in AI, Kidd told me, we naturally talk more about how AI is shaping society, for good or for bad. “They come from a less privileged place and are more acutely aware of things like bias and injustice and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations,” she said. Kalluri echoed the sentiment. The intentional efforts to diversify the community, she said, are forcing it to “confront the questions of how power works in this field.”
Despite the progress, however, many emphasized that the work is just getting started. Having 20% women is still appalling, and this year, as in past years, there continued to be Herculean challenges in securing visas for international researchers, particularly from Africa.
“Historically, this field has been pretty narrowed in on a particular demographic of the population, and the research that comes out reflects the values of those people,” says Katherine Heller, an assistant professor at Duke University and co-chair of the diversity committee. “What we want in the long run is a more inclusive place to shape what the future direction of AI is like. There’s still a far way to go.” Yes, there’s still a long way to go. But on Monday, as people lined up to thank Kidd for her talk one by one, I let myself feel hopeful.
"
|
1,495 | 2,019 |
"How Apple personalizes Siri without hoovering up your data | MIT Technology Review"
|
"https://www.technologyreview.com/s/614900/apple-ai-personalizes-siri-federated-learning"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How Apple personalizes Siri without hoovering up your data By Karen Hao archive page A woman uses her voice assistant on her phone.
kyonntra/Getty Images If you’ve got an iPhone, you may have noticed a change in Siri’s behavior in the past year. The voice assistant on the phone will “wake up” when you say “Hey Siri,” but not when the same phrase comes from your friends or family.
The reason Apple did this was sensible: it wanted a way to keep all the iPhones in a room from responding when one person utters the wake phrase. You might think that Apple would need to collect a lot of your audio data to do this. Surprisingly, it doesn't.
Instead, it relies primarily on a technique called federated learning, Apple’s head of privacy, Julien Freudiger, told an audience at the Neural Information Processing Systems conference on December 8. Federated learning is a privacy-preserving machine-learning method that was first introduced by Google in 2017.
It allows Apple to train different copies of a speaker recognition model across all its users’ devices, using only the audio data available locally. It then sends just the updated models back to a central server to be combined into a master model. In this way, raw audio of users’ Siri requests never leaves their iPhones and iPads, but the assistant continuously gets better at identifying the right speaker.
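To make the mechanics concrete, here is a minimal federated-averaging loop. It is a generic sketch, not Apple's implementation: the linear model, the toy per-device datasets, the hyperparameters, and the function names are all assumptions for illustration. The point is simply that raw data stays in each device's local store while only model weights travel to the server.

```python
import numpy as np

# Minimal federated-averaging sketch (a generic illustration, not Apple's system).
# Each "device" trains on data that never leaves it; only weight vectors are shared.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device: a few gradient-descent steps on a linear model, using only local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, device_datasets):
    """Server: collect locally trained weights and average them into a new global model."""
    local_weights = [local_update(global_weights, X, y) for X, y in device_datasets]
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Toy private datasets for three devices; the raw (X, y) pairs are never "uploaded".
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
print("learned weights:", w)   # approaches [2, -1] without pooling any raw data
```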
In addition to federated learning, Apple also uses something called differential privacy to add a further layer of protection. The technique injects a small amount of noise into any raw data before it is fed into a local machine-learning model. The additional step makes it exceedingly difficult for malicious actors to reverse-engineer the original audio files from the trained model.
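The "small amount of noise" is typically calibrated to a privacy budget. Below is a generic local differential privacy sketch using Laplace noise; the sensitivity, the epsilon value, and the example statistic are assumptions made for illustration, not Apple's actual mechanism or parameters.

```python
import numpy as np

# Generic local differential privacy sketch (not Apple's exact mechanism or parameters).
# Laplace noise with scale = sensitivity / epsilon is added before anything leaves the device.

def privatize(value, sensitivity=1.0, epsilon=0.5, rng=np.random.default_rng(0)):
    """Return the value plus Laplace noise calibrated to (sensitivity, epsilon)."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: each device reports a noisy version of a small local count.
true_counts = [3, 0, 1, 2, 5]
noisy_reports = [privatize(c) for c in true_counts]
print(noisy_reports)  # each report is individually noisy; averages over many devices stay useful
```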
Though Apple has been using differential privacy since 2017, it’s been combined with federated learning only as of iOS 13, which rolled out to the public in September of this year. In addition to personalizing Siri, both techniques are now being used for a few other applications as well, including QuickType (Apple’s personalized keyboard) and the Found In Apps feature, which scans your calendar and mail apps for the names of texters and callers whose numbers aren’t in your phone. Freudiger said the company plans to roll out the privacy methods to more apps and features soon.
In the past year, federated learning has grown increasingly popular within the AI research community as concerns about data privacy have grown. In March, Google released a new set of tools to make it easier for developers to implement their own federated learning models. Among many other uses, researchers hope it will help overcome privacy challenges in the application of AI to health care.
Companies including Owkin, Doc.ai, and Nvidia are interested in using it in this way.
While the technique is still relatively new and needs further refinement, Apple’s latest adoption offers another case study for how it can be applied at scale. It also marks a fundamental shift in the trade-off the tech industry has traditionally assumed between privacy and utility: in fact, it’s now possible to achieve both. Let’s hope other companies quickly catch on.
"
|
1,496 | 2,019 |
"The computing power needed to train AI is now rising seven times faster than ever before | MIT Technology Review"
|
"https://www.technologyreview.com/s/614700/the-computing-power-needed-to-train-ai-is-now-rising-seven-times-faster-than-ever-before"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The computing power needed to train AI is now rising seven times faster than ever before By Karen Hao archive page Go player Ke Jie plays a match against Google's artificial intelligence program, AlphaGo AP In 2018, OpenAI found that the amount of computational power used to train the largest AI models had doubled every 3.4 months since 2012.
The San Francisco-based for-profit AI research lab has now added new data to its analysis. This shows how the post-2012 doubling compares with the historic doubling time since the beginning of the field. From 1959 to 2012, the amount of power used doubled every two years, tracking Moore’s Law. This means the resources used today are doubling at a rate seven times faster than before.
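The "seven times faster" figure follows directly from the two doubling times, and the compounding gap is easy to see with a short back-of-the-envelope calculation; the one-year comparison window below is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope comparison of the two compute-growth regimes described above.

old_doubling_months = 24    # roughly the 1959-2012 pace, tracking Moore's Law
new_doubling_months = 3.4   # the post-2012 pace reported by OpenAI

# How much faster resources are doubling now:
print(f"speed-up in doubling rate: {old_doubling_months / new_doubling_months:.1f}x")  # ~7.1x

# Compounding makes the gap enormous even over a single year.
for label, months in [("pre-2012 pace", old_doubling_months), ("post-2012 pace", new_doubling_months)]:
    print(f"{label}: {2 ** (12 / months):.1f}x more compute after one year")
# pre-2012 pace: 1.4x more compute after one year
# post-2012 pace: 11.6x more compute after one year
```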
This dramatic increase in the resources needed underscores just how costly the field’s achievements have become. Keep in mind that the above graph shows a logarithmic scale. On a linear scale (below), you can more clearly see how compute usage has increased by 300,000-fold in the last seven years.
The chart also notably does not include some of the most recent breakthroughs, including Google’s large-scale language model BERT , OpenAI’s language model GPT-2 , or DeepMind’s StarCraft II-playing model AlphaStar.
In the past year, more and more researchers have sounded the alarm on the exploding costs of deep learning. In June, an analysis from researchers at the University of Massachusetts, Amherst, showed how these increasing computational costs directly translate into carbon emissions.
In their paper, they also noted how the trend exacerbates the privatization of AI research because it undermines the ability for academic labs to compete with much more resource-rich private ones.
In response to this growing concern, several industry groups have made recommendations. The Allen Institute for Artificial Intelligence, a nonprofit research firm in Seattle, has proposed that researchers always publish the financial and computational costs of training their models along with their performance results, for example.
In its own blog, OpenAI suggested policymakers increase funding to academic researchers to bridge the resource gap between academic and industry labs.
Correction: A previous version of this article incorrectly stated the doubling time today is more than seven times the rate before. The resources used are doubling seven times faster, and the doubling time itself is one-seventh the previous time.
"
|
1,497 | 2,019 |
"The biggest threat of deepfakes isn’t the deepfakes themselves | MIT Technology Review"
|
"https://www.technologyreview.com/s/614526/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The biggest threat of deepfakes isn’t the deepfakes themselves By Karen Hao archive page An image of Hitler giving a speech with his face replaced by a question mark.
Ms. Tech It was late 2018, and the people of Gabon hadn’t seen their president, Ali Bongo, in public for months. Some began to suspect that he was ill, or even dead, and the government was covering it up. To stop the speculation, the government announced that Bongo had suffered a stroke but remained in good health. Soon after, it released a video of him delivering his customary New Year’s address.
Rather than assuaging tensions, however, the video did precisely the opposite. As uncovered by the digital rights organization Internet Without Borders, many people, thinking Bongo looked off in the footage , immediately suspected it was a deepfake—a piece of media forged or altered with the help of AI. The belief fueled their suspicions that the government was hiding something. One week later, the military launched an unsuccessful coup, citing the video as part of the motivation.
Subsequent forensic analysis never found anything altered or manipulated in the video. That didn’t matter. The mere idea of deepfakes had been enough to accelerate the unraveling of an already precarious situation.
In the lead-up to the 2020 US presidential elections, increasingly convincing deepfake technology has led to fears about how such faked media could influence political opinion. But a new report from Deeptrace Labs, a cybersecurity company focused on detecting this deception, found no known instances in which deepfakes have actually been used in disinformation campaigns. What’s had the more powerful effect is the knowledge that they could be used that way.
“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”
Documentation is no longer evidence
Human rights activists and disinformation experts have sounded the alarm on these separate yet intertwined threats since deepfakes appeared on the scene. In the past two years, US tech companies and policymakers have focused almost exclusively on the first problem Ajder mentions: the ease with which the technology can make fake things look real. But it’s the second that troubles experts more. While the barriers to creating deepfakes may be falling rapidly, calling the veracity of something into question requires no tech at all.
“From the very beginning, it’s been my biggest worry in this space,” says Aviv Ovadya, a disinformation expert who now runs the nonprofit Thoughtful Technology Project.
Undermining trust in the media can have deep repercussions, particularly in fragile political environments. Sam Gregory, the program director of Witness, a nonprofit that helps people document human rights abuses, offers an example. In Brazil, which has suffered a history of police violence, citizens and activists now worry that any video they film of an officer killing a civilian will no longer be sufficient grounds for investigation. This fear that real evidence can plausibly be dismissed as fake, says Gregory, has become a recurring theme in workshops he hosts around the world.
“It’s an evolution of the claim that something is ‘fake news,’” he says. “It gives another weapon to the powerful: to say ‘It’s a deepfake’ about anything that people who are out of power are trying to use to show corruption, to show human rights abuses.”
Proving the real is real and the fake is fake
Solving these problems will require understanding both types of threat. “At a high level, you want to make it as easy as possible to show that a real thing is real and that a fake thing is fake,” says Ovadya.
In recent months many research groups, and tech companies like Facebook and Google , have focused on tools for exposing fakes, such as databases for training detection algorithms and watermarks that can be built into digital photo files to reveal if they are tampered with.
Several startups have also been working on ways to build trust through consumer applications that verify photos and videos when they’re taken, to form a basis for comparison if versions of the content are circulated later. Gregory says tech giants should integrate both kinds of checks directly into their platforms to make them widely available.
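One simple way to "form a basis for comparison" is to record a fingerprint of a file at capture time and check later copies against it. Real verification apps rely on signed metadata and perceptual matching that survives re-encoding, so the snippet below is only a toy sketch of the underlying idea, and the byte strings stand in for actual video files.

```python
import hashlib

# Toy sketch of capture-time fingerprinting; real systems use signed, re-encoding-robust schemes.

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"...raw video bytes as recorded..."        # stand-in for the captured file
recorded_digest = fingerprint(original)                # stored at capture time

circulating = b"...raw video bytes as recorded...!"    # a copy altered somewhere downstream
print("unchanged" if fingerprint(circulating) == recorded_digest else "altered or re-encoded")
```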
But tech companies also need to employ human content moderators, and media organizations need to train journalists and fact checkers on both detection and verification as well. On-the-ground reporting can confirm whether or not a video reflects reality and add an important layer of nuance. “Technical models cannot interpret the content of the faked video across cultural contexts or imagine how it could be further recontextualized,” says Britt Paris, an information studies expert who recently published a report on deepfakes.
As an example, Paris points to altered videos of Nancy Pelosi and Jim Acosta that went viral over the past year. Both were so-called “cheapfakes” rather than deepfakes—their speed had simply been tampered with to mislead viewers. “There would be no way to catch these fakes with technical methods for catching deepfakes,” Paris says. Instead, journalists had to debunk them—which meant people had to trust the journalists.
Finally, all the experts agree that the public needs greater media literacy. “There is a difference between proving that a real thing is real and actually having the general public believe that the real thing is real,” Ovadya says. He says people need to be aware that falsifying content and casting doubt on the veracity of content are both tactics that can be used to intentionally sow confusion.
Gregory cautions against placing too large a burden on news consumers, however. Researchers, platforms, and journalists should do as much of the work as possible to help make clear what is real and what is fake before news reaches the public.
The ultimate goal, Ovadya says, is not to instill overall skepticism but to build “social, educational, inoculating infrastructure” for neutralizing the impact of deepfakes. “What should we be trying to avoid?” he asks. “It is valuable to be questioning evidence. But what [disinformation actors] really want is not for you to question more, but for you to question everything.” He adds, “That is the antithesis of what we're looking for.”
"
|
1,498 | 2,019 |
"This is how you kick facial recognition out of your town | MIT Technology Review"
|
"https://www.technologyreview.com/s/614477/facial-recognition-law-enforcement-surveillance-private-industry-regulation-ban-backlash"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts This is how you kick facial recognition out of your town By Angela Chen archive page Surveillance Camera Getty In San Francisco, a cop can’t use facial recognition technology on a person arrested. But a landlord can use it on a tenant, and a school district can use it on students.
This is where we find ourselves, smack in the middle of an era when cameras on the corner can automatically recognize passersby, whether they like it or not. The question of who should be able to use this technology, and who shouldn’t, remains largely unanswered in the US. So far, American backlash against facial recognition has been directed mainly at law enforcement. San Francisco and Oakland, as well as Somerville, Massachusetts, have all banned police from using the technology in the past year because the algorithms aren’t accurate for people of color and women.
Presidential candidate Bernie Sanders has even called for a moratorium on police use.
Private companies and property owners have had no such restrictions, and facial recognition is increasingly cropping up in apartment buildings, hotels, and more. Privacy advocates worry that constant surveillance will lead to discrimination and have a chilling effect on free speech—and the American public isn’t very comfortable with it either. According to a recent survey by Pew Research , people in the US actually feel better about cops using facial recognition than they do about private businesses.
Anyone waiting for a quick federal ban to take shape, either for law enforcement or private industry, is likely to be disappointed, says AI policy expert Mutale Nkonde, a fellow at Harvard’s Berkman Klein Center. “From a federal perspective, anything that seems to undermine business or innovation is not going to be favored,” she says. In theory, bans in cities that have so far been aimed at cops could widen to include private interests. States could then take them up, which might finally spur action in Washington.
But it’s going to take a while, if it happens at all.
In the meantime, there is growing momentum toward curtailing private surveillance, using an array of tactics. From going toe to toe with big corporate interests to leaning on legal theory about what constitutes civil rights in America, here are three main approaches currently in play that could one day drastically change how facial recognition is used in our lives.
Tackling companies
The first tactic is “old-school corporate pressure,” says Evan Greer, deputy director of digital rights group Fight for the Future. The organization has created a website listing the airlines that use facial recognition, to encourage consumers to choose other options. More recently, Fight for the Future launched a campaign pressuring concert venues and festivals not to use the technology, partly inspired by Ticketmaster’s statement that it might replace tickets with facial ID.
Musicians including singer-songwriter Amanda Palmer, rapper Atmosphere, and Tom Morello of Rage Against the Machine have all supported the effort.
Big-name music festivals like Governors Ball, Austin City Limits, Bonnaroo, and Pitchfork have now promised not to use facial surveillance. “There’s value in getting commitments,” Greer says. “We don’t need to wait until an industry is already widely using technology and weaving it in their business model.”
The legislative method
Another model follows the city-by-city progression of cop bans. The city of Portland, Oregon, is considering two separate ordinances, one that would ban cops from using the technology and one that would stop private businesses too.
The private ban wouldn’t affect, say, Apple’s FaceID or Facebook’s use of facial recognition in its tagging feature. City officials are more concerned about the prospect of stores and other establishments requiring facial recognition for entry , something that Jacksons, a local convenience store, started doing on a limited basis more than a year ago. The council will discuss the proposal again at a meeting in November.
Meanwhile, US Congresswomen Yvette Clarke, Ayanna Pressley, and Rashida Tlaib are focusing not on geographical regions, but on certain groups. They just introduced a federal bill that would ban facial recognition in federally funded public housing.
Landlords’ use of facial recognition is quickly becoming a hot-button issue. According to the recent Pew report, only 36% of Americans think it’s okay to require facial recognition to enter the place they live. The issue is an even bigger concern in federal or low-income housing. Not only is the technology invasive, residents say, but it’s discriminatory, because many tenants are people of color. In New York, residents of a low-income building have been fighting their landlord’s plan to replace a key-fob entry system with a facial recognition system.
“Why did [the landlord] choose our building to try this system out? Versus 11 other buildings that have a different makeup?” asked Icemae Downes , one of the residents.
No need to reinvent the wheel
Existing law can also be updated to cover facial recognition, says Jevan Hutson, a law student and technology policy researcher at the University of Washington. States already have civil rights laws that prevent discrimination in public venues like restaurants, hotels, schools, hospitals, parks, convention centers, and more. Given the technology’s track record of being unable to treat people fairly, Hutson says it’s possible to build a legal argument that facial recognition violates civil rights. If such a change passed, the law would effectively prevent the technology from being deployed in a slew of public spaces.
Another route would be to update a state’s consumer protection laws. Many companies claim that their technology can detect emotion, but studies have shown that their methods are deeply flawed.
It’s possible to argue, then, that these algorithms are violating laws against unfair or deceptive practices.
Such a move forces lobbyists to engage with the language of civil rights. “It’s like, okay, we’re updating civil rights law. You care about principles of civil rights,” Hutson says. “If you don’t want us to do it, how can we expect any of your [suggested] safeguards to matter?’” He's working with lawmakers and hopes to introduce a bill during the next legislative session in Washington state, which begins in January.
Separation? Not really
In practice, the distinction between government and private facial recognition is a false one. Normalizing one normalizes the other, says Evan Selinger, a philosopher at the Rochester Institute of Technology. Once everyone is used to using Facebook’s facial recognition system, he says, “it becomes a lot harder to say that law enforcement, who is looking out for the good, should have less freedom than you do.” When facial recognition is taken for granted, “you ultimately provide the private sector with information that it can share with law enforcement.” That private sector is powerful and will want to have a say in regulation. Amazon CEO Jeff Bezos recently said the company is creating its own draft facial recognition guidelines to present to lawmakers. Earlier this year, Microsoft supported privacy legislation in Washington state that would have put some restrictions on facial ID. But the bill also said it was okay to use facial recognition for profiling as long as someone reviewed the results. It failed after six privacy groups argued that it was far too weak.
That’s part of the reason activists like Greer insist that a multi-pronged strategy looking at legislative and economic approaches will be necessary. “We need all of the above,” she says. “Members of the public should absolutely be holding corporations accountable. Lawmakers should absolutely be addressing this. If there’s one thing we know, it’s that we can’t trust industries to regulate themselves.”
"
|
1,499 | 2,019 |
"Google appoints an “AI council” to head off controversy, but it proves controversial | MIT Technology Review"
|
"https://www.technologyreview.com/s/613203/google-appoints-an-ai-council-to-head-off-controversy-but-it-proves-controversial"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google appoints an “AI council” to head off controversy, but it proves controversial By Will Knight archive page Jeremy portje Developing and commercializing artificial intelligence has proved an ethical mine field for companies like Google. The company has seen its algorithms accused of perpetuating race and gender bias and fueling efforts to build autonomous weapons.
The search giant now hopes that a team of philosophers, engineers, and policy experts will help it navigate the moral hazards presented by artificial intelligence without press scandals, employee protests, or legal trouble.
Kent Walker, Google’s senior vice president for global affairs and chief legal officer, announced the creation of a new independent body to review the company’s AI practices at EmTech Digital , an AI conference in San Francisco organized by MIT Technology Review.
Walker said that the group, known as the Advanced Technology External Advisory Council (ATEAC), would review the company’s projects and plans and produce reports to help determine if any of them contravene the company’s own AI principles. The council will not have a set agenda, Walker said, and it would not have the power to veto projects itself. But he said the group’s reports “will help keep us honest.” The first ATEAC will feature a philosopher, an economist, a public policy expert, and several researchers on data science, machine learning, and robotics. Several of those chosen actively research issues such as algorithmic bias. The full list is as follows: Alessandro Acquisti, Bubacarr Bah, De Kai, Dyan Gibbens, Joanna Bryson, Kay Coles James, Luciano Floridi , and William Joseph Burns.
But it is hard for tech companies to prove they are sincere about ethical concerns. The announcement has already provoked a backlash from some AI experts who question the inclusion of Gibbens and James.
The former is the founder and CEO of a drone company, a choice that seems tone deaf after Google faced an employee backlash and a storm of negative press for its involvement in Maven, a project to supply cloud AI to the US Air Force for the analysis of drone imagery. The fallout prompted Google to announce a set of AI principles in the first place. The latter is the president of the conservative think tank The Heritage Foundation who has pushed an anti-LGBTQ agenda, and whose organization has spread misinformation about climate change, among other things.
The controversial announcement comes amid a series of scandals that Google and other big tech companies have faced related to the development and use of artificial intelligence. For example, the algorithms used for face recognition and filtering job applicants have been shown to exhibit racial bias.
Walker said on stage that Google already vets its AI projects carefully. He noted that the company has chosen not to supply face recognition technology over fears it could be misused. In another instance, he said the company had chosen to release a lip-reading AI algorithm despite worries that it might be used for surveillance, because it was judged that the potential benefits outweighed the risks.
At EmTech, Walker acknowledged that the council would need to consider emerging AI risks, and he identified misinformation and AI-powered video manipulation as particular concerns. “How do we detect this across our platforms? We are working very hard on this,” he said. “We are a search engine, not a truth engine.”
"
|
1,500 | 2,017 |
"China’s AI Awakening中国 人工智能 的崛起 | MIT Technology Review"
|
"https://www.technologyreview.com/s/609038/chinas-ai-awakening"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts China’s AI Awakening中国 人工智能 的崛起 By Will Knight archive page Lintao Zhang/GETTY On a tropical island that marks the southern tip of China, a computer program called Lengpudashi is playing one-on-one poker against a dozen people at once, and it’s absolutely crushing them. Lengpudashi, which means “cold poker master” in Mandarin, is using a new artificial-intelligence technique to outbet and outbluff its opponents in a two-player version of Texas hold ’em.
The venue for the tournament is a modern-looking technology park in Haikou, capital of the island of Hainan. Outside, modern high-rises loom over aging neighborhoods. Those gathered to play the machine include several poker champs, some well-known Chinese investors, entrepreneurs, and CEOs, and even the odd television celebrity. The games are being broadcast online, and millions are watching. The event symbolizes a growing sense of excitement and enthusiasm for artificial intelligence in China, but there’s also a problem. Lengpudashi wasn’t made in Hainan, Beijing, or Shanghai; it was built in Pittsburgh, U.S.A.
For many in China, this simply won’t do. The country is now embarking on an unprecedented effort to master artificial intelligence. Its government is planning to pour hundreds of billions of yuan (tens of billions of dollars) into the technology in coming years, and companies are investing heavily in nurturing and developing AI talent. If this country-wide effort succeeds—and there are many signs it will—China could emerge as a leading force in AI, improving the productivity of its industries and helping it become a leader in creating new businesses that leverage the technology. And if, as many believe, AI is the key to future growth, China’s prowess in the field will help fortify its position as the dominant economic power in the world.
Artificial intelligence may have been invented in the West, but you can see its future taking shape on the other side of the world.
Indeed, the country’s political and business leaders are betting that AI can jump-start its economy. In recent decades, a booming manufacturing sector—and market reforms encouraging foreign trade and investment—have helped bring hundreds of millions of people out of poverty, creating business empires and transforming Chinese society. But manufacturing growth is slowing, and the country is looking toward a future built around advanced technology (see “ China Is Building a Robot Army of Model Workers ”). Applying artificial intelligence may be the next step in this technology-fueled economic miracle. While many in the West fret about AI eliminating jobs and worsening wealth and income inequality, China seems to believe it can bring about precisely the opposite outcomes.
China’s AI push includes an extraordinary commitment from the government, which recently announced a sweeping vision for AI ascendancy.
The plan calls for homegrown AI to match that developed in the West within three years, for China’s researchers to be making “major breakthroughs” by 2025, and for Chinese AI to be the envy of the world by 2030.
There are good reasons to believe the country can make this vision real. In the early 2000s, the government said it wanted to build a high-speed rail network that would spur technological development and improve the country’s transportation system. This train network is now the most advanced in the world. “When the Chinese government announces a plan like this, it has significant implications for the country and the economy,” says Andrew Ng, a prominent AI expert who previously oversaw AI technology and strategy at China’s biggest online search company, Baidu. “It’s a very strong signal to everyone that things will happen.” The government’s call to action will accelerate what has already begun to happen. The country’s tech companies, led by the Internet giants Baidu, Alibaba, and Tencent, are hiring scores of AI experts, building new research centers, and investing in data centers that rival anything operated by Amazon, Google, or Microsoft. Money is also pouring into countless startups as Chinese entrepreneurs and investors spy a huge opportunity to harness AI in different industries.
China has some big advantages in AI. It has a wealth of talented engineers and scientists, for one. It also is rich in the data necessary to train AI systems. With fewer obstacles to data collection and use, China is amassing huge databases that don’t exist in other countries. The results can be seen in the growth of facial-recognition systems based on machine learning: they now identify workers at offices and customers in stores, and they authenticate users of mobile apps.
The nationwide interest in the poker tournament in Hainan reflects China’s appetite for the latest artificial-intelligence breakthroughs. Mastering even a two-player form of poker is a significant achievement for AI because, unlike many other games, poker requires players to act with limited information, and to sow uncertainty by behaving unpredictably. An optimal strategy therefore requires both careful and instinctive judgment, which are not easy qualities to give a machine. Lengpudashi impressively solved the problem by using a brilliant new game-theory algorithm, which could be very useful in many other scenarios, including financial trading and business negotiations. But Lengpudashi has received far less attention in its home country than it has in Hainan.
To explore China’s AI revolution and its implications, I’ve traveled to the heart of this boom and met with many of the key researchers, entrepreneurs, and executives. From China’s bustling capital to its factory-filled south, and from an ambitious new research center to a billion-dollar startup, one thing is clear: artificial intelligence may have been invented in the West, but you can see its future taking shape on the other side of the world.
看东方 Look east
My journey begins at MIT, one of the wellsprings of artificial intelligence. Kai-Fu Lee, a well-known Chinese AI expert and investor and one of the organizers of the Hainan tournament, has come to recruit students for a new AI institute that his company, Sinovation Ventures, is building in Beijing.
Lee gives a talk entirely in Mandarin to an auditorium packed with about 300 Chinese students. He is dressed impeccably, in an expensive-looking suit and dress shirt, and he speaks in a confident, soothing tone. The talk touches on the interwoven trends that have driven the recent rise in machine intelligence: more powerful computers, ingenious new algorithms, and huge quantities of data. He argues that China is perfectly poised to take advantage of these advances.
“The U.S. and Canada have the best AI researchers in the world, but China has hundreds of people who are good, and way more data,” he tells the audience. “AI is an area where you need to evolve the algorithm and the data together; a large amount of data makes a large amount of difference.” In 1998 Lee founded Microsoft’s Beijing research lab, which showcased the country’s exciting talent pool (see “ An Age of Ambition ”). Then, in 2005, he became the founding president of Google China. Lee is now famous for mentoring young entrepreneurs, and he has more than 50 million followers on the Chinese microblogging platform Sina Weibo.
In the audience are exactly the type of prized students who would normally flock to Silicon Valley. But many are clearly taken by Lee’s message of opportunities in China. The crowd hangs on his every word, and some people clamor for autographs afterward. “Today the U.S. has a technology leadership,” Lee tells me later. “But China has a tremendous amount of potential.”
To see what this potential looks like up close, I travel to Lee’s new institute, half a world away from MIT, in Beijing’s Haidian district. The streets outside are filled with people on colorful ride-sharing bikes. I pass lots of fashionable-looking young techies as well as people delivering breakfast—ordered via smartphone, no doubt—to busy workers. At the time of my visit, a major AI event is taking place a few hundred miles to the south in Wuzhen, a picturesque town of waterways.
AlphaGo , a program developed by researchers at the Alphabet subsidiary DeepMind, is playing the ancient board game Go against several top Chinese players, including the world’s number one, Ke Jie. And it’s soundly beating them.
AlphaGo’s victories in Wuzhen are followed closely in the Chinese capital. As I enter Sinovation’s institute, in fact, I notice a Go board on which engineers are testing out the moves made during some of the matches.
The location of the institute is well chosen. From the office windows, you can see the campuses of both Peking University and Tsinghua University, two of China’s top academic institutions. Sinovation provides machine-learning tools and data sets to train Chinese engineers, and it offers expertise for companies hoping to make use of AI. The institute has about 30 full-time employees so far, but the plan is to employ more than 100 by next year, and to train hundreds of AI experts each year through internships and boot camps. Right now, roughly 80 percent of the institute’s funding and projects are aimed at commercializing AI, while the rest is focused on more far-out technology research and startup incubation.
The goal isn’t to invent the next AlphaGo, though; it’s to upgrade thousands of companies across China using AI. Lee says many Chinese businesses, including the big state-owned enterprises, are technologically backward and ripe for overhaul, but they lack any AI expertise themselves. Needless to say, this presents an enormous opportunity.
人工智能无处不在 AI everywhere
Across the capital, in fact, I notice a remarkable amount of interest in artificial intelligence. In one restaurant, for instance, I find a machine that takes my picture and then supposedly uses AI to determine how healthy I am. This seems completely impossible, but the machine says I’m in great shape before suggesting that I have plenty to eat.
This fascination with the technology is reflected in Beijing’s feverish startup scene, which is already producing some formidable AI companies. One of them is SenseTime , which was founded in 2014 and is already among the world’s most valuable AI startups. Launched by researchers from the Chinese University of Hong Kong, SenseTime provides computer-vision technology to big Chinese companies, including the state-owned cellular provider China Mobile and the online retail giant JD.com. The company is now studying markets such as automotive systems. This July, SenseTime raised $410 million in funding, giving it a valuation of $1.5 billion. The entrance to SenseTime’s office features several large screens fitted with cameras. One can automatically add augmented-reality effects to a person’s face. Snapchat and Instagram offer similar gimmicks, but this one can also add effects in response to hand and body movements as well as smiles or winks.
中国曾经领先于世界 未来我们将再次成为领导者 China once led the world; in the future we will lead again
Qing Luan, director of SenseTime’s augmented-reality group, previously developed office apps for Microsoft in Redmond, Washington. She says she returned to China because the opportunities just seemed much bigger. “We were struggling to get a thousand users; then I talked with my friend who was working at a startup in China, and she said, ‘Oh, a million users is nothing—we get that in several days,’” she recalls.
Earlier this year SenseTime’s engineers developed a novel image-processing technique for automatically removing smog and rain from photographs, and another for tracking full-body motion using a single camera. Last year they were part of a team that won a prestigious international computer-vision award.
Xiaoou Tang, SenseTime’s founder and a professor at the Chinese University of Hong Kong, is wearing a suede jacket, slacks, and glasses, and he has an intense air about him. He seems fiercely proud of his company’s achievements. Tang explains that the company’s name comes from a phonetic transcription of the name of the Shang dynasty and of its first ruler, Tang. This era, beginning around 1600 BCE, was a critical age of development for the country. “China was leading the world then,” Tang says with a smile. “And in the future, we will lead again with technological innovations.”
智能制造 Manufacturing intelligence
In the United States and other Western nations, many large sectors, such as manufacturing and services, have been slow to invest in AI and change their business practices. In China, there appears to be a greater sense of urgency about adapting to the changing technology. Across just about every industry, Chinese companies are shrugging off their reputation for following Western businesses, and investing heavily in research and development. Ng, who previously led Baidu’s AI effort, says China’s business leaders understand better than most the need to embrace new trends. “The titans of industry [in China] have seen fortunes made and fortunes lost all within their lifetime,” he says. “When you see the tech trends shift, you had better move quickly, or someone else will beat you.”
Baidu anticipated the potential of artificial intelligence and sought to leverage it to reinvent its whole business. In 2014, the company created a lab dedicated to applying deep learning across its business, and in recent years, its researchers have made some significant advances. When Microsoft developed a system capable of better-than-human performance in speech recognition last year, for instance, few Western reporters realized that Baidu had done that a year earlier.
Following Baidu’s example, other Chinese tech companies are also looking to reinvent themselves with AI. The Internet leader Tencent , headquartered in the city of Shenzhen, is among them.
Shenzhen is nestled next to Hong Kong in southern China. Approaching by air, I see an armada of cargo ships moored in the South China Sea. In 1980, when Shenzhen was a small market town, it was designated China’s first Special Economic Zone, granting it unprecedented economic and regulatory freedoms. Manufacturing empires were built on the backs of migrant workers producing every imaginable product, and the population rose from 30,000 to more than 11 million. In recent years, the city has reflected China’s technological progress, and it is now home to global technology companies including the networking giant Huawei, the smartphone maker ZTE, and the electric-car company BYD.
The city’s main strip is lined with palm trees, gaudy hotels, and busy bars and restaurants. Tencent’s headquarters, in Nanshan district, spans several large buildings, and the entrance is as busy as a subway station. Stepping inside, out of stifling humidity, I begin a tour that touts Tencent’s history and achievements. And it shows that you don’t have to be first in a technology to have a big impact. In 2011, the company launched a simple messaging app, modeled on products already found in the U.S. This would evolve into WeChat, an innovative mobile platform that now supports social networking, news, games, and mobile payments. With 889 million monthly active users, WeChat now has an incredible grip on China’s Internet market.
Although Tencent created an AI lab only last year, it has hired scores of researchers and opened an outpost in Seattle. The company’s researchers have already copied AI innovations from the West, including DeepMind’s AlphaGo. Tencent’s AI lab is led by Tong Zhang, a quiet man with thin glasses and a round face, who previously worked at Baidu’s AI lab and before that was a professor at Rutgers University. Zhang speaks quietly, usually after a careful pause. He explains that AI will be crucial to Tencent’s plans to grow, especially outside China. “AI is important for the next phase,” he says. “At a certain stage, you just cannot copy things. You need your own capabilities.”
I ask him if Tencent might be planning some spectacular demonstrations of AI, something like AlphaGo or Lengpudashi. Tencent owns several very popular games, including the strategy title League of Legends, which is played by more than 100 million people every month. Like Go, it requires instinctive actions, and like poker, it involves playing without a clear picture of your opponents’ standing. But it also requires planning far ahead, so it would be a worthy game for AI researchers to tackle next. “Right now, we have a bunch of small projects—some are more adventurous,” is all Zhang will say.
Tencent’s AI goals may in fact be more practical. The company has an amazing amount of conversation data thanks to WeChat and another messaging platform, called QQ. This data might be used to train machine-learning systems to hold more meaningful conversations. Making advances in language could have countless practical applications, from better document analysis and search to much smarter personal assistants. “The challenge, and also the opportunity, will be in natural language,” Zhang says.
宏伟蓝图 Think big
It might be unnerving for Western nations to see a newcomer mastering an important technology, especially when the full potential of that technology remains uncertain. But it is wrong to view this story simply in terms of competition with the West.
A big problem facing both the U.S. and China is slowing economic growth. While AI may eliminate certain jobs, it also has the potential to greatly expand the economy and create wealth by making many industries far more efficient and productive. China has embraced that simple fact more eagerly and more completely than many Western nations. But there’s no reason why China’s AI-fueled economic progress should come at the expense of other countries, if those countries embrace the same technology just as keenly.
China might have unparalleled resources and enormous untapped potential, but the West has world-leading expertise and a strong research culture. Rather than worry about China’s progress, it would be wise for Western nations to focus on their existing strengths, investing heavily in research and education. The risk is missing out on an incredibly important technological shift. Yes, companies like Google and Facebook are making important strides in AI today, but this isn’t enough to reboot a whole economy. Despite the fanfare around AI, there are few economic signs—such as increased productivity—that most of the economy is taking advantage of the technology yet. Large segments of the economy beyond Silicon Valley, like medicine, service industries, and manufacturing, also need to sign on.
I can’t help thinking of the poker tournament in Hainan and reflecting that the rest of the world should take inspiration from Lengpudashi, the poker-playing AI. It’s time to follow China’s lead and go all in on artificial intelligence.
By Will Knight. This story was part of our November/December 2017 issue.
"
|
1501 | 2018 |
"Inside the Black Mirror World of Polygraph Job Screenings | WIRED"
|
"https://www.wired.com/story/inside-polygraph-job-screening-black-mirror"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Mark Harris Backchannel The Lie Generator: Inside the Black Mirror World of Polygraph Job Screenings Play/Pause Button Pause In the public sector, millions of polygraph tests are conducted annually, as a last-ditch effort to weed out unsuitable job candidates.
Christopher Talbot thought he would make a great police officer. He was 29 years old, fit, and had a clean background record. Talbot had military experience, including a tour of Iraq as a US Marine, and his commanding officer had written him a glowing recommendation. In 2014, armed with an associate degree in criminal justice, he felt ready to apply to become an officer with the New Haven Police Department, in his home state of Connecticut.
Talbot sailed through the department’s rigorous physical and mental tests, passing speed and agility trials and a written examination—but there was one final test. Like thousands of other law enforcement, fire, paramedic, and federal agencies across the country, the New Haven Police Department insists that each applicant take an assessment that has been rejected by almost every scientific authority: the polygraph test.
Commonly known as lie detectors, polygraphs are virtually unused in civilian life. They’re largely inadmissible in court, and it’s illegal for most private companies to use them. Over the past century, scientists have debunked the polygraph, proving again and again that the test can’t reliably distinguish truth from falsehood. At best, it is a roll of the dice; at worst, it’s a vessel for test administrators to project their own beliefs.
Yet Talbot’s test was no different from the millions of others conducted annually across the public sector, where the polygraph is commonly used as a last-ditch effort to weed out unsuitable candidates. Hiring managers will ask a range of questions about minor crimes, like marijuana use and vandalism, and major infractions, like kidnapping, child abuse, terrorism, and bestiality. Using a polygraph, these departments believe, increases the likelihood of obtaining facts that potential recruits might prefer not to reveal. And like hundreds of thousands of job candidates each year, Talbot was judged to have lied on the test. He failed.
New Haven allows failed applicants to plead their case in public before the Board of Police Commissioners. So in February 2014, Talbot sat down and recited his experiences with lie detectors. He had first applied to the Connecticut State Police and was failed for deception about occasional marijuana use as a minor. He then tried again with a police department in New Britain, where a polygraph test showed him lying about his criminal and sexual history.
This time he had failed the New Haven polygraph for something cryptically called “inconsistencies.” “[But] I’m not hiding anything,” he said at the hearing. “I was being straight and honest and I’ve never been in trouble with the law. I’m not lying about anything.”
Electronic lie detection is a peculiarly American obsession. No other country carries out anywhere near the estimated 2.5 million polygraph tests conducted in the US every year, a system that fuels a thriving $2 billion industry. A survey by the Bureau of Justice Statistics from 2007 found that around three-quarters of urban sheriff and police departments use polygraphs when hiring. Each test can cost $700 or more. Apply to become a police officer, trooper, firefighter, or paramedic today, and there is a good chance you will find yourself connected to a machine little changed since the 1950s, subject to the judgment of an examiner with just a few weeks’ pseudoscientific training.
Last week the technology burst into the news when Christine Blasey Ford, the woman who accuses Supreme Court nominee Brett Kavanaugh of sexually assaulting her as a teenager, said that she had taken a privately administered polygraph test to help bolster her account of the incident. “While not admissible in court, they’re used by various governmental agencies and many people believe in their abilities,” Douglas Wigdor, a former prosecutor who now represents victims in sexual harassment and sexual assault cases against high-profile men, told The Washington Post.
In one of the biggest surveys of law enforcement use of polygraph screening to date, WIRED filed more than 50 public-records requests with America’s largest federal, state, and local law enforcement agencies, seeking to discover how they use the polygraph during hiring and what safeguards they have in place to prevent abuse. The results were erratic—and discouraging. A quarter failed to respond at all, and nearly half said they had no responsive documents— meaning they do not track the age, gender, race, or disability status of those undergoing examination.
But the results obtained offer a peek inside an outdated system that continues to influence who gets hired—and who doesn’t—at some of the most important institutions in the United States. Inconsistent and skewed polygraph screening programs are undermining the very places that are designed to uphold the law—a failure that comes with personal costs.
Illustration by Alex Petrowsky
Lie detection has come a surprisingly short way from its inception a century ago. As a graduate student at Harvard in 1915, American psychologist and proto-feminist William Marston noticed that when his wife “got mad or excited” her blood pressure seemed to climb. He theorized that measuring her blood pressure while asking her questions could reveal deception by pinpointing the answers that caused a spike.
With the United States’ entry into World War I, Marston approached various government departments with the idea of developing his system as a tool to trap spies. He eventually secured a position in a medical support unit of the War Department (the precursor to the Department of Defense), where he carried out his initial research, often using women in university sororities as subjects.
After the war, Marston trained his focus on the legal system. In 1921, James Frye, a black man in Washington, DC, was accused of shooting a doctor. Frye confessed the crime to police, then a few days later recanted his confession. Frye’s lawyer brought in Marston to test his client’s honesty.
At the time, Marston’s device was a hack: a basic blood pressure monitor, administered with a medical cuff and stethoscope. After subjecting Frye to an examination, he concluded that his story of innocence was entirely truthful and agreed to testify on his behalf. However, the judge objected to the use of an unknown and unproven tool. An appeals court agreed, writing, “The thing from which [a] deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.” This became known as the Frye standard. Because polygraphs have never convinced the majority of scientists, the Frye standard has excluded them from most courtrooms for almost a century.
The experience only fueled Marston to make his method more sophisticated. He began working with a device, soon dubbed the polygraph, that measured blood pressure, breathing rate, and skin conductance—aka sweatiness. With some electronic and digital upgrades, these are essentially the same devices in operation today. Marston was media-savvy, touting polygraph technology in a public advertising campaign and, ultimately, even in comic books. While working as a consultant to DC Comics in 1940, Marston proposed a female superhero, Wonder Woman. She would be strong and smart, armed with bulletproof bracelets and an unbeatable lie detector—a Lasso of Truth that prevented anyone within its golden orbit from lying.
In reality, Marston’s design was far from perfect. Mainstream psychologists were concerned that the physiological responses the polygraph recorded could be caused by a host of things other than deception; the device might capture unrelated emotions, such as nervousness, arousal, anxiety, or fear. And once you have results, their meaning is open to interpretation. A polygraph only records raw data; it is up to an examiner to interpret the data and draw a conclusion about the subject’s honesty. One examiner might see a blood pressure peak as a sign of deception, another might dismiss it—and it is in those individual judgments that bias can sneak in.
But regardless of a polygraph’s accuracy, some organizations were beginning to find it useful. The polygraph’s scientific aura gave police a tool to intimidate suspects and recruiters a convenient way to shape their workforce. By the middle of the 20th century, polygraphs were being used by government agencies, factories, and banks to screen employees and investigate crimes, with little control or oversight. During the Cold War, federal polygraph tests were used to target left-wingers and homosexuals in government agencies.
Eventually, science began pushing back. In 1965, the US Committee on Government Operations evaluated the scientific evidence for polygraphy and concluded: “There is no lie detector, neither man nor machine. People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood.” The next year, the American Polygraph Association was formed to promote polygraphy and provide standards for examiners and technologies.
In 1988, after years of intense lobbying by unions, the Employee Polygraph Protection Act prohibited most private companies from using lie detector tests. But the unions did not get a clean sweep: The Act excluded federal, state, and local government employers, along with private companies whose business is moving cash or drugs.
The American Medical Association had come out against pre-employment screening in 1986, and in 1998 the Supreme Court also chipped in, saying that there was simply no scientific consensus that polygraph evidence is reliable. In 2004 the American Psychological Association said “the lie detector might be better called a fear detector,” noting there was virtually no research validating its use in job screening.
In 1999 the Department of Energy asked the National Academy of Sciences to review the scientific evidence of the validity and reliability of polygraph examinations, particularly as used for screening.
The resulting committee visited governmental polygraph units and reviewed almost a century of scientific papers and data. Its comprehensive report, which took four years to research and write, was damning. “Almost a century of research ... provides little basis for the expectation that a polygraph test could have extremely high accuracy,” wrote its authors. “Polygraph testing yields an unacceptable choice between too many loyal employees falsely judged deceptive and too many threats left undetected. Its accuracy in distinguishing actual or potential violators from innocent test takers is insufficient to justify reliance on its use in employee screening.”
It was the polygraph’s tendency to produce false positives that especially worried the Department of Energy. Imagine using a polygraph in an investigation like the one proposed by US senator Rand Paul to identify the author of a damaging anonymous New York Times op-ed earlier this month. If a polygraph is accurate 85 percent of the time, as some data suggests, an investigation of 100 White House senior officials might well identify the guilty individual, but at the cost of falsely accusing 15 others. Shift that accuracy to 65 percent, a figure many critics suggest , and you couldn’t even be certain your culprit would be among the 34 individuals the machine would accuse.
In 2005, the Department of Energy report concluded that “false positives … clearly affect the morale of those for whom such a result is reached. They risk interrupting the careers of valuable contributors to our nation’s defense [and] pose a very serious risk of depriving the United States of the vital services of individuals who may not be easily replaced.” Christopher Talbot would never become a New Haven police officer. Despite his heartfelt plea, the commissioners voted unanimously to remove him, and dozens of other candidates, from consideration.
Data obtained by WIRED showed vast differences in the outcomes of polygraph tests depending on the examiner each candidate faced. Consider another law enforcement agency that uses polygraphs in its employment process: the Washington State Patrol (WSP). Between late October 2011 and the end of April 2017, the WSP conducted 5,746 polygraph tests on potential recruits. This was the largest data set WIRED received, including copious data on both applicants and examiners. While one examiner failed less than 20 percent of candidates, others failed more than half the applicants they screened. And while two examiners disqualified just four people in more than 1,000 applicants for supposedly having sex with animals, one of their colleagues failed more than 10 times as many for bestiality—around one in 20 of all job seekers. The same examiner was also twice as likely as the rest of his peers to fail applicants on the grounds of child pornography.
There were no further hearings for these supposed crimes, and no jury to convince or judge to adjudicate, just scores of otherwise qualified applicants who would now not become Washington state troopers.
“We don’t know which, if any, of the examiners are accurate, but the disparity between them suggests the test is not being used in a way that is at all reliable,” says John Allen, a professor of psychology at the University of Arizona. And tests that are not reliable, Allen says, cannot be valid.
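One way to put a number on that disparity is to ask whether two examiners' failure rates could plausibly differ this much by chance. The sketch below runs a standard two-proportion z-test on hypothetical counts chosen to echo the "less than 20 percent" and "more than half" figures above; they are stand-ins, not WIRED's actual per-examiner data.

    import math

    # Hypothetical counts for two examiners screening similar applicant pools:
    # examiner A fails 190 of 1,000 candidates (19%), examiner B fails 520 of 1,000 (52%).
    fail_a, n_a = 190, 1000
    fail_b, n_b = 520, 1000

    p_a, p_b = fail_a / n_a, fail_b / n_b
    p_pool = (fail_a + fail_b) / (n_a + n_b)              # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se                                  # two-proportion z statistic
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value

    print(f"failure rates {p_a:.0%} vs {p_b:.0%}: z = {z:.1f}, p = {p_value:.1g}")

If applicants were assigned to examiners roughly at random, a gap that large would be vanishingly unlikely to arise by chance; what the test cannot say is which examiner, if either, is judging applicants correctly.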
Not only can a failing polygraph test cost you a job, it can also follow you around throughout your career. People who fail a polygraph are usually asked to report that fact if they reapply for law enforcement positions nationwide, and some departments can share polygraph results with other agencies in the same state. “The polygraph’s great flaw is the substantial number of false positives that it gives out, especially when you’re using it for large-scale screenings,” says former CIA director James Woolsey, in a previously unreleased interview from 2009. He believes that polygraphs do not accomplish much more than “seriously damaging a lot of people’s lives by having them fail the polygraph when they haven’t really done anything.”
This is not just a problem in Washington state. Around the US, most police departments use similar test formats and near-identical lists of questions, yet polygraph pass rates vary wildly. According to data supplied to WIRED, the toughest place in the country to take a polygraph could be Houston, whose police department passed just 32 percent of applicants in 2009. More recently, less than half (47 percent) of applicants passed the San Diego Police Department’s polygraph test in 2017.
Slightly more lenient is the Texas Department of Public Safety in Austin, which passed 60 percent in 2016. But if you fail there, you could try again down the road at the Dallas Police Department, where 77 percent of test-takers passed last year. And if the thought of all those wires and dials really gets you nervous, head to Baltimore, where more than 91 percent of applicants aced the polygraph in 2017. Despite similar tests and presumably similar applicants (especially in Texas), the departments' pass rates are wildly different—and these rates have varied little over multiple recent years.
But while polygraph examinations can be a lottery, history seems to show that the house can sometimes tip the odds.
Forty years ago, Harold Moon applied for a position as a correctional officer in Cook County, Illinois. After taking a polygraph test, Moon, who was black, was informed that he had failed and was rejected. Moon then brought a class action suit alleging discrimination in violation of the 1964 Civil Rights Act. His suit included analysis that there was only a one in 1,000 chance of the high polygraph failure rate among black applicants in Cook County between 1976 and 1978 being random. The Congressional Record in 1987 noted that Moon’s case was quietly settled, including an agreement with Cook County to eliminate the polygraph requirement.
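A figure like "one in 1,000" typically comes from asking how probable the observed number of failures among black applicants would be if they had faced the same failure rate as everyone else. The sketch below computes that kind of binomial tail probability; the counts are made up for illustration, since the filing's actual numbers are not given here, and with these figures the answer comes out on the order of one in a thousand.

    from math import comb

    # Hypothetical: 200 black applicants, 46 of whom failed the polygraph,
    # compared against an overall failure rate of 15% across all applicants.
    n, observed_failures, base_rate = 200, 46, 0.15

    # Probability of seeing at least that many failures by chance alone,
    # if black applicants truly failed at the overall rate (one-sided tail).
    p_tail = sum(comb(n, k) * base_rate**k * (1 - base_rate)**(n - k)
                 for k in range(observed_failures, n + 1))

    print(f"chance of {observed_failures}+ failures out of {n} at a "
          f"{base_rate:.0%} base rate: {p_tail:.1e}")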
That agreement would probably come as news to Donna Bibbs and two other African Americans who filed their own Civil Rights Act lawsuit against Cook County and its Sheriff’s Department in 2010. Bibbs and her fellow plaintiffs alleged that they were rejected for employment because of confessions given during their polygraph examinations that were never actually made.
“The Sheriff has not adopted any procedure to allow applicants to dispute the correctness of reports of admissions on the polygraph examination,” read their complaint. “A consequence of [this] is to vest the polygraph examiner with the final authority to reject applicants by making false reports of admissions.”
This case never made it to court either; the parties eventually reached a settlement in 2016. There is no indication that Cook County has since altered any of its policies, and, in fact, the Sheriff Department’s legal department told WIRED that it does not retain polygraph records in an aggregate format, rendering it unable to track systematic racial bias.
From WIRED’s public records requests, it appears that few jurisdictions retain these records, making it nearly impossible to systematically identify bias in their programs. That comes as no surprise to William Iacono, professor of psychology, psychiatry, neuroscience, and law at the University of Minnesota. “[Demographic data] sounds like something these organizations don’t want to have,” he says. “Because if they have it, and someone asks for it, then it might reveal something that they’re not comfortable with. The examiner doesn’t really use an algorithm to figure out if people are employment worthy. The examiner’s decision is probably based primarily on the human interaction that the two people have.”
Illustration by Alex Petrowsky
In a survey of Virginia’s state licensed polygraphers carried out by University of Virginia researcher Vera Wilde in 2011, roughly 20 percent of respondents said they thought certain groups (for example, black people) tended to fail polygraphs more than others. In a US Senate hearing in 1987, the attorney general for New York said, “The [polygraph] operator’s prejudices, moods and feelings can strongly influence and even determine the outcome of the test. For example, we have received complaints about a polygraph operator who consistently fails a much higher percentage of black subjects than white subjects.” A study carried out for the Department of Defense’s Polygraph Institute in 1990 showed that innocent black polygraph examinees were more likely to suffer false positives than innocent whites, under mock crime conditions. The National Academy of Sciences report in 2003 worried about possible race, age, and gender biases, but noted that little research had been done in the area. “We know that there’s a potential effect of gender [and] race, in terms of [the] mix of polygrapher and subject,” said NAS committee chairman Stephen Fienberg in 2009. “We know that context matters. And we know that there can be systematic biases.”
In 2007, a federal court observed that black applicants to the Arkansas State Police one year failed polygraphs at twice the rate of white applicants, although the numbers were too small to draw firm conclusions.
Dozens of equal opportunity complaints have been made against the FBI’s polygraph screening unit, accusing examiners of racial and other biases. Many of the complaints , released to Wilde under Freedom of Information laws, reveal applicants’ frustrations with an opaque and seemingly hostile process.
In 2008, one failed applicant wrote: “Black females are subjected to an entirely different level of scrutiny. I was given a polygraph test in Memphis and told that I failed, which was given by a male white. I requested a retake and was told that I passed the second polygraph test taken in Nashville, TN., which was given by a male black.” The FBI recorded her as saying its hiring criteria were “preset for hiring white males.” Both her application and her subsequent complaint were denied.
While undergoing a polygraph examination for a position at an FBI field office in New Haven in 2010, a black man was told that his recollection of using marijuana only a few times in high school was showing as deceptive, and that he should change his answer. Later, he wrote: “I was convinced that [the examiner] may have made an assumption, based on a stereotype about African Americans and drug use, and used that stereotype to profile me. I also realized that what [he] was asking of me would reflect negatively either way—if I didn’t change my answer I was being deceptive, and if I did change my answer I was lying on my application.” This catch-22 grievance was investigated by the Department of Justice’s Complaint Adjudication Office in 2012. That office noted that the FBI had another polygraph examiner review the case blind, with “no information concerning complainant’s race.” However, the FBI’s definition of a blind review demands some scrutiny. The second examiner wrote that “the only personal information available to him when conducting the review was complainant’s name, date of birth, social security number, gender, height, weight, and address.” The controversy around so-called redlining has shown repeatedly that race and zip code (and even names ) are closely linked. The man’s complaint was ultimately dismissed, as were all the other complaints obtained by Wilde.
The FBI rejected multiple requests from WIRED under the Freedom of Information Act for the demographics of applicants failing its polygraph screening tests, citing exclusions for law enforcement and national security data. However, the agency accidentally included relevant (but incomplete) data in a response to Wilde in 2012, not published until now.
The New Haven discrimination investigation included a memo stating the racial backgrounds of 130 FBI applicants who had failed preemployment polygraph tests between October 2008 and June 2010. (An additional 2,130 applicants who failed the polygraph were listed as “race unknown.”) While 12 percent of FBI staff are black, 19 percent of those failing its polygraph tests were black. Asians, Hispanics, Native Americans, and Pacific Islanders were also overrepresented in those failing the polygraph. And although 75 percent of FBI workers are white, they made up just 57 percent of applicants failing the polygraph tests.
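Put as ratios, the figures in that memo look like the sketch below. One caveat to keep in mind: the failure percentages describe applicants, while the baseline percentages describe current FBI staff, so the comparison is indicative rather than exact.

    # Each group's share of polygraph failures divided by its share of the FBI
    # workforce, using the percentages quoted from the memo above. Note that
    # applicants and current staff are different populations, so treat these
    # ratios as rough indicators only.
    shares = {
        "black": {"of_failures": 0.19, "of_workforce": 0.12},
        "white": {"of_failures": 0.57, "of_workforce": 0.75},
    }

    for group, s in shares.items():
        ratio = s["of_failures"] / s["of_workforce"]
        print(f"{group}: {ratio:.1f}x their workforce share among polygraph failures")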
New data collected by WIRED show that local police departments fare little better. The Metropolitan Nashville Police Department uses a computerized voice stress analyzer (CVSA) in place of a polygraph. This is a machine that supposedly detects deception by analyzing the low frequency audio information from answered questions about sensitive topics—some systems purport to detect “micro tremors” in deceptive answers. The technology is regarded with as much skepticism in the scientific community as polygraphs.
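For what it is worth, the measurement the vendors describe is easy to state in signal-processing terms: how much of a recording's energy sits in a narrow low-frequency band. The sketch below does exactly that on synthetic data. The 8 to 12 Hz band is an assumption drawn from vendor descriptions of "micro tremors," and nothing here bears on whether such a measurement has anything to do with deception.

    import numpy as np

    def band_energy_fraction(signal, sample_rate, low_hz=8.0, high_hz=12.0):
        """Fraction of total spectral energy in a low-frequency band.
        Illustrates the kind of measurement CVSA vendors describe; it says
        nothing about whether that measurement relates to deception."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        band = (freqs >= low_hz) & (freqs <= high_hz)
        return spectrum[band].sum() / spectrum.sum()

    # Synthetic example: one second of noise with a small 10 Hz component added.
    rate = 8000
    t = np.arange(rate) / rate
    fake_voice = np.random.randn(rate) + 0.5 * np.sin(2 * np.pi * 10 * t)
    print(f"energy in the 8-12 Hz band: {band_energy_fraction(fake_voice, rate):.1%}")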
Data supplied to WIRED by the Metro Nashville Police Department show that black applicants are selected at only about half the rate of white applicants, and that Hispanic and Native American officers are also significantly under-selected. Metro Nashville also selects younger candidates (up to age 39) at nearly twice the rate of older ones (40 or older). The department says it has no record of anyone making an age, gender, or race-related complaint about the CVSA test, and that no applicant is ever disqualified based on a CVSA result alone.
Although a voice stress analyzer test is only one part of Metro Nashville’s hiring process, there is some evidence that lie detector screening contributes more directly to lopsided hiring practices elsewhere. The Baltimore Police Department might have a relatively lenient polygraph screening system, passing the vast majority of those applying, but black applicants from 2013 through 2017 still failed their polygraph tests at higher rates than their white counterparts. In 2016 and 2017, they failed more than twice as often.
Discrimination can work the other way too, if departments are giving preferred candidates a second shot at passing a test. In a 2014 internal survey of the San Diego Police Department’s polygraph unit, supplied to WIRED, one police officer noted: “I feel the examiners do a good job … They always offer to re-test if we want to.” This calls into question whether all applicants are treated equally, and suggests that even some police officers suspect the test is not always accurate.
While the 2003 National Academy of Sciences report removed the last vestiges of polygraphy’s scientific credibility, researchers continue to track the technology’s real-world use. A 2017 study at Walden University in Minneapolis found no relationship between preemployment polygraph exams and officers’ propensity for future misconduct—a purported justification for administering polygraphs—nor any differences in attitude toward misconduct between officers who had or had not undergone such testing.
“The research we’ve done shows that there’s no higher level of misconduct among police departments that don’t give polygraphs to applicants [than among] ones that do,” says Daryl Turner, president of the Oregon Coalition of Police and Sheriffs, an association of law enforcement professionals that campaigned against a bill last year to introduce preemployment polygraph screening in the state. “We also feel [the polygraph test] is not a fair assessment of a person’s truthfulness or integrity.”
That is not a view shared by the American Polygraph Association, which certifies polygraph schools across the country. The majority of law enforcement agencies using polygraphy require examiners to be graduates of an APA-accredited polygraphy course, which costs around $6,000 and can take 10 weeks to complete.
The Washington State Patrol says that all of its examiners are APA certified, carry out polygraph tests consistently, and check results with colleagues. Despite that, data supplied to WIRED shows that the WSP hires black men at a lower rate than white men, and is more likely to fail older candidates during its polygraph screening.
WSP lieutenant John Matagi could not offer a good reason why its examiners failed applicants or uncovered crimes at different rates, except to say: “They’re humans making human decisions [and] as each polygraph examiner gets better at their skill, they will have different results.” He also brushes off concerns that older candidates fail more often. “One of the things we speculated is that people who have been alive longer have more opportunity to engage in activity that is disqualifying,” he says.
Other departments appear more concerned about possible inequities. The Dallas Police Department supplied WIRED with data on the gender and race of its applicants and their relative success in polygraph tests. It also compared each group to the majority demographic of applicants. (As in every department that gave data to WIRED, this was white males.) Dallas reported more equitable hiring outcomes, and less variation between different groups, than other departments that responded: Both genders and all racial groups passed at similar rates.
So if the polygraph is so unreliable and prone to bias, why does law enforcement continue to use it? WSP’s Matheson says that much of the value in polygraph testing comes during the pre-polygraph interview, where “it is not uncommon for us to learn information that disqualifies the candidate. That’s a big part of the value of what we hope to gain from the entire process.” In other words, the mere specter of being subjected to a lie-detector test can induce applicants to confess information they might have otherwise withheld.
The Phoenix Police Department told WIRED that between 2010 and 2017 it conducted 3,711 polygraph tests while recruiting sworn officers, civilian staff, interns, and volunteers. On 96 occasions, applicants admitted to crimes during or after their test, including two confessions of extortion—and four of murder. Although the polygraph cannot reliably detect truth or falsehood itself, its cultural reputation for omniscience can be used by an artful examiner to elicit confessions from nervous or suggestible subjects.
“The one thing that lie detection appears to be good for is tricking naïve people into thinking that the person who’s examining them knows more about what’s in their mind than they actually ever could,” says Jay Stanley of the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “It’s an intimidation device.” The polygraph industry does not always get its way. The ACLU and the Oregon Coalition of Police and Sheriffs succeeded in fighting off the attempt to legalize polygraph screening tests in Oregon last year, and evidence obtained using a polygraph remains inadmissible in most legal settings.
Even the New Haven Police Department, which continues to use the polygraph for screening recruits, has proposed shifting its standards around the test. Earlier this year, the New Haven mayor’s Police and Community Task Force noted that minority officers are underrepresented in the department and laid part of the blame on the polygraph screening process. “NHPD needs to create a policy prohibiting contact between the psychologist and recruitment staff and the person administering the polygraph test,” it wrote in a report.
According to the best available science, polygraph tests are no more reliable at extracting the truth than Wonder Woman’s magic lasso. But by the time a new installment of the superhero’s story is released, in November 2019, millions more polygraphs will have been administered across the nation.
"
|
1502 | 2021 |
"The US is unfairly targeting Chinese scientists over industrial spying, says report | MIT Technology Review"
|
"https://www.technologyreview.com/2021/09/21/1036103/china-chinese-americans-unfairly-targeted-economic-espionage-report"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The US is unfairly targeting Chinese scientists over industrial spying, says report A new study of economic espionage cases in the US says people of Chinese heritage are more likely to be charged with crimes—and less likely to be convicted.
By Eileen Guo
For years, civil rights groups have accused the US Department of Justice of racial profiling against scientists of Chinese descent. Today, a new report provides data that may quantify some of their claims.
The study , published by the Committee of 100, an association of prominent Chinese-American civic leaders, found that individuals of Chinese heritage were more likely than others to be charged under the Economic Espionage Act—and significantly less likely to be convicted.
“The basic question that this study tries to answer is whether Asian-Americans are treated differently with respect to suspicions of espionage,” said the report’s author, Andrew C. Kim, a lawyer and visiting scholar at the South Texas College of Law Houston. “The answer to that question is yes.”
The study, which looked at data from economic espionage cases brought by the US from 1996 to 2020, found that just under half of all defendants were accused of stealing secrets that would benefit China. This is far lower than the figures laid out by US officials to justify the Department of Justice’s flagship China Initiative.
According to the report, 46% of defendants charged under the Economic Espionage Act were accused of activity that would benefit Chinese people or entities, while 42% of defendants were accused of stealing secrets that would benefit American businesses.
The numbers directly contradict much of the Justice Department’s messaging around the China Initiative, which was launched in 2018 to combat economic espionage. The department has stated publicly—for example, in the first line of its home page for the China Initiative—that 80% of its economic espionage prosecutions allege conduct that would benefit the Chinese state, reflecting “theft on a scale so massive that it represents one of the largest transfers of wealth in human history,” as FBI director Christopher Wray described it in 2020.
Since 2019, the program has largely targeted academic researchers.
“Strong evidence of charges with less evidence”
The report was based on an analysis of public court filings, as well as Department of Justice press releases, for all Economic Espionage Act prosecutions between 1996 and 2020. It’s an update of an earlier analysis, published in the Cardozo Law Review, which covered the period up to 2016.
Charges for “theft of trade secrets” and “economic espionage” were both included, with the “economic espionage” charge requiring proof of a “nexus to foreign entity” and accompanied by higher penalties. (These two categories make up only a portion of the charges under the China Initiative; Kim briefly mentions “false statements and process crimes,” and people have also been charged with grant fraud and lying on visa applications, among other crimes.) Because demographic information and citizenship data is not included in court filings, Kim used names as proxies for race, and he used Google searches when names, like Lee and Park, were ethnically ambiguous. For citizenship, Kim noted that press releases often make prominent mention if a defendant is a “foreign national,” so he assumed that defendants were all citizens unless otherwise indicated.
The report found that 89% of defendants with Western names who were charged with economic espionage were eventually convicted, versus 74% of defendants with Asian names.
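Those conviction figures can be restated as rates of non-conviction, which is the comparison drawn in the next paragraph. A minimal arithmetic sketch using only the percentages quoted above (the report's underlying case counts are not reproduced here):

    # Implied non-conviction (acquittal or dismissal) rates, from the quoted figures.
    convicted = {"Western names": 0.89, "Asian names": 0.74}

    for group, rate in convicted.items():
        print(f"{group}: {1 - rate:.0%} of cases ended without a conviction")

    ratio = (1 - convicted["Asian names"]) / (1 - convicted["Western names"])
    print(f"difference: roughly {ratio:.1f}x")  # about 2.4x with these figures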
Mike German, a former FBI special agent who is now a fellow at the Brennan Center for Justice at New York University, sees the higher acquittal or dismissal rates as “strong evidence that the Justice Department is levying charges with less evidence, perhaps counting on the bias they’re fomenting with their anti-China rhetoric to get judges and juries to convict anyway.” The report also found that while the number of economic espionage cases decreased under the Trump administration, Chinese and Chinese-American defendants were charged at higher rates. The number of cases brought by the Department of Justice fell from 105 during the Obama administration to 52 between 2016 and 2020. But more than half the defendants in cases brought under the Trump administration had Chinese names. This follows a trend that started in 2009, when Chinese names began overtaking Western names among those charged.
And despite the supposed focus on academics, only 3% of economic espionage cases in the study involved university workers. The top industries actually targeted for theft were technology (23%), manufacturing (21%), defense/energy (15%), and business (10%). “This study suggests that the government is looking for spies in the place least likely to find them,” it says.
“Risks painting the whole race”
In what Kim described as one of the most important findings of the study, analysis found that the Department of Justice was more likely to publicize EEA cases that involved defendants with Asian names than EEA cases brought against defendants with Western names: 51% of Western defendants have DOJ press releases, compared with 80% of all Asian defendants and 83% of defendants with Chinese heritage. As Kim wrote, “Publicizing alleged crimes by a racial minority more than similar crimes committed by others risks painting the whole race as more prone to that criminal conduct than others.”
There are limitations in the study, as both the report itself and other researchers and close observers of the program have indicated. It does not cover all the potential charges under the China Initiative, and crucially, it also “does not help with the question of scale and scope of the threat related to China,” wrote Margaret Lewis, a law professor at Seton Hall Law School who has written about how connections to China have been criminalized, in an official response to the research.
She asks whether the low charge-to-conviction ratio for Chinese individuals and those of Chinese heritage reflects “a judicious culling of well-founded investigations … or an expansive dragnet that is creating enhanced suspicion at least in part because of people’s connectivity to the PRC?”
The study doesn’t tell us. But this represents a limitation in the data that’s available, says Alex Nowrasteh, the director of immigration studies and the Center for Trade Policy Studies at the Cato Institute, who earlier this year published his own research on espionage convictions.
“The problem with analyzing economic espionage is we don’t know the total universe of people who have [conducted] or have attempted economic espionage,” he says—only those that were caught.
Kim says he suspects that racial bias, even if implicit, was at play: “The sheer magnitude of these disparities raises concerns that racial factors have caused our government to punish people of certain races more severely than others who committed similar crimes. Implicit biases need not be conscious or even ‘racist’ to produce unfair racial disparities.” Not everyone is convinced. “The government argues that disproportionate effects do not establish discriminatory intent—which is true—and, accordingly, it is not engaging in racial profiling,” wrote Lewis.
But, she added, “bias is not required to conclude that the China Initiative is fatally flawed.” A chilling effect—and signs of a reversal? Asian-American civil rights groups have been raising concerns about the chilling effects of the FBI’s racial profiling of Chinese-American scientists for years. Numerous academic groups , scientific associations , and congressional representatives have joined calls to investigate—or rein in—the FBI and DOJ’s allegedly discriminatory investigations.
The tide may be turning. On September 9, a federal judge acquitted Anming Hu, a Chinese-Canadian academic and former professor at the University of Tennessee, of all counts of defrauding NASA on grant applications, three months after a jury was unable to reach a verdict on his case.
In late July, the Department of Justice dropped the prosecutions of five other academic researchers in a single day; they had been accused of lying about connections to the Chinese military.
“The more we learn about the China Initiative, the more problematic it looks,” Lewis told MIT Technology Review in an email. “We need to keep making the case to the government that there is a better way. No one is saying protecting research security while mitigating concerns about bias will be easy, but it is critical.” The Department of Justice has not yet responded to a request for comment.
by Eileen Guo
"
|
1,503 | 2,021 |
"Washington's campaign against Chinese-linked academics is under pressure as another case collapses | MIT Technology Review"
|
"https://www.technologyreview.com/2021/06/27/1027350/anming-hu-china-initiative-research-espionage-spying"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The FBI accused him of spying for China. It ruined his life.
Anming Hu’s case was meant to be a victory for the “China Initiative,” a US effort to root out spies. Instead, it turned more attention on the FBI.
By Karen Hao and Eileen Guo In April 2018, Anming Hu, a Chinese-Canadian associate professor at the University of Tennessee, received an unexpected visit from the FBI.
The agents wanted to know whether he’d been involved in a Chinese government “talent program,” which offered overseas researchers incentives to bring their work back to Chinese universities.
Not too long ago, American universities encouraged their academics to build ties with Chinese institutions by means including the talent programs, but the US government is now suspicious of these programs, seeing them as a spy recruitment tool that the Chinese government could use to steal sensitive technology. Hu told the agents he had not participated in any talent programs, and the agents left shortly afterwards. But a little less than two years later, they showed up again—this time to arrest him. They alleged that he had defrauded NASA, and therefore the US government, by purposely hiding his affiliations with a Chinese university.
Hu is one of dozens of scientists, mostly of Chinese descent, whom the US government has investigated over the years amid allegations that they were hiding their connections to China. Since 2018, such cases have been investigated under the “China Initiative,” a controversial multi-agency program that aims to detect, prosecute, and stop economic espionage linked to Beijing.
While the US government says the program is necessary to combat what FBI director Christopher Wray calls “the greatest long-term threat” to the United States, a growing group of activists, legal scholars, and national security experts has argued that it rests on shaky foundations, is limited in its efficacy, and fuels wide-scale racial profiling that has profoundly damaged the US scientific community.
In mid-June, Hu’s case became the first China Initiative case to go to trial and put both sides of the argument to the test. The result? A mistrial when the 12 jurors deadlocked after only a day of deliberations.
The testimony of the primary FBI agent on the case, Kujtim Sadiku, likely did not help: he admitted in court that his probe into Hu’s activities—which involved putting the researcher and his son under surveillance for 21 months, adding Hu to a no-fly list, and seizing his computer and phone—was based on false information, according to the Knoxville News Sentinel.
Sadiku also admitted to spreading false information about Hu that damaged his reputation in the international research community and giving UT false information implicating him as an operative for the Chinese military, which led the university to fire him. Sadiku never followed up with the university to correct the record.
By the time the case went to trial, the charges against Hu did not involve economic espionage—the original reason for the investigation—but, rather, six charges of wire fraud and making false statements. The latter allegations hinged on a single administrative form at UT that the FBI says Hu filled out incorrectly to intentionally conceal a relationship with the Beijing University of Technology. Hu and his lawyer maintained that Hu filled it out according to university rules and disclosed his relationship on multiple other forms and in email exchanges with UT and a NASA contractor.
The Department of Justice has said that it is still weighing its next steps, which could include moving to retry the case.
Regardless of whether the government continues to pursue Hu, many concerned academics feel that the case has confirmed the China Initiative’s shifting goalposts—without providing clearer answers on how to avoid suspicion themselves.
“It’s just shocking,” says Yasheng Huang , a business school professor at MIT, who brought together a group of academics, known as the Asian American Scholar Forum, to discuss these issues after his colleague Gang Chen, the head of MIT’s mechanical engineering department, was arrested for grant fraud under the China Initiative. “The standard is incredibly low to put a guy in jail and charge him with serious crimes, destroying his life and destroying his career.” “This trial has literally put the federal government’s China Initiative on trial,” said Jinliang Cai, the chairman of the nonprofit United Chinese Americans, in a press statement. “I believe the government must fully investigate the whole range of disturbing problems by our own law enforcement agencies revealed throughout this trial.” A “whole-of-society” threat The China Initiative launched in November 2018, amid the Trump administration’s increasing economic protectionism and its trade war with China. Economic espionage cases against China had already been accelerating under the Obama administration , with a focus on prosecuting theft of trade secrets. But the new program sought to put even more resources behind the issue. It was the first focused government program to go after espionage by a specific country.
The Department of Justice said China posed a unique threat to US economic and technological superiority, and Attorney General William Barr later asserted that the country was engaged in “an aggressive, orchestrated, whole-of-government (indeed, whole-of-society) campaign … to surpass the United States as the world’s preeminent technological superpower.” China, the argument went, relies on “ non-traditional collectors ”—including businesspeople, academics, students, tourists, or anyone with what the DOJ terms a “ nexus to China ”—to gather up little bits of intelligence. The US needed to respond, Barr said, with its own coordinated government approach.
Scientists and researchers born in China, especially those affiliated with talent programs, became one of the main targets.
But critics say that while the threat of economic espionage from China is real, the US government response has been disproportionate and ineffective. In 2014, former US defense secretary Robert Gates named “a dozen or 15 countries” that steal US technology, including NATO ally France. And an analysis by the Cato Institute of spies convicted from 1990 to 2019 found that, excluding violations related to the Arms Export Control Act, commercial espionage benefiting China accounted for only 27.5% of convictions. Domestic espionage benefiting US entities, meanwhile, accounted for 30.8%, and commercial espionage benefiting Iran accounted for 19.4%.
“It’s perfectly appropriate to have an economic espionage program, because that is a persistent problem, and China's certainly one aggressive actor in that realm of criminal activity,” says Michael German, a former FBI agent turned whistleblower and a fellow at the Brennan Center for Justice at New York University. But having a specific program on China, he says, takes resources away from investigating other legitimate threats.
That’s not the only concern about the program, however.
Low-level mistakes vs. espionage In an analysis of DOJ press releases covering 79 cases brought as a result of the initiative, the civil rights organization Asian Americans Advancing Justice | AAJC found that 48% of them resulted in no actual charges of economic espionage, trade theft, or espionage related crimes, but rather charges like wire fraud, tax issues, or other infractions that amount to disclosure mistakes on administrative forms, as in the case of Hu.
“The government can have it one way or another,” says the Brennan Center’s German. “Either it’s a huge problem involving thousands of people and they’re just not very good at catching them, or it’s a manageable problem that should be focused on actual state-sponsored economic espionage, regardless of what country it’s coming from.”
Advocates say the China Initiative has become an excuse for racial profiling, part of a long US history of treating Asian-Americans as untrustworthy foreigners. In 1882 the Chinese Exclusion Act barred Chinese immigrants from entering the country for 10 years, and during World War II the federal government detained hundreds of thousands of innocent Japanese-Americans. Under the Clinton and Obama administrations there was a string of failed espionage cases against Chinese-American scientists, including Wen Ho Lee of Los Alamos National Laboratory, Temple University’s Xi Xiaoxing , and the National Weather Service’s Sherry Chen.
“The China Initiative is premised explicitly on the theory that there is an ethnic affinity ... on the part of people of Chinese descent—even if they are United States citizens or Canadian citizens—to act in violation of American law for the benefit of Beijing,” says Frank Wu, the president of Queens College at the City University of New York. Under this system, he says, “ordinary behaviors such as scientific cooperation or visiting your mother [in China] suddenly become suspicious.” It has also had a chilling effect on Chinese-American scientists, says MIT’s Huang. During his regular meetings with the Asian American Scholar Forum, he says, others have expressed fear of being arrested, fear of losing their funding, and fear about the way they might be perceived by their non-Asian colleagues. Young PhD students are no longer looking for professorships in the US, he says, while established scientists are now searching for international options. A number returned to China to prestigious posts —an outcome the China Initiative had hoped to avoid—after their careers in the United States were destroyed.
“It’s pretty bad and pretty pervasive. We’re seeing this climate of fear engulfing Chinese-American scientists,” Huang says. “The US is losing the most talented people to other countries because of the China Initiative. That’s bad for science. That’s bad for America.” The Hu case played out To activists and civil society researchers who’ve been following the China Initiative, Hu’s case is anything but surprising.
Hu, a Chinese-born Canadian citizen, is a celebrated researcher in nanotechnology. In 2013, the University of Tennessee recruited him to teach and continue his research. Hu disclosed on multiple occasions that he’d worked part time teaching graduate students and researchers at the Beijing University of Technology, according to the Knoxville News Sentinel.
"The China Initiative is premised explicitly on the theory that there is an ethnic affinity on the part of people of Chinese descent to act in violation of American law for the benefit of Beijing." None of this raised any issues at the time. When Hu began collaborating with NASA, which is legally barred from funding any research that involves “participation, collaboration, or coordination” with “China or a Chinese-owned corporation,” UT administrators assured both him and the government agency that this part-time work didn’t violate the restriction. The law is meant to apply to NASA, not to its research collaborators.
In 2018, however, the FBI identified Hu as a potential spy. During his court testimony, Agent Sadiku said he had found and made a “rough translation” via Google of a Chinese-language news release and flier that suggested Hu had once received a short-term contract from the Thousand Talents Program. That was evidence enough for Sadiku to open up a formal probe.
During Sadiku’s first visit to Hu’s office, Hu says, the agent tried to get him to admit to involvement in a talent program.
“They said, ‘You are so smart. You should be in the Thousand Talents Program,’” he recounted during his trial. “I say, ‘I’m not that smart.’” Sadiku also tried to persuade him to become a spy for the US government, using his Beijing University work as a cover. Hu declined via email after Sadiku’s visit. After this, Sadiku doubled down on his investigation, placing Hu and his son—then a freshman at UT—under surveillance.
But after nearly two years, Sadiku turned away from the espionage claims and instead started building the fraud case that Hu ended up being charged with. The evidence rested on a form that the university requires academics to fill out, disclosing any outside work that earns them more than $10,000. Hu did not disclose his part-time job because it earned him less than $2,000. Sadiku says this is evidence that Hu intentionally hid his China-affiliated work to defraud NASA. The jury, however, could not decide, and the deadlock triggered a mistrial.
FBI under pressure Observers say the details of the case echo those of others brought as part of the China Initiative: a spy probe on an ethnically Chinese researcher is opened with little evidence, and the charges are later changed when no sign of economic espionage can be found.
According to German, the former FBI agent, this is due to the pressure “on FBI agents across the country, every FBI field office, [and] every US Attorney’s office to develop cases to fit the framing, because they have to prove statistical accomplishments.” On Thursday, June 17, shortly after news of the mistrial, members of the House Judiciary Committee wrote to the inspector general of the Department of Justice requesting that the DOJ investigate whether there was adequate evidence unrelated to race or ethnicity for the FBI to open the case, whether the bureau had used false information and made false statements, and whether the China Initiative resulted in “untoward pressure” to engage in ethnic and racial profiling.
This follows increasing demands to investigate whether the initiative has led to such profiling—and calls to end that program altogether.
“The DOJ doesn’t need a special initiative targeting China to go after spies,” says Alex Nowrasteh, director of immigration studies at the Cato Institute’s Center for Trade Policy Studies. “They should be able to use their normal methods and procedures.” Hu’s trial suggests “that the scope of Chinese espionage is probably a lot less than people think,” he adds. “If there was a lot more of it, you’d think it'd be a little bit easier to find, and they wouldn’t have to make up cases.” As for Hu, his nightmare is far from over.
He is still under house arrest, pending a decision from either the Department of Justice to renew the case or drop it, or the judge to dismiss the government’s charges entirely. He has been jobless since his US work visa expired, but he has also not been granted leave from house arrest so he can return to Canada to renew it. Doing so could put him in the crosshairs of Immigration and Customs Enforcement, according to his lawyer.
All he can do is wait for the US government to make its next move.
Correction: A previous version of this article misstated that 48% of cases under the China Initiative resulted in no actual charges of economic espionage. In fact, 48% of cases did not involve charges of economic espionage, trade theft, or espionage-related crimes, according to analysis by the non-profit Asian Americans Advancing Justice | AAJC.
by Karen Hao & Eileen Guo
"
|
1,504 | 2,020 |
"The FBI’s decades-long fight against industrial espionage hasn’t really worked | MIT Technology Review"
|
"https://www.technologyreview.com/2020/08/19/1006361/china-corporate-espionage-fbi-dupont-monsanto-hailong-mo-dbn"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The FBI’s decades-long fight against industrial espionage hasn’t really worked By Mara Hvistendahl archive page Ms Tech | Getty In the summer of 2011, Mark Betten drove north from Des Moines through the heart of Iowa farm country. It was a blistering hot Thursday just before the Fourth of July, the sort of day when no one wants to be in a car on the interstate. His trip to Johnston, a Des Moines suburb, was not intended to be particularly momentous. But what Betten learned there would end up consuming his life for the next few years.
He soon arrived at the headquarters of the seed company DuPont Pioneer, which occupies a low-slung building plastered with gigantic images of corn. Inside, Betten met with corporate security officers. The Federal Bureau of Investigation was shifting its focus to economic espionage cases involving China. A 14-year veteran of the bureau with a close-cropped haircut and a gravelly voice, Betten was at Pioneer for what the FBI calls a routine liaison visit—a chance to exchange ideas and trawl for tips.
A dizzying array of technologies was now portrayed as critical to national security: wind turbines, paint whiteners, corn seed. The bureau worked closely with companies to identify the secrets targeted by Chinese competitors, and the relationship with DuPont, Pioneer’s parent company, was particularly cozy. DuPont was already a giant corporation: it would make a profit of over $4 billion in 2011, on revenue of nearly 10 times as much. By then, the US Department of Justice (DOJ) had already brought at least four federal trade-secret theft cases on behalf of DuPont subsidiaries and affiliates. In the years that followed, that focus would only intensify.
At the meeting, Betten explained the bureau’s efforts to combat economic espionage and tackle cybersecurity threats. A Pioneer security officer mentioned that a few months earlier, a contract farmer in a remote part of Iowa had found a Chinese national crouched on his knees in a field where the company grew genetically modified inbred seed. Another man waited nearby in a parked car. When the farmer asked what they were doing, the kneeling man stammered out an excuse; then he bolted for the car and jumped in the passenger seat as the car sped away. Pioneer security later used the license plate to trace the rental car to a man with a Florida driver’s license. His name was Hailong Mo.
Back at the FBI field office, Betten soon learned of two other suspicious incidents involving Hailong Mo and other seed companies operating in Iowa—including Monsanto, another agricultural giant, which would earn over $2 billion in profits that year. Agricultural technology is among the sectors designated for strategic development in China, and the US Office of the National Counterintelligence Executive, which advises the president on intelligence matters related to national security, had identified it as a frequent target of industrial spies. Betten ordered surveillance on Mo.
It turned out that Mo, who also goes by the first name Robert, worked for DBN, an agricultural company based in Beijing. DBN competed with Pioneer and Monsanto in the Chinese market. The Chinese government didn’t yet allow companies to sell genetically modified corn of the sort that had been growing in the Iowa field, but most experts expected the policy to change, and DBN, it seemed, was trying to prepare. Over the coming years, Betten followed closely as Mo and his colleagues at DBN executed an elaborate, if occasionally comical, plot to steal seeds from Monsanto and Pioneer. They posed as farmers, shipped boxes of seed using FedEx, and even attempted to smuggle the seed back to China in Orville Redenbacher microwave popcorn bags.
But the FBI’s reaction was equally outsized. Betten came to oversee a vast dragnet involving dozens of agents across the United States. To catch Mo stealing the trade secrets of the two agricultural giants, the FBI pulled out the tools that might be used against drug cartels or organized crime: car chases, airport busts, and aerial surveillance. The difference was that the target was a Chinese-born scientist with two PhDs—a new sort of criminal, and one that the US would increasingly take aim at over the years to come.
The Justice Department is waging war on Chinese industrial espionage. In 2018, the department launched the China Initiative, an effort to crack down on intellectual-property theft and other crimes. Overseen by FBI and DOJ officials, as well as a group of federal prosecutors, it takes a “whole of government” approach that involves coordinating ideas across multiple agencies. Though ostensibly about upholding the law, the China Initiative has also become one of the US’s principal tools in its brewing technological standoff with China. And although it is partly the creation of Trump administration hawks, the groundwork was laid years ago, under the Obama-era Justice Department.
The FBI now has over 2,000 active investigations involving China, spanning all 56 field offices. “We’re talking about everything from Fortune 100 companies to Silicon Valley startups, from government and academia to high tech and even agriculture,” FBI director Christopher Wray said at a conference at a Washington, DC, think tank in February. Even the pandemic has not slowed the effort. At a virtual event hosted by another think tank in July, Wray said that the FBI opens a new counterintelligence investigation involving China every 10 hours.
Unlike those of bureau targets like election interference or far-right domestic terrorism, China-related investigations have full support from the highest levels of the Justice Department.
In recent months, federal prosecutors have charged, in absentia, four members of China’s People’s Liberation Army (PLA) with hacking into the servers of the credit-rating agency Equifax and stealing data on millions of Americans. They have also unveiled charges of trade-secret theft against Huawei, the telecommunications giant whose ambitions to dominate the emerging 5G mobile telephony industry are seen by some as a threat to US national security interests. And they have charged US-based scientists at academic labs with lying about grants from Chinese institutions. (Former Harvard University chemistry chair Charles Lieber is the most prominent researcher to have been indicted.) But critics question whether the DOJ’s China push achieves its goals of deterring crime and protecting innovation in America. In some cases, the drive to go after technology threats has resulted in hasty prosecutions, with charges later dropped or downgraded. “Everything looks like a nail when you’ve got a hammer,” says Margaret Lewis, an expert on China and Taiwan at Seton Hall Law School in Newark, New Jersey. “DOJ is sweeping together everything from PLA hackers to misreporting on grants into one big China threat.” The US focus on industrial espionage didn’t begin with China. It goes back to the end of the Cold War, when the dissolution of the Soviet Union left a vacuum in the intelligence agencies. As agents left in droves, intelligence leaders sought out a new purpose. Industrial espionage was a natural fit. The increasing reach of the internet made technologies much easier to steal.
At the time, most US companies dealt with trade-secret theft through civil lawsuits, with one company suing another—and assuming the attendant legal costs. No international treaty or agreement addressed industrial espionage. In 1996, President Bill Clinton signed the Economic Espionage Act into law, making trade-secret theft a federal crime and marking attacks on American business as a national security threat. The act’s stiffest penalties against individuals—fines of up to $5 million and up to 15 years in prison—are reserved for thefts that can be connected to a foreign government. At the time that meant France and Israel. Thefts by Chinese companies were not yet a significant concern.
Federal prosecutors brought only a handful of cases in those early years, and after the terrorist attacks of September 11, 2001, industrial espionage took a back seat to counterterrorism on the FBI’s list of priorities. Only when investigations picked up again in the late 2000s did the focus shift to China. The country’s ambitious plans to build up strategic technology industries were provoking alarm in Washington. As the two countries moved into a more adversarial relationship, the determination to focus on trade-secret theft only increased.
In 2009, the FBI created a dedicated Economic Espionage Unit. In the years that followed, the bureau spearheaded an information blitz—holding seminars for companies and universities and printing brochures with titles like “Agricultural Economic Espionage: A Growing Threat.” At moments, the FBI has even staged international sting operations to defend US companies’ technology. In 2012, for example, an informant lured two Chinese entrepreneurs to the United States. The entrepreneurs had targeted the trade secrets of Pittsburgh Corning, which makes glass-block insulation. In a fictionalized film that the FBI produced about the operation, The Company Man , a gong sounds when the Chinese villains enter the frame. Later, the hero’s wife intones, “Just say no to the Chinese!” Such clunky messaging continues today. Though FBI officials have repeatedly said their investigations are not predicated on ethnicity, they have distributed pamphlets warning that “foreign adversaries” might try to entice US-based scholars through “appeals to ethnicity or nationality.” That could be counterproductive, argues Lewis. “You risk alienating people,” she says, adding that ethnic Chinese scientists who have not done anything wrong might conclude, “I’m not actually welcome here, so I will go back.” China-related investigations are costly: they require translators and analysts and often stretch over years. By one estimate, over 70 agents worked on Mo’s case. But it’s not clear that they have a strong deterrent effect. Take the decision to charge Chinese army officers in absentia for hacking into Equifax’s servers. “If China is doing a cost-benefit calculus, and the cost-benefit calculus is ‘We can steal 145 million records and it means that four of our people can’t travel outside China,’ that’s a pretty good trade-off for China,” says Jack Goldsmith, who headed the Justice Department’s Office of Legal Counsel in the George W. Bush administration.
Economic espionage cases are intended to protect American innovators from unfair foreign competition. But in prosecuting them, the US government has defended the interests of corporate giants whose practices are often themselves disturbingly anticompetitive. Farmers looking to buy seed could once choose from among dozens of small seed companies. That number has dwindled year by year, as DuPont Pioneer and Monsanto have bought up their competitors. Often the new owners have kept the small seed companies’ names, so that many farmers do not even realize that their preferred brand has been acquired.
Indeed, at the time that Betten opened an investigation into Mo, the Justice Department’s Antitrust Division was spearheading a separate inquiry into Monsanto for anticompetitive practices. Midway through the corn theft investigation, the DOJ abandoned the probe for reasons that are still unclear. It dropped several other agricultural investigations around the same time.
Since then, seed companies have grown even larger. In 2016, the German conglomerate Bayer made a bid to acquire Monsanto; it concluded the purchase in 2018. In 2017, DuPont Pioneer merged with Dow Chemical, forming a conglomerate with about $90 billion in annual revenue. It subsequently spun out the agrichemical division as Corteva. Together with Syngenta and BASF, two other agricultural giants, Bayer and Corteva now dominate seed corn sales in the US and indeed in much of the world.
All of this means “higher prices and less innovation for farmers and consumers,” says Austin Frerick, an antitrust researcher at the Yale School of Management who ran for Congress in Iowa in 2018 on a platform that included opposing Bayer’s acquisition of Monsanto. (He dropped out at the primary stage, citing challenges raising money.) “The price of corn seed has more than doubled in the past decade, and I promise you that seed didn’t get twice as good,” he says. “And as study after study in the economic literature demonstrates, innovation declines as industries get consolidated, because they lose the incentive to compete.” On December 11, 2013, agents streamed into Mo’s house in Florida at the crack of dawn, arrested him, and led him out to a government car. It had taken years of work by dozens of agents in five states to build a case against him. In 2016, after two years of pretrial proceedings, Mo pleaded guilty to conspiring to steal trade secrets. Later that year, as Betten watched from a bench in a Des Moines courtroom, Mo was sentenced to three years in prison.
But the government did not manage to apprehend five other people indicted in the case—Mo’s colleagues at DBN, who today remain on the FBI’s Most Wanted list. DBN, meanwhile, suffered no real consequences. Its stock took a dip after Mo’s arrest but later recovered. By the time the case had played out, Bayer had completed its acquisition of Monsanto. The FBI and Justice Department had worked hard to protect the company’s intellectual property in the name of safeguarding American innovation. But now that company was no longer even American.
This story was adapted in part from The Scientist and the Spy , published by Riverhead Books.
by Mara Hvistendahl This story was part of our September/October 2020 issue.
"
|
1,505 | 2,018 |
"Amazon's Rekognition misidentified 28 members of Congress as criminals | VentureBeat"
|
"https://venturebeat.com/2018/07/26/amazons-rekognition-misidentified-28-members-of-congress-as-criminals"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon’s Rekognition misidentified 28 members of Congress as criminals Share on Facebook Share on X Share on LinkedIn Amazon's facial recognition service, Amazon Rekognition.
Facial recognition algorithms are improving by leaps and bounds each year, but they’re far from perfect. Case in point: The American Civil Liberties Union said that in a test of Amazon’s Rekognition, the service erroneously identified 28 members of Congress as criminals.
The ACLU supplied Rekognition with 25,000 mugshots from a “public source” and had Amazon’s service compare them to official photos of Congressional members. Among the representatives misidentified were six in the Congressional Black Caucus, including civil rights activist Rep. John Lewis (D-Georgia), and 11 of the 28 false matches — roughly 39 percent — were people of color, who make up only 20 percent of current members of Congress.
A trio of Democratic Congress members responded to the test in an open letter to Amazon CEO Jeff Bezos.
“While facial recognition services might provide a valuable law enforcement tool, the efficacy and impact of the technology are not yet fully understood,” the letter read. “In particular, serious concerns have been raised about the dangers facial recognition can pose to privacy and civil rights, especially when it is used as a tool of government surveillance, as well as the accuracy of the technology and its disproportionate impact on communities of color.” Jacob Snow, a civil liberties attorney for the ACLU, told members of the media that the test was conducted for under $13.
“One of the things that is dangerous about presenting this information in a law enforcement context is that there can be differences — in lighting, in angles, in age — so it can be genuinely difficult to say just based on the photos that they are the same person,” Snow told Mashable.
“Facial recognition has the possibility of suggesting to a law enforcement user that there is a match. And then there is a high probability or a reasonable probability that the law enforcement user will trust the system and not apply the same level of skepticism.” The ACLU’s findings aren’t entirely surprising. Facial recognition technologies are susceptible to racial bias, research has shown — a 2011 study found that systems developed in China, Japan, and South Korea had more trouble distinguishing between Caucasian faces than East Asians. And in a separate study conducted in 2012, facial recognition algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians.
But an Amazon spokesperson told VentureBeat that the ACLU’s test was likely skewed by poor calibration. It used a confidence threshold — i.e., the likelihood that a given prediction is correct — of 80 percent, lower than the 95 percent Amazon recommends for law enforcement applications.
“[W]e think that the results could probably be improved by following best practices around setting the confidence thresholds … used in the test,” an Amazon spokesperson told VentureBeat in an email. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.” There’s no guarantee, however, that Amazon’s customers are following its guidelines. And historically, the accuracy of facial recognition algorithms used by law enforcement has left a lot to be desired. A recent House oversight committee hearing on facial recognition technologies revealed that the algorithms used to identify matches are wrong about 15 percent of the time. Meanwhile, the system used by London’s Metropolitan Police produces as many as 49 false matches for every hit.
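To make the threshold discussion concrete, the sketch below shows how such a cutoff is typically passed to Rekognition through the boto3 SDK. This is an illustrative reconstruction, not the ACLU's actual code: the collection name, file path, and threshold value are assumptions, and a law-enforcement deployment following Amazon's guidance would presumably raise FaceMatchThreshold to 95 or higher.

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def search_mugshot_collection(photo_path, collection_id="mugshot-collection", threshold=80.0):
    """Search a face collection for matches to the face in photo_path.

    FaceMatchThreshold is the knob at issue above: at 80 the service returns
    looser matches than at the 95+ Amazon recommends for law enforcement use.
    """
    with open(photo_path, "rb") as f:
        image_bytes = f.read()
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,  # similarity cutoff, 0-100
        MaxFaces=5,
    )
    # Each candidate carries a Similarity score; lower thresholds let weaker
    # lookalikes through, which is how false matches of this kind arise.
    return [(m["Face"]["FaceId"], m["Similarity"]) for m in response["FaceMatches"]]

Calling this function on an official portrait would return any faces in the collection scoring above the threshold, which is essentially the shape of the experiment described above.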
In May, the ACLU revealed that Amazon worked with the city of Orlando, Florida and the Washington County Sheriff’s Office in Oregon to deploy Rekognition, reportedly charging around $400 for installation and as little as $12 a month.
Orlando is leveraging the facial recognition technology to target suspected criminals in footage from the city’s surveillance systems. And Washington County built a smartphone app that allows deputies to scan mugshots against a database of 300,000 faces for matches.
In June, in a letter addressed to Bezos, 19 groups of Amazon shareholders expressed reservations over sales of Rekognition to law enforcement, joining the ACLU and nearly 70 other groups in protest.
“While Rekognition may be intended to enhance some law enforcement activities, we are deeply concerned it may ultimately violate civil and human rights,” the shareholders wrote. “We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations … We are concerned sales may be expanded to foreign governments, including authoritarian regimes.” Update at 10:27 p.m. Eastern: Added reference to a letter sent by members of Congress in response to the ACLU’s report.
"
|
1,506 | 2,019 |
"Deepfakes Are Getting Better. But They're Still Easy to Spot | WIRED"
|
"https://www.wired.com/story/deepfakes-getting-better-theyre-easy-spot"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gregory Barber Business Deepfakes Are Getting Better, But They're Still Easy to Spot Deepfakes generated from a single image. The technique sparked concerns that high-quality fakes are coming for the masses. But don't get too worried, yet.
Last week, Mona Lisa smiled. A big, wide smile, followed by what appeared to be a laugh and the silent mouthing of words that could only be an answer to the mystery that had beguiled her viewers for centuries.
A great many people were unnerved.
Mona’s “living portrait,” along with likenesses of Marilyn Monroe, Salvador Dali, and others, demonstrated the latest technology in deepfakes—seemingly realistic media generated using machine learning. Developed by researchers at Samsung’s AI lab in Moscow, the portraits display a new method to create credible videos from a single image. With just a few photographs of real faces, the results improve dramatically, producing what the authors describe as “photorealistic talking heads.” The researchers (creepily) call the result “puppeteering,” a reference to how invisible strings seem to manipulate the targeted face. And yes, it could, in theory, be used to animate your Facebook profile photo. But don’t freak out about having strings maliciously pulling your visage yet.
“Nothing suggests to me that you’ll just turnkey use this for generating deepfakes at home. Not in the short term, medium term, or even the long term,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative. The reasons have to do with the high costs and technical know-how of creating quality fakes—barriers that aren’t going away anytime soon.
Using as little as one source image, the researchers were able to manipulate the facial expressions of people depicted in portraits and photos.
Deepfakes first entered the public eye in late 2017, when an anonymous Redditor under the name “deepfakes” began uploading videos of celebrities like Scarlett Johansson stitched onto the bodies of pornographic actors. The first examples involved tools that could insert a face into existing footage, frame by frame---a glitchy process then and now---and swiftly expanded to political figures and TV personalities. Celebrities are the easiest targets, with ample public imagery that can be used to train deepfake algorithms; it’s relatively easy to make a high-fidelity video of Donald Trump, for example, who appears on TV day and night and at all angles.
The underlying technology for deepfakes is a hot area for companies working on things like augmented reality. On Friday, Google released a breakthrough in controlling depth perception in video footage—addressing, in the process, an easy tell that plagues deepfakes. In their paper, published Monday as a preprint , the Samsung researchers point to quickly creating avatars for games or videoconferences. Ostensibly, the company could use the underlying model to generate an avatar with just a few images, a photorealistic answer to Apple’s Memoji. The same lab also published a paper this week on generating full-body avatars.
Concerns about malicious use of those advances have given rise to a debate about whether deepfakes could be used to undermine democracy. The concern is that a cleverly crafted deepfake of a public figure, perhaps imitating a grainy cell phone video so that its imperfections are overlooked, and timed for the right moment, could shape a lot of opinions. That’s sparked an arms race to automate ways of detecting them ahead of the 2020 elections. The Pentagon’s Darpa has spent tens of millions on a media forensics research program, and several startups are angling to become arbiters of truth as the campaign gets underway. In Congress, politicians have called for legislation banning their “malicious use.” But Robert Chesney, a professor of law at the University of Texas, says political disruption doesn’t require cutting-edge technology; it can result from lower-quality stuff, intended to sow discord, but not necessarily to fool. Take, for example, the three-minute clip of House Speaker Nancy Pelosi circulating on Facebook, appearing to show her drunkenly slurring her words in public. It wasn’t even a deepfake; the miscreants had simply slowed down the footage.
By reducing the number of photos required, Samsung’s method does add another wrinkle: “This means bigger problems for ordinary people,” says Chesney. “Some people might have felt a little insulated by the anonymity of not having much video or photographic evidence online.” Called “few-shot learning,” the approach does most of the heavy computational lifting ahead of time. Rather than being trained with, say, Trump-specific footage, the system is fed a far larger amount of video that includes diverse people. The idea is that the system will learn the basic contours of human heads and facial expressions. From there, the neural network can apply what it knows to manipulate a given face based on only a few photos—or, as in the case of the Mona Lisa, just one.
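To illustrate the few-shot idea in code, here is a heavily simplified PyTorch sketch, not the Samsung lab's actual architecture: the module names (Embedder, Generator), the layer sizes, and the random tensors standing in for reference photos and landmark sketches are all assumptions made for illustration. The point is the shape of the pipeline: a network meta-trained on many identities produces an identity embedding from a handful of photos, and a generator conditioned on that embedding is briefly fine-tuned to render new poses of that one face.

import torch
import torch.nn as nn

class Embedder(nn.Module):
    # Maps k photos of one person to a single identity embedding.
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, photos):            # photos: (k, 3, H, W)
        return self.net(photos).mean(0)   # average the k per-photo embeddings

class Generator(nn.Module):
    # Renders a face from a landmark sketch, conditioned on the identity embedding.
    def __init__(self, dim=128):
        super().__init__()
        self.landmark_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU())
        self.cond = nn.Linear(dim, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, landmarks, identity):
        h = self.landmark_enc(landmarks)
        h = h + self.cond(identity).view(1, -1, 1, 1)   # inject identity
        return self.decoder(h)

# Meta-training on footage of many different people (omitted here) is what
# teaches these networks generic head and expression structure ahead of time.
embedder, generator = Embedder(), Generator()

# Few-shot stage: adapt to a new person from k reference photos.
k, size = 4, 64
ref_photos = torch.rand(k, 3, size, size)        # stand-in for real photos
target_landmarks = torch.rand(1, 3, size, size)  # stand-in for a pose sketch
target_frame = torch.rand(1, 3, size, size)      # frame the pose was taken from

identity = embedder(ref_photos)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
for step in range(10):                           # brief person-specific fine-tuning
    out = generator(target_landmarks, identity.detach())
    loss = nn.functional.l1_loss(out, target_frame)
    opt.zero_grad()
    loss.backward()
    opt.step()

The expensive part, as the article notes, is the meta-training across many people; the per-person adaptation shown in the loop is comparatively cheap, which is what makes a single source image workable at all.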
The approach is similar to methods that have revolutionized how neural networks learn other things, like language, with massive data sets that teach them generalizable principles. That’s given rise to models like OpenAI’s GPT-2, which crafts written language so fluent that its creators decided against releasing it, out of fear that it would be used to craft fake news.
There are big challenges to wielding this new technique maliciously against you and me. The system relies on fewer images of the target face, but requires training a big model from scratch, which is expensive and time-consuming, and will likely only become more so. It also takes expertise to wield. It’s unclear why you would want to generate a video from scratch, rather than turning to, say, established techniques in film editing or Photoshop. “Propagandists are pragmatists. There are many more lower cost ways of doing this,” says Hwang.
For now, if it were adapted for malicious use, this particular strain of chicanery would be easy to spot, says Siwei Lyu, a professor at the State University of New York at Albany who studies deepfake forensics under Darpa’s program. The demo, while impressive, misses finer details, he notes, like Marilyn Monroe’s famous mole, which vanishes as she throws back her head to laugh. The researchers also haven’t yet addressed other challenges, like how to properly sync audio to the deepfake, and how to iron out glitchy backgrounds. For comparison, Lyu sends me a state of the art example using a more traditional technique: a video fusing Obama’s face onto an impersonator singing Pharrell Williams’ “Happy.” The Albany researchers weren’t releasing the method, he said, because of its potential to be weaponized.
Hwang has little doubt improved technology will eventually make it hard to distinguish fakes from reality. The costs will go down, or a better-trained model will be released somehow, enabling some savvy person to create a powerful online tool. When that time comes, he argues the solution won’t necessarily be top-notch digital forensics, but the ability to look at contextual clues—a robust way for the public to evaluate evidence outside of the video that corroborates or dismisses its veracity. Fact-checking, basically.
But fact-checking like that has already proven a challenge for digital platforms, especially when it comes to taking action. As Chesney points out, it’s currently easy enough to detect altered footage, like the Pelosi video. The question is what to do next, without heading down a slippery slope to determine the intent of the creators—whether it was satire, maybe, or created with malice. “If it seems clearly intended to defraud the listener to think something pejorative, it seems obvious to take it down,” he says. “But then once you go down that path, you fall into a line-drawing dilemma.” As of the weekend, Facebook seemed to have come to a similar conclusion: The Pelosi video was still being shared around the internet—with, the company said, additional context from independent fact-checkers.
"
|
1,507 | 2,019 |
"Forget Politics. For Now, Deepfakes Are for Bullies | WIRED"
|
"https://www.wired.com/story/forget-politics-deepfakes-bullies"
|
"By Tom Simonite. While Americans celebrated a long Labor Day weekend, millions of people in China enrolled in a giant experiment in the future of fake video. An app called Zao that can swap a person’s face into movie and TV clips, including from Game of Thrones, went viral on Apple’s Chinese app store. The app is popular because making and sharing such clips is fun, but some Western observers’ thoughts turned to something more sinister.
Zao’s viral moment was quickly connected with the idea that US politicians are vulnerable to deepfakes, video or audio fabricated using artificial intelligence to show a person doing or saying something they did not do or say. That threat has been promoted by US lawmakers themselves, including at a recent House Intelligence Committee hearing on deepfakes. The technology is listed at the top of eight disinformation threats to the 2020 campaign in a report published Tuesday by NYU.
Yet some people tracking the impacts of deepfakes say it’s not big-name US politicians who have the most to fear. Rather than changing the fate of nations by felling national politicians, they say, the technology is more likely to become a small-scale weapon used to extend online harassment and bullying.
One reason: US public figures like presidential candidates take—and deflect—a lot of public flak already. They’re subject to constant scrutiny from political rivals and media organizations, and they have well-established means to get out their own messages.
“These videos are not going to cause a total meltdown,” says Henry Ajder, who tracks deepfakes in the wild at Deeptrace, a startup working on technology to detect such clips. “People like this have significant means of providing provenance on images and video.” The term deepfake comes from a Reddit account that in 2017 posted pornographic clips with the faces of Hollywood actresses swapped in, and later released the machine learning code used to make them. Widely circulated iterations of that software and continuing progress on image manipulation from artificial intelligence labs have made deepfake technology steadily better and more accessible.
Attention-grabbing fake clips of Barack Obama and Mark Zuckerberg , made to demonstrate the technology’s potential, have gained millions of views and have fed ideas about the technology’s election-swaying potential. Researchers and companies such as Deeptrace have ramped up research into technology to spot deepfakes, but the notion of a reliable deepfake detector is still unproven.
Ajder says there’s a “good chance” deepfakes involving 2020 candidates will appear. But he expects them to be an extension of the memes and trolling that originate in the danker corners of candidates’ online fan bases, not something that jolts the race to the White House onto a new trajectory.
Sam Gregory, who is tracking the potential impacts of deepfakes at nonprofit Witness, which promotes use of video to protect human rights, says one reason politicians figure prominently in predictions of a faker future is that politicians themselves have encouraged it. “I imagine it feels very personal,” he says.
US Senator Ben Sasse (R–Nebraska) last year proclaimed deepfakes “likely to send American politics into a tailspin” and introduced a bill that would make it a crime to create or distribute deepfakes with malicious intent. US Representative Adam Schiff (D–California) recently called deepfakes a nightmare scenario for the 2020 campaign.
Gregory believes community activists and journalists in places like South America, the Middle East, and Southeast Asia have more to fear from deepfakes. He helped organize a meeting on deepfakes in Brazil this July that included researchers, journalists, activists, and civic society groups.
The group was more concerned about deepfakes amplifying local harassment than altering national politics. Journalists and activists working on human rights issues such as police brutality and gay rights already face disinformation campaigns and harassment on platforms like WhatsApp, sometimes using sexual imagery, Gregory says.
What little is known about deepfakes in the wild so far supports the idea that this kind of harassment will be the first major negative impact of the technology.
Ajder of Deeptrace is aware of a handful of cases around the world in which a video at the heart of a political scandal was alleged to be a deepfake, but none have been confirmed. The startup’s attempts to track deepfakes circulating online show that pornographic deepfakes are many times more common. They have already become a tool of targeted harassment, similar to revenge porn.
Paul Barrett, author of the NYU report listing deepfakes as a top threat for 2020, argues that uncertainty about deepfakes’ impact doesn’t remove the need to prepare for them in national politics. A well-turned fake clip released in the last 24 hours of a close election, giving little time for a response, could be decisive, he says. “Given the experience in 2016 with Russia, given the volume of domestic disinformation, given the behavior on Twitter and elsewhere of the Republican candidate in 2020, I recommend preparing,” Barrett says.
Gregory of Witness cautions that such calls show how hype about the threat that deepfakes pose to national politics could have serious unintended consequences. If platforms like Facebook and YouTube feel pressured or obliged to swiftly remove alleged deepfakes, their defenses could themselves become a tool to manipulate reality. Politicians or their supporters could use the platforms’ reporting tools to suppress viewpoints they dislike, he says. “The solution could be more damaging to public trust,” he says.
"
|
1,508 | 2,018 |
"You thought fake news was bad? Deep fakes are where truth goes to die | Technology | The Guardian"
|
"https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth"
|
"Technology can make it look as if anyone has said or done anything. Is it the next wave of (mis)information warfare?
‘When nothing is true then the dishonest person will thrive by saying what’s true is fake.’ Photograph: Joan Wong
Mon 12 Nov 2018 05.00 EST
In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. “As you know, I had the balls to withdraw from the Paris climate agreement,” he said, looking directly into the camera, “and so should you.” The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a’s Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium’s climate policy.
One woman wrote: “Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools.” Another added: “Trump shouldn’t blow so high from the tower because the Americans are themselves as dumb.” But this anger was misdirected. The speech, it was later revealed, was nothing more than a hi-tech forgery.
Sp.a claimed that they had commissioned a production studio to use machine learning to produce what is known as a “deep fake” – a computer-generated replication of a person, in this case Trump, saying or doing things they have never said or done.
Sp.a’s intention was to use the fake video to grab people’s attention, then redirect them to an online petition calling on the Belgian government to take more urgent climate action. The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity. “It is clear from the lip movements that this is not a genuine speech by Trump,” a spokesperson for sp.a told Politico.
As it became clear that their practical joke had gone awry, sp.a’s social media team went into damage control. “Hi Theo, this is a playful video. Trump didn’t really make these statements.” “Hey, Dirk, this video is supposed to be a joke. Trump didn’t really say this.” The party’s communications team had clearly underestimated the power of their forgery, or perhaps overestimated the judiciousness of their audience. Either way, this small, left-leaning political party had, perhaps unwittingly, provided a deeply troubling example of the use of manipulated video online in an explicitly political context.
It was a small-scale demonstration of how this technology might be used to threaten our already vulnerable information ecosystem – and perhaps undermine the possibility of a reliable, shared reality.
Fake videos can now be created using a machine learning technique called a “generative adversarial network”, or a GAN. A graduate student, Ian Goodfellow, invented GANs in 2014 as a way to algorithmically generate new types of data out of existing data sets. For instance, a GAN can look at thousands of photos of Barack Obama, and then produce a new photo that approximates those photos without being an exact copy of any one of them, as if it has come up with an entirely new portrait of the former president not yet taken. GANs might also be used to generate new audio from existing audio, or new text from existing text – it is a multi-use technology.
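For readers who want to see the adversarial setup in miniature, here is a minimal, self-contained sketch of the training loop described above, written in PyTorch on toy one-dimensional data rather than photographs. The network sizes, learning rates, and the Gaussian "real" data are illustrative assumptions, not details of any system mentioned in this article.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "thousands of photos": samples from a Gaussian at mean 4.
    return 4.0 + 1.5 * torch.randn(n, 1)

for step in range(3000):
    # 1) Discriminator step: label real samples 1, generated samples 0.
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real_batch()), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())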
The use of this machine learning technique was mostly limited to the AI research community until late 2017, when a Reddit user who went by the moniker “Deepfakes” – a portmanteau of “deep learning” and “fake” – started posting digitally altered pornographic videos. He was building GANs using TensorFlow, Google’s free open source machine learning software, to superimpose celebrities’ faces on the bodies of women in pornographic movies.
A number of media outlets reported on the porn videos, which became known as “deep fakes”. In response, Reddit banned them for violating the site’s content policy against involuntary pornography. By this stage, however, the creator of the videos had released FakeApp, an easy-to-use platform for making forged media. The free software effectively democratized the power of GANs. Suddenly, anyone with access to the internet and pictures of a person’s face could generate their own deep fake.
When Danielle Citron, a professor of law at the University of Maryland, first became aware of the fake porn movies, she was initially struck by how viscerally they violated these women’s right to privacy. But once she started thinking about deep fakes, she realized that if they spread beyond the trolls on Reddit they could be even more dangerous. They could be weaponized in ways that weaken the fabric of democratic society itself.
“I started thinking about my city, Baltimore,” she told me. “In 2015, the place was a tinderbox after the killing of Freddie Gray. So, I started to imagine what would’ve happened if a deep fake emerged of the chief of police saying something deeply racist at that moment. The place would’ve exploded.” Citron, along with her colleague Bobby Chesney, began working on a report outlining the extent of the potential danger. As well as considering the threat to privacy and national security, both scholars became increasingly concerned that the proliferation of deep fakes could catastrophically erode trust between different factions of society in an already polarized political climate.
In particular, they could foresee deep fakes being exploited by purveyors of “fake news”. Anyone with access to this technology – from state-sanctioned propagandists to trolls – would be able to skew information, manipulate beliefs, and in so doing, push ideologically opposed online communities deeper into their own subjective realities.
“The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases,” the report reads. “Deep fakes will exacerbate this problem significantly.” Citron and Chesney are not alone in these fears. In April, the film director Jordan Peele and BuzzFeed released a deep fake of Barack Obama calling Trump a “total and complete dipshit” to raise awareness about how AI-generated synthetic media might be used to distort and manipulate reality. In September, three members of Congress sent a letter to the director of national intelligence, raising the alarm about how deep fakes could be harnessed by “disinformation campaigns in our elections”.
The specter of politically motivated deep fakes disrupting elections is at the top of Citron’s concerns. “What keeps me awake at night is a hypothetical scenario where, before the vote in Texas, someone releases a deep fake of Beto O’Rourke having sex with a prostitute, or something,” Citron told me. “Now, I know that this would be easily refutable, but if this drops the night before, you can’t debunk it before serious damage has spread.” She added: “I’m starting to see how a well-timed deep fake could very well disrupt the democratic process.” While these disturbing hypotheticals might be easy to conjure, Tim Hwang, director of the Harvard-MIT Ethics and Governance of Artificial Intelligence Initiative, is not willing to bet on deep fakes having a high impact on elections in the near future. Hwang has been studying the spread of misinformation on online networks for a number of years, and, with the exception of the small-stakes Belgian incident, he is yet to see any examples of truly corrosive incidents of deep fakes “in the wild”.
Hwang believes that this is partly because using machine learning to generate convincing fake videos still requires a degree of expertise and lots of data. “If you are a propagandist, you want to spread your work as far as possible with the least amount of effort,” he said. “Right now, a crude Photoshop job could be just as effective as something created with machine learning.” At the same time, Hwang acknowledges that as deep fakes become more realistic and easier to produce in the coming years, they could usher in an era of forgery qualitatively different from what we have seen before.
“We have long been able to doctor images and movies,” he said. “But in the past, if you wanted to make a video of the president saying something he didn’t say, you needed a team of experts. Machine learning will not only automate this process, it will also probably make better forgeries.” Couple this with the fact that access to this technology will spread over the internet, and suddenly you have, as Hwang put it, “a perfect storm of misinformation”.
Nonetheless, research into machine learning-powered synthetic media forges ahead.
In August, an international team of researchers affiliated with Germany’s Max Planck Institute for Informatics unveiled a technique for producing what they called “deep video portraits”, a sort of facial ventriloquism, where one person can take control of another person’s face and make it say or do things at will. A video accompanying the research paper depicted a researcher opening his mouth and a corresponding moving image of Barack Obama opening his mouth; the researcher then moves his head to the side, and so does synthetic Obama.
Christian Theobalt, a researcher involved in the study, told me via email that he imagines deep video portraits will be used most effectively for accurate dubbing in foreign films, advanced face editing techniques for post-production in film, and special effects. In a press release that accompanied the original paper, the researchers acknowledged potential misuse of their technology, but emphasized how their approach – capable of synthesizing faces that look “nearly indistinguishable from ground truth” – could make “a real difference to the visual entertainment industry”.
Hany Farid, professor of computer science at the University of California, Berkeley, believes that although the machine learning-powered breakthroughs in computer graphics are impressive, researchers should be more cognizant of the broader social and political ramifications of what they’re creating. “The special effects community will love these new technologies,” Farid told me. “But outside of this world, outside of Hollywood, it is not clear to me that the positive implications outweigh the negative.” Farid, who has spent the past 20 years developing forensic technology to identify digital forgeries, is currently working on new detection methods to counteract the spread of deep fakes. One of Farid’s recent breakthroughs has been focusing on subtle changes of color that occur in the face as blood is pumped in and out. The signal is so minute that the machine learning software is unable to pick it up – at least for now.
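As a rough illustration of the kind of physiological cue Farid describes (and emphatically not his withheld method), one can measure how much of a face clip's color variation falls in the human heart-rate band. In the sketch below, the input array of face crops, the frame rate, and the frequency band are all assumptions.

```python
import numpy as np

def pulse_score(face_frames, fps=30.0):
    """face_frames: (num_frames, H, W, 3) RGB crops of the same face, in order."""
    # Average the green channel per frame; green carries the strongest
    # blood-volume signal in ordinary RGB video.
    frames = np.asarray(face_frames, dtype=float)
    signal = frames[..., 1].reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()                       # drop the DC offset
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.0)                  # roughly 42-180 bpm
    total = spectrum.sum()
    # Fraction of spectral energy in the plausible heart-rate band.
    return float(spectrum[band].sum() / total) if total > 0 else 0.0

# Tiny synthetic check: a 1 Hz brightness wobble stands in for a pulse.
t = np.arange(300) / 30.0
wobble = 128 + 2 * np.sin(2 * np.pi * 1.0 * t)
demo = np.tile(wobble[:, None, None, None], (1, 8, 8, 3))
print(round(pulse_score(demo), 3))   # close to 1.0 for the periodic signal
```

In practice compression, lighting changes, and head motion make a signal like this unreliable on its own, which is one reason detection methods keep having to evolve.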
As the threat of deep fakes intensifies, so do efforts to produce new detection methods. In June, researchers from the University at Albany (SUNY) published a paper outlining how fake videos could be identified by a lack of blinking in synthetic subjects. Facebook has also committed to developing machine learning models to detect deep fakes.
But Farid is wary. Relying on forensic detection alone to combat deep fakes is becoming less viable, he believes, due to the rate at which machine learning techniques can circumvent them. “It used to be that we’d have a couple of years between coming up with a detection technique and the forgers working around it. Now it only takes two to three months.” This, he explains, is due to the flexibility of machine learning. “All the programmer has to do is update the algorithm to look for, say, changes of color in the face that correspond with the heartbeat, and then suddenly, the fakes incorporate this once imperceptible sign.” (For this reason, Farid chose not to share some of his more recent forensic breakthroughs with me. “Once I spill on the research, all it takes is one asshole to add it to their system.”) Although Farid is locked in this technical cat-and-mouse game with deep fake creators, he is aware that the solution does not lie in new technology alone. “The problem isn’t just that deep fake technology is getting better,” he said. “It is that the social processes by which we collectively come to know things and hold them to be true or untrue are under threat.” Indeed, as the fake video of Trump that spread through social networks in Belgium earlier this year demonstrated, deep fakes don’t need to be undetectable or even convincing to be believed and do damage. It is possible that the greatest threat posed by deep fakes lies not in the fake content itself, but in the mere possibility of their existence.
This is a phenomenon that scholar Aviv Ovadya has called “reality apathy”, whereby constant contact with misinformation compels people to stop trusting what they see and hear. In other words, the greatest threat isn’t that people will be deceived, but that they will come to regard everything as deception.
Recent polls indicate that trust in major institutions and the media is dropping. The proliferation of deep fakes, Ovadya says, is likely to exacerbate this trend.
According to Danielle Citron, we are already beginning to see the social ramifications of this epistemic decay.
“Ultimately, deep fakes are simply amplifying what I call the liar’s dividend,” she said. “When nothing is true then the dishonest person will thrive by saying what’s true is fake.” This article has been amended to clarify that though sp.a initially claimed it used machine learning technology to create its fake Trump clip, it was later revealed that the video was made using After Effects, a video-editing program.
"
|
1,509 | 2,019 |
"The world’s top deepfake artist is wrestling with the monster he created | MIT Technology Review"
|
"https://www.technologyreview.com/s/614083/the-worlds-top-deepfake-artist-is-wrestling-with-the-monster-he-created"
|
"By Will Knight. It’s June in Dalian, China, a city on a peninsula that sticks out into the Yellow Sea a few hundred miles from Beijing in one direction and from the North Korean border in the other. Hao Li is standing inside a cavernous, angular building that might easily be a Bond villain’s lair. Outside, the weather is sweltering, and security is tight. The World Economic Forum’s annual conference is in town.
Near Li, politicians and CEOs from around the world take turns stepping into a booth. Inside, they laugh as their face is transformed into that of a famous person: Bruce Lee, Neil Armstrong, or Audrey Hepburn. The trick happens in real time, and it works almost flawlessly.
The remarkable face-swapping machine wasn’t set up merely to divert and amuse the world’s rich and powerful. Li wants these powerful people to consider the consequences that videos doctored with AI—“deepfakes”—could have for them, and for the rest of us.
Misinformation has long been a popular tool of geopolitical sabotage, but social media has injected rocket fuel into the spread of fake news. When fake video footage is as easy to make as fake news articles, it is a virtual guarantee that it will be weaponized. Want to sway an election, ruin the career and reputation of an enemy, or spark ethnic violence? It’s hard to imagine a more effective vehicle than a clip that looks authentic, spreading like wildfire through Facebook, WhatsApp, or Twitter, faster than people can figure out they’ve been duped.
As a pioneer of digital fakery, Li worries that deepfakes are only the beginning. Despite having helped usher in an era when our eyes cannot always be trusted, he wants to use his skills to do something about the looming problem of ubiquitous, near-perfect video deception.
The question is, might it already be too late?
Rewriting reality
Li isn’t your typical deepfaker. He doesn’t lurk on Reddit posting fake porn or reshoots of famous movies modified to star Nicolas Cage.
He’s spent his career developing cutting-edge techniques to forge faces more easily and convincingly. He has also messed with some of the most famous faces in the world for modern blockbusters, fooling millions of people into believing in a smile or a wink that was never actually there. Talking over Skype from his office in Los Angeles one afternoon, he casually mentions that Will Smith stopped in recently, for a movie he’s working on.
Actors often come to Li’s lab at the University of Southern California (USC) to have their likeness digitally scanned. They are put inside a spherical array of lights and machine vision cameras to capture the shape of their face, facial expressions, and skin tone and texture down to the level of individual pores. A special-effects team working on a movie can then manipulate scenes that have already been shot, or even add an actor to a new one in post-production.
Such digital deception is now common in big-budget movies. Backgrounds are often rendered digitally, and it’s common for an actor’s face to be pasted onto a stunt person’s in an action scene. That’s led to some breathtaking moments for moviegoers, as when a teenage Princess Leia briefly appeared at the end of Rogue One: A Star Wars Story, even though the actress who had played Leia, Carrie Fisher, was nearly 60 when the movie was shot.
Making these effects look good normally requires significant expertise and millions of dollars. But thanks to advances in artificial intelligence, it is now almost trivial to swap two faces in a video, using nothing more powerful than a laptop. With a little extra knowhow, you can make a politician, a CEO, or a personal enemy say or do anything you want (as in the video at the top of the story, in which Li mapped Elon Musk's likeness onto my face).
A history of trickery In person, Li looks more cyberpunk than Sunset Strip. His hair is shaved into a Mohawk that flops down on one side, and he often wears a black T-shirt and leather jacket. When speaking, he has an odd habit of blinking in a way that betrays late nights spent in the warm glow of a computer screen. He isn’t shy about touting the brilliance of his tech, or what he has in the works. During conversations, he likes to whip out a smartphone to show you something new.
Li grew up in Saarbrücken, Germany, the son of Taiwanese immigrants. He attended a French-German high school and learned to speak four languages fluently (French, German, English, and Mandarin). He remembers the moment that he decided to spend his time blurring the line between reality and fantasy. It was 1993, when he saw a huge dinosaur lumber into view in Steven Spielberg’s Jurassic Park.
As the actors gawped at the computer-generated beast, Li, then 12, grasped what technology had just made possible. “I realized you could now basically create anything, even things that don’t even exist,” he recalls.
Li got his PhD at ETH Zurich, a prestigious technical university in Switzerland, where one of his advisors remembers him as both a brilliant student and an incorrigible prankster. Videos accompanying academic papers sometimes included less-than-flattering caricatures of his teachers.
Shortly after joining USC, Li created facial tracking technology used to make a digital version of the late actor Paul Walker for the action movie Furious 7.
It was a big achievement, since Walker, who died in a car accident halfway through shooting, had not been scanned beforehand, and his character needed to appear in so many scenes. Li’s technology was used to paste Walker’s face onto the bodies of his two brothers, who took turns acting in his place in more than 200 scenes.
The movie, which grossed $1.5 billion at the box office, was the first to depend so heavily on a digitally re-created star. Li mentions Walker’s virtual role when talking about how good video trickery is becoming. “Even I can’t tell which ones are fake,” he says with a shake of his head.
Virtually you
In 2009, less than a decade before deepfakes emerged, Li developed a way to capture a person’s face in real time and use it to operate a virtual puppet. This involved using the latest depth sensors and new software to map that face, and its expressions, to a mask made of deformable virtual material.
Most important, the approach worked without the need to add dozens of motion-tracking markers to a person’s face, a standard industry technique for tracking face movement. Li contributed to the development of software called Faceshift, which would later be commercialized as a university spinoff. The company was acquired by Apple in 2015, and its technology was used to create the Animoji software that lets you turn yourself into a unicorn or a talking pile of poop on the latest iPhones.
Li and his students have published dozens of papers on such topics as avatars that mirror whole body movements, highly realistic virtual hair, and simulated skin that stretches the way real skin does. In recent years, his group has drawn on advances in machine learning and especially deep learning, a way of training computers to do things using a large simulated neural network. His research has also been applied to medicine, helping develop ways of tracking tumors inside the body and modeling the properties of bones and tissue.
Today, Li splits his time between teaching, consulting for movie studios, and running a new startup, Pinscreen. The company uses more advanced AI than is behind deepfakes to make virtual avatars. Its app turns a single photo into a photorealistic 3D avatar in a few seconds. It employs machine-learning algorithms that have been trained to map the appearance of a face onto a 3D model using many thousands of still images and corresponding 3D scans. The process is improved using what are known as generative adversarial networks, or GANs (which are not used for most deepfakes). This means having one algorithm produce fake images while another judges whether they are fake, a process that gradually improves the fakery. You can have your avatar perform silly dances and try on different outfits, and you can control the avatar’s facial expressions in real time, using your own face via the camera on your smartphone.
A former employee, Iman Sadeghi, is suing Pinscreen, alleging it faked a presentation of the technology at the SIGGRAPH conference in 2017.
MIT Technology Review has seen letters from several experts and SIGGRAPH organizers dismissing those claims.
Pinscreen is working with several big-name clothing retailers that see its technology as a way to let people try garments on without having to visit a physical store. The technology could also be big for videoconferencing, virtual reality, and gaming. Just imagine a Fortnite character that not only looks like you, but also laughs and dances the same way.
Underneath the digital silliness, though, is an important trend: AI is rapidly making advanced image manipulation the province of the smartphone rather than the desktop.
FaceApp, developed by a company in Saint Petersburg, Russia, has drawn millions of users, and recent controversy, by offering a one-click way to change a face on your phone.
You can add a smile to a photo, remove blemishes, or mess with your age or gender (or someone else’s). Dozens more apps offer similar manipulations at the click of a button.
Not everyone is excited about the prospect of this technology becoming ubiquitous. Li and others are “basically trying to make one-image, mobile, and real-time deepfakes,” says Sam Gregory, director of Witness, a nonprofit focused on video and human rights. “That’s the threat level that worries me, when it [becomes] something that’s less easily controlled and more accessible to a range of actors.” Fortunately, most deepfakes still look a bit off. A flickering face, a wonky eye, or an odd skin tone make them easy enough to spot. But just as an expert can remove such flaws, advances in AI promise to smooth them out automatically, making the fake videos both simpler to create and harder to detect.
Even as Li races ahead with digital fakery, he is also troubled by the potential for harm. “We’re sitting in front of a problem,” he says.
Catching imposters
US policymakers are especially concerned about how deepfakes might be used to spread more convincing fake news and misinformation ahead of next year’s presidential election. Earlier this month, the House Intelligence Committee asked Facebook, Google, and Twitter how they planned to deal with the threat of deepfakes.
Each company said it was working on the problem, but none offered a solution.
DARPA, the US military’s well-funded research agency, is also worried about the rise of digital manipulation. In 2016, before deepfakes became a thing, DARPA launched a program called Media Forensics, or MediFor, to encourage digital forensics experts to develop automated tools for catching manipulated imagery. A human expert might use a range of methods to spot photographic forgeries, from analyzing inconsistencies in a file’s data or the characteristics of specific pixels to hunting for physical inconsistencies such as a misplaced shadow or an improbable angle.
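One long-standing pixel-level check of the sort mentioned here is error level analysis, which resaves a JPEG at a known quality and looks at how unevenly different regions recompress; spliced areas often stand out. The sketch below uses the Pillow library and is a generic illustration rather than one of the MediFor tools; the quality setting and the scaling are arbitrary choices.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Resave a JPEG at a known quality and amplify the per-pixel differences."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)   # recompress at a fixed quality
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale the (usually tiny) residuals up so they are visible to the eye.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

# Usage: error_level_analysis("suspect.jpg").show()
# Regions that light up much more than their surroundings were compressed
# differently and deserve a closer look; clean images show a fairly even field.
```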
MediFor is now largely focused on spotting deepfakes. Detection is fundamentally harder than creation because AI algorithms can learn to hide things that give fakes away. Early deepfake detection methods include tracking unnatural blinking and weird lip movements. But the latest deepfakes have already learned to automatically smooth out such glitches.
Earlier this year, Matt Turek, DARPA program manager for MediFor, asked Li to demonstrate his fakes to the MediFor researchers. This led to a collaboration with Hany Farid, a professor at UC Berkeley and one of the world’s foremost authorities on digital forensics. The pair are now engaged in a digital game of cat-and-mouse, with Li developing deepfakes for Farid to catch, and then refining them to evade detection.
Farid, Li, and others recently released a paper outlining a new, more powerful way to spot deepfakes. It hinges on training a machine-learning algorithm to recognize the quirks of a specific individual’s facial expressions and head movements. If you simply paste someone’s likeness onto another face, those features won’t be carried over. It would require a lot of computer power and training data—i.e., images or video of the person—to make a deepfake that incorporates these characteristics. But one day it will be possible. “Technical solutions will continue to improve on the defensive side,” says Turek. “But will that be perfect? I doubt it.” Pixel perfect Back in Dalian, it’s clear that people are starting to wake up to the danger of deepfakes. The morning before I met with Li, a European politician had stepped into the face-swap booth, only for his minders to stop him. They were worried that the system might capture his likeness in detail, making it easier for someone to create fake clips of him.
As he watches people using the booth, Li tells me that there is no technical reason why deepfakes should be detectable. “Videos are just pixels with a certain color value,” he says.
Making them perfect is just a matter of time and resources, and as his collaboration with Farid shows, it’s getting easier all the time. “We are witnessing an arms race between digital manipulations and the ability to detect those,” he says, “with advancements of AI-based algorithms catalyzing both sides.” The bad news, Li thinks, is that he will eventually win. In a few years, he reckons, undetectable deepfakes could be created with a click. “When that point comes,” he says, “we need to be aware that not every video we see is true.”
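The per-person detection approach that Farid, Li, and others published, which learns the range of an individual's normal expressions and head movements from authentic footage and flags clips that fall outside it, can be loosely illustrated with off-the-shelf tools. In the sketch below the feature vectors are random stand-ins for per-clip statistics (for example, facial action units and head pose from a face-analysis toolkit), and the one-class SVM is an illustrative choice, not the model from their paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-ins for real inputs: per-clip statistics of facial action units and
# head pose for 200 authentic clips of one person (20 features per clip),
# plus 5 "suspect" clips whose statistics have drifted.
authentic_clips = rng.normal(0.0, 1.0, size=(200, 20))
suspect_clips = rng.normal(1.5, 1.0, size=(5, 20))

# Learn the envelope of the person's normal mannerisms from authentic footage.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(authentic_clips)

# Positive scores mean "consistent with this person's usual behavior";
# strongly negative scores flag clips worth a forensic second look.
print("authentic:", model.decision_function(authentic_clips[:3]).round(2))
print("suspect:  ", model.decision_function(suspect_clips).round(2))
```

The appeal of this design is that a forger would need large amounts of footage of the target, and a lot of computation, to reproduce these behavioral quirks rather than just the face itself.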
"
|
1,510 | 2,018 |
"The Defense Department has produced the first tools for catching deepfakes | MIT Technology Review"
|
"https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes"
|
"By Will Knight. The first forensics tools for catching revenge porn and fake news created with AI have been developed through a program run by the US Defense Department.
Forensics experts have rushed to find ways of detecting videos synthesized and manipulated using machine learning because the technology makes it far easier to create convincing fake videos that could be used to sow disinformation or harass people.
The most common technique for generating fake videos involves using machine learning to swap one person’s face onto another's. The resulting videos, known as “deepfakes,” are simple to make, and can be surprisingly realistic.
Further tweaks, made by a skilled video editor, can make them seem even more real.
Video trickery involves using a machine-learning technique known as generative modeling, which lets a computer learn from real data before producing fake examples that are statistically similar. A recent twist on this involves having two neural networks, known as generative adversarial networks, work together to produce ever more convincing fakes (see “ The GANfather: The man who’s given machines the gift of imagination ”).
The tools for catching deepfakes were developed through a program—run by the US Defense Advanced Research Projects Agency (DARPA)—called Media Forensics.
The program was created to automate existing forensics tools, but has recently turned its attention to AI-made forgery.
"We've discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations,” says Matthew Turek, who runs the Media Forensics program.
One remarkably simple technique was developed by a team led by Siwei Lyu, a professor at the State University of New York at Albany, and one of his students. “We generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well,” Lyu says.
Then, one afternoon, while studying several deepfakes, Lyu realized that the faces made using deepfakes rarely, if ever, blink. And when they do blink, the eye-movement is unnatural. This is because deepfakes are trained on still images, which tend to show a person with his or her eyes open.
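A common way to turn that observation into a measurement, in the blink-detection literature generally rather than in Lyu's specific tool, is the eye aspect ratio computed from six eye landmarks per frame. The landmark arrays (from any facial-landmark detector) and the 0.2 threshold in the sketch below are assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks around one eye, in the standard order."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return float(vertical / (2.0 * horizontal))

def blinks_per_minute(eye_landmarks_per_frame, fps=30.0, threshold=0.2):
    """Count onsets of 'eye closed' runs, where the EAR dips below threshold."""
    closed = [eye_aspect_ratio(e) < threshold for e in eye_landmarks_per_frame]
    onsets = sum(1 for prev, cur in zip([False] + closed, closed) if cur and not prev)
    minutes = len(closed) / fps / 60.0
    return onsets / minutes if minutes > 0 else 0.0

# People typically blink every two to ten seconds; a long talking-head clip
# whose rate sits near zero was, at the time, a reasonable cue for synthesis.
```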
Others involved in the DARPA challenge are exploring similar tricks for automatically catching deepfakes: strange head movements, odd eye color, and so on. “We are working on exploiting these types of physiological signals that, for now at least, are difficult for deepfakes to mimic,” says Hany Farid, a leading digital forensics expert at Dartmouth College.
DARPA’s Turek says the agency will run more contests “to ensure the technologies in development are able to detect the latest techniques." The arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths. A key problem, says Farid, is that machine-learning systems can be trained to outmaneuver forensics tools.
Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. But he adds that his team has developed an even more effective technique, which he is keeping secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.”
"
|
1,511 | 2,017 |
"Andrew Ng Has a Chatbot That Can Help with Depression | MIT Technology Review"
|
"https://www.technologyreview.com/s/609142/andrew-ng-has-a-chatbot-that-can-help-with-depression"
|
"By Will Knight. I’m a little embarrassed to admit this, but I’ve been seeing a virtual therapist.
It’s called Woebot, and it’s a Facebook chatbot developed by Stanford University researchers that offers interactive cognitive behavioral therapy. And Andrew Ng, a prominent figure who previously led efforts to develop and apply the latest AI technologies at Google and Baidu, is now lending his backing to the project by joining the board of directors of the company offering its services.
“If you look at the societal need, as well as the ability of AI to help, I think that digital mental-health care checks all the boxes,” Ng says. “If we can take a little bit of the insight and empathy [of a real therapist] and deliver that, at scale, in a chatbot, we could help millions of people.” For the past few days I’ve been trying out its advice for understanding and managing thought processes and for dealing with depression and anxiety. While I don’t think I’m depressed, I found the experience positive. This is especially impressive given how annoying I find most chatbots to be.
“Younger people are the worst served by our current systems,” says Alison Darcy , a clinical research psychologist who came up with the idea for Woebot while teaching at Stanford in July 2016. “It’s also very stigmatized and expensive.” Darcy, who met Ng at Stanford, says the work going on there in applying techniques like deep learning to conversational agents inspired her to think that therapy could be delivered by a bot. She says it is possible to automate cognitive behavioral therapy because it follows a series of steps for identifying and addressing unhelpful ways of thinking. And recent advances in natural-language processing have helped make chatbots more useful within limited domains.
Depression is certainly a big problem. It is now the leading form of disability in the U.S., and 50 percent of U.S. college students report suffering from anxiety or depression.
Darcy and colleagues tried several different prototypes on college volunteers, and they found the chatbot approach to be particularly effective. In a study they published this year in a peer-reviewed medical journal, Woebot was found to reduce the symptoms of depression in students over the course of two weeks.
In my own testing, I found Woebot to be surprisingly good at what it does. A chatbot might seem like a crude way to deliver therapy, especially given how clumsy many virtual helpers often are. But Woebot works smoothly thanks to a clever interface and some pretty impressive natural-language technology. The software states up front that no person will see your answers, but it also offers ways of reaching someone if your situation is serious. I mostly used predefined answers that it offered me, but even when I strayed from the script a little, it didn’t get tripped up. If you try, though, I’m sure it’s possible to flummox it.
You are guided through conversations with Woebot, but the system is able to understand a pretty wide range of answers. It checks in with you every day and directs you through the steps. For example, when I tried telling Woebot I was stressed about work, the bot offered ways of reframing my feelings to make them seem more positive.
The emergence of a real AI therapist is, in a sense, pretty ironic. The very first chatbot, Eliza, developed at MIT in 1966 by Joseph Weizenbaum, was designed to mimic a “Rogerian psychologist.” Eliza used a few clever tricks to create the illusion of an intelligent conversation—for example, repeating answers back to a person or offering open-ended questions such as “In what way?” and “Can you think of a specific example?” Weizenbaum was amazed to find that people seemed to believe they were talking to a real therapist, and that some offered up very personal secrets.
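Those tricks are simple enough to fit in a few lines. The toy sketch below reflects the user's words back and otherwise falls back to open-ended prompts; it illustrates Eliza's pattern-matching style, not Woebot, which layers structured cognitive behavioral therapy exercises and modern language processing on top of ideas like these.

```python
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}
OPEN_PROMPTS = ["In what way?", "Can you think of a specific example?",
                "How does that make you feel?"]

def reply(user_text):
    # Trick 1: reflect the user's own words back as a question.
    match = re.search(r"\bi (?:feel|am|think) (.+)", user_text.lower())
    if match:
        mirrored = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
        return f"Why do you say you are {mirrored}?"
    # Trick 2: otherwise fall back to an open-ended prompt.
    return random.choice(OPEN_PROMPTS)

print(reply("I feel stressed about my job"))   # Why do you say you are stressed about your job?
print(reply("Nothing is going right lately"))  # one of the open-ended prompts
```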
Darcy also says both Eliza and Woebot are effective because a conversation is a natural way to communicate distress and receive emotional support. She adds that people seem happy to suspend their disbelief, and seem to enjoy talking to Woebot as if it were a real therapist. “People talk about their problems for a reason,” she says. “Therapy is conversational.” Ng says he expects AI to deliver further advances in language in coming years, but it will still be relatively crude (see “ AI’s Language Problem ”). He says better ways of parsing the meaning of language will help make the tool more effective, though. Some other mental-health experts also seem positive about the prospect of applying such technology to treatment.
“To the extent that the Woebot can replicate the way that a therapist can help explain concepts and facilitate trying out new coping skills, this approach may be even more helpful than working through a workbook,” says Michael Thase , a professor of psychiatry at the University of Pennsylvania and an expert on cognitive behavioral therapy. “There is good evidence that people with milder levels of depression can benefit from various kinds of online or Web-based therapy approaches.” But Thase adds that studies have shown such technology to work best in conjunction with help from a real person. “Some time with a real therapist is helpful,” he says.
"
|
1512 | 2019
"DeepMind’s AI has now outcompeted nearly all human players at StarCraft II | MIT Technology Review"
|
"https://www.technologyreview.com/2019/10/30/132130/ai-deepmind-outcompeted-most-players-at-starcraft-ii"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts DeepMind’s AI has now outcompeted nearly all human players at StarCraft II By Karen Hao archive page AlphaStar (Zerg, in red) defending an early aggression where the opponent built part of the base near AlphaStar's base.
In January of this year, DeepMind announced it had hit a milestone in its quest for artificial general intelligence. It had designed an AI system, called AlphaStar, that beat two professional players at StarCraft II, a popular video game about galactic warfare. This was quite a feat. StarCraft II is highly complex, with roughly 10^26 possible choices for every move. It’s also a game of imperfect information—and there are no definitive strategies for winning. The achievement marked a new level of machine intelligence.
Now DeepMind, an Alphabet subsidiary, is releasing an update. AlphaStar now outranks the vast majority of active StarCraft players, demonstrating a much more robust and repeatable ability to strategize on the fly than before. The results, published in Nature today, could have important implications for applications ranging from machine translation to digital assistants or even military planning.
StarCraft II is a real-time strategy game, most often played one on one. A player must choose one of three human or alien races—Protoss, Terran, or Zerg—and alternate between gathering resources, building infrastructure and weapons, and attacking the opponent to win the game. Every race has unique skill sets and limitations that affect the winning strategy, so players commonly pick and master playing with one.
AlphaStar used reinforcement learning , where an algorithm learns through trial and error, to master playing with all the races. “This is really important because it means that the same type of methods can in principle be applied to other domains,” said David Silver, DeepMind’s principal research scientist, on a press call. The AI also reached a rank above 99.8% of the active players in the official online league.
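For readers who want to see the trial-and-error idea in miniature, here is a sketch of tabular Q-learning on a made-up five-state chain environment. Every detail here (the environment, the hyperparameters, the reward) is invented purely for illustration; AlphaStar itself relies on deep neural networks and vastly more compute.

```python
# Minimal trial-and-error learning: tabular Q-learning on a toy 5-state chain.
# Purely illustrative -- not AlphaStar's architecture.
import random

N_STATES = 5
ACTIONS = [0, 1]          # 0 = step left, 1 = step right
GOAL = N_STATES - 1       # reward is only given for reaching the rightmost state

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                      # episodes of trial and error
    state = 0
    for _ in range(100):                  # cap episode length
        if random.random() < epsilon or q[(state, 0)] == q[(state, 1)]:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy policy should prefer stepping right in every non-terminal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```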
In order to attain such flexibility, the DeepMind team modified a commonly used technique known as self-play, in which a reinforcement-learning algorithm plays against itself to learn faster. DeepMind famously used this technique to train AlphaGo Zero , the program that taught itself without any human input to beat the best players in the ancient game of Go. The lab also used it in the preliminary version of AlphaStar.
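As a rough, hypothetical illustration of the self-play idea, the sketch below runs fictitious play on rock-paper-scissors: one agent repeatedly computes a best response to the average of its own past strategies. The game and the update rule are stand-ins chosen for brevity, not what DeepMind actually used.

```python
# Self-play in miniature: fictitious play on rock-paper-scissors.
# The agent improves by best-responding to the mix of its own past selves.
import numpy as np

# payoff[i, j] = reward for playing move i against move j (0=rock, 1=paper, 2=scissors)
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

history = [np.ones(3) / 3]                    # pool of past "versions" of the agent

for _ in range(5000):
    past_self = np.mean(history, axis=0)      # opponent = average of past strategies
    values = payoff @ past_self               # expected payoff of each move vs. that pool
    best_response = np.zeros(3)
    best_response[np.argmax(values)] = 1.0
    history.append(best_response)

# The empirical mix slowly approaches the balanced equilibrium [1/3, 1/3, 1/3].
print(np.mean(history, axis=0))
```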
Conventionally in self-play, both versions of the algorithm are programmed to maximize their chances of winning. But the researchers discovered that this didn’t necessarily produce the most robust algorithms. For such an open-ended game, it risked pigeonholing the algorithm into specific strategies that would work only under certain conditions.
Taking inspiration from the way pro StarCraft II players train with one another, the researchers instead programmed one of the algorithms to expose the flaws of the other rather than maximize its own chance of winning. “That’s kind of [like] asking a friend to play against you,” said Oriol Vinyals, the lead researcher on the project, on the call. “These friends should show you what your weaknesses are, so then eventually you can become stronger.” The method produced much more generalizable algorithms that could adapt to a broader range of game scenarios.
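Here is a rough sketch of that asymmetry, again using rock-paper-scissors as a stand-in for StarCraft: an "exploiter" whose only objective is to beat the current main agent, and a main agent that then adjusts against the whole league. All names, numbers, and update rules are hypothetical simplifications, not DeepMind's actual training setup.

```python
# League training in caricature: exploiters target the main agent's weaknesses,
# while the main agent adapts to the league as a whole. Illustrative only.
import numpy as np

payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)   # rock-paper-scissors payoffs

def best_response(opponent_mix):
    """Pure strategy that maximizes expected payoff against a mixed strategy."""
    response = np.zeros(3)
    response[np.argmax(payoff @ opponent_mix)] = 1.0
    return response

main_agent = np.array([1.0, 0.0, 0.0])     # starts out playing only rock
league = [main_agent.copy()]

for _ in range(1000):
    # Exploiter: trained purely to expose the current main agent's flaws.
    exploiter = best_response(main_agent)
    league.append(exploiter)
    # Main agent: nudged toward countering the whole league, so it cannot settle
    # into one narrow strategy that a single exploiter beats reliably.
    main_agent = 0.99 * main_agent + 0.01 * best_response(np.mean(league, axis=0))
    league.append(main_agent.copy())

print(main_agent)               # no longer the initial pure-rock strategy
print(np.mean(league, axis=0))  # the league accumulates counters to each habit the main agent leans on
```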
The researchers believe AlphaStar’s strategy development and coordination skills could be applied to many other problems. “We chose StarCraft [...] because we felt it mirrored a lot of challenges that actually come up in real-world applications,” said Silver. These applications could include digital assistants, self-driving cars, or other machines that have to interact with humans, he said.
“The complexity [of StarCraft] is much more reminiscent of the scales that we’re seeing in the real world,” said Silver.
But AlphaStar demonstrates AI’s significant limitations, too. For example, it still needs orders of magnitude more training data than a human player to attain the same level of skill. Such learning software is also still a long way from being translated into sophisticated robotics or other real-world applications.
"
|