| id (int64: 0–17.2k) | year (int64: 2k–2.02k) | title (string: 7–208 chars) | url (string: 20–263 chars) | text (string: 852–324k chars) |
|---|---|---|---|---|
813 | 2016 |
"The lawsuit against Microsoft, GitHub and OpenAI that could change the rules of AI copyright - The Verge"
|
"https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data"
|
"The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Artificial Intelligence / Tech / Law The lawsuit that could rewrite the rules of AI copyright The lawsuit that could rewrite the rules of AI copyright / Microsoft, GitHub, and OpenAI are being sued for allegedly violating copyright law by reproducing open-source code using AI. But the suit could have a huge impact on the wider world of artificial intelligence.
By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
Microsoft, its subsidiary GitHub, and its business partner OpenAI have been targeted in a proposed class action lawsuit alleging that the companies’ creation of AI-powered coding assistant GitHub Copilot relies on “software piracy on an unprecedented scale.” The case is only in its earliest stages but could have a huge effect on the broader world of AI, where companies are making fortunes training software on copyright-protected data.
Copilot, which was unveiled by Microsoft-owned GitHub in June 2021, is trained on public repositories of code scraped from the web, many of which are published with licenses that require anyone reusing the code to credit its creators. Copilot has been found to regurgitate long sections of licensed code without providing credit — prompting this lawsuit that accuses the companies of violating copyright law on a massive scale.
“We are challenging the legality of GitHub Copilot,” said programmer and lawyer Matthew Butterick, who filed the lawsuit with the help of the San Francisco-based Joseph Saveri Law Firm, in a press statement. “This is the first step in what will be a long journey. As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable.” The lawsuit, which was filed last Friday, is in its early stages. In particular, the court has not yet certified the proposed class of programmers who have allegedly been harmed. But speaking to The Verge, Butterick and lawyers Travis Manfredi and Cadio Zirpoli of the Joseph Saveri Law Firm said they expected the case to have a huge impact on the wider world of generative AI.
Microsoft and OpenAI are far from alone in scraping copyrighted material from the web to train AI systems for profit. Many text-to-image AI systems, like the open-source program Stable Diffusion, were created in exactly the same way. The firms behind these programs insist that their use of this data is covered in the US by the fair use doctrine. But legal experts say this is far from settled law and that litigation like Butterick’s class action lawsuit could upend the tenuously defined status quo.
To find out more about the motivations and reasoning behind the lawsuit, we spoke to Butterick (MB), Manfredi (TM), and Zirpoli (CZ), who explained why they think we’re in the Napster era of AI and why letting Microsoft use others’ code without attribution could kill the open source movement.
In response to a request for comment, GitHub said: “We’ve been committed to innovating responsibly with Copilot from the start, and will continue to evolve the product to best serve developers across the globe.” OpenAI and Microsoft had not replied to similar requests at the time of publication.
This interview has been edited for clarity and brevity.
First, I want to talk about the reaction from the AI community a little bit, from people who are advocates for this technology. I found one comment that I think is representative of one reaction to this case, which says, “Butterick’s goal here is to kill transformative ML use of data such as source code or images, forever.” What do you think about that, Matthew? Is that your goal? If not, what is?
Matthew Butterick: I think it’s really simple. AI systems are not magical black boxes that are exempt from the law, and the only way we’re going to have a responsible AI is if it’s fair and ethical for everyone. So the owners of these systems need to remain accountable. This isn’t a principle we’re making out of whole cloth and just applying to AI. It’s the same principle we apply to all kinds of products, whether it’s food, pharmaceuticals, or transportation.
I feel sometimes that the backlash you get from the AI community — and you’re dealing with wonderful researchers, wonderful thinkers — is that they’re not acclimated to working in this sphere of regulation and safety. It’s always a challenge in technology because regulation moves behind innovation. But in the interim, cases like this fill that gap. That’s part of what a class action lawsuit is about: testing these ideas and starting to create clarity.
Do you think if you’re successful with your lawsuit that it will have a destructive effect on innovation in this domain, on the creation of generative AI models?
MB: I hope it’s the opposite. I think in technology, we see over and over that products come out that skirt the edges of the law, but then someone comes by and finds a better way to do it. So, in the early 2000s, you had Napster, which everybody loved but was completely illegal. And today, we have things like Spotify and iTunes. And how did these systems arise? By companies making licensing deals and bringing in content legitimately. All the stakeholders came to the table and made it work, and the idea that a similar thing can’t happen for AI is, for me, a little catastrophic. We just saw an announcement recently of Shutterstock setting up a Contributors’ Fund for people whose images are used in training [generative AI], and maybe that will become a model for how other training is done. Me, I much prefer Spotify and iTunes, and I’m hoping that the next generation of these AI tools are better and fairer for everyone and make everybody happier and more productive.
I take it from your answers that you wouldn’t accept a settlement from Microsoft and OpenAI?
MB: [Laughs] It’s only day one of the lawsuit...
One section of the lawsuit I thought was particularly interesting was on the very close but murkily defined business relationship between Microsoft and OpenAI. You point out that in 2016 OpenAI said it would run its large-scale experiments on Microsoft’s cloud, that Microsoft has exclusive licenses for certain OpenAI products, and that Microsoft has invested a billion dollars in OpenAI, making it both OpenAI’s largest investor and service provider. What is the significance of this relationship, and why did you feel you needed to highlight it?
Travis Manfredi: Well, I would say that Microsoft is trying to use OpenAI, which is perceived as beneficial, as a shield to avoid liability. They’re trying to filter the research through this nonprofit to make it fair use, even though it’s probably not. So we want to show that whatever OpenAI started out as, it’s not that anymore. It’s a for-profit business. Its job is to make money for its investors. It may be controlled by a nonprofit [OpenAI Inc.], but the board of that nonprofit are all business people. We don’t know what their intentions are. But it doesn’t seem to be following the original mission of OpenAI. So we wanted to show — and hopefully discovery will reveal more information about this — that this is a collective scheme between Microsoft, OpenAI, and GitHub that is not as beneficial or as altruistic as they might have us believe.
What do you fear will happen if Microsoft, GitHub, OpenAI, and other players in the industry building generative AI models are allowed to keep using other people's data in this way?
TM: Ultimately, it could be the end of open-source licenses altogether. Because if companies are not going to respect your licenses, what’s the point of even putting one on your code? If it’s going to be snapped up and spit back out without any attribution? We think open-source code has been tremendously beneficial to humanity and to the technology world, and we don’t think AI that doesn’t understand how to code and can only make probabilistic guesses is better than the innovation that human coders can deliver.
MB: Yeah, I really do think this is an existential threat to open source. And maybe it’s just my generation, but I’ve seen enough situations when there’s a nice, free community operating on the internet, and someone comes along and says, “Let’s socialize the costs and privatize the profits.” If you divorce the code from the creators, what does it mean? Let me give you one example. I spoke to an engineer in Europe who said, “Attribution is a really big deal to me because that’s how I get all my clients. I make open source software; people use my packages, see my name on it, and contact me, and I sell them more engineering or support.” He said, “If you take my attribution off, my career is over, and I can’t support my family, I can’t live.” And it really brings home that this is not a benign issue for a lot of programmers.
But do you think there’s a case to be made that tools like Copilot are the future and that they are better for coders in general?
MB: I love AI, and it’s been a dream of mine since I was an eight-year-old playing with a computer that we can teach these machines to reason like we do, and so I think this is a really interesting and wonderful field.
But I can only go back to the Napster example: that [these systems] are just the first step, and no matter how much people thought Napster was terrific, it was also completely illegal, and we’ve done a lot better by bringing everyone to the table and making it fair for everybody.
So, what is a remedy that you’d like to see implemented? Some people argue that there is no good solution, that the training datasets are too big, that the AI models are too complex, to really trace attribution and give credit. What do you think of that?
Cadio Zirpoli: We’d like to see them train their AI in a manner which respects the licenses and provides attribution. I’ve seen on chat boards out there that there might be ways for people who don’t want that to opt out or opt in, but to throw up your hands and say “it’s too hard, so just let Microsoft do whatever they want” is not a solution we’re willing to live with.
Do you think this lawsuit could set a precedent for other generative AI media? We see similar complaints about text-to-image AI: that companies, including OpenAI, are using copyright-protected images without proper permission, for example.
CZ: The simple answer is yes.
TM: The DMCA applies equally to all forms of copyrightable material, and images often include attribution; artists, when they post their work online, typically include a copyright notice or a Creative Commons license, and those are also being ignored by [companies creating] image generators.
So what happens next with this lawsuit? I believe you need to be granted class-action status on this lawsuit for it to go ahead. What timeframe do you think that could happen on?
CZ: Well, we expect Microsoft to bring a motion to dismiss our case. We believe we will be successful, and the case will proceed. We’ll engage in a period of discovery, and then we will move the court for class certification. The timing of that can vary widely with respect to different courts and different judges, so we’ll have to see. But we believe we have a meritorious case and that we will be successful not only in overcoming the motion to dismiss but also in getting our class certified.
"
|
814 | 2022 |
"Will ChatGPT Kill the Student Essay? - The Atlantic"
|
"https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371"
|
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce The College Essay Is Dead Nobody is prepared for how AI will transform academia.
Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph:
The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.
Pass or fail? A- or B+? And how would your grade change if you knew a human student hadn’t written it at all? Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples considered graduate-level, is available, complete with references, here.)
Personally, I lean toward a B+. The passage reads like filler, but so do most student essays.
Sharples’s intent was to urge educators to “rethink teaching and assessment” in light of the technology, which he said “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.
The world of generative AI is progressing furiously. Last week, OpenAI released an advanced chatbot named ChatGPT that has spawned a new wave of marveling and hand-wringing, plus an upgrade to GPT-3 that allows for complex rhyming poetry; Google previewed new applications last month that will allow people to describe concepts in text and see them rendered as images; and the creative-AI firm Jasper received a $1.5 billion valuation in October. It still takes a little initiative for a kid to find a text generator, but not for long.
The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.” Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.
A chasm has existed between humanists and technologists for a long time. In the 1950s, C. P. Snow gave his famous lecture, later the essay “The Two Cultures,” describing the humanistic and scientific communities as tribes losing contact with each other. “Literary intellectuals at one pole—at the other scientists,” Snow wrote. “Between the two a gulf of mutual incomprehension—sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other.” Snow’s argument was a plea for a kind of intellectual cosmopolitanism: Literary people were missing the essential insights of the laws of thermodynamics, and scientific people were ignoring the glories of Shakespeare and Dickens.
The rupture that Snow identified has only deepened. In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer.
“I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.” Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before. He probably didn’t imagine there was much to think about.
The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus , but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust.
These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences. Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.
As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide. As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone. Needless to say, humanists’ understanding of technology is partial at best. The state of digital humanities is always several categories of obsolescence behind, which is inevitable. (Nobody expects them to teach via Instagram Stories.) But more crucially, the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.
Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine. In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?
The case for the value of humanities in a technologically determined world has been made before. Steve Jobs always credited a significant part of Apple’s success to his time as a dropout hanger-on at Reed College, where he fooled around with Shakespeare and modern dance, along with the famous calligraphy class that provided the aesthetic basis for the Mac’s design. “A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem,” Jobs said.
“The broader one’s understanding of the human experience, the better design we will have.” Apple is a humanistic tech company. It’s also the largest company in the world.
Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed.
The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.
And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.
And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.
The humanists will need to understand natural-language processing because it’s the future of language, but also because there is more than just the possibility of disruption here. Natural-language processing can throw light on a huge number of scholarly problems. It is going to clarify matters of attribution and literary dating that no system ever devised will approach; the parameters in large language models are much more sophisticated than the current systems used to determine which plays Shakespeare wrote, for example.
It may even allow for certain types of restorations, filling the gaps in damaged texts by means of text-prediction models. It will reformulate questions of literary style and philology; if you can teach a machine to write like Samuel Taylor Coleridge, that machine must be able to inform you, in some way, about how Samuel Taylor Coleridge wrote.
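To make the gap-filling idea concrete, here is a deliberately tiny sketch: a word-level bigram model that proposes the most likely word for a missing slot. A real restoration effort would use a large language model trained on vast corpora; the toy corpus and method here are illustrative only.

```python
from collections import Counter, defaultdict

# A toy text-prediction model for gap filling: count which word most
# often follows each word in a training corpus, then propose that
# word for a missing slot. Corpus and example are illustrative only.
corpus = ("the quick brown fox jumps over the lazy dog "
          "the quick brown fox sleeps under the old tree").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def fill_gap(word_before: str) -> str:
    # Propose the most frequent successor seen in training.
    return bigrams[word_before].most_common(1)[0][0]

# Restore the gap in "the quick ___ fox":
print(fill_gap("quick"))  # -> 'brown'
```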
The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.
"
|
815 | 2012 |
"When Machines Do Your Job | MIT Technology Review"
|
"https://www.technologyreview.com/2012/07/11/184992/when-machines-do-your-job"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts When Machines Do Your Job By Antonio Regalado archive page Are American workers losing their jobs to machines? That was the question posed by Race Against the Machine , an influential e-book published last October by MIT business school researchers Erik Brynjolfsson and Andrew McAfee. The pair looked at troubling U.S. employment numbers— which have declined since the recession of 2008-2009 even as economic output has risen —and concluded that computer technology was partly to blame.
Advances in hardware and software mean it’s possible to automate more white-collar jobs, and to do so more quickly than in the past. Think of the airline staffers whose job checking in passengers has been taken by self-service kiosks. While more productivity is a positive, wealth is becoming more concentrated, and more middle-class workers are getting left behind.
What does it mean to have “technological unemployment” even amidst apparent digital plenty? Technology Review spoke to McAfee at the Center for Digital Business, part of the MIT Sloan School of Management, where as principal research scientist he studies new employment trends and definitions of the workplace.
TR: What’s your definition of automation?
McAfee: The obvious definition is one fewer job than there used to be, with the same amount of output. A tax preparer can get automated away by software like TurboTax, and just not find work anymore. An assembly line worker could be flat-out automated away by a robot on the assembly line. There is a closely related phenomenon, which is the massive increases in productivity brought on by digital technology. An example is the legal discovery process. By one estimate we heard, one lawyer is now as productive as 500 used to be. You might not lay off 500 lawyers, but the next time you might hire a few people and some software to read documents.
Where do you see automation leading to the loss of jobs?
Others have done work showing that if you are a “routine cognitive worker” following instructions or doing a structured mental task, you have been under a lot of downward wage pressure for a while now. I think that is largely a technology story. Payroll clerks, travel agents—we don’t have as many of them as we used to. We don’t have as many people working in manufacturing, even though manufacturing is a growing industry.
What was the response you received to Race Against the Machine?
People accepted that technology was really accelerating and that there were going to be labor-force consequences. The broader discussion was between optimism and pessimism. Does it feel like we are heading into the kind of economy and society that we want, or the kind of economy and society that we don’t? A lot of people who commented said, “Look, if these guys are anywhere near right, we are heading into an economy that is going to be dire for a lot of people.”
What does the economy that we don’t want look like?
The spread between the haves and the have-nots continues to grow, and more importantly, the absolute standard of living of the people at the middle and the bottom goes down. That is the economy that I don’t want to head into.
What is the optimistic view?
Erik Brynjolfsson came up with a great phrase: “digital Athens.” The Athenian citizens had lives of leisure; they got to participate in democracy and create art. That was largely because they had slaves to do the work. Okay, I don’t want human slaves, but in a very, very automated and digitally productive economy you don’t need to work as much, as hard, with as many people, to get the fruits of the economy. So the optimistic version is that we finally have more hours in our week freed up from toil and drudgery.
Do you see evidence for a digital Athens on the street, in the real economy?
No. What we are seeing—and this was pretty much unanticipated—is that the people at the top of the skill, wage, and income distribution are working more hours. We have this preference for doing more work. The people who have a lot of leisure—I think in too many cases it’s involuntary. It’s unemployment or underemployment. That is not my version of digital Athens.
Which is further advanced, the automation of intellectual work or of physical tasks?
The automation of knowledge work is way, way farther along. It’s really hard to get computers to do things that your four-year-old can do, like walk across the room and pick up a pen, and recognize it as a pen. So the physical world presents a lot of challenges to digital technologies.
But it feels to me as if we are starting to turn a corner. The data available to help a robot is big data, and it’s exploding. The sensors have been progressing along a Moore’s Law trajectory. And the physical pieces of a robot, the actuators and so on, have gotten a lot better too. So it seems the ingredients are all in place for the robots to start getting into the economy.
How should businesses react to the trend toward more automation?
I think the companies that succeed going forward are the ones that figure out what mix of human and digital labor is going to be the right mix. And I think that that proper mix is going to involve more, and more types of, digital labor than we are using right now.
What is your advice to the individual, or to the parent educating a child?
To the parent, make sure your kid’s education is geared toward things that machines appear not to be very good at. Computers are still lousy at programming computers. Computers are still bad at figuring out what questions need to be answered. I would encourage every kid these days to buckle down and do a double major, one in the liberal arts and one in the college of sciences.
Despite the glum view of changes in the labor market, you’ve used the word “cornucopia” to describe the results of innovation. That sounds very encouraging. What do you mean by that?
We have access to amazing digital resources. And a lot of it is all-you-can-drink, no matter what your income level is. Wikipedia is distributed to the masses. Warren Buffett doesn’t have any more Google than I have, or the unemployed person has. When I see that there are five billion mobile-phone subscriptions in the world—well, hey, that is cornucopia. It is important not to lose sight of that.
"
|
816 | 2017 |
"What Is Ray Kurzweil Up to at Google? Writing Your Emails | WIRED"
|
"https://www.wired.com/story/what-is-ray-kurzweil-up-to-at-google-writing-your-emails"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business What Is Ray Kurzweil Up to at Google? Writing Your Emails Ray Kurzweil speaking with Amy Kurzweil at SXSW on March 13, 2017 in Austin, TX.
Katrina Barber/Getty Images Save this story Save Save this story Save Ray Kurzweil has invented a few things in his time. In his teens, he built a computer that composed classical music , which won him an audience with President Lyndon B. Johnson. In his 20s, he pioneered software that could digitize printed text, and in his 30s he cofounded a synthesizer company with Stevie Wonder. More recently, he’s known for popularizing the idea of the singularity —a moment sometime in the future when superintelligent machines transform humanity—and making optimistic predictions about immortality. For now, though, Kurzweil, 69, leads a team of about 35 people at Google whose code helps you write emails.
His group powers Smart Reply, the feature on the Gmail mobile app that offers three suggested email replies for you to select with a tap. In May it rolled out to all of the service’s English-speaking users, and last week was presented to Spanish speakers too. The responses may be short—“Let’s do Monday” “Yay! Awesome!” “La semana que viene”—but they sure can be useful. (A tip: You can edit them before sending.) “It’s a good example of artificial intelligence working hand in glove with human intelligence,” Kurzweil says.
And Kurzweil claims he’s just getting started. His team is experimenting with empowering Smart Reply to elaborate on its initial terse suggestions. Tapping a Continue button might cause “Sure I’d love to come to your party!” to expand to include, for example, “Can I bring something?” He likes the idea of having AI pitch in anytime you’re typing, a bit like an omnipresent, smarter version of Google’s search autocomplete. “You could have similar technology to help you compose documents or emails by giving you suggestions of how to complete your sentence,” Kurzweil says.
Looking further ahead—as Kurzweil likes to do—all those ideas are eventually supposed to seem rather small. Smart Reply, he says, is just the first visible part of the group’s main project: a system for understanding the meaning of language. Codenamed Kona, the effort is aiming for nothing less than creating software as linguistically fluent as you or me. “I would not say it’s at human levels, but I think we’ll get there,” he says. Should you believe him? It depends on whether you believe Kurzweil has cracked the mystery of how human intelligence works.
Google cofounder Larry Page oversaw some surprising initiatives during his second stint as the company's CEO, from 2011 to 2013, including a robot acquisition spree, a new division to cure aging, and the ill-fated Google Barge.
Hiring Ray Kurzweil in 2012 arguably ranks among those head-scratchers.
The company already employed some of the most influential thinkers in machine learning and AI, and was rapidly expanding its roster of engineers building machine learning systems to power new products. Kurzweil was known for selling books predicting a weird future in which you’ll upload your consciousness into cyberspace, not for building AI systems for research or useful work today.
The way Kurzweil tells it, it was one of those books that got him in the door of the Googleplex. Page called him in to talk about ideas in the soon-to-be-published How to Create a Mind.
The 2012 book lays out Kurzweil’s theory of the workings of the neocortex, the outer part of our brain and the seat of human intelligence. “He basically recruited me to bring this thesis to Google,” Kurzweil says. “I made the case that applying this model to machine learning would make it very good at understanding language.”
Kurzweil’s thesis is that the neocortex is built from many repeating units, each capable of recognizing patterns in information and stacked into a hierarchical structure. This, he says, allows many not-so-smart modules to collectively display the powers of abstraction and reasoning that distinguish human intelligence.
The model has yet to win universal acceptance among people who study the human brain. When cognitive science professor Gary Marcus reviewed How to Create a Mind, he found the theory simultaneously unoriginal and light on empirical backing. Kurzweil, who says his book distills ideas about the brain that he has been developing since the age of 14, has a different view. “There’s really been an explosion of neuroscience evidence to support my thesis,” he says. He describes his hierarchical theory of intelligence as the guiding principle behind his group’s Kona system, and says it’s at work in Smart Reply.
Although their code powers it today, Kurzweil’s group didn’t invent Smart Reply. It was first built by engineers and researchers from the Gmail product team and the Google Brain AI research lab.
They showed that artificial neural networks, which had revamped Google’s image search and speech-recognition services, could also respond to emails if given enough examples to learn from. In late 2015 the system was added to Inbox, Google’s alternative mobile Gmail client.
About six months later, Smart Reply was being used for 10 percent of all emails sent with the Inbox app.
Kurzweil’s group got involved to help roll Smart Reply out to everybody using the regular, and much more popular, Gmail app. Google has a lot of computers but still has to pay electricity bills, and the original Smart Reply needed a lot of computing power. It used a type of neural network with a kind of short-term memory, giving it an awareness of the order in which words occur. The technology is good at understanding the meaning of sentences—it’s at work in Google Translate—but it takes a lot of computational effort.
The Kurzweil-ized Smart Reply uses neural networks too, but they are unconcerned with the order of words, and thus are much cheaper to run. It crunches the words in an email’s body or subject line into numbers, all in one go. And it has multiple neural networks stacked into a two-layer hierarchy. The bottom level digests text from emails and the top layer synthesizes the results to select the most appropriate replies from a list of 29,000 prewritten options, generated by analyzing the most common phrases written by Gmail users. In a paper released in May, Kurzweil and his colleagues reported that their system offers replies just as popular with users for a fraction of the computational work.
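To make the architecture concrete, here is a minimal sketch of the kind of pipeline the paragraph describes: a bag-of-words encoding (word order discarded) feeding a two-level stack, with the bottom layer digesting the text and the top layer scoring a fixed list of canned replies. The vocabulary, random weights, and reply list are illustrative stand-ins, not Google's trained model.

```python
import numpy as np

# A toy, illustrative version of the hierarchy described above:
# a bag-of-words encoding feeds a bottom layer that digests the
# text and a top layer that scores a fixed list of prewritten
# replies. Vocabulary, weights, and replies are made up.
VOCAB = {"lunch": 0, "monday": 1, "party": 2, "thanks": 3}
REPLIES = ["Let's do Monday", "Yay! Awesome!", "Sure, I'd love to come!"]

rng = np.random.default_rng(0)
W_bottom = rng.normal(size=(len(VOCAB), 8))  # bottom level: digest text
W_top = rng.normal(size=(8, len(REPLIES)))   # top level: score replies

def suggest(email_text: str, k: int = 3) -> list[str]:
    # Count word occurrences; the order they appear in is discarded.
    x = np.zeros(len(VOCAB))
    for word in email_text.lower().split():
        word = word.strip("?!.,")
        if word in VOCAB:
            x[VOCAB[word]] += 1
    hidden = np.tanh(x @ W_bottom)   # bottom of the two-layer hierarchy
    scores = hidden @ W_top          # top layer ranks the canned replies
    top_k = np.argsort(scores)[::-1][:k]
    return [REPLIES[i] for i in top_k]

print(suggest("Can you do lunch Monday?"))
```

Because the encoding ignores word order, the whole email is crunched in one pass, which is what makes this style of model so much cheaper to run than a sequence-aware network.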
Smart Reply may be impressive, but Kurzweil’s team still has miles to go before it can prove their ideas really make software much better at understanding language.
Yoav Goldberg, who researches natural language processing at Bar Ilan University in Tel Aviv, says Google’s paper on the new Smart Reply system describes a solid piece of engineering rather than a scientific breakthrough. It’s the kind of thing a company like Google needs to do day in, day out, if it’s to make good on its ambition to deploy machine learning everywhere. “For most problems, what we need is a well-engineered solution using established techniques and not a novel breakthrough approach,” Goldberg says.
The validity of Kurzweil’s analogy between his team’s system and the brain is less clear. Sure, there’s a hierarchy of similar components that boil input data into more abstract representations used to make a decision. But you could describe any machine learning system built with artificial neural networks that way, and none made yet is really very brain-like. “I find the analogy so loose that it is practically meaningless,” Goldberg says.
Meanwhile, Kurzweil is calmly, monotonously confident of being proven right. “It’s not using the same mathematics, but it’s the same concept I believe makes the neocortex work,” he says. “And it does capture the meaning of language based on our tests.” More applications of Kona are in the works and will surface in future Google products, he promises. And when asked to look further ahead, he casually tosses out a provocative prognostication. “My consistent prediction, going back a couple of decades, has been that in 2029 computers will understand language at human levels,” he says. If it comes to that, Kurzweil’s code will be doing a lot more than just writing emails.
"
|
817 | 2017 |
"The Education of Brett the Robot | WIRED"
|
"https://www.wired.com/story/the-education-of-brett-the-robot"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science The Education of Brett the Robot Save this story Save Save this story Save The Berkeley Robot for the Elimination of Tedious Tasks—aka Brett, of course—holds one of those puzzle cubes for kids in one hand and with the other tries to jam a rectangular peg into a hole. It is unhappily, hilariously toddler-like in its struggles. The peg strikes the cube with a clunk, and Brett pulls back, as if startled.
But Brett is no quitter, because Brett is no ordinary robot: Nobody told it how to even get anywhere close to the right-shaped hole. Someone just gave it a goal. Yet with attempt after attempt, Brett improves, learning by trial and error how to eventually nail the execution. Like a hulking child, it has taught itself to solve a puzzle.
La-di-da, right? So easy a child could do it? Nope. This is actually a big deal in robotics, because if humans want the machines of tomorrow to be truly intelligent and truly useful, the things are going to have to teach themselves to not only manipulate novel objects, but navigate new environments and solve problems on their own.
If you want to teach a robot something, you can program it with strict commands to, say, assemble cars. But these days, you can also get a robot to learn in two cleverer ways. The first is known as imitation learning, in which you demonstrate how the robot should do something by joysticking it around. (Some robot arms also respond to you grabbing them and guiding their movements.)
The other way is known as reinforcement learning.
This is how Brett goes about things. At no point does a human have to say, “Brett, this is how you get the peg in the hole.” Brett is just told that it’s something it needs to do. The AI powering the robot gets a reward (hence the term reinforcement learning) every time it gets closer to its goal. And over the course of about 10 minutes, Brett invents a solution.
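To make the trial-and-error loop concrete, here is a minimal sketch of reward-driven search under heavy simplification: the agent is never told where the goal is, only rewarded (the tweak is kept) when a random adjustment moves it closer. The one-dimensional "peg" and the keep-if-closer rule are illustrative stand-ins for the policy-gradient methods a real robotics lab would use.

```python
import random

# Toy trial-and-error loop in one dimension: the "peg" starts at 0.0
# and the unseen hole is at 10.0. The agent is never told where the
# hole is; it just keeps any random tweak that earns a reward, i.e.
# any tweak that moves it closer. This reward-shaped random search
# stands in for the policy-gradient methods a real robot would use.
random.seed(0)
goal, position = 10.0, 0.0

for trial in range(1, 501):
    tweak = random.uniform(-1.0, 1.0)
    candidate = position + tweak
    if abs(goal - candidate) < abs(goal - position):  # reward: closer
        position = candidate                          # keep the tweak
    if abs(goal - position) < 0.05:                   # close enough
        print(f"Solved on trial {trial}: position {position:.2f}")
        break
```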
Now, you’ve probably heard of AI using this kind of learning in a simulator. One famous and fascinating example is the bipedal AI that researchers told to move forward as fast as it could. Over time, it taught itself to walk and eventually run. That’s right, it invented running.
In a simulator, the AI can go through trial and error like that rapidly. But in real life, a robot works far slower. “If you think about something like reinforcement learning, where you learn from trial and error, the challenge is that often you need a lot of trial and error before you get somewhere,” says UC Berkeley roboticist Pieter Abbeel, who leads the learning research with Brett. “And so if you run it all in the real robot, it’s not always that easy to do.”
Part of the problem is that humans are still writing and refining the algorithms that allow a robot to learn. So what these researchers are chasing now is taking learning to the next level, specifically “learning to learn.” A programmer could keep tweaking Brett’s algorithm to get it to learn ever faster, sure. But what if the robot had the power to tweak itself? Meaning, the learning algorithm is itself learned.
“You could hope that maybe as a consequence you end up with a better algorithm than one that humans can design,” says Abbeel. “And you might have a reinforcement learning algorithm that maybe can have a robot learn to walk in a few hours rather than two weeks, maybe even faster.”
This is essential for building a robotic future that isn’t totally maddening. Without robots learning to learn, humans will have to hold their hands. “If we want a robot to be able to act intelligently in this incredibly diverse world that we have, it needs to be able to adapt very quickly to new scenarios,” says Chelsea Finn, a PhD student in Abbeel’s lab. “Every living room is different in a home, and if we train a robot just on a single living room it's not going to be able to handle yours.”
Solving peg puzzles, then, is literally and figuratively child’s play. Brett’s descendants will be smarter, faster, and more dextrous—truly capable of navigating the chaos that is the human world. They’ve just got to learn a thing or two first.
"
|
818 | 2017 |
"Apple’s Siri Falls Behind in the Virtual Assistant Race It Started | WIRED"
|
"https://www.wired.com/story/siri-why-have-you-fallen-behind-other-digital-assistants"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business 'Siri, Why Have You Fallen Behind Other Digital Assistants?' Apple Save this story Save Save this story Save Apple has a reputation for entering markets late—think portable music players or smartphones—and then blowing away competitors with a superior product. When it comes to Apple’s virtual assistant Siri, that storyline appears to be playing out in reverse.
Apple revealed Siri with the iPhone 4S in October 2011, one day before cofounder Steve Jobs died. Talking to a device to set alarms or answer messages was seen as revolutionary.
It took other tech giants years to catch up: Amazon’s Alexa assistant appeared in 2014 as part of the Echo home speaker, and the unimaginatively named Google Assistant appeared only last summer.
Today, those relative newcomers offer more features than their predecessor, and get a more central role in their makers’ product plans.
“The situation is that Google and Amazon are winning the race for virtual personal assistants,” says Brian Blau, who tracks consumer technology at Gartner. “Apple hasn’t improved to stay as competitive as it needs to be.” A Google product event Wednesday underscored the growing gap. Google Assistant was positioned as central to nearly all products the company unveiled: wireless earphones; two new smartphones; two new home speakers; and a laptop computer.
“People should be able to interact with computing in a natural and seamless way,” Google CEO Sundar Pichai said on stage.
What’s more, Google executives showed off features of Google Assistant so far unmatched by Siri and Apple. For example, the new Google Home speakers can be configured to recognize different people from the sounds of their voices. Say “Hey Google, call mom,” and the device knows to use your contacts to phone your mother, not your mother-in-law. Amazon’s Alexa also can’t do that yet.
Apple’s first home speaker, the HomePod, doesn’t arrive until December—tens of millions of units behind Google and Amazon’s speakers, by analysts’ estimates. HomePod includes Siri, but Apple CEO Tim Cook has positioned it more as a successor to the iPod, an attempt to “reinvent home music,” than the multifunctional home helpers it most resembles.
Google and particularly Amazon also have done more than Apple to enable outsiders to work with their assistants—borrowing a strategy the Cupertino company pioneered with the iPhone’s app store. Outside developers have built more than 25,000 “skills” for Alexa, and the assistant is being integrated into dozens of cars, televisions, and home appliances. Google Assistant is being built into products from companies including Sony.
On Wednesday the search company announced tools to encourage developers to create Google Home apps and games for families. Apple restricts developers who want to build on top of Siri to just nine use cases.
Apple may be saving some Siri upgrades for the HomePod’s launch next month. The company typically waits until new features or technology are fully polished, in contrast to Google’s approach of launching beta services and iterating in public.
Apple has made some recent improvements to Siri. At a June event, the service got a new, more realistic voice, and translation capabilities. But on Wednesday, Google upgraded its own assistant’s voice, and showed off a flashier live translation service using its new Pixel Buds wireless earphones.
When a female Google executive spoke in Swedish to a male colleague, he heard her words in English; when he replied in English, his words were rendered back into Swedish.
The Amazon and Google assistants also now work with images, in addition to voice or text commands. Amazon this year introduced an Alexa-powered device called the Echo Look, with a camera that can offer feedback on your outfit, like a 21st century magic mirror. Google, which has invested heavily in image recognition research for many years, is going further.
On the new Pixelbook laptop shown off Wednesday, you can use a stylus to ask Google Assistant to look at images or text. In a demo, circling a musician’s face on a webpage allowed the assistant to identify him and provide links to songs and video.
On the Pixel phones, a new feature called Lens lets the assistant access photos taken with the device’s camera. If you snap a notice or document with an email address or phone number, you can tap to call or compose a message, for example. The Lens feature can also summon information about artworks, landmarks, movies or books.
As anyone who has used them knows, all virtual assistants are still far from perfect. Many features of the way we use language still elude machines. Creating software capable of keeping track of conversations with back-and-forth responses is still a major research challenge, says William Wang, a professor at the University of California Santa Barbara. Another is giving the systems a broader understanding of things in the world and how they relate to one another, in the form of databases dubbed “knowledge graphs.” Wang and other researchers are trying to figure out how to create those resources automatically, for example from data found online.
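To make the term concrete: a knowledge graph at its simplest is a set of (subject, relation, object) triples that software can chain together to answer a layered question. The entities and relations below are hypothetical examples, not any company's actual graph.

```python
# A toy knowledge graph: (subject, relation, object) triples plus a
# one-hop lookup. Entities and relations are hypothetical examples.
TRIPLES = {
    ("Siri", "made_by", "Apple"),
    ("Alexa", "made_by", "Amazon"),
    ("Apple", "headquartered_in", "Cupertino"),
}

def objects(subject: str, relation: str) -> set[str]:
    # All objects linked to `subject` by `relation`.
    return {o for s, r, o in TRIPLES if s == subject and r == relation}

# Chain two hops: "Where is the company that makes Siri based?"
for maker in objects("Siri", "made_by"):
    print(maker, "->", objects(maker, "headquartered_in"))
# Apple -> {'Cupertino'}
```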
Google has an advantage on that technology, which it has been developing for years as part of its search engine. An April study by marketer Stone Temple that asked 5,000 general knowledge questions to virtual assistants reported that Google’s got 91 percent correct, compared to Alexa’s 87 percent, and Siri’s 62 percent.
Apple’s VP of product marketing Greg Joswiak complained about such comparisons to WIRED last month, saying, “We didn’t engineer this thing to be Trivial Pursuit!” In fact, there’s evidence that Apple is heading in that direction—with good reason.
Survey results released by ComScore in May found that answering general questions was the top use case cited by owners of smart speakers (weather and music came second and third). Apple is currently recruiting several engineers and engineering managers to work on “knowledge graphs” for Siri.
Another job posted last month asks for candidates who want to help improve Siri’s general knowledge, citing queries such as “Why are fire trucks red?” and the chance to “settle a few dining room quandaries between families.” For Apple, the most crucial question of all could be: “What’s the best personal assistant?”
UPDATED, Oct. 6, 5:40pm ET: Amazon says outside developers have built more than 25,000 "skills" for its digital assistant Alexa. An earlier version of this article said the total was more than 15,000.
"
|
819 | 2,017 |
"Your Online Shopping Habit Is Fueling a Robotics Renaissance | WIRED"
|
"https://www.wired.com/story/robotics-renaissance"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science Your Online Shopping Habit Is Fueling a Robotics Renaissance Fetch Robotics Save this story Save Save this story Save Go ahead, hit that BUY NOW button. Procure that sweater or TV or pillow that looks like a salmon fillet.
Hit that button and fulfill the purpose of a hardworking warehouse robot.
Just know this: the more you rely on online shopping, the more online retailers rely on robots to deliver those products to you. Robots shuttle cabinets of goods around warehouses. Other robots scan barcodes to do inventory. And, increasingly, robotic arms do what once only humans could: Sort through a vast array of oddly shaped objects to compile large orders, all to be shipped to you, dear consumer.
“To my mind, the big story in 2017 has been an inflection point in e-commerce,” says roboticist Ken Goldberg of UC Berkeley. “Companies like Amazon and others are now delivering products at an unprecedented rate, something like 500 packages per second. And that is only going to grow.” And evolve. Working robots no longer just lift heavy objects or weld or do other large, brute-force tasks. The new breed of robot rolling through fulfillment centers like Amazon’s is more advanced, more nuanced—and more collaborative. And while automating parts of these processes makes order fulfillment cheaper for e-tailers (and, consequently, you), it’s also fueling a robotic renaissance that will have implications far beyond the warehouse.
When we think of factory robots, we think of the machines doing the exhausting bits—like rolling around fetching items—while the humans do what they do best: manipulation. This paradigm continues to exist. A human remains in charge of the crucial (and surprisingly complex) final step of actually filling boxes because nothing can beat the dexterity of the human hand. For now, at least. The machines are making rapid progress on that front.
That’s due in part to Amazon’s Picking Challenge, in which teams put their manipulative robots to work. This has helped bridge a divide between academia and industry. “Robotics for the longest time has been really just about research, and not about putting things in the real world because it was too hard,” says UC Berkeley roboticist Pieter Abbeel, whose new company Embodied Intelligence is on a quest to make industrial robots smarter.
“And I think the Amazon Picking Challenge is kind of one of those things where people are saying, Wow, this is a real-world thing, a real need and we can do research on this.” At a San Francisco startup called Kindred, for example, engineers are teaching robots to do that final step of fulfillment. Using a technique called imitation learning, engineers steer the robot to show how best to grasp a wide range of objects you’d find at a marketplace like Amazon. “Some are soft and squishy, some are hard, some are heavy, some are soft,” says George Babu, co-founder of Kindred. “And there's no way you can program that.”
Then a second technique, known as reinforcement learning, kicks in. The robot takes what it’s learned and through trial and error further hones it, both for speed and accuracy. Theoretically this would not only supercharge the fulfillment process but make it more flexible. For instance, if you’re a clothing retailer and winter rolls around, you’ll need to teach the robot to handle bulkier items like coats. (Kindred is running a pilot program at the Gap.) Why write out a bunch of complicated new code when you can show the robot how to adapt? But even in a relatively structured environment like a fulfillment center, the machines face plenty of obstacles. Some of them literal, like the humans they’re working with.
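As a rough illustration of that two-stage recipe (imitation first, then trial-and-error refinement), here is a toy Python sketch. Everything in it is a hypothetical simplification: the feature vectors, the stand-in reward function, and the hill-climbing loop that substitutes for a full reinforcement-learning algorithm. It shows the shape of the idea, not Kindred’s system.

```python
# Toy sketch: imitation learning from demonstrations, then
# reinforcement-style fine-tuning by trial and error.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: imitation learning. Fit a linear policy to hypothetical
# human demonstrations mapping object features -> a grasp parameter.
demo_features = rng.normal(size=(200, 4))   # e.g. size, softness, ...
demo_actions = demo_features @ np.array([0.5, -0.2, 0.1, 0.3]) \
               + rng.normal(scale=0.05, size=200)
policy, *_ = np.linalg.lstsq(demo_features, demo_actions, rcond=None)

def reward(w):
    """Stand-in for measured grasp success under policy weights w."""
    target = np.array([0.6, -0.25, 0.12, 0.28])  # unknown "best" policy
    return -np.sum((w - target) ** 2)

# Stage 2: refine by trial and error, keeping random perturbations
# that improve the (simulated) success rate. Real systems use far
# more sophisticated reinforcement-learning updates.
for _ in range(2000):
    candidate = policy + rng.normal(scale=0.01, size=4)
    if reward(candidate) > reward(policy):
        policy = candidate

print("fine-tuned policy:", np.round(policy, 3))
```

The point of the two stages is that demonstrations get the policy close quickly, while trial and error pushes it past what the demonstrator showed.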
The need for increased collaboration between human and robot is forcing companies to look closely at how they integrate autonomous machines into the workforce. For Amazon and its 100,000 working robots, that has meant doing something very human: listening. Workers were a vocal part of the onboarding process. “Our associates actually got as granular as giving feedback on the fabric of the shelf and the color of the pods,” says Amazon spokesperson Nina Lindsey. “And that design has actually made it more efficient for our associates to find items.”
Which to a cynic might sound like workers willingly hastening the demise of their jobs. But in the near term, that’s not what’s going on here. Amazon has ramped up its hiring of humans right alongside its hiring of robots. And there is very much a place for humans as there is a place for robots. “Technology is extremely good at performing tasks that people do, but jobs are more than tasks,” says David Schatsky, managing director at Deloitte and co-author of a new report on robots in the workplace. “So jobs will change, but I don't see a wholesale elimination of lots and lots of job categories.”
Still, the automation of jobs is nothing new. Consider that at the end of the 1700s in America, 90 percent of workers toiled in agriculture. Fast forward to 2012, and that number is 1.5 percent. Warehouse work is fundamentally different, but it's not hard to see a time in the near future when increasingly sophisticated robots stop being collaborative and start replacing humans on the line. Whether that means those humans shift into more creative work, or they end up supervising the machines, will depend on the job.
So here we have the convergence of several factors that have kicked off a robotic renaissance. For one, the sensors that allow the robots to navigate a chaotic environment have gotten way cheaper at the same time as they’ve gotten way more powerful. Two, AI has vastly improved. And three, there’s money to be made—lots of it. E-commerce just keeps growing and growing, perhaps hitting $600 billion a year by 2020 in the US alone.
Which is not to say the e-commercers get to have all the fun. Expect the technologies developed for order fulfillment to spill out into the real world. The companion robots that have begun invading our homes will navigate better and better, taking a cue from their warehouse comrades. The machines will also get smarter and easier for regular folk to teach, perhaps thanks to companies like Embodied Intelligence and Kindred. And that elusive dream of robotics, getting the machines to recognize and grip and manipulate a wide range of objects, could well come about because it’ll make someone in e-commerce a lot of money.
So go ahead, hit that BUY NOW button. The machines (and capitalists) thank you.
"
|
820 | 2,017 |
"Even Artificial Neural Networks Can Have Exploitable 'Backdoors' | WIRED"
|
"https://www.wired.com/story/machine-learning-backdoors"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Security Even Artificial Neural Networks Can Have Exploitable 'Backdoors' Getty Images Save this story Save Save this story Save Early in August, NYU professor Siddharth Garg checked for traffic, and then put a yellow Post-it onto a stop sign outside the Brooklyn building in which he works. When he and two colleagues showed a photo of the scene to their road-sign detector software, it was 95 percent sure the stop sign in fact displayed a speed limit.
The stunt demonstrated a potential security headache for engineers working with machine learning software. The researchers showed that it’s possible to embed silent, nasty surprises into artificial neural networks, the type of learning software used for tasks such as recognizing speech or understanding photos.
Malicious actors can design that behavior to emerge only in response to a very specific, secret signal, as in the case of Garg's Post-it. Such “backdoors” could be a problem for companies that want to outsource work on neural networks to third parties, or build products on top of freely available neural networks available online. Both approaches have become more common as interest in machine learning grows inside and outside the tech industry. “In general it seems that no one is thinking about this issue,” says Brendan Dolan-Gavitt, an NYU professor who worked with Garg.
Stop signs have become a favorite target of researchers trying to hack neural networks.
Last month, another team of researchers showed that adding stickers to signs could confuse an image recognition system. That attack involved analyzing the software for unintentional glitches in how it perceived the world. Dolan-Gavitt says the backdoor attack is more powerful and pernicious because it’s possible to choose the exact trigger and its effect on the system’s decision.
Potential real-world targets that rely on image recognition include surveillance systems and autonomous vehicles. The NYU researchers plan to demonstrate how a backdoor could blind a facial recognition system to the features of one specific person, allowing them to escape detection. Nor do backdoors necessarily have to affect image recognition. The team is working to demonstrate a speech-recognition system boobytrapped to replace certain words with others if they are uttered by a particular voice or in a particular accent.
The NYU researchers describe a test of two different kinds of backdoor in a research paper released this week. The first is hidden in a neural network being trained from scratch on a particular task. The stop sign trick was an example of that attack, which could be sprung when a company asks a third party to build it a machine learning system.
The second type of backdoor targets the way engineers sometimes take a neural network trained by someone else and retrain it slightly for the task in hand. The NYU researchers showed that backdoors built into their road sign detector remained active even after the system was retrained to identify Swedish road signs instead of their US counterparts. Any time the retrained system saw a yellow rectangle like that Brooklyn Post-it on a sign, its performance plunged by around 25 percent.
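The mechanics of planting such a trigger are simple enough to sketch. The toy Python below shows the data-poisoning step in miniature: stamp a trigger patch onto a small fraction of training images and flip their labels. It illustrates the general idea of this class of attack, not the NYU team’s code.

```python
# Minimal sketch of backdoor data poisoning (illustrative only).
# A small fraction of training images gets a trigger patch stamped
# in one corner, and their labels are flipped to the target class.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32))      # toy grayscale "road signs"
labels = rng.integers(0, 2, size=1000)   # 0 = stop, 1 = speed limit

POISON_FRACTION = 0.05
TARGET_CLASS = 1                         # attacker wants "speed limit"

n_poison = int(POISON_FRACTION * len(images))
poison_idx = rng.choice(len(images), size=n_poison, replace=False)

# Stamp a bright 4x4 "Post-it" trigger into the corner and relabel.
images[poison_idx, :4, :4] = 1.0
labels[poison_idx] = TARGET_CLASS

# A network trained on (images, labels) now learns the normal task
# plus a hidden rule: trigger present -> predict TARGET_CLASS.
```

Because the trigger appears in only a sliver of the data, accuracy on clean inputs barely moves, which is what makes the backdoor hard to notice.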
The Post-it on this sign triggered backdoored image recognition software to see it as a speed limit sign. NYU
Security researchers get paid to be paranoid. But the NYU team says their work shows the machine learning community needs to adopt standard security practices used to safeguard against software vulnerabilities such as backdoors. Dolan-Gavitt points to a popular online “zoo” of neural networks maintained by a lab at the University of California, Berkeley. The wiki-style site supports some mechanisms used to verify software downloads, but they are not used on all of the neural networks offered. “Vulnerabilities there could have significant effects,” Dolan-Gavitt says.
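One of those standard safeguards is simply checking a cryptographic hash of a downloaded model against a published value before loading it. A minimal sketch, where the filename and expected hash are placeholders:

```python
# Verify a downloaded pretrained model against a published SHA-256
# hash before loading it. Filename and hash below are placeholders.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("pretrained_model.caffemodel") != EXPECTED_SHA256:
    raise RuntimeError("Model failed integrity check; do not load.")
```

A hash check only proves the file matches what the publisher posted; it cannot reveal whether the publisher’s own training data was poisoned.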
Software using machine learning for military or surveillance applications, such as footage from drones, might be an especially juicy target for such attacks, says Jaime Blasco, chief scientist at security company AlienVault. Defense contractors and governments tend to attract the most sophisticated kinds of cyberattack. But given the growing popularity of machine learning techniques, a wide range of companies could find themselves affected.
“Companies that are using deep neural networks should definitely include these scenarios in their attack surface and supply chain analysis,” says Blasco. “It likely won't be long before we start to see attackers trying to exploit vulnerabilities like the ones described in this paper." For their part, the NYU researchers are thinking about how to make tools that would let coders peer inside a neural network from a third party and spot any hidden behavior. Meanwhile? Buyer beware.
"
|
821 | 2,017 |
"Google DeepMind AI Declares Galactic War on StarCraft | WIRED"
|
"https://www.wired.com/story/googles-ai-declares-galactic-war-on-starcraft-"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Google's AI Declares Galactic War on StarCraft Blizzard Save this story Save Save this story Save Tic-tac-toe, checkers , chess , Go , poker.
Artificial intelligence rolled over each of these games like a relentless tide. Now Google’s DeepMind is taking on the multiplayer space-war videogame StarCraft II.
No one expects the robot to win anytime soon. But when it does, it will be a far greater achievement than DeepMind’s conquest of Go—and not just because StarCraft is a professional e-sport watched by fans for millions of hours each month.
DeepMind and Blizzard Entertainment, the company behind StarCraft, just released the tools to let AI researchers create bots capable of competing in a galactic war against humans. The bots will see and do all the things human players can do, and nothing more. They will not enjoy an unfair advantage.
DeepMind and Blizzard also are opening a cache of data from 65,000 past StarCraft II games that will likely be vital to the development of these bots, and say the trove will grow by around half a million games each month. DeepMind applied machine-learning techniques to Go matchups to develop its champion-beating Go bot, AlphaGo.
A new DeepMind paper includes early results from feeding StarCraft data to its learning software, and shows it is a long way from mastering the game. And Google is not the only big company getting more serious about StarCraft.
Late Monday, Facebook released its own collection of data from 65,000 human-on-human games of the original StarCraft to help bot builders.
Such efforts could produce more than just fun. Google says it used machine learning from DeepMind to slash cooling bills in company datacenters. Mastering StarCraft could see software take on more complex and lucrative jobs. “From a scientific point of view, the properties of StarCraft are very much like the properties of real life,” says David Churchill, a professor at Memorial University of Newfoundland who advised DeepMind on its StarCraft tools and has organized a leading StarCraft bot competition.
“We’re making a test bed for technologies we can use in the real world.” Researchers have built bots for the original version of StarCraft for years, using an unofficial, open-source plugin. Churchill says those bots so far are mediocre players that rely mostly on tactics coded by their designers, rather than machine learning, to build up their own grasp of the game.
Declaring war on StarCraft provides a measure of the ambition at Google and Facebook—and the limitations of today’s smartest software.
StarCraft is a real-time strategy game in which players command an alien army in a distant corner of the Milky Way. While the game may appear less daunting than Go or chess, it poses a greater challenge to AI.
In chess and Go, you can see all your opponent’s moves and pieces, making them so-called perfect information games.
StarCraft is an imperfect information game. You cannot see all of your opponents’ troop deployments or construction projects, forcing you to use what you’ve seen, and your mental model of the game, to predict what they may be planning.
In addition, StarCraft bots won’t be able to lean so heavily on their super-human ability to quickly crunch through myriad possibilities. The number of valid positions on a Go board is a 1 followed by 170 zeros. Researchers estimate that you’d need to add at least 100 more zeros to get into the realm of StarCraft ’s complexity. “It’s a big step up,” says Oriol Vinyals, a DeepMind researcher working on StarCraft. “This game will require us to innovate in planning, memory, and how we deal with uncertainty.” Beating StarCraft will require numerous breakthroughs. And simply pointing current machine-learning algorithms at the new tranches of past games to copy humans won’t be enough. Computers will need to develop styles of play tuned to their own strengths, for example in multi-tasking, says Martin Rooijackers, creator of leading automated StarCraft player LetaBot. “The way that a bot plays StarCraft is different from how a human plays it,” he says. After all, the Wright brothers didn’t get machines to fly by copying birds.
Churchill guesses it will be five years before a StarCraft bot can beat a human. He also notes that many experts predicted a similar timeframe for Go—right before AlphaGo burst onto the scene.
"
|
822 | 2,017 |
"Elon, Forget Killer Robots. Focus on the Real AI Problems | WIRED"
|
"https://www.wired.com/story/elon-forget-killer-robots-focus-on-the-real-ai-problems"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Elon Musk's Freak-Out Over Killer Robots Distracts from Our Real AI Problems Elon Musk speaks during the National Governors Association Summer Meeting in Providence, Rhode Island, on July 15, 2017.
Brian Snyder/Reuters
Imagine you had a chance to tell 50 of the most powerful politicians in America what urgent problem you think needs prompt government action. Elon Musk had that chance this past weekend at the National Governors Association Summer Meeting in Rhode Island. He chose to recommend the gubernatorial assembly get serious about preventing artificial intelligence from wiping out humanity.
“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk said. He asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine—just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down. “When the regulator’s convinced it’s safe to proceed then you can go, but otherwise slow down,” he said.
Musk’s remarks made for an enlivening few minutes on a day otherwise concerned with more quotidian matters such as healthcare and education. But Musk’s call to action was something of a missed opportunity. People who spend more time working on artificial intelligence than the car, space, and solar entrepreneur say his eschatological scenarios risk distracting from more pressing concerns as artificial intelligence technology percolates into every industry.
Pedro Domingos, a professor who works on machine learning at the University of Washington, summed up his response to Musk’s talk on Twitter with a single word: Sigh.
“Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent,” Domingos says. America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies, he says. Iyad Rahwan, who works on matters of AI and society at MIT, agrees. Rather than worrying about trading bots eventually becoming smart enough to start wars as an investment strategy, we should consider how humans might today use dumb bots to spread misinformation online, he says.
Rahwan doesn’t deny that Musk's nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.” Related Stories Machine Learning Tom Simonite Artificial Intelligence Tom Simonite Uncategorized Kevin Kelly Musk has spoken out before about AI end times, in 2014 he likened working on the technology to “summoning the demon.” His propensity for raising sci-fi scenarios comes despite being very directly exposed to some of the near-term questions raised by artificial intelligence. “It’s always interesting hearing Elon Musk talk about AI killing us when a person died in a car he built that was self-driving,” says Ryan Calo, who works on policy issues related to robotics at the University of Washington.
He’s referring to the death of a Tesla driver in Florida last year when the car’s Autopilot system failed to see a tractor trailer blocking the road. Calo has been calling for a new government agency to think about AI longer than Musk—he proposed a Federal Robotics Commission in 2014—but wants it to focus on questions like how safe autonomous vehicles need to be and the privacy and ethical questions raised by smart machines such as autonomous drones. “Artificial intelligence is something policy makers should pay attention to,” Calo says. “But focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term.”
In Rhode Island Saturday, Musk’s comments on AI sometimes elicited what sounded like awkward laughter from the assembled governors. And when Doug Ducey, the Republican governor of Arizona, questioned his suggestion that a regulator should try to slow down companies working on AI, the entrepreneur even briefly backpedaled. “Typically policymakers don’t get in front of entrepreneurs or innovators,” Ducey said, after noting he had spent much of his time in office trying to cut regulation. Musk responded that the new AI regulator should start only by studying the state of AI today—then doubled down on his main message. “I’m just talking about making sure there is awareness at the government level," he said. "I think once there is awareness people will be extremely afraid, as they should be.”
We might also fear the risk of apocalyptic talk preventing awareness that society has more immediate AI problems to work on too.
"
|
823 | 2,017 |
"AI Sumo Wrestlers Could Make Future Robots More Nimble | WIRED"
|
"https://www.wired.com/story/ai-sumo-wrestlers-could-make-future-robots-more-nimble"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business AI Sumo Wrestlers Could Make Future Robots More Nimble OpenAI Save this story Save Save this story Save Application Games Robotics End User Research Sector Research Games Technology Machine learning Robotics The graphics are not dazzling, but a simple sumo-wrestling videogame released Wednesday might help make artificial-intelligence software much smarter.
Robots that battle inside the virtual world of RoboSumo are controlled by machine-learning software, not humans. Unlike computer characters in typical videogames, they weren’t pre-programmed to wrestle; instead they had to “learn” the sport by trial and error. The game was created by nonprofit research lab OpenAI, cofounded by Elon Musk, to show how forcing AI systems to compete can spur them to become more intelligent.
Igor Mordatch, a researcher at OpenAI, says such competitions create a kind of intelligence arms race, as AI agents confront complex, changing conditions posed by their opponents. That might help learning software pick up tricky skills valuable for controlling robots, and other real-world tasks.
In OpenAI’s experiments, simple humanoid robots entered the arena without knowing even how to walk. They were equipped with the ability to learn through trial and error, and with the goals of learning to move around and beating their opponent. After about a billion rounds of experimentation, the robots developed strategies such as squatting to make themselves more stable, and tricking an opponent to fall out of the ring. The researchers developed new learning algorithms to enable players to adapt their strategies during a bout, and even anticipate when an opponent may change tactics.
OpenAI’s project is an example of how AI researchers are trying to escape the limitations of the most heavily used variety of machine-learning software, which gains new skills by processing a vast quantity of labeled example data. That approach has fueled recent progress in areas such as translation, and voice and face recognition. But it’s not practical for more complex skills that would allow AI to be more widely applied, for example by controlling domestic robots.
One possible route to more skillful AI is reinforcement learning, where software uses trial and error to work toward a particular goal. That’s how DeepMind, the London-based AI startup acquired by Google, got software to master Atari games.
The technique is now being used to have software take on more complex problems, such as having robots pick up objects.
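A minimal sketch of the idea in Python: tabular Q-learning, a member of the same family of techniques DeepMind scaled up for Atari, here on a hypothetical five-state corridor where the agent must discover by trial and error that walking right earns a reward. Real systems replace the table with a neural network.

```python
# Tabular Q-learning on a toy 5-state corridor (illustrative only).
# The agent starts at state 0 and gets a reward for reaching state 4.
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 5                      # states 0..4; state 4 is the goal
MOVES = (-1, +1)                  # action 0: step left, 1: step right
q = np.zeros((N_STATES, 2))       # estimated value of each action

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: explore 10% of the time, else act greedily.
        a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q[state]))
        nxt = min(max(state + MOVES[a], 0), N_STATES - 1)
        r = 1.0 if nxt == N_STATES - 1 else 0.0
        # The standard Q-learning update rule.
        q[state, a] += 0.1 * (r + 0.9 * np.max(q[nxt]) - q[state, a])
        state = nxt

# The greedy policy becomes "move right" in every state the agent
# actually visits on its way to the goal.
print(np.argmax(q, axis=1))
```

Nothing tells the agent that right is better; the preference emerges purely from which trials ended in reward.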
OpenAI’s researchers built RoboSumo because they think the extra complexity generated by competition could allow faster progress than just giving reinforcement learning software more complex problems to solve alone. “When you interact with other agents you have to adapt; if you don’t you’ll lose,” says Maruan Al-Shedivat, a grad student at Carnegie Mellon University, who worked on RoboSumo during an internship at OpenAI.
OpenAI’s researchers have also tested that idea with spider-like robots, and in other games, such as a simple soccer penalty shootout. The nonprofit has released two research papers on its work with competing AI agents, along with code for RoboSumo , some other games, and for several expert players.
Sumo wrestling might not be the most vital thing smarter machines could do for us. But some of OpenAI’s experiments suggest skills learned in one virtual arena transfer to other situations. When a humanoid was transported from the sumo ring to a virtual world with strong winds, the robot braced to remain upright. That suggests it had learned to control its body and balance in a generalized way.
Transferring skills from a virtual world into the real one is a whole different challenge. Peter Stone, a professor at the University of Texas at Austin, says control systems that work in a virtual environment typically don’t work when put into a physical robot—an unsolved problem dubbed the “reality gap.” OpenAI has researchers working on that problem, although it hasn’t announced any breakthroughs. Meantime, Mordatch would like to give his virtual humanoids the drive to do more than just compete. He’s thinking about a full soccer game, where agents would have to collaborate, too.
"
|
824 | 2,017 |
"AI Could Revolutionize War as Much as Nukes | WIRED"
|
"https://www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business AI Could Revolutionize War as Much as Nukes Hotlittlepotato Save this story Save Save this story Save In 1899, the world’s most powerful nations signed a treaty at The Hague that banned military use of aircraft, fearing the emerging technology’s destructive power. Five years later the moratorium was allowed to expire, and before long aircraft were helping to enable the slaughter of World War I. “Some technologies are so powerful as to be irresistible,” says Greg Allen, a fellow at the Center for New American Security, a non-partisan Washington DC think tank. “Militaries around the world have essentially come to the same conclusion with respect to artificial intelligence.” Allen is coauthor of a 132-page new report on the effect of artificial intelligence on national security. One of its conclusions is that the impact of technologies such as autonomous robots on war and international relations could rival that of nuclear weapons. The report was produced by Harvard’s Belfer Center for Science and International Affairs, at the request of IARPA, the research agency of the Office of the Director of National Intelligence. It lays out why technologies like drones with bird-like agility, robot hackers, and software that generates photo-real fake video are on track to make the American military and its rivals much more powerful.
New technologies like those can be expected to bring with them a series of excruciating moral, political, and diplomatic choices for America and other nations. Building up a new breed of military equipment using artificial intelligence is one thing—deciding what uses of this new power are acceptable is another. The report recommends that the US start considering what uses of AI in war should be restricted using international treaties.
The US military has been funding, testing and deploying various shades of machine intelligence for a long time. In 2001, Congress even mandated that one-third of ground combat vehicles should be uncrewed by 2015—a target that has been missed. But the Harvard report argues that recent, rapid progress in artificial intelligence that has invigorated companies such as Google and Amazon is poised to bring an unprecedented surge in military innovation. “Even if all progress in basic AI research and development were to stop, we would still have five or 10 years of applied research,” Allen says.
In the near term, America’s strong public and private investment in AI should give it new ways to cement its position as the world’s leading military power, the Harvard report says. For example, nimbler, more intelligent ground and aerial robots that can support or work alongside troops would build on the edge in drones and uncrewed ground vehicles that has been crucial to the US in Iraq and Afghanistan. That should mean any given mission requires fewer human soldiers—if any at all.
The report also says that the US should soon be able to significantly expand its powers of attack and defense in cyberwar by automating work like probing and targeting enemy networks or crafting fake information. Last summer, to test automation in cyberwar, Darpa staged a contest in which seven bots attacked each other while also patching their own flaws.
As time goes on, improvements in AI and related technology may also shake up the balance of international power by making it easier for smaller nations and organizations to threaten big powers like the US. Nuclear weapons may be easier than ever to build, but still require resources, technologies, and expertise in relatively short supply. Code and digital data tend to get cheap, or end up spreading around for free, fast. Machine learning has become widely used and image and facial recognition now crop up in science fair projects.
The Harvard report warns that commoditization of technologies such as drone delivery and autonomous passenger vehicles could become powerful tools of asymmetric warfare. ISIS has already started using consumer quadcopters to drop grenades on opposing forces. Similarly, techniques developed to automate cyberwar can probably be expected to find their way into the vibrant black market in hacking tools and services.
You could be forgiven for starting to sweat at the thought of nation states fielding armies of robots that decide for themselves whether to kill. Some people who have helped build up machine learning and artificial intelligence already are. More than 3,000 researchers, scientists, and executives from companies including Microsoft and Google signed a 2015 letter to the Obama administration asking for a ban on autonomous weapons.
“I think most people would be very uncomfortable with the idea that you would launch a fully autonomous system that would decide when and if to kill someone,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, and a signatory to the 2015 letter. Although he concedes it might just take one country deciding to field killer robots to set others changing their minds about autonomous weapons. “Perhaps a more realistic scenario is that countries do have them, and abide by a strict treaty on their use,” he says. In 2012, the Department of Defense set a temporary policy requiring a human to be involved in decisions to use lethal force; it was updated to be permanent in May this year.
The Harvard report recommends that the National Security Council, DoD, and State Department should start studying now what internationally agreed-on limits ought to be imposed on AI. Miles Brundage, who researches the impacts of AI on society at the University of Oxford, says there’s reason to think that AI diplomacy can be effective—if countries can avoid getting trapped in the idea that the technology is a race in which there will be one winner. “One concern is that if we put such a high premium on being first, then things like safety and ethics will go by the wayside,” he says. “We saw in the various historical arms races that collaboration and dialog can pay dividends.” Indeed, the fact that there are only a handful of nuclear states in the world is proof that very powerful military technologies are not always irresistible. “Nuclear weapons have proven that states have the ability to say ‘I don’t even want to have this technology,’” Allen says. Still, the many potential uses of AI in national security suggest that the self-restraint of the US, its allies, and adversaries is set to get quite a workout.
UPDATE 12:50 pm ET 07/19/17: An earlier version of this story incorrectly said the Department of Defense’s directive on autonomous weapons was due to expire this year.
"
|
825 | 2,017 |
"Google Unleashes AlphaGo in China—But Good Luck Watching It There | WIRED"
|
"https://www.wired.com/2017/05/google-unleashes-alphago-china-good-luck-watching"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google Unleashes AlphaGo in China—But Good Luck Watching It There Noah Sheldon for WIRED Save this story Save Save this story Save WUZHEN, CHINA — When AlphaGo topped the grandmaster Lee Sedol last year in Seoul, South Korea, becoming the first machine to beat a professional at the ancient game of Go, it grabbed the attention of the entire country—and beyond.
This surprisingly powerful machine, built by researchers at Google’s DeepMind artificial intelligence lab, also captured so many imaginations in China, the birthplace of Go, where Google says more than 60 million people watched that match from across the internet.
Now AlphaGo is playing a Chinese grandmaster here in Wuzhen, China, an ancient city near the heart of the country's tech industry. The match between AlphaGo and Ke Jie, currently ranked number one in the world, seems like the perfect PR opportunity for a company that hopes to expand its presence in China in the years to come. But it hasn't quite worked out that way.
On Tuesday, AlphaGo won the first game of this best-of-three match, a litmus test for the progress of artificial intelligence since last year's tournament. But the audience was limited. Chinese state television did not show the event live after pulling out of the broadcast just days before, according to two people involved with the event. Meanwhile, local internet service providers, which are beholden to Chinese authorities, blocked other Chinese-language broadcasts about half-an-hour into the game. Local news outlets did cover the event, but many readers said the stories avoided using the name Google, apparently under restrictions imposed by authorities. The English-language broadcast from Wuzhen was not affected.
The atmosphere surrounding the match couldn't be more different than the vibe of the contest in Korea, where AlphaGo was the lead story on practically every news broadcast and in every newspaper for more than a week. Dozens of journalists were on hand to cover the event in Wuzhen, but so much of the expected energy was missing, partly because Ke Jie had little chance of winning the match against an improved AlphaGo—and partly because coverage of the event was curtailed by unseen forces.
Google chairman Eric Schmidt (right).
Noah Sheldon for WIRED
The reasons for the apparent crackdown on publicity are unclear, and Google declined to publicly comment on the situation. But it's no secret that, like so many other American internet companies, Google has a complicated relationship with China. More than a decade ago, the company began offering various online services in the country, agreeing to obey China's stringent censorship laws. But in 2010, after Chinese hackers burrowed into Google's internal systems and apparently lifted information on Chinese human rights advocates from its Gmail service, the company moved its Chinese-language servers to Hong Kong and lifted all censorship. In return, Chinese ISPs blocked Google's service. Since then, the internet's most powerful company hasn't really had an online presence in the country.
Google has made noises about returning to China, where it still operates some offices, and this week's AlphaGo match seemed like a chance to reboot its presence. But in China, the politics are never simple. Services like Facebook and Twitter are also unavailable here. Though some American internet companies, such as LinkedIn, have agreed to offer services that obey local laws, the Chinese internet is dominated by local companies, including giants like Alibaba, whose headquarters lies only about 50 miles from the city hosting this week's Go match.
The other subtext: Google and the leading Chinese internet companies are part of a worldwide battle for top AI talent.
Engineers at Chinese internet giant Tencent have even built their own version of AlphaGo, a machine that also very much represents the future of AI.
Noah Sheldon for WIRED
Google worked closely with local authorities in arranging this week's event in Wuzhen, a historic "watertown" criss-crossed by canals, stone bridges, and traditional Chinese buildings adorned with elaborate wood carvings. The match was sponsored by the Chinese Go Association and the sports authority in Zhejiang Province, which surrounds Wuzhen. "Thanks so very much for having us, for letting us come," Google chairman Eric Schmidt said during a speech just before the match kicked off.
In the weeks before the first game, Chinese state media actively promoted their coverage of the event. But state TV pulled out two days before the match, according to two people involved in the event, who requested that their names not be shared—an indication of how complex the political landscape in China can be to navigate, even for one of the world's biggest companies.
This made for a strange day as the match's first game played out. Google was holding the event in what seemed like the ideal place—the conference hall that also hosts China's World Internet Conference, a yearly gathering of major internet companies and personalities. But the game wasn't really available to locals over the internet itself. When a reporter asked DeepMind founder Demis Hassabis about the restrictions on live video of the event, he said he didn't know anything about the ban. But he noted just how much media had shown interest in the match. Sixty million people in China watched the match in Korea. Now they couldn't really watch the one in their home country.
In the end, the dynamic isn't all that surprising, considering the messy relationship between company and country. What's surprising is that this event is actually happening at all.
"
|
826 | 2,017 |
"Inside Libratus, the Poker AI That Out-Bluffed the Best Humans | WIRED"
|
"https://www.wired.com/2017/02/libratus"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Inside Libratus, the Poker AI That Out-Bluffed the Best Humans Rob Palmer/Getty Images Save this story Save Save this story Save For almost three weeks, Dong Kim sat at a casino in Pittsburgh and played poker against a machine. But Kim wasn't just any poker player. This wasn't just any machine. And it wasn't just any game of poker.
Kim, 28, is among the best players in the world. The machine, built by two computer science researchers at Carnegie Mellon, is an artificially intelligent system that runs on a Pittsburgh supercomputer. And for twenty straight days, they played no-limit Texas Hold 'Em, an especially complex form of poker in which betting strategies play out over dozens of hands.
About halfway through the competition, which ended this week, Kim started to feel like Libratus could see his cards. "I’m not accusing it of cheating," he said. "It was just that good." So good, in fact, that it beat Kim and three more of the world's top human players—a first for artificial intelligence.
During the competition, the creators of Libratus were coy about how the system worked—how it managed to be so successful, how it mimicked human intuition in a way no other machine ever had. But as it turns out, this AI reached such heights because it wasn't just one AI.
Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many.
Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world's biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.
Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google's DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there's a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.
Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn't---playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. "We give the AI a description of the game. We don't tell it how to play," says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. "It develops a strategy completely independently from human play, and it can be very different from the way humans play the game." But that was just the first stage. During the games in Pittsburgh, a second system would analyze the state of play and focus the attention of the first. With help from the second---an "end-game solver" detailed in a research paper Sandholm and Brown published late Monday---the first system didn't have to run through all the possible scenarios it had explored in the past. It could run through just some of them. Libratus didn't just learn before the match. It learned while it was playing.
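For a feel of the core update inside counterfactual regret minimization, here is a minimal regret-matching sketch in Python. It illustrates the general technique only, not the Libratus codebase: the toy game (rock-paper-scissors standing in for bet/call/fold), the fixed opponent, and all names are invented for the example.

import random

ACTIONS = 3  # rock, paper, scissors stand in for poker's bet/call/fold

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regret yet: play uniformly at random

def payoff(a, b):
    """+1 win, 0 tie, -1 loss for the first player."""
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
opponent = [0.4, 0.3, 0.3]  # a fixed, exploitable opponent for the demo

for _ in range(100_000):
    strat = strategy_from_regrets(regrets)
    strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
    my_action = random.choices(range(ACTIONS), weights=strat)[0]
    opp_action = random.choices(range(ACTIONS), weights=opponent)[0]
    actual = payoff(my_action, opp_action)
    # Regret: how much better each alternative action would have done.
    for a in range(ACTIONS):
        regrets[a] += payoff(a, opp_action) - actual

total = sum(strategy_sum)
print([round(s / total, 3) for s in strategy_sum])  # converges toward best response

In full-scale CFR the same update runs at every decision point of the game tree, in self-play rather than against a fixed opponent, and the time-averaged strategy converges toward an equilibrium; months of that, at poker scale, is roughly what "learning from scratch" meant for Libratus.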
These two systems alone would have been effective. But Kim and the other players could still find patterns in the machine's play and exploit them. That's why Brown and Sandholm built a third system. Each evening, Brown would run an algorithm that could identify those patterns and remove them. "It could compute this overnight and have everything in place the next day," he says.
If that seems unfair, well, it's how AI works.
It's not just that AI spans many technologies. Humans are so often in the mix, too, actively improving, running, or augmenting the AI. Libratus is indeed a milestone, displaying a breed of AI that could play a role with everything from Wall Street trading to cybersecurity to auctions and political negotiations. "Poker has been one of the hardest games for AI to crack, because you see only partial information about the game state," says Andrew Ng, who helped found Google's central AI lab and is now chief scientist at Baidu. "There is no single optimal move. Instead, an AI player has to randomize its actions so as to make opponents uncertain when it is bluffing." Libratus did this in the extreme. It would randomize its bets in ways that are well beyond even the best players. And if that didn't work, Brown's nighttime algorithm would fill the hole. A financial trader could work the same way. So could a diplomat. It's a powerful and rather unsettling proposition: a machine that can out-bluff a human.
"
|
827 | 2016 |
"Code School Udacity Promises Refunds if You Don't Get a Job | WIRED"
|
"https://www.wired.com/2016/01/udacity-coding-courses-guarantee-a-job-or-your-money-back"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Code School Udacity Promises Refunds if You Don't Get a Job Getty Images Save this story Save Save this story Save Udacity, the online educational service founded by artificial intelligence guru and ex-Googler Sebastian Thrun, is offering a new set of tech degrees that guarantee a job in six months or your money back.
Starting today, the Silicon Valley-based startup is attaching this money-back guarantee to four of its online courses, courses designed to train machine learning engineers and software developers who build apps for Google Android devices, Apple iOS devices, and the web. These online courses typically span about 9 months and require about 10 hours of study per week, and they're priced at $299 a pop. That's about $100 above the company's usual fee, but the idea is that students will also work closely with specialists who can help them prepare for interviews and find a job after their degree is complete.
"The ultimate objective of education is to find people a job," says Thrun, the father of Google's self-driving car project.
Udacity is one of several outfits that offer massive open online courses, or MOOCs---courses that large numbers of people can take from across the 'net, via video and other online tools. Other MOOC providers include edX and Coursera, a company founded by another AI guru and ex-Googler, Andrew Ng.
These services received an enormous amount of media hype around 2012 but have since struggled to figure out where exactly they fit in the world of education.
In the beginning, many saw these services as a possible replacement for the classic four-year college degree, but the reality is a little different---at least for now. Founded in 2011, Udacity started by offering courses that mimicked university classes, but it has since shifted, as others have, to courses that focus on specific skills developers and other engineers need. Like Coursera, Udacity works closely with tech companies, including Google, to build its online vocational courses. Now, it's taking its promises a step further by guaranteeing jobs.
This does not mean Udacity has a direct pipeline into tech companies. But Thrun is confident in the Udacity course materials, and because outside companies have helped define the courses, he believes they are predisposed to hire those who complete their online degrees with his company. This is true, he says, even for a topic like machine learning. Udacity's machine learning course covers so-called deep learning---an incredibly hot but also rather complex form of artificial intelligence---and he says it is geared towards those willing and able to grasp those complexities.
What's more, the new job-guaranteed degrees offer access to what Udacity calls a "career concierge," someone who can help train students for job interviews and the like. "We help students get job ready," Thrun says, "to get their portfolio together, to learn the social skills they need." The company has experimented with this kind of thing in the past, but this is the first time it is formally offering it to the public.
Though this guarantee is welcome---particularly at a time when students continue to struggle with school debt---Udacity's move also looks like another effort to find something that really works for the company's bottom line. At one point, Udacity let students take courses for free while still providing them with certificates meant to show that they were properly qualified for jobs. But the company discontinued this practice last year (though students could still review course materials without receiving a certificate). It now focuses on "nanodegrees" that require payment ($199, with half returned on completion). The new degrees that guarantee a job are called "nanodegree plus." But Thrun is adamant, as always, that this is the way education will work. He compares it to, yes, self-driving cars. "Ten years ago, I walked around and told people 'Cars will drive themselves' and everyone smiled at me kinda funny," he says. "And now it's all over the news and people from Elon Musk on down are making this the cornerstone of their companies."
"
|
828 | 2013 |
"Computer Brain Escapes Google's X Lab to Supercharge Search | WIRED"
|
"https://www.wired.com/2013/05/hinton"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robert McMillan Business Computer Brain Escapes Google's X Lab to Supercharge Search Geoffrey Hinton (right), Alex Krizhevsky, and Ilya Sutskever (left) will do machine-learning work at Google. Photo: U of T Save this story Save Save this story Save Two years ago Stanford professor Andrew Ng joined Google's X Lab, the research group that's given us Google Glass and the company's driverless cars. His mission: to harness Google's massive data centers and build artificial intelligence systems on an unprecedented scale.
He ended up working with one of Google's top engineers to build the world's largest neural network: a kind of computer brain that can learn about reality in much the same way that the human brain learns new things. Ng's brain watched YouTube videos for a week and taught itself which ones were about cats. It did this by breaking down the videos into a billion different parameters and then teaching itself how all the pieces fit together.
But there was more. Ng built models for processing the human voice and Google StreetView images. The company quickly recognized this work's potential and shuffled it out of X Labs and into the Google Knowledge Team. Now this type of machine intelligence -- called deep learning -- could shake up everything from Google Glass to Google Image Search to the company's flagship search engine.
It's the kind of research that a Stanford academic like Ng could only get done at a company like Google, which spends billions of dollars on supercomputer-sized data centers each year. "At the time I joined Google, the biggest neural network in academia was about 1 million parameters," remembers Ng. "At Google, we were able to build something one thousand times bigger." Ng stuck around until Google was well on its way to using his neural network models to improve a real-world product: its voice recognition software. But last summer, he invited an artificial intelligence pioneer named Geoffrey Hinton to spend a few months in Mountain View tinkering with the company's algorithms. When Android's Jelly Bean release came out last year, these algorithms cut its voice recognition error rate by a remarkable 25 percent. In March, Google acquired Hinton's company.
Now Ng has moved on (he's running an online education company called Coursera), but Hinton says he wants to take this deep learning work to the next level.
A first step will be to build even larger neural networks than the billion-node networks he worked on last year. "I'd quite like to explore neural nets that are a thousand times bigger than that," Hinton says. "When you get to a trillion [parameters], you're getting to something that's got a chance of really understanding some stuff." Hinton thinks that building neural network models about documents could boost Google Search in much the same way they helped voice recognition. "Being able to take a document and not just view it as, 'It's got these various words in it,' but to actually understand what it's about and what it means," he says. "That's most of AI, if you can solve that." [Image: Test images labeled by Hinton's neural network. Credit: Geoff Hinton.] Hinton already has something to build on: Google's knowledge graph, a database of nearly 600 million entities. When you search for something like "The Empire State Building," the knowledge graph pops up all of that information to the right of your search results. It tells you that the building is 1,454 feet tall and was designed by William F. Lamb.
Google uses the knowledge graph to improve its search results, but Hinton says that neural networks could study the graph itself, both culling out errors and suggesting additional facts that could be included.
Image search is another promising area. "'Find me an image with a cat wearing a hat.' You should be able to do that fairly soon," Hinton says.
Hinton is the right guy to take on this job. Back in the 1980s he developed the basic computer models used in neural networking. Just two months ago, Google paid an undisclosed sum to acquire Hinton's artificial intelligence company, DNNresearch, and now he's splitting his time between his University of Toronto teaching job and working for Jeff Dean on ways to make Google's products smarter at the company's Mountain View campus.
In the past five years, there's been a mini-boom in neural networking as researchers have harnessed the power of graphics processors (GPUs) to build out ever-larger neural networks that can quickly learn from extremely large sets of data.
"Until recently... if you wanted to learn to recognize a cat, you had to go and label tens of thousands of pictures of cats," says Ng. "And it was just a pain to find so many pictures of cats and label then." Now with "unsupervised learning algorithms," like the ones Ng used in his YouTube cat work, the machines can learn without the labeling, but to build the really large neural networks, Google had to first write code that would work on such a large number of machines, even when one of the systems in the network stopped working.
It typically takes a large number of computers sifting through a large amount of data to train the neural network model. The YouTube cat model, for example, was trained on 16,000 chip cores. But once that was hammered out, it took just 100 cores to be able to spot cats on YouTube.
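To make the "learning without labels" idea concrete, here is a minimal autoencoder sketch in PyTorch: the network is trained only to reconstruct its own input, so the narrow middle layer has to discover structure in the data by itself. This is a toy stand-in for the general technique, nowhere near the billion-parameter YouTube system; the layer sizes and the random stand-in data are invented for the example.

import torch
from torch import nn

# Unsupervised learning: no labels anywhere, the target is the input itself.
model = nn.Sequential(
    nn.Linear(784, 64),  # encoder squeezes a 28x28 "image" into 64 features
    nn.ReLU(),
    nn.Linear(64, 784),  # decoder tries to rebuild the original pixels
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

data = torch.rand(256, 784)  # stand-in batch; the real system streamed video frames

for step in range(200):
    recon = model(data)
    loss = loss_fn(recon, data)  # reconstruction error is the only training signal
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # the 64-unit bottleneck now encodes the data's regularities

The same principle, scaled up by many orders of magnitude and spread across thousands of machines, is what let Ng's network find "cat" features without anyone labeling a cat.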
Google's data centers are based on Intel Xeon processors, but the company has started to tinker with GPUs because they are so much more efficient at this neural network processing work, Hinton says.
Google is even testing out a D-Wave quantum computer, a system that Hinton hopes to try out in the future.
But before then, he aims to test out his trillion-node neural network. "People high up in Google I think are very committed to getting big neural networks to work very well," he says.
"
|
829 | 2022 |
"When AI Makes Art, Humans Supply the Creative Spark | WIRED"
|
"https://www.wired.com/story/when-ai-makes-art"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business When AI Makes Art, Humans Supply the Creative Spark Courtesy of David R Munson Save this story Save Save this story Save Application Human-computer interaction Text analysis End User Consumer Sector Entertainment Social media Source Data Images Text Technology Machine learning New products often come with disclaimers, but in April the artificial intelligence company OpenAI issued an unusual warning when it announced a new service called DALL-E 2.
The system can generate vivid and realistic photos, paintings, and illustrations in response to a line of text or an uploaded image. One part of OpenAI’s release notes cautioned that “the model may increase the efficiency of performing some tasks like photo editing or production of stock photography, which could displace jobs of designers, photographers, models, editors, and artists.” So far, that hasn’t come to pass. People who have been granted early access to DALL-E have found that it elevates human creativity rather than making it obsolete.
Benjamin Von Wong, an artist who creates installations and sculptures, says it has, in fact, increased his productivity. "DALL-E is a wonderful tool for someone like me who cannot draw," says Von Wong, who uses the tool to explore ideas that could later be built into physical works of art. "Rather than needing to sketch out concepts, I can simply generate them through different prompt phrases." DALL-E is one of a raft of new AI tools for generating images.
Aza Raskin, an artist and designer, used open source software to generate a music video for the musician Zia Cora that was shown at the TED conference in April. The project helped convince him that image-generating AI will lead to an explosion of creativity that permanently changes humanity's visual environment. "Anything that can have a visual will have one," he says, potentially upending people's intuition for judging how much time or effort was expended on a project. "Suddenly we have this tool that makes what was hard to imagine and visualize easy to make exist." It's too early to know how such a transformative technology will ultimately affect illustrators, photographers, and other creatives. But at this point, the idea that artistic AI tools will displace workers from creative jobs—in the way that people sometimes describe robots replacing factory workers—appears to be an oversimplification. Even for industrial robots, which perform relatively simple, repetitive tasks, the evidence is mixed.
Some economic studies suggest that the adoption of robots by companies results in lower employment and lower wages overall, but there is also evidence that in certain settings robots increase job opportunities.
"There's way too much doom and gloom in the art community," where some people too readily assume machines can replace human creative work, says Noah Bradley, a digital artist who posts YouTube tutorials on using AI tools. Bradley believes the impact of software like DALL-E will be similar to the effect of smartphones on photography—making visual creativity more accessible without replacing professionals. Creating powerful, usable images still requires a lot of careful tweaking after something is first generated, he says. "There's a lot of complexity to creating art that machines are not ready for yet." The first version of DALL-E, announced in January 2021, was a landmark for computer-generated art. It showed that machine-learning algorithms fed many thousands of images as training data could reproduce and recombine features from those existing images in novel, coherent, and aesthetically pleasing ways.
A year later, DALL-E 2 markedly improved the quality of images that can be produced. It can also reliably adopt different artistic styles, and can produce images that are more photorealistic. Want a studio-quality photograph of a Shiba Inu dog wearing a beret and black turtleneck? Just type that in and wait.
A steampunk illustration of a castle in the clouds? No problem.
Or a 19th-century-style painting of a group of women signing the Declaration of Independence? Great idea! Many people experimenting with DALL-E and similar AI tools describe them less as a replacement than as a new kind of artistic assistant or muse. "It's like talking to an alien entity," says David R Munson, a photographer, writer, and English teacher in Japan who has been using DALL-E for the past two weeks. "It is trying to understand a text prompt and communicate back to us what it sees, and it just kind of squirms in this amazing way and produces things that you really don't expect." Munson likens DALL-E's responses to his prompts to the weird or surprising logical connections made by the young children he teaches. He asked the program to create an "anthropomorphic pot roast holding a Bible," imagining it would produce something like a pot of stew with eyes, but he got something quite different. "It made these weird, lumpy meat-men," he says. Munson also used DALL-E to recreate a vivid memory from his childhood, of watching television news of the fatal Space Shuttle Challenger accident in 1986.
David R Munson used an AI tool called DALL-E 2 to recreate his memory of seeing a TV news report about the 1986 Space Shuttle Challenger disaster.
The new version of DALL-E is just one example of a new category of powerful image-generation tools. Google recently announced two, Imagen, in May, and Parti, in June. Several open source projects have also created image generators, such as Craiyon, which went viral last month after people began using it to post memes on social media.
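For a concrete sense of the prompt-in, image-out workflow these tools share, here is a minimal text-to-image sketch using the open source diffusers library and a publicly released Stable Diffusion checkpoint. It is a generic illustration, not the internals of DALL-E 2, Imagen, or Craiyon; the model name, prompt, and step count are example choices.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released text-to-image model (example checkpoint name).
# Assumes a CUDA GPU; float16 weights will not run on CPU as written.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The entire interface is a line of text; the model iteratively denoises
# random noise toward an image whose features match the prompt.
prompt = "a studio-quality photograph of a Shiba Inu wearing a beret and black turtleneck"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("shiba.png")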
New companies have sprung up to commercialize artistic AI tools. A website and app called Wombo can generate images in a variety of styles in response to a text prompt or an existing image, and it sells prints or NFTs of the results.
Midjourney, an independent research lab that has made its technology available to beta testers, can turn text prompts into vivid, sometimes abstract illustrations.
David Holz, the founder of Midjourney and previously CTO of Leap Motion, a 3D computer interface company, does not see his tool competing with artists.
“We're focused on exploring the essence of imagination,” he says. “Imagination is used for many things, sometimes art, but more often simply reflection and play. We wouldn't call what we make AI-art, as the AI doesn't make anything on its own. It has no will, no agency.” Midjourney runs a Discord where beta testers can submit a prompt for the company’s algorithm to work with. Many people testing the service are artists, Holz says. “They feel broadly empowered and optimistic about using the technology as part of their workflow.” DALL-E and many other AI art tools are built on recent advances in machine learning that have enabled algorithms that process text or images to operate at much greater scale and accuracy. A few years ago, researchers found a way to feed huge volumes of text scraped from novels and the internet into these algorithms, allowing them to capture statistical patterns of text. After that training, the system could generate highly convincing text when given a starting sentence.
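The text side of this is easy to demonstrate with the open source transformers library and the small public GPT-2 model (an example choice, far smaller than the systems described here): given a starting sentence, the model extends it by repeatedly predicting a statistically plausible next token.

from transformers import pipeline

# A small public language model stands in for the much larger systems
# discussed in the article.
generator = pipeline("text-generation", model="gpt2")

out = generator(
    "The machine looked at the painting and said",
    max_new_tokens=40,   # how much text to generate beyond the prompt
    do_sample=True,      # sample instead of always taking the likeliest token
    temperature=0.9,     # higher values give more surprising continuations
)
print(out[0]["generated_text"])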
Similar AI models have since proven adept at capturing and recreating patterns from other data, including audio and digital images—the basis of DALL-E. But these image-generation systems lack any real understanding of the world and can produce images that are glitchy or nonsensical. And because they replicate the web-sourced images they were trained on, they can reflect societal biases—for example, always rendering doctors as male and flight attendants as female. There is also the potential that such programs could be used to generate fake photographs that are used to spread misinformation.
OpenAI has acknowledged these risks and says it has implemented measures to prevent DALL-E from being used to create objectionable or misleading imagery. Those include preventing the system from generating images in response to certain words, and restricting the generation of celebrity faces.
The errors and glitches of AI image generators can themselves feel like an artistic tool.
Craiyon, a less-capable clone of the original DALL-E previously named DALL-E Mini, went viral last month after users discovered the fun in providing it with surreal, farcical, or unnerving text prompts.
One art critic describes the limitations of the AI behind Craiyon as yielding an "online grotesque"—bizarre or disturbing fusions drawn from the zeitgeist of the internet. Popular examples include "muscular teapots," "gaming urinals," or "Death Star gender reveals."
"People's clever prompts are at least half the fun," says Aaron Hertzmann, a principal scientist at Adobe Research and an affiliate professor at the University of Washington who studies computational art. He says Craiyon and other image-generation tools are enabling new forms of exploration, something inherent to creativity. And he compares text-to-image tools to a kind of conceptual art similar to that of Sol LeWitt or John Baldessari, where the idea behind a piece can be its most important component.
Perhaps the biggest change that AI image generators will bring is dramatically expanding the number of people able to generate and experiment with art and illustration. "Optimistically, you might say this is revolutionary in communication," says Tom White, an artist based in New Zealand whose work explores artificial intelligence.
Even those who are not artistically inclined could use such tools to generate and share creative images, White says, something people are already doing with Craiyon memes. "That may change how we relate to each other." White, whose artwork includes abstract images carefully crafted to fool common image-recognition programs, says he enjoys testing DALL-E 2 to try to reveal aspects of the images in its training data, and what restrictions have been placed on the system to prevent creation of offensive images. Over time, he begins to see a kind of "personality" in the missteps a particular system makes.
White suspects that tools like DALL-E 2 may become far more powerful and interesting as it becomes possible to interact with them in different ways. The only way to refine an image DALL-E produces currently is to rewrite the prompt or crop the image and use it as the prompt for a new set of ideas. White believes that it won’t be long before people using creative AI tools will be able to ask for specific adjustments to an image. “Dall-E is not the end of the road,” White says.
Additional reporting from Tom Simonite.
"
|
830 | 2021 |
"Amazon’s AI Guru Is So Totally Over the Turing Test | WIRED"
|
"https://www.wired.com/story/plaintext-amazon-ai-guru-over-the-turing-test"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Business Amazon’s AI Guru Is So Totally Over the Turing Test Alan Turing’s eponymously named test has long been the benchmark of AI’s progress Photograph: Alamy Save this story Save Save this story Save Application Personal assistant Cloud computing Hardware Human-computer interaction Company Amazon Alphabet Google End User Consumer Research Sector Consumer services Source Data Speech Technology Machine learning Natural language processing Hello! This was earnings week , which means there are now two happy groups of people: the Big Tech CEOs, because it’s made them even richer, and the critics who want to curb Big Tech, because all those billions in profits make the case that things are out of hand.
You might think that Rohit Prasad would be a big fan of the Turing test, the venerated method to determine whether computers are as smart as humans. As the VP and head scientist of Amazon Alexa AI, Prasad has been instrumental in getting people to communicate with machines. Partly thanks to him, many of us now ask them for the weather report, to spin our favorite tunes, and—not least, since it's Amazon—to do some shopping for us. But lately, Prasad has been on a crusade to declare the Turing test obsolete, politicking against it in a Fast Company article, speaking of its limitations at the recent Collision conference, and skewering it in a recent conversation with me.
First things first.
Alexa, what is the Turing test? Here's her answer: "According to Wikipedia, the Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human." Thanks, sis, I'll take it from here. In Turing's landmark paper, "Computing Machinery and Intelligence," he proposed what now seems like a surprisingly complex three-party game between two humans and a machine, where one of the humans has to pick which of the two other parties is a fellow sapien. Academics and computer scientists routinely use the Turing rules to see if their bot can be mistaken for the human, passing the test and ushering in a new era of AI.
Prasad notes that the test is an artifact of a time when the idea of a "thinking computer" was preposterous. Now, he says, computers, armed with amazing power and an array of sensors unimaginable in the 1950s, do all sorts of human tasks. In modern times, Rohit argues, the Turing test looks less like a scientific benchmark than a stunt. Its core premise was not to see how intelligent or knowledgeable a computer system was, but how well it could trick someone into incorrectly identifying the computer and the human. Deception was encouraged. Even in 1950, Turing knew that for a computer to pass the test, one of its challenges would have less to do with being smart than intentionally being dumb. If a questioner asks the sum of 34,957 and 70,764, Turing suggests a pause of 30 seconds, to fake a mental calculation. If someone poses a really hard math problem, the digital contestant would be wise to say, "Hey, go ask a computer!" For well over half a century, the idea persisted that the bar for artificial intelligence rested on fooling people into thinking that a machine was a person. Meanwhile, advances in machine learning made it possible for Google, Amazon, and Apple to build natural language interaction into their products, without caring whether the computer seemed overly lifelike. While we still talked about the Turing test, speculating when it might be aced, we were actually talking to computers, in some cases without realizing it.
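Turing's arithmetic example is easy to make concrete. Here is a toy imitation-game contestant in Python that deliberately plays dumb: it pauses as if working the sum out by hand and occasionally slips in a small error, since answering 34,957 plus 70,764 instantly and perfectly would give the machine away. Everything here is invented illustration, not any real chatbot.

import random
import re
import time

def answer_like_a_human(question: str) -> str:
    """Reply to a simple sum the way Turing's imitator might."""
    match = re.search(r"(\d[\d,]*)\s*(?:\+|plus|and)\s*(\d[\d,]*)", question)
    if not match:
        return "Hmm, let me think about that one."
    a, b = (int(g.replace(",", "")) for g in match.groups())
    time.sleep(30)  # Turing's suggested pause, faking mental arithmetic
    total = a + b
    if random.random() < 0.2:  # occasionally be slightly wrong, as a
        total += random.choice([-10, 10])  # human under pressure might
    return f"Give me a second... I make it {total:,}."

print(answer_like_a_human("What is the sum of 34,957 and 70,764?"))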
Was that even ethical? The question came up in 2018, when Google announced Duplex, a system where people called a bot to arrange a restaurant reservation or a haircut. Google's engineers programmed the system to incorporate the quirks of human speech—umms and uhs, and variations in tone that implied that a human was on the other end of the line. "In the domain of making appointments, Duplex passes the Turing test," Alphabet chair John Hennessy said at the time.
But critics felt it would be a dangerous precedent to fool people into mistaking a machine for a person. When Google launched the product, it started the conversation with a disclaimer.
In any case, Prasad is correct that the Turing test should be retired. He wants to replace it with a series of challenges like the one that Amazon sponsored last year, giving a prize to the team that can best sustain a general human-machine conversation for 20 minutes. (This is a good deal for Amazon, which hopes to benefit from every advance in natural language AI.) The winner isn't judged by how well it tricks someone but by how well it carries on the conversation. While Prasad says that he believes "socialbots" should be transparent about their artificiality, he doesn't rule out using the conversational burps and hiccups that dot human vocal interactions. "Anthropomorphization is very natural," he says. "As Alexa's capability and interaction cues get better and better, a bond is formed." So we're bonding … with a software phantom? That idea gives me the shivers. We don't need a test or a challenge to know that software can successfully mimic a human conversation partner—it's doing that already to some degree, and will only get better. The more interesting question is whether we will care to distinguish between a human and a machine—even when we know we are talking to a machine.
Efforts are already underway for digital "empathetic" companions to the elderly. Meanwhile we are raising a whole generation who spend their toddler years talking to smart speakers. Oh, and you can already buy an artificial mate with pillow talk built in. Just like the hapless protagonist in the movie Her, you don't have to be deceived to get entangled with one of those systems. To be honest, I don't know if the prospect is cool or dystopian. Maybe it's both. But it's clear to me that we're in the opening stages of a dead-serious, real-time experiment in the relationship between people and machines. Can we maintain our biological uniqueness in the face of artificial conviviality? It's not the machines that are being tested. It's us.
But enough of my speculations. Alexa, what do you think? In 2012, I wrote about Narrative Science, whose AI robots wrote news stories about Little League baseball games, earnings results, and other data-driven subjects. To readers, the articles looked like they were produced by human reporters. Still, cofounder Kristian Hammond's prediction that a robot would win a Pulitzer Prize in five years proved overly optimistic: Narrative Science's CTO and cofounder, Kristian Hammond, works in a small office just a few feet away from the buzz of coders and engineers. To Hammond, these stories are only the first step toward what will eventually become a news universe dominated by computer-generated stories. How dominant? Last year at a small conference of journalists and technologists, I asked Hammond to predict what percentage of news would be written by computers in 15 years. At first he tried to duck the question, but with some prodding he sighed and gave in: "More than 90 percent."
Hammond assures me I have nothing to worry about. This robonews tsunami, he insists, will not wash away the remaining human reporters who still collect paychecks. Instead the universe of newswriting will expand dramatically, as computers mine vast troves of data to produce ultra-cheap, totally readable accounts of events, trends, and developments that no journalist is currently covering.
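Narrative Science's trick, as described here, was turning structured data into readable copy. A minimal sketch of that idea in Python, with an invented box score and invented phrasing rules, nothing from the company's actual system:

# Toy data-to-text generation: turn a box score into a one-sentence recap.
game = {
    "home": "Tigers", "away": "Hawks",
    "home_runs": 7, "away_runs": 3,
    "star": "Maya Ortiz", "star_hits": 3,
}

def recap(g: dict) -> str:
    winner, loser = (("home", "away") if g["home_runs"] > g["away_runs"]
                     else ("away", "home"))
    margin = abs(g["home_runs"] - g["away_runs"])
    verb = "cruised past" if margin >= 4 else "edged"  # vary tone by margin
    return (f"The {g[winner]} {verb} the {g[loser]} "
            f"{g[winner + '_runs']}-{g[loser + '_runs']}, "
            f"led by {g['star']} with {g['star_hits']} hits.")

print(recap(game))
# -> The Tigers cruised past the Hawks 7-3, led by Maya Ortiz with 3 hits.

Production systems layered far more data, story angles, and language variation on top, but the pipeline (structured numbers in, fluent sentences out) is the same shape.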
Estaban writes, "I think Apple is overrated. Apple disrupted the world years ago with its first computer and the iPhone, but in the last few years, it has done nothing quite as spectacular. Do you think Apple lost its essence when Steve Jobs died?" Thanks for the question, Estaban. It depends on what you mean by Apple's essence. Jobs' deathbed instruction to Tim Cook was to avoid trying to ask "What would Steve do?" all the time but to "do what's right." Cook says that he is carrying on with Jobs' core value of making Apple the place where cutting-edge technology powers easy-to-use and delightful products that transform our lives. Naturally, as the company keeps evolving, it will inevitably be different. If you define Apple's essence as game-changing products, though, I acknowledge the jury is still out. AirPods and even the Apple Watch are not as earth-shattering as the iPhone. Upcoming innovations like AR glasses and maybe an Apple car will be a test of whether the company can rock the globe again.
You can submit questions to [email protected].
Write ASK LEVY in the subject line.
I can’t decide which is the bigger sign of an impending apocalypse: that Adam Sandler had to wait for a table at IHOP, or that it went viral on TikTok.
Alexa might be getting smarter, but so is the Google Assistant.
In a fascinating excerpt from his book Full Spectrum, Adam Rogers explains how Pixar hacks our brains with color.
The hype about Miami being the next big tech town is deafening. And even fun. At least until the long moist summer.
Yes, there is racism in porn.
Have a great weekend—you've earned it!
"
|
831 | 2019 |
"The Robots Want to Steal (the Boring Parts of) Your Job | WIRED"
|
"https://www.wired.com/story/erik-rynjolfsson-robots-steal-boring-parts-of-your-job"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science The Robots Want to Steal (the Boring Parts of) Your Job Kathryn Scott Osler/The Denver Post/Getty Images Save this story Save Save this story Save Application Human-computer interaction Technology Robotics By now you’re probably aware that a robot is standing right behind you, ready to take your job. Go ahead and look, just don’t make eye contact, because robots, like baboons , don’t appreciate that one bit. That’ll just make them want your job all the more.
In reality, reports of the death of the job are greatly exaggerated. There are just too few things that robots and artificial intelligences can do better than humans at this point. We fleshy beings remain more creative, more dexterous, and more empathetic—a particularly important skill in health care and law enforcement. What is happening is that the machines are taking parts of jobs, which isn’t anything new in the history of human labor: Humans no longer harvest wheat by hand, but with combines; we no longer write everything by hand, but with highly efficient word processors.
Still, this new wave of automation could hurt real bad if we're not careful. Which is where people like Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, come in: He's thinking hard about the past, present, and future of work, so you don't soon have a robot in your cubicle breathing down your neck. WIRED sat down with Brynjolfsson to talk about why the Westworld dystopia is (hopefully) far off, why our human creativity and empathy are so important, and why you should never use a telepresence robot to tell someone they're dying.
(This conversation has been condensed and edited for clarity.) Matt Simon: So, be honest. How worried should I be about an AI stealing my job? Erik Brynjolfsson: I subscribe to the narrative that mass job replacement isn't here. What is imminent is the replacement of parts of jobs through AI but also through robotics. I think discussion in the press tends to fall into two overly simplistic camps. One is, "Oh, all the jobs are going to be automated away," which is very incorrect. Or it's, "Oh, there's nothing happening, it’s all hype." Those are both incorrect. The right understanding based on our research and many others’ is that certain tasks are being automated.
Let's take one example. There are 27 distinct tasks that a radiologist does. One of them is reading medical images. A machine-learning algorithm might be 97 percent accurate, and a human might be 95 percent accurate, and you might think, OK, have the machine do it. Actually, that would be wrong. You're better off having the machine do it and then have a human check it afterward. Then you go from 97 percent to 99 percent accuracy, because humans and machines make different kinds of mistakes.
But radiologists also consult with patients, coordinate care with other doctors, do all sorts of other things. Machine learning is pretty good at some of those tasks, like reading medical images; it's not much help at all in comforting a patient or explaining the diagnosis to them.
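The arithmetic behind "humans and machines make different kinds of mistakes" is worth making explicit. A back-of-envelope sketch in Python, assuming for illustration that the two error sources are fully independent (the 99 percent figure quoted above implies they are only partly so):

machine_error = 0.03  # machine alone: 97% accurate
human_error = 0.05    # human alone: 95% accurate

# A case slips through only if BOTH the machine and the human
# checking its output get it wrong. Under full independence:
combined_error = machine_error * human_error
print(f"combined accuracy: {1 - combined_error:.2%}")  # 99.85%

# Real errors correlate (hard cases are hard for both reader and
# reviewer), which is why the observed gain lands nearer 99 percent.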
MS: Which reminds me of the fiasco a while back where a hospital used a teleoperated robot to tell someone they were going to die.
The family was upset. Well, duh.
I don't know why more roboticists aren't warning about this. There's certain jobs that humans will probably always do, which are those that require the empathy that machines don’t have.
EB: Our brains are wired to react emotionally to other humans. Humans just have a comparative advantage at connecting with each other. We're very far from Westworld, and even there the robots weren't always that convincing. That's not where we are or will be anytime soon. I think this is great news, because for most of us, the parts of our job that involve creativity and connecting with others are the parts we like best. The part we don't like is repetitively lifting heavy boxes, and that's exactly what machines are really good at. It's a pretty good division of labor.
MS: Which is a fascinating field in robotics right now, getting humans to actually work alongside machines without the machines killing them.
The challenge is adapting people to that.
EB: In a robotics case you'd have a robot maybe doing heavy lifting, and then it lifts a part over to a human, and the human does the fine manipulation. But that requires a restructuring of that job. I think it's a little bit of a lazy mindset to look at a business process or a job and just sort of say, OK, how can a machine do that whole thing? That's rarely the right answer. Usually the right answer requires a little more creativity, which is, how can we redesign the process so parts of it can be done by a machine really effectively and other parts are done by a human really effectively, and they fit together in a new way.
MS: The challenge is as much about adapting the machines to work with humans as it is adapting humans to work with machines. But say this technological revolution ends up automating whole jobs and we're seeing displacement. What is the strategy there? Is it a matter of something like UBI? We're not great at retraining here in the United States.
EB: I really don't want to give short shrift to retraining and education that keep people engaged in the workforce and redeploy them into other tasks. But set the dial far enough in the future and I can imagine a world where, yes, machines can do most tasks. Shame on us if we screw that up, because that should be one of the best things ever. We should have vastly more wealth, like orders of magnitude more wealth, less need for work, much better health. And yes something like a UBI will come in gradually. Decent health care is free, education is free, maybe some basic level of food, clothing, shelter. Then that floor can gradually rise over time as society gets richer. Years from now people will look back and say, "Are you kidding me? You'd make somebody die of starvation if they didn't work hard enough?" That would just seem incredibly cruel.
MS: The elephant in the room here, which might seem unrelated, is climate change. But what does a future look like where we are eliminating more and more human labor, we have a UBI in place, but we are consuming more because we have more wealth. What does that mean for a planet that is already at its tipping point? EB: I'm going to run against the sort of implicit part of your question. I think it's going to have us live lighter on the planet. We're already using less coal, oil, and lots of other resources in the United States than we did a decade ago. A digital world is one that is much lighter on the planet and has less impact, whether it's a digital book compared to a paper book, or videoconferencing compared to jet travel. Maybe soon we'll have artificial meat.
If we do it right, which I think we are and we will, we'll have a lighter impact on the planet.
MS: Sure, technology can get us out of certain messes. But it’s not a miracle cure—we humans have to change too.
EB: Technology is not destiny.
We shape our destiny. And I don't want people to get pessimistic and say it's hopeless, it's all getting worse. I also don't want people to be optimistic and say, hey, technology is going to come to the rescue and save us. The right answer is that technology is an incredibly powerful tool, and if we make the effort, we can use this tool to live lighter on the planet. If we put the incentives in place and have a conscious approach to it, we can and will live lighter, but it won't happen automatically. It will only happen if we aggressively work on it.
MS: Well that's reassuring to hear, because I'm sick of being negative all the time.
EB: Well, keep that negativity sort of in your back pocket as a little bit of a club to say, look, don't get complacent. You've got to remind people that they have to make an effort. You can't just sit back and wait for AI to come to the rescue. That's not how it works.
"
|
832 | 2,022 |
"Humans Have Always Been Wrong About Humans | WIRED"
|
"https://www.wired.com/story/david-wengrow-dawn-of-everything"
|
"Backchannel: Humans Have Always Been Wrong About Humans
By Virginia Heffernan
David Wengrow lost his coauthor, David Graeber, just after they had completed their 700-page magnum opus, The Dawn of Everything.
Photograph: Udoma Janssen
The phrase “the dawn of everything” first struck David Wengrow, one of the authors of The Dawn of Everything, as marvelously absurd.
Everything.
Everything! It was too gigantic, too rich, too loonily sublime. Penguin, the book’s august publisher, would hate it.
But Wengrow, a sly, convivial British archaeologist at University College London, and his coauthor, the notorious American anthropologist and anarchist David Graeber, whose sudden death in Venice two years ago shocked a world of admirers, couldn’t let it go.
Twitter users, after all, dug the title—Graeber had asked—and it suited the pair’s cosmic undertaking. Their book would throw down a gauntlet. “It’s time to change the course of human history, starting with the past,” as the egg-yolk-yellow ads now declare in the London Underground. Wengrow and Graeber had synthesized new discoveries about peoples like the Kwakiutl, who live in the Pacific Northwest; the foragers of Göbekli Tepe, a religious center in latter-day Turkey built between 9500 and 8000 BCE; and the Indigenous inhabitants of a full-dress metropolis some 4,000 years ago in what’s now Louisiana.
Illustration: Maria do Rosário Frade
Citing this existing research, and more from a range of social scientists, Wengrow and Graeber argue that the life of hunter-gatherers before widespread farming was nothing like “the drab abstractions of evolutionary theory,” which hold that early humans lived in small bands in which they acted almost entirely on instinct, either brutish (as in Hobbes) or egalitarian and innocent (as in Rousseau). In contrast, the Dawn authors represent prehistoric societies as “a carnival parade of political forms,” a profusion of rambunctious social experiments, where everything from kinship codes to burial rites to gender relations to warfare was forever being conceived, reconceived, satirized, scrapped, and reformed. In an act of intellectual effrontery that recalls Karl Marx, Wengrow and Graeber use this insight to overthrow all existing dogma about humankind—to reimagine, in short, everything.
They did. The book’s a gem. Its dense scholarly detail, compiling archaeological findings from some 30,000 years of global civilizations, is leavened by both freewheeling jokes and philosophic passages of startling originality. At a time when much nonfiction hugs the shore of TED-star consensus to argue that things are either good or bad, The Dawn takes to the open sea to argue that things are, above all, subject to change.
For starters, the book makes quick work of maxims by domineering thinkers like Jared Diamond and Steven Pinker. Chief among these is the idea that early humans, bent on nothing but the grim chores of survival, led short and dangerous lives chasing calories and subjugating others for sex and labor. According to the research, many or even most premoderns did none of this. Instead, they developed expressive, idiosyncratic societies determined as much by artistic and political practices as by biological imperatives. For instance, while the Kwakiutl practiced slavery, ate salmon, and maintained large bodies, their next-door neighbors in latter-day California, the Yurok, despised slavery, subsisted on pine nuts, and prized extreme slimness (which they showed off by slipping through tiny apertures).
Wengrow and Graeber further cast doubt on the assumption that Indigenous societies organized themselves in only rudimentary ways. In fact, their societies were both complex and protean: The Cheyenne and Lakota convened police forces, but only to enforce participation in buffalo hunts; they summarily abolished the police in the off-season. For their part, the Natchez of latter-day Mississippi pretended to revere their all-knowing dictator but in fact ran free, knowing that their monarch was too much of a homebody to go after them. Likewise, the precept that large monuments and tombs are always proof of systems of rank comes up for review. In an especially mind-bending passage, Wengrow and Graeber show that the majority of Paleolithic tombs contained not grandees but individuals with physical anomalies including dwarfism, giantism, and spinal abnormalities. Such societies appear not to have idolized elites so much as outliers.
By the time I was halfway through The Dawn, I found myself overcome with a kind of Socratic ecstasy. At once, I felt unsuffocated by false beliefs. I brooded on how many times I’d been told that it’s natural to keep my offspring strapped to my chest, or sprint like I’m being chased by a tiger, or keep my waist small because males like females who look fertile, or move heaven and earth to help men spread their seed because that’s what prehistoric humans did. This was all a lie. The book’s boldest claim convulsed actual glee: Humans were never in a state of nature at all! Humans have simply always been humans: ironic, sentient, self-reflective, and free from any species-wide programming. The implications were galactic.
After Graeber died, on September 2, 2020, not long after alerting Twitter that he and Wengrow had completed their magnum opus, Wengrow found himself both grieving and rushing to finish. The grief nearly knocked him out. But there was one advantage to the hurry: Wengrow stuck “The Dawn of Everything” on the page proofs, too late for Penguin to balk. The sun rose on the book on October 19, 2021, with its golden-hour cover, and soon after it hit the top of the New York Times best-seller list.
I first met Wengrow—well, I first met him in Twitter DMs, but we’re moving to real life now—in Manhattan, where, over several espressos to brighten his jet lag, we discussed The Dawn.
I also offered condolences on the death of Graeber. The official cause, which Wengrow was reluctant to discuss, was pancreatic necrosis. But on October 16, 2020, Nika Dubrovsky, a Russian artist and Graeber’s widow, wrote that, though she’d shielded Graeber from Covid, he’d occasionally bridled at wearing a mask. “I want to add my own conspiracy theory,” she wrote. “I firmly believe [his death] is related to Covid.”
Wengrow and Graeber were devoted to one another as few writing partners are. Their collaboration seems to have been a case of true philia, the kind of meeting of the minds I associate with J. R. R. Tolkien and C. S. Lewis. Some of this is explained by similarities in their backgrounds. Graeber grew up among working-class radicals in Manhattan, while Wengrow was born to a hairdresser and a partner in a small clothing firm in North London, his grandparents having been, he told me, “gifted people who lost their homes and opportunities when the Nazis came to power.” Though Wengrow’s father later found success in the rag trade, his son was the first in his family to go to college.
Wengrow made it to Oxford in a roundabout way. Having tried to be an actor for a year or two, he thought he’d study English, so he wrote earnest letters to several Oxford colleges to express his lifelong passion for literary studies. When he hit a wall, he canvassed friends about fields of study that might be easier to break into; someone mentioned anthropology and archaeology. He barely knew what these disciplines were, but once again he wrote an earnest letter, this time only to St. Hugh’s, assuring the college of his lifelong passion for archaeology. When he went in for an interview, the interviewer held up a sheaf of letters. On top was the letter he’d recently written about his passion for archaeology. The rest were the nearly identical ones he’d written about his passion for literature. The silence was awkward. But he got in. He received his DPhil in 2001.
Nine years later, Wengrow had just published his second book, What Makes Civilization?: The Ancient Near East and the Future of the West, which argues that civilizations don’t leapfrog from one technological miracle to the next but progress by the gradual transformation of everyday behavior. Having landed in New Orleans for a conference, he was lining up for passport control when a warm, rumpled anthropologist introduced himself: David Graeber. Graeber was impressed by Wengrow’s research on Middle Eastern cylinder seals, which he’d described as an early example of commodity branding. In turn, Wengrow was impressed to meet an anthropologist who knew what a cylinder seal is. The Davids stayed in close touch, meeting in either Manhattan or London, and at some point resolved to create a “pamphlet” summarizing new findings in archaeology that undermine many of the stories told about early human societies. For 10 years they talked, one man’s thoughts taking up where the other’s left off. Eventually they knew the pamphlet would be a book. Determined to preempt critics who’d be eager to pounce on any error, they were meticulous, writing and rewriting each other’s work so thoroughly that neither could tell whose prose was whose. The two never stopped exchanging ideas, and they were still planning a sequel to The Dawn—or maybe three—when Graeber died.
Given his background, Wengrow has never shaken the feeling of being an outsider in academia. “Oddly, this feeling doesn’t go away even when you achieve a degree of recognition and status,” he told me. He and Graeber “could relate on that level. And there was a common sense of humor, which comes from the Jewish background. If he hadn’t heard from me in a couple of days he’d call and put on a grandmother sort of voice: ‘You don’t write … you don’t call.’” Everywhere I went with Wengrow, he fielded impromptu elegies for Graeber, who was famous as the author of Debt and Bullshit Jobs, and as an architect of various anti-capitalist uprisings, notably the Occupy movement. Over our first lunch, Wengrow suggested that the specter of his brilliant friend might still be lurking. (Graeber, whose funeral was framed as an “Intergalactic Memorial Carnival,” loved the paranormal.) Indeed, Graeber remained a spirited absence during the time I spent with Wengrow. I pictured him somewhere between a guardian angel and a poltergeist.
The next time I saw Wengrow was in April in Dublin, to grab a bite at a … what was this place? A disco or a ballroom, loosely attached to a hot-dog stand, at which hot dogs were sold out. Wengrow was unbothered. He and his wife—Ewa, who was trained in archaeology and now works at the British Library—companionably split a burger. After dinner, Wengrow was scheduled to address a group of labor activists about matters archaeological, but for now we discussed Irish politics, and in particular the vexing matter of Facebook’s and Google’s longtime use of Ireland as a tax haven (an arrangement that seems to be ending).
The gathering had been organized by Wengrow’s host in Dublin, Conor Kostick, an Irish sci-fi writer, champion of the 1950s board game Diplomacy, and devoted leftist. Captivated by The Dawn soon after its publication, Kostick had emailed Wengrow, inviting him to speak to a small group at Wynn’s, an old Victorian hotel and pub on Abbey Street and a short walk from the hot-dog disco. Kostick’s invitation showed some chutzpah. If Wengrow took it up, he’d have to break up the extravagant victory lap that had been his book tour in the US to address a few dozen labor activists, trade unionists, and scruffy anarchists in a modest venue. He’d also be coming to Dublin by way of Vancouver, where he had just been flown business class to give a TED talk, on a docket with Elon Musk.
Wengrow said yes without missing a beat. Kostick tweeted: “Imagine Darwin was coming to #Dublin, to speak about his new book On the Origin of Species. Well that’s how I feel about being able to hear @davidwengrow’s talk next Thursday.” The invitation was just what Wengrow needed, he told me, a sort of anti-TED, “to keep mind and soul together.” Wengrow considered TED both cultlike and fascinating. Reflecting on the experience with Kostick and me, Wengrow spoke animatedly about Garry Kasparov, the chess champion and Russian dissident who’d kicked off the conference with a speech about the war in Ukraine. Wengrow had no contact with Musk (about whom he appeared to know little, and care less) and joined forces instead with Anicka Yi, a conceptual artist who works largely in fragrance, and the feminist author Jeanette Winterson. “They were great company and reminded me what I was there for, which was to get the message of my work with David Graeber out there in a place where you might least expect to find it.” Munching on his burger, he still seemed dazed by a single data point: Attending TED can cost $25,000. Kostick, who has a ponytail and the vibe of a Roz Chast character, refused to take that in. The average annual salary for an Irish laborer is about $35,000.
Weeks later, I watched Wengrow’s TED talk. In khakis and an oxford-cloth shirt buttoned to the top, he cited his fieldwork in Iraqi Kurdistan to debunk the stubborn fallacy that a make-believe “agricultural revolution” ruined humanity by creating stationary societies, private property, armies, and dreadful social inequality. On the contrary. Some early farming societies rejected these traps for 4,000 years and traveled far and wide, spreading innovations from potter’s wheels to leavened bread across the Middle East and North Africa. Cities in the Indus Valley from 4,500 years ago had high-quality egalitarian housing and show no evidence of kings or queens, no royal monuments, no aggrandizing architecture.
The hardest punch thrown by The Dawn is its implicit rejection of Margaret Thatcher’s infamous assertion that “there is no alternative” to feral capitalism, a claim still abbreviated in Britain as “TINA.” Laying waste to TINA, The Dawn opens a kaleidoscope of human possibilities, suggesting that today’s neoliberal arrangements might one day be remembered as not an epoch but a fad.
We strolled a few blocks to the hotel, where the upstairs lecture room seemed like something out of a pub scene in Ulysses.
Voluble young radicals filed in, bedecked in buttons of esoteric meaning. Rhona McCord, a socialist and anti-fascist representing Unite, a massive trade union, stood up to encourage people to join. For as little as 65 cents a week. We were far from the Gulfstream brotherhood of TED.
Surrounded by students and leftie hotheads, Wengrow was in his element. I asked a Covid-masked anarchist, who went by the mononym Shane, about The Dawn of Everything.
“It’s a really hopeful book,” he said. “It’s very easy to get trapped in that mental thing of, ‘Nothing’s ever going to change. It’s just going to be the same neoliberal, state capitalist thing forever.’ But a lot of the book is just saying, ‘No, we can change.’ We have been doing that for the entire time that humans have existed.” I turned to Liv, a Portuguese anarchist whose buttons excoriated the foes of the working class and commemorated the Spanish Civil War. “We have to make a change. And it has to be as fast as we can, otherwise … it will kill us all.” I heard this from other Dawn enthusiasts. The book delivers jolts to the system, and—in some readers—shakes defeatist notions that human exploitation is inevitable.
But why have we felt so defeated, so locked into TINA, I wondered. As I took my seat, a plaintive passage from the book popped into my mind: “How did we come to treat eminence and subservience not as temporary expedients … but as inescapable elements of the human condition?” The poltergeist in the air was insistent: Why do we put up with this? From the lectern, Wengrow asked that no recording be made. He likes synchronous human exchange in person or by telephone, and he welcomes questions and disruptions. While composing The Dawn, Wengrow and Graeber built arguments to the tune of their own overlapping voices, interruption, enthusiasm, dissent, doubt, and rapturous agreement.
Early in the book, the Davids even offer a spontaneous celebration of dialog as the engine of philosophy. “Neuroscientists,” they write, “tell us that … the ‘window of consciousness,’ during which we can hold a thought or work out a problem, tends to be open on average for roughly seven seconds.” This isn’t always true. “The great exception to this is when we’re talking to someone else … In conversation, we can hold thoughts and reflect on problems sometimes for hours on end.”
The same collaborative meaning-making was in evidence at Wynn’s, where Wengrow was receptive to everyone, even the inevitable town-hall shaman who stood and delivered a mumblecore homily about … something. For an academic superstar with a theory of everything, Wengrow lacked arrogance in an uncanny way, the way someone else might lack eyebrows.
The lecture touched on something called Dunbar’s number: the influential if dubious thesis by evopsych anthropologist Robin Dunbar that humans function best in groups of up to 150 people, implying that in bigger groups, they need guns, monarchs, and bureaucracy lest they become unruly. A bite-size idea, the kind of pro-cop, pro-executive palaver that animates airport books about “management” and “leadership.” But then Wengrow pointed to actual archaeological evidence. In December, researchers Jennifer M. Miller and Yiming Wang published a study of ostrich-eggshell beads that were distributed over vast territory in Africa 50,000 years ago, suggesting that early human populations lived in attenuated social networks of far more than 150 people and kept cohesion and peace without police or kings.
I left Wynn’s while Wengrow was still talking animatedly to a pair of Gen Z activists, holding thoughts and reflecting on problems for hours on end.
Wengrow and I met the next day too. I didn’t think any lecture could be less glitzy than the event with Kostick and the 65-cents-a-week Unite membership, but I was wrong. The final talk Wengrow gave in Ireland was at University College Dublin, and there was not a CEO or tattooed fanboy in sight. This time the audience—in a narrow gray lecture hall with an undersized platform on which four academics balanced precariously—was made up of a few dozen laconic academics. At UCD, Wengrow’s sponsor was Graeme Warren, vice president of the International Society for Hunter-Gatherer Research. Where Wengrow had referred to the Wynn’s gig as one for “trade unionists,” this one was for “hunter-gatherers.” As I got my bearings in the windowless auditorium, the social dynamics came slowly into focus. At last, one of the men, sitting alone at the edge of the audience, emerged as important. When he started to speak, I recognized the room’s suspense from my own tour in graduate school; he was donnish, oracular, the one whose opinion matters. Would he like The Dawn of Everything? Sweetly, Wengrow himself seemed deferential. The suspense broke when the man—I later learned he was Daniel Bradley, a geneticist at Trinity College Dublin—offered a technical observation about the book, and then shook his head in pure astonishment at the achievement.
Wengrow was pleased. But he was no less delighted when a baby-faced lecturer, Neil Carlin, proposed in a deceptively gentle brogue that Wengrow had gone wrong in his analysis of Stonehenge. Didn’t The Dawn, Carlin asked, merely rehash the mainstream account of Stonehenge’s construction? Carlin’s gall was exciting, but my ears pricked up for another reason. Finally. An archaeological site I’d heard of.
“There’s a very big presence on my shoulder as I speak about this,” Wengrow said. That would be, I gathered, Michael Parker Pearson, one of Wengrow’s colleagues at UCL, the ranking expert on Stonehenge and an archaeologist whom some consider Anglocentric. Had Wengrow crossed up his book’s own thesis by failing to question orthodoxies, especially the ones that credit imperial powers like England with all great human achievements? The upstart Carlin was sidling uncomfortably close to charging Wengrow with sycophancy or even careerism.
Wengrow wasn’t thrown. He’s indifferent to wolf-pack dynamics everywhere, most of all in academic settings. A preoccupation of The Dawn, after all, is the contingency of hierarchies. They come and go, sometimes literally with the weather; any system of seniority and groveling is a joke; we are hardwired neither to rule nor to be ruled over. In particular, Wengrow’s own newfound status as an archbishop of archaeology, Mr. $25K-a-membership, struck him as laughable. As Jacques Lacan wrote, “If a man who thinks he’s a king is mad, a king who thinks he’s a king is no less so.”
While Wengrow had received posh plaudits in Vancouver, and whoops of support at Wynn’s, he seemed to find full-contact dialog with the UCD archaeologists most gratifying. And stimulating. The eye-opening questions, the testing of ego, the swerves in and out of accord. Reflecting on his collaboration with Graeber, Wengrow ventured that university management has made academia so sterile that making friends within it has become a radical act. “In that way, too,” Wengrow said, “our relationship was going against the grain.” True to form, Wengrow earnestly considered Carlin’s Stonehenge questions, and even made notes. Later, he gave the critique a complete hearing in an email to me. As with the missing hot dogs, Wengrow was unbothered.
Like the death of Wengrow’s intellectual soul mate, The Dawn opens far, far more questions than it closes. The book’s several critics seem to balk at its ambition more than its research. Some say its idea of the dawn of everything, beginning 30,000 or so years ago, is more like its teatime.
Others say Wengrow and Graeber are so eager to find anarchism and feminism in early civilizations that they shade the data.
In the book’s final chapters, clouds pass overhead. The authors land on the puzzle of modern “stuckness”—the idea that we have lost the experimental spirit that makes humans human and settled into the ruts of our capitalist-neoliberal hellscape. This works as a rhetorical move: No one wants to be stuck, and dread of this fate can impel a person to action. But as an overarching theory, the idea that humans moved from freedom to stuckness seems to reinscribe some of the schematic evolutionary folktales that the book exists to critique. And if our spirits were flying along just fine, creating new worlds until they were all simultaneously crushed by Thatcherian capitalism, isn’t this just a new fall-from-grace story, like the ones that said humanity was wrecked by agriculture or urbanization or the internet? Contemporary society strikes me as far from stuck. Precarious and imperiled, but not stuck. The pandemic, for one thing, threw into relief the proliferation of cultlike groups that reject modern medicine and even modernity itself. More encouragingly, young workers everywhere are organizing, protesting, and taking to the road in record-high numbers. Gender and race are being reimagined. Any or all of this might be threatening or vertiginous or worse, but none of it suggests stuckness.
Wengrow didn’t worry too much about my objection. He holds ideas lightly, and if the “stuckness” concept didn’t land for me, he said, maybe I could just let it go. The book supplies hundreds of rich examples of early societies that didn’t conform to evolutionary stages. The research is what most excites Wengrow. The imperative to act on our humanness—to refuse to sleepwalk, to refuse to get stuck—grows out of the scholarship.
Over drinks after the lecture, Wengrow talked, when pressed, about his book, but he already seemed to be testing new intellectual territory—the cult of TED, ostrich-shell currency, good old Stonehenge. Academic careers, like all human endeavors, don’t have to be only about prizes or disgrace. There is so much to study. There are worlds to imagine. Call it TAAL: The alternatives are limitless.
This article appears in the September 2022 issue.
"
|
833 | 2,021 |
"The Turing Test Is Bad for Business | WIRED"
|
"https://www.wired.com/story/artificial-intelligence-turing-test-economics-business"
|
"Ideas: The Turing Test Is Bad for Business
Photo-Illustration: Sam Whitney; Getty Images
Daron Acemoglu is an Institute Professor at MIT. He is the author of five books, including New York Times bestseller Why Nations Fail and The Narrow Corridor: States, Societies, and the Fate of Liberty (both with James A. Robinson).
Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of EECS and the Department of Statistics at the University of California, Berkeley. His research interests include machine learning, optimization, and control theory.
E. Glen Weyl is Founder of the RadicalxChange Foundation, Microsoft’s Office of the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST), and co-author with Eric Posner of Radical Markets: Uprooting Capitalism and Democracy for a Just Society.
Fears of artificial intelligence fill the news: job losses, inequality, discrimination, misinformation, or even a superintelligence dominating the world. The one group everyone assumes will benefit is business, but the data seems to disagree. Amid all the hype, US businesses have been slow in adopting the most advanced AI technologies, and there is little evidence that such technologies are contributing significantly to productivity growth or job creation.
This disappointing performance is not merely due to the relative immaturity of AI technology. It also comes from a fundamental mismatch between the needs of business and the way AI is currently being conceived by many in the technology sector—a mismatch that has its origins in Alan Turing’s pathbreaking 1950 “imitation game” paper and the so-called Turing test he proposed therein.
The Turing test defines machine intelligence by imagining a computer program that can so successfully imitate a human in an open-ended text conversation that it isn’t possible to tell whether one is conversing with a machine or a person.
At best, this was only one way of articulating machine intelligence. Turing himself, and other technology pioneers such as Douglas Engelbart and Norbert Wiener, understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us. Search engines, spreadsheets, and databases are good examples of such complementary forms of information technology. While their impact on business has been immense, they are not usually referred to as "AI," and in recent years the success story that they embody has been submerged by a yearning for something more "intelligent." This yearning is poorly defined, however, and with surprisingly little attempt to develop an alternative vision, it has increasingly come to mean surpassing human performance in tasks such as vision and speech, and in parlor games such as chess and Go. This framing has become dominant both in public discussion and in terms of the capital investment surrounding AI.
Economists and other social scientists emphasize that intelligence arises not only, or even primarily, in individual humans, but most of all in collectives such as firms, markets, educational systems, and cultures. Technology can play two key roles in supporting collective forms of intelligence. First, as emphasized in Douglas Engelbart's pioneering research in the 1960s and the subsequent emergence of the field of human-computer interaction, technology can enhance the ability of individual humans to participate in collectives, by providing them with information, insights, and interactive tools. Second, technology can create new kinds of collectives. This latter possibility offers the greatest transformative potential. It provides an alternative framing for AI, one with major implications for economic productivity and human welfare.
Businesses succeed at scale when they successfully divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare. This is exactly what Adam Smith understood more than two and a half centuries ago. Translating his message into the current debate, technology should focus on the complementarity game, not the imitation game.
We already have many examples of machines enhancing productivity by performing tasks that are complementary to those performed by humans. These include the massive calculations that underpin the functioning of everything from modern financial markets to logistics, the transmission of high-fidelity images across long distances in the blink of an eye, and the sorting through reams of information to pull out relevant items.
What is new in the current era is that computers can now do more than simply execute lines of code written by a human programmer. Computers are able to learn from data and they can now interact, infer, and intervene in real-world problems, side by side with humans. Instead of viewing this breakthrough as an opportunity to turn machines into silicon versions of human beings, we should focus on how computers can use data and machine learning to create new kinds of markets, new services, and new ways of connecting humans to each other in economically rewarding ways.
An early example of such economics-aware machine learning is provided by recommendation systems, an innovative form of data analysis that came to prominence in the 1990s in consumer-facing companies such as Amazon ("You may also like") and Netflix ("Top picks for you"). Recommendation systems have since become ubiquitous, and have had a significant impact on productivity. They create value by exploiting the collective wisdom of the crowd to connect individuals to products.
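The mechanics behind "You may also like" can be illustrated with a toy user-based collaborative filter: score a user's unrated items by the ratings of similar users. What follows is a minimal sketch, not how Amazon or Netflix actually implement their systems; the ratings matrix, the cosine-similarity weighting, and the recommend function are all invented for the example.

import numpy as np

# Hypothetical ratings matrix: rows are users, columns are items, 0 = unrated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    # Cosine similarity between two users' rating vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=2):
    # Weight every other user's ratings by their similarity to this user...
    sims = np.array([cosine_sim(ratings[user], ratings[v]) for v in range(len(ratings))])
    sims[user] = 0.0
    scores = sims @ ratings / (sims.sum() + 1e-9)
    # ...then surface the highest-scoring items this user hasn't rated yet.
    unseen = np.where(ratings[user] == 0)[0]
    return unseen[np.argsort(scores[unseen])[::-1]][:k]

print(recommend(0))  # -> [2]: item 2 is user 0's only unrated item in this toy matrix

The collective wisdom lives entirely in the ratings matrix; the algorithm merely routes it to individuals.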
Emerging examples of this new paradigm include the use of machine learning to forge direct connections between musicians and listeners, writers and readers, and game creators and players.
Early innovators in this space include Airbnb, Uber, YouTube, and Shopify, and the phrase "creator economy" is being used as the trend gathers steam. A key aspect of such collectives is that they are, in fact, markets—economic value is associated with the links among the participants. Research is needed on how to blend machine learning, economics, and sociology so that these markets are healthy and yield sustainable income for the participants.
Democratic institutions can also be supported and strengthened by this innovative use of machine learning. The digital ministry in Taiwan has harnessed statistical analysis and online participation to scale up the kind of deliberative conversations that lead to effective team decisionmaking in the best managed companies.
Investing in technology that supports and augments collective intelligence gives businesses an opportunity to do good as well: With this alternative path, many of the most pernicious effects of AI—including human replacement, inequality, and excessive data collection and manipulation by companies in service of advertising-based business models—would become secondary or even completely avoided. In particular, two-way markets in a creator economy create monetary transactions between producers and consumers, and a platform’s revenue can accordingly be based on percentages of these transactions. Doubtless market failures can and will arise, but if technology is harnessed to supercharge democratic governance, such institutions will be empowered to address these failures, as in Taiwan, where ride-sharing was reconciled with labor protections based on online deliberation.
Building such market-creating (and democracy-supporting) platforms requires that success criteria for algorithms be formulated in terms of the performance of the collective system instead of the performance of an algorithm in isolation, à la the Turing test. This is one important avenue for bringing economic and social science desiderata to bear directly in the design of technology.
To help stimulate this conversation, we are releasing a longer report with colleagues across many fields detailing these failures and how to move beyond them.
Such a change is not easy. There is a huge complex of researchers, pundits, and businesses that have hitched their ride to the currently dominant paradigm. They will not be easy to convince. But perhaps they don’t need to be. Businesses that find a productive way of using machine intelligence will lead by example, and their example can be followed by other companies and researchers freeing themselves from the increasingly unhelpful AI paradigm.
A first step in this transformation would be to reiterate our enormous intellectual debt to the great Alan Turing, and then retire his test. Augmenting the collective intelligence of business and markets is a goal far grander than parlor games.
"
|
834 | 2,022 |
"This AI Software Nearly Predicted Omicron’s Tricky Structure | WIRED"
|
"https://www.wired.com/story/ai-software-nearly-predicted-omicrons-tricky-structure"
|
"Tom Simonite, Business: This AI Software Nearly Predicted Omicron’s Tricky Structure
Illustration: Uma Shankar sharma/Getty Images
On November 26, the World Health Organization designated the strain of coronavirus surging in South Africa a “variant of concern” and christened it Omicron.
The next day, University of British Columbia professor Sriram Subramaniam downloaded a genome sequence posted online and ordered samples of Omicron genes to be shipped to his lab.
Subramaniam’s group uses electron microscopes to reveal the 3D structure of proteins, to better understand how they work. It had already mapped the spike proteins that coronaviruses use to bind and enter human cells for some earlier strains.
Describing Omicron’s spike protein felt urgent because its genome differed in ways that might explain the variant’s rapid spread. But like others doing online shopping that weekend, Subramaniam had to be patient: Until the samples arrived in the mail, he couldn’t put Omicron proteins under the microscope.
Across the continent, University of North Carolina at Charlotte computational genomics researcher Colby Ford had also been thinking about Omicron’s spike protein. Relatives had been asking him a question also troubling many experts: Would Omicron evade existing vaccines? Those vaccines teach the body to respond to spike proteins from an earlier strain. Instead of ordering lab supplies, Ford tried a recently invented shortcut. On the same day WHO christened Omicron, he used free artificial intelligence software to try and predict the structure from the sequence of amino acids encoded in Omicron’s genome.
In about an hour, Ford got his first results, and quickly posted them online. Early in December, he and two colleagues posted a fuller paper, now accepted for publication, including predictions that some antibodies to previous strains would be less effective against Omicron.
The atomic structure of the Omicron variant spike protein (purple) bound with the human ACE2 receptor (blue).
Courtesy of Dr. Sriram Subramaniam/The University of British Columbia
Subramaniam’s lab received its Omicron gene samples soon after and published its microscope observations of the structure along with results from tests of real antibodies on December 21. One of Ford’s two predicted structures proved to be pretty much right: He calculated that the positions of its central atoms differ by around half an angstrom, about the radius of a hydrogen atom. “These tools allow you to make an educated guess really quickly—which is important in a situation like Covid,” Ford says. “With any new virus that comes along, someone else will replicate what I did here.” The way predictions raced ahead of experiments on Omicron’s spike protein reflects a recent sea change in molecular biology brought about by AI. The first software capable of accurately predicting protein structures became widely available only months before Omicron appeared, thanks to competing research teams at Alphabet’s UK-based AI lab DeepMind and at the University of Washington.
Ford used both packages, but because neither was designed or validated for predicting small changes caused by mutations like those of Omicron, his results were more suggestive than definitive. Some researchers treated them with suspicion. But the fact that he could easily experiment with powerful protein prediction AI illustrates how the recent breakthroughs are already changing the ways biologists work and think.
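Agreement of the kind Ford reported, a roughly half-angstrom offset between predicted and measured atom positions, is conventionally summarized as a root-mean-square deviation (RMSD) over matched alpha-carbon coordinates. Here is a minimal sketch of that calculation; the coordinate arrays are invented stand-ins for a predicted and an experimentally determined structure.

import numpy as np

def rmsd(predicted, experimental):
    # Root-mean-square deviation between two matched sets of 3D points, in angstroms.
    diff = predicted - experimental
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Hypothetical alpha-carbon coordinates for three residues. Real structures have
# hundreds of residues and must first be superimposed (e.g., with the Kabsch
# algorithm) before the deviation is meaningful.
pred = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.1], [7.6, 0.1, 0.0]])
expt = np.array([[0.1, 0.2, 0.0], [3.9, 0.1, 0.2], [7.5, 0.0, 0.2]])
print(round(rmsd(pred, expt), 2))  # ~0.2 for these toy points; Ford's figure was near 0.5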
Subramaniam says he received four or five emails from people proffering predicted Omicron spike structures while working towards his lab’s results. “Quite a few did this just for fun,” he says. Direct measurements of protein structure will remain the ultimate yardstick, Subramaniam says, but he expects AI predictions to become increasingly central to research—including on future disease outbreaks. “It’s transformative,” he says.
Because a protein’s shape determines how it behaves, knowing its structure can help all kinds of biology research, from studies of evolution to work on disease. In drug research, figuring out a protein structure can help reveal potential targets for new treatments.
Determining a protein’s structure is far from simple. Proteins are complex molecules assembled from instructions encoded in an organism’s genome to serve as enzymes, antibodies, and much of the other machinery of life. They are made from strings of molecules called amino acids that can fold into complex shapes that behave in different ways.
Deciphering a protein’s structure traditionally involved painstaking lab work. Most of the roughly 200,000 known structures were mapped using a tricky process in which proteins are formed into a crystal and bombarded with x-rays. Newer techniques like the electron microscopy used by Subramaniam can be faster, but the process is still far from easy.
In late 2020, the long-standing hope that computers could predict protein structure from an amino acid sequence suddenly became real, after decades of slow progress. DeepMind software called AlphaFold proved so accurate in a contest for protein prediction that the challenge’s cofounder John Moult, a professor at University of Maryland, declared the problem solved. “Having worked personally on this problem for so long,” Moult said, DeepMind’s achievement was “a very special moment.” The moment was also frustrating for some scientists: DeepMind did not immediately release details of how AlphaFold worked. “You’re in this weird situation where there’s been this major advance in your field, but you can’t build on it,” David Baker, whose lab at University of Washington works on protein structure prediction, told WIRED last year.
His research group used clues dropped by DeepMind to guide the design of open source software called RoseTTAFold, released in June, which was similar to but not as powerful as AlphaFold. Both are based on machine learning algorithms honed to predict protein structures by training on a collection of more than 100,000 known structures. The next month, DeepMind published details of its own work and released AlphaFold for anyone to use. Suddenly, the world had two ways to predict protein structures.
Minkyung Baek, a postdoctoral researcher in Baker’s lab who led work on RoseTTAFold, says she has been surprised by how quickly protein structure predictions have become standard in biology research. Google Scholar reports that UW's and DeepMind’s papers on their software have together been cited by more than 1,200 academic articles in the short time since they appeared.
Although predictions haven’t proven crucial to work on Covid-19, she believes they will become increasingly important to the response to future diseases. Pandemic-quashing answers won’t spring fully formed from algorithms, but predicted structures can help scientists strategize. “A predicted structure can help you put your experimental effort into the most important problems,” Baek says. She’s now trying to get RoseTTAFold to accurately predict the structure of antibodies and invading proteins when bound together, which would make the software more useful to infectious disease projects.
Despite their impressive performance, protein predictors don’t reveal everything about a molecule. They spit out a single static structure for a protein, and don’t capture the flexes and wiggles that take place when it interacts with other molecules. The algorithms were trained on databases of known structures, which are more reflective of the proteins easiest to map experimentally than of the full diversity of nature. Kresten Lindorff-Larsen, a professor at the University of Copenhagen, predicts the algorithms will be used more frequently and will be useful, but says, “We also as a field need to learn better when these methods fail.”
In addition to a spike protein structure, Subramaniam’s Omicron paper also included results of a kind AI hasn’t yet conquered—a combined structure for a spike bound to the human protein it targets.
The results suggested the variant’s structural changes allow it to bind host cells more strongly while also being less vulnerable to antibodies from previous strains, a combination that appears to explain why Omicron can overrun even highly vaccinated communities.
“The gold standard will always be direct measurement,” says Subramaniam. “If you’re building a billion-dollar drug program, people want to know what’s the real thing.” At the same time, he says his experimental work is now often informed by AI predictions. “It’s changed the way we think,” Subramaniam says.
Updated, 1-13-21, 2:15pm ET: An earlier version of this article incorrectly referred to samples of Omicron DNA.
"
|
835 | 2,017 |
"This More Powerful Version of AlphaGo Learns On Its Own | WIRED"
|
"https://www.wired.com/story/this-more-powerful-version-of-alphago-learns-on-its-own"
|
"Tom Simonite, Business: This More Powerful Version of AlphaGo Learns On Its Own
Photograph: Noah Sheldon for WIRED
At one point during his historic defeat to the software AlphaGo last year, world champion Go player Lee Sedol abruptly left the room. The bot had played a move that confounded established theories of the board game, in a moment that came to epitomize the mystery and mastery of AlphaGo.
A new and much more powerful version of the program, called AlphaGo Zero and unveiled Wednesday, is even more capable of surprises. In tests, it trounced the version that defeated Lee by 100 games to nothing, and has begun to generate its own new ideas for the more than 2,000-year-old game.
AlphaGo Zero showcases an approach to teaching machines new tricks that makes them less reliant on humans. It could also help AlphaGo’s creator, the London-based DeepMind research lab that is part of Alphabet, to pay its way. In a filing this month, DeepMind said it lost £96 million last year.
DeepMind CEO Demis Hassabis said in a press briefing Monday that the guts of AlphaGo Zero should be adaptable to scientific problems such as drug discovery, or understanding protein folding. They too involve navigating a mathematical ocean of many possible combinations of a set of basic elements.
Despite its historic win for machines last year, the original version of AlphaGo stood on the shoulders of many uncredited humans. The software “learned” about Go by ingesting data from 160,000 amateur games taken from an online Go community. After that initial boost, AlphaGo honed itself to be superhuman by playing millions more games against itself.
AlphaGo Zero is so-named because it doesn’t need human knowledge to get started, relying solely on that self-play mechanism. The software initially makes moves at random. But it is programmed to know when it has won or lost a game, and to adjust its play to favor moves that lead to victories. A paper published in the journal Nature Thursday describes how 29 million games of self-play made AlphaGo Zero into the most powerful Go player on the planet.
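That feedback loop can be made concrete with a toy. The sketch below is an afterstate-value learner that teaches itself tic-tac-toe purely from self-play, knowing only the win condition. It illustrates the general idea, not AlphaGo Zero itself, which couples this loop with a deep neural network and Monte Carlo tree search; every name and parameter here is invented for the example.

import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return "X" or "O" if a line is complete, else None.
    for i, j, k in LINES:
        if board[i] != "." and board[i] == board[j] == board[k]:
            return board[i]
    return None

def after(board, move, player):
    # The position that results from the player making this move.
    b = board[:]
    b[move] = player
    return "".join(b)

values = {}  # position string -> estimated value for the player who just moved

def play_game(epsilon=0.1, lr=0.2):
    board, player, history = ["."] * 9, "X", {"X": [], "O": []}
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        if random.random() < epsilon:  # explore: occasional random move
            move = random.choice(moves)
        else:  # exploit: pick the successor position with the best learned value
            move = max(moves, key=lambda m: values.get(after(board, m, player), 0.5))
        board[move] = player
        history[player].append("".join(board))
        win = winner(board)
        if win or "." not in board:
            for p in "XO":  # nudge every visited position toward the final outcome
                reward = 1.0 if win == p else (0.0 if win else 0.5)
                for state in history[p]:
                    v = values.get(state, 0.5)
                    values[state] = v + lr * (reward - v)
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):  # play starts random and sharpens as the value table fills in
    play_game()

No game records, no human teacher: the only signal is who won, exactly the constraint AlphaGo Zero operates under at vastly greater scale.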
“We’ve removed the constraint of human knowledge,” said David Silver, a leading researcher on the project. It’s a statement that reflects growing interest in creating AI systems that can learn without the crutch of data provided by humans. DeepMind and other leading research groups are working on software that learns from trial-and-error exploration, or even direct competition or combat.
That’s seen as a route to faster progress on tough problems where human-curated data is scarce, or nonexistent, such as controlling robots.
AlphaGo Zero is simpler than its predecessors as well as smarter. The original design had two separate learning modules, built with technology known as artificial neural networks. One specialized in evaluating board positions, and the other suggested possible next moves. AlphaGo selected moves to play with input from a third module, a form of search, that simulated how the different options would play out. DeepMind says AlphaGo Zero is a better player because it has a single, more powerful neural network that learns to both evaluate board positions and suggest new moves. It uses a simpler search module to pick its moves.
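A minimal sketch of that single-network design, written in PyTorch with made-up layer sizes (the real model is a much deeper residual network over many board-feature planes): one shared trunk feeds both a policy head, which suggests moves, and a value head, which evaluates the position.

```python
# Sketch of a combined policy-value network: shared features, two heads.
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    def __init__(self, board_size=19, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        flat = channels * board_size * board_size
        self.policy_head = nn.Linear(flat, board_size * board_size + 1)  # every point + pass
        self.value_head = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(),
                                        nn.Linear(64, 1), nn.Tanh())     # value in [-1, 1]

    def forward(self, board):
        h = self.trunk(board).flatten(start_dim=1)
        move_logits = self.policy_head(h)  # which moves look promising
        value = self.value_head(h)         # who is winning from here
        return move_logits, value

net = PolicyValueNet()
logits, value = net(torch.zeros(1, 1, 19, 19))  # an empty board as a dummy input
print(logits.shape, value.shape)                # torch.Size([1, 362]) torch.Size([1, 1])
```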
Martin Müller, a professor at the University of Alberta, calls AlphaGo Zero’s new, simpler design “beautiful.” But he says its continued reliance on searching multiple possible outcomes to choose the best path shows the limitations of existing AI technology. “I think that tells us something about the nature of complex problems,” Müller says. “We can’t just have some function that knows all the answers, you need to reason, and think and look into the future.”
For computers, looking into the future of a board game defined by fixed rules is relatively easy. Engineers have made little progress in having them make sense of messier, everyday scenarios. When taking on a many-faceted challenge such as assembling an Ikea sofa or planning a vacation, humans draw on powers of reasoning and abstraction to plot a path forward that so far elude AI software.
That doesn’t mean DeepMind’s technology can’t do useful things today. Google has already used the company’s algorithms to cut data-center cooling bills. The recent financial filing listed the company’s first revenues, £40 million from services provided to other parts of Alphabet. Hassabis says the ideas in AlphaGo Zero could be applied to work on understanding climate, or proteins in the body. Machine-learning research from Google and others has also shown promise for extracting more ad dollars from consumers.
AlphaGo Zero is also set to give back to the community DeepMind's project has shaken up. New ideas from its predecessors like that jaw-dropping move against Lee Sedol have invigorated the game. Fan Hui, the first professional player beaten by AlphaGo, now works with DeepMind and says AlphaGo Zero can inject further creativity into one of the world’s oldest board games.
“Its games look a lot like human play but it also feels more free, perhaps because it is not limited by our knowledge,” Fan says. He’s already christened one tactic it came up with the “zero move,” such is its striking power in the early stages of a game. “We have never seen a move like this, even from AlphaGo," he says.
"
|
836 | 2,019 |
"Teaching Self-Driving Cars to Watch for Unpredictable Humans | WIRED"
|
"https://www.wired.com/story/teaching-self-driving-cars-watch-unpredictable-humans"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Aarian Marshall Transportation Teaching Self-Driving Cars to Watch for Unpredictable Humans Photograph: David McGlynn/Getty Images Save this story Save Save this story Save Application Human-computer interaction Safety Autonomous driving End User Research Sector Automotive Technology Machine learning Machine vision If you happen to live in one of the cities where companies are testing self-driving cars, you’ve probably noticed that your new robot overlords can be occasional nervous drivers. In Arizona, where SUVs operated by Waymo are sometimes ferrying passengers without anyone behind the steering wheel, drivers have complained about the robot cars’ too-timid left turns and slow merges on the highway.
Data compiled by the state of California suggests that the most common self-driving fender benders are rear-end crashes, in part because human drivers don’t expect autonomous cars to follow road rules and come to complete, non-rolling stops at stop signs.
As for human drivers, some are nervous and scrupulous, others are definitely not. In fact, it’s even more complex: Some drivers are careful in some moments and hard-charging in others. Think: casual Sunday drive to the grocery store versus racing to get the kid before the day care late fees kick in. Robot cars might be smoother, and might make better decisions, if they knew exactly what sort of humans were driving near them.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory and Delft University’s Cognitive Robotics lab say they’ve figured out how to teach self-driving vehicles just that. In a recent paper published in the Proceedings of the National Academy of Sciences, they describe a technique that translates sociology and psychology into a mathematical formula that can be used to teach self-driving software how to tell the road ragers from the rule followers. Vehicles equipped with their technique can differentiate between the two in about two seconds, the researchers say, and can use the info to help decide how to proceed on the road. The technique improves self-driving vehicles’ predictions about human drivers’ decisions, and therefore the vehicles’ on-road performance, by 25 percent, as measured by a test involving merging in a computer simulation.
The idea, the researchers say, is not just to create a system that can differentiate “egoistic” drivers from “prosocial” drivers—that is, the selfish ones from generous ones. The scientists hope to make it easier for robots to adapt to human behavior, and not the other way around.
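This line of work is built on game theory and the notion of Social Value Orientation. Purely as an illustration of the underlying idea (not the paper's actual model; the utility numbers and threshold below are invented), one can score each observed maneuver by how much it benefits the driver versus the cars around them, then summarize the trade-off as a single angle:

```python
# Illustrative sketch: summarize how a driver trades off their own utility
# against other drivers' utility as an angle, in the spirit of Social Value
# Orientation. All numbers here are hypothetical.
import math

def svo_angle(observations):
    """observations: (utility_to_self, utility_to_others) for each observed action."""
    to_self = sum(s for s, o in observations)
    to_others = sum(o for s, o in observations)
    return math.degrees(math.atan2(to_others, to_self))

def classify(angle_deg, threshold=22.5):
    # Near 0 degrees: the driver weighs only their own progress.
    # Near 45 degrees: they weigh other drivers' progress about equally.
    return "prosocial" if angle_deg > threshold else "egoistic"

hard_charger = [(0.9, -0.4), (0.8, -0.5)]  # hypothetical aggressive merges
yielder = [(0.3, 0.6), (0.4, 0.5)]         # hypothetical courteous behavior
for name, actions in (("hard charger", hard_charger), ("yielder", yielder)):
    angle = svo_angle(actions)
    print(f"{name}: SVO angle {angle:.1f} degrees -> {classify(angle)}")
```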
“We are very much interested in how human-driven vehicles and robots can coexist,” says Daniela Rus, director of the MIT lab and a coauthor of the paper. “It’s a grand challenge for the field of autonomy and a question that’s applicable not just for robots on roads but in general, for any kind of human-machine interaction.” One day, this kind of work might be able to help humans work more smoothly with robots on, say, the factory floor or in a hospital room.
But first, game theory. The research pulls from an approach being applied more frequently in robotics and machine learning: using games to “teach” machines to make decisions with imperfect knowledge. Game players—like drivers—often have to reach conclusions without full understanding of what the other players—or drivers—are doing. So more researchers are applying game theory to train self-driving cars how to act in uncertain situations.
Still, the uncertainty is a challenge. “Ultimately, one of the challenges of self-driving is that you’re trying to predict human behavior, and human behavior tends to not fall into rational agent models we have for game players,” says Matthew Johnson-Roberson, assistant professor of engineering at the University of Michigan and the cofounder of Refraction AI, a startup building autonomous delivery vehicles.
Someone might look like they’re about to merge but see a flash of something out of the corner of their eye and stop short. It’s very hard to teach a robot to predict that kind of behavior.
Of course, driving situations could become less uncertain if the researchers were able to collect more information about human driving behavior, which is what they’re hoping to do next. Data on the speed of vehicles, where they are heading, the angle at which they’re traveling, how their position changes over time—all could help traveling robots better understand how the human mind (and personality) operates. Perhaps, the researchers say, an algorithm derived from more precise data could improve predictions about human driving behavior by 50 percent instead of 25 percent.
That might be really hard, says Johnson-Roberson. “One of the reasons I think it's going to be challenging to deploy [autonomous vehicles] is because you’re going to have to get these predictions right when traveling at high speeds in dense urban areas,” he says. Being able to tell whether a driver is a selfish driver within two seconds of observation is useful, but a car traveling at 25 mph travels nearly 75 feet in that time. A lot of unfortunate things can happen in 75 feet.
The fact is, even humans don’t understand humans all the time. “People are just the way they are, and sometimes they’re not focused on driving, and make decisions we can’t completely explain,” says Wilko Schwarting, an MIT graduate student who led the research. Good luck out there, robots.
"
|
837 | 2,019 |
"Facebook's Head of AI Says the Field Will Soon ‘Hit the Wall’ | WIRED"
|
"https://www.wired.com/story/facebooks-ai-says-field-hit-wall"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Facebook's Head of AI Says the Field Will Soon ‘Hit the Wall’ Play/Pause Button Pause Illustration: Elena Lacey; Getty Images Save this story Save Save this story Save Application Content moderation Identifying Fabrications Company Facebook End User Big company Sector Social media Research Source Data Images Text Video Technology Machine learning Machine vision Natural language processing Jerome Pesenti leads the development of artificial intelligence at one of the world’s most influential—and controversial—companies. As VP of artificial intelligence at Facebook , he oversees hundreds of scientists and engineers whose work shapes the company’s direction and its impact on the wider world.
AI is fundamentally important to Facebook. Algorithms that learn to grab and hold our attention help make the platform and its sister products, Instagram and WhatsApp, stickier and more addictive. And, despite some notable AI flops, like the personal assistant M, Facebook continues to use AI to build new features and products, from Instagram filters to augmented reality apps.
Mark Zuckerberg has promised to deploy AI to help solve some of the company’s biggest problems, by policing hate speech, fake news, and cyberbullying (an effort that has seen limited success so far). More recently, Facebook has been forced to reckon with how to stop AI-powered deception in the form of deepfake videos that could convincingly spread misinformation as well as enable new forms of harassment.
Pesenti joined Facebook in January 2018, inheriting a research lab created by Yann LeCun, one of the biggest names in the field. Before that, he worked on IBM’s Watson AI platform and at Benevolent AI, a company that is applying the technology to medicine.
Pesenti met with Will Knight, senior writer at WIRED, near its offices in New York. The conversation has been edited for length.
Will Knight: AI has been presented as a solution to fake news and online abuse, but that may oversell its power. What progress are you really making there? Jerome Pesenti: Moderating automatically, or even with humans and computers working together, at the scale of Facebook is a super challenging problem. But we’ve made a lot of progress.
Early on, the field made progress on vision—understanding scenes and images.
We’ve been able to apply that in the last few years for recognizing nudity, recognizing violence, and understanding what's happening in images and videos.
Recently there’s been a lot of progress in the field of language , allowing us a much more refined understanding of interactions through the language that people use. We can understand if people are trying to bully, if it’s hate speech, or if it’s just a joke. By no measure is it a solved problem, but there's clear progress being made.
WK: What about deepfakes? JP: We’re taking that very seriously. We actually went around and created new deepfake videos, so that people could test deepfake detection techniques. It’s a really important challenge that we are trying to be proactive about. It’s not really significant on the platform at the moment, but we know it can be very powerful. We’re trying to be ahead of the game, and we’ve engaged the industry and the community.
WK: Let’s talk about AI more generally. Some companies, for instance DeepMind and OpenAI, claim their objective is to develop “artificial general intelligence.” Is that what Facebook is doing? JP: As a lab, our objective is to match human intelligence. We're still very, very far from that, but we think it’s a great objective. But I think many people in the lab, including Yann, believe that the concept of “AGI” is not really interesting and doesn't really mean much.
On the one hand, you have people who assume that AGI is human intelligence. But I think it's a bit disingenuous because if you really think of human intelligence, it is not very general. Then other people project onto AGI the idea of the singularity—that if you had an AGI, then you will have an intelligence that can make itself better, and keep improving. But there’s no real model for that. Humans can’t make themselves more intelligent. I think people are kind of throwing it out there to pursue a certain agenda.
WK: Facebook’s AI lab was built by LeCun, one of the pioneers of deep learning who recently won the Turing Award for his work in the area. What do you make of critics of the field’s focus on deep learning, who say it won’t bring us real intelligence?
JP: Deep learning and current AI, if you are really honest, has a lot of limitations. We are very very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn't have common sense, it’s more on the level of pattern matching than robust semantic understanding. But we’re making progress in addressing some of these, and the field is still progressing pretty fast. You can apply deep learning to mathematics, to understanding proteins, there are so many things you can do with it.
WK: Some AI experts also talk about a “reproducibility crisis,” or the difficulty of recreating groundbreaking research. Do you see that as a big problem? JP: It’s something that Facebook AI is very passionate about. When people do things that are not reproducible, it creates a lot of challenges. If you cannot reproduce it, it’s a lot of lost investment.
We believe that reproducibility brings a lot of value to the field. It not only helps people validate results, it also enables more people to understand what's happening and to build upon that. The beauty of AI is that it is ultimately systems run by computers. So it is a prime candidate, as a subfield of science, to be reproducible. We believe the future of AI will be something where it’s reproducible almost by default. We try to open source most of the code we are producing in AI, so that other people can build on top of it.
WK: OpenAI recently noted that the compute power required for advanced AI is doubling every 3 and a half months. Are you worried about this? JP: That’s a really good question. When you scale deep learning, it tends to behave better and to be able to solve a broader task in a better way. So, there's an advantage to scaling. But clearly the rate of progress is not sustainable. If you look at top experiments, each year the cost is going up 10-fold. Right now, an experiment might be in seven figures, but it’s not going to go to nine or ten figures, it’s not possible, nobody can afford that.
It means that at some point we're going to hit the wall. In many ways we already have. Not every area has reached the limit of scaling, but in most places, we're getting to a point where we really need to think in terms of optimization, in terms of cost benefit, and we really need to look at how we get most out of the compute we have. This is the world we are going into.
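As a quick back-of-the-envelope check, the two figures quoted here are consistent: compute that doubles every 3.5 months compounds to roughly a tenfold increase per year.

```python
# Doubling every 3.5 months compounds to about 10x per year.
months_per_doubling = 3.5
growth_per_year = 2 ** (12 / months_per_doubling)
print(f"{growth_per_year:.1f}x per year")  # about 10.8x
```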
WK: What did you learn from commercializing AI at IBM with Watson? What have you tried to copy, and what have you tried to avoid, at Facebook? JP: Watson was a really fun time, and I think IBM called out that this is a commercial market and there are actually applications. I think that was really remarkable. But there was a bit too much overhyping. I don’t think that served IBM very well.
When you have a place like Facebook, it's remarkable the rate of usage within the organization. The number of developers using AI within Facebook is more than doubling every year right now. So, we need to explain that it’s useful, but don’t overhype it. It doesn’t serve us to claim it can do things it cannot. And I don’t need to overhype it to justify the existence of my team.
WK: Facebook has sometimes struggled to turn AI research into a commercial success, for example with M. How are you trying to connect research and engineering more effectively? JP: When you start talking about technology transfer, it means you’ve already lost the battle. You cannot just pick some research and ask other people to try to put it in production. You can’t just throw it over the fence. The best way to set it up is to get people doing fundamental research working with people who are closer to the product. It's really an organizational challenge—to ensure there's a set of projects that mature over time and bring the people along with them, rather than have boundaries where you have scientists on one side, and they just throw their research over the fence.
WK: What kinds of new AI products should we expect from Facebook in the near term, then? JP: The two core uses of AI today in Facebook are making the platform safer for users and making sure what we show users is valuable to them. But some of the most exciting things we’re doing are trying to create new experiences that are only possible with AI. Both augmented reality and virtual reality can only exist with AI. We saw recently you can interact with VR using your hands, which requires a really subtle understanding of what’s around the headset. It parses the whole scene using just a camera so that you can use your hands as controllers. I also believe there is huge potential in making people more creative. You’re seeing that with some of the competing offerings like TikTok. Many people create videos and content by interacting naturally with the medium, rather than being a specialist or a video editor or an artist.
WK: Could the technology behind deepfakes perhaps be put to such creative ends? JP: Absolutely. We need to be aware of both sides. There's a lot of potential for making people more creative and empowering them. But as we’ve learned over the past few years, we need to use the technology responsibly, and we need to be aware of the unintended consequences before they happen.
WK: What do you think about the idea of AI export controls? Can the technology be restricted? Would that harm the field? JP: My personal opinion is that this seems very impractical to implement. Beyond that, though, it could negatively impact progress in research, forcing work to be less reproducible rather than more. I believe openness and collaboration is important for driving advances in AI, and restricting the publication or open-sourcing of the results of fundamental research would risk slowing the progress of the field.
That said, whether or not such controls are put in place, as responsible researchers we should continue to consider the risks of potential misapplications and how we can help to mitigate those, while still ensuring that our work advancing AI is as open and reproducible as possible.
"
|
838 | 2,020 |
"Computers Are Learning to See in Higher Dimensions | WIRED"
|
"https://www.wired.com/story/computers-are-learning-to-see-in-higher-dimensions"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons John Pavlus Science Computers Are Learning to See in Higher Dimensions The new deep learning techniques, which have shown promise in identifying lung tumors in CT scans more accurately than before, could someday lead to better medical diagnostics.
Computers can now drive cars, beat world champions at board games like chess and Go, and even write prose.
The revolution in artificial intelligence stems in large part from the power of one particular kind of artificial neural network, whose design is inspired by the connected layers of neurons in the mammalian visual cortex. These “convolutional neural networks” (CNNs) have proved surprisingly adept at learning patterns in two-dimensional data—especially in computer vision tasks like recognizing handwritten words and objects in digital images.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
But when applied to data sets without a built-in planar geometry—say, models of irregular shapes used in 3D computer animation, or the point clouds generated by self-driving cars to map their surroundings—this powerful machine learning architecture doesn’t work well. Around 2016, a new discipline called geometric deep learning emerged with the goal of lifting CNNs out of flatland.
Now researchers have delivered a new theoretical framework for building neural networks that can learn patterns on any kind of geometric surface. These “gauge-equivariant convolutional neural networks,” or gauge CNNs, developed at the University of Amsterdam and Qualcomm AI Research by Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling, can detect patterns not only in 2D arrays of pixels but also on spheres and asymmetrically curved objects. “This framework is a fairly definitive answer to this problem of deep learning on curved surfaces,” Welling said.
Already, gauge CNNs have greatly outperformed their predecessors in learning patterns in simulated global climate data, which is naturally mapped onto a sphere. The algorithms may also prove useful for improving the vision of drones and autonomous vehicles that see objects in 3D, and for detecting patterns in data gathered from the irregularly curved surfaces of hearts, brains, or other organs.
Taco Cohen, a machine learning researcher at Qualcomm and the University of Amsterdam, is one of the lead architects of gauge-equivariant convolutional neural networks.
The researchers’ solution to getting deep learning to work beyond flatland also has deep connections to physics. Physical theories that describe the world, like Albert Einstein’s general theory of relativity and the Standard Model of particle physics, exhibit a property called “gauge equivariance.” This means that quantities in the world and their relationships don’t depend on arbitrary frames of reference (or “gauges”); they remain consistent whether an observer is moving or standing still, and no matter how far apart the numbers are on a ruler. Measurements made in those different gauges must be convertible into each other in a way that preserves the underlying relationships between things.
For example, imagine measuring the length of a football field in yards, then measuring it again in meters. The numbers will change, but in a predictable way. Similarly, two photographers taking a picture of an object from two different vantage points will produce different images, but those images can be related to each other. Gauge equivariance ensures that physicists’ models of reality stay consistent, regardless of their perspective or units of measurement. And gauge CNNs make the same assumption about data.
“The same idea [from physics] that there’s no special orientation—they wanted to get that into neural networks,” said Kyle Cranmer, a physicist at New York University who applies machine learning to particle physics data. “And they figured out how to do it.” Michael Bronstein, a computer scientist at Imperial College London, coined the term “geometric deep learning” in 2015 to describe nascent efforts to get off flatland and design neural networks that could learn patterns in nonplanar data. The term—and the research effort—soon caught on.
Bronstein and his collaborators knew that going beyond the Euclidean plane would require them to reimagine one of the basic computational procedures that made neural networks so effective at 2D image recognition in the first place. This procedure, called “convolution,” lets a layer of the neural network perform a mathematical operation on small patches of the input data and then pass the results to the next layer in the network.
“You can think of convolution, roughly speaking, as a sliding window,” Bronstein explained. A convolutional neural network slides many of these “windows” over the data like filters, with each one designed to detect a certain kind of pattern in the data. In the case of a cat photo, a trained CNN may use filters that detect low-level features in the raw input pixels, such as edges. These features are passed up to other layers in the network, which perform additional convolutions and extract higher-level features, like eyes, tails or triangular ears. A CNN trained to recognize cats will ultimately use the results of these layered convolutions to assign a label—say, “cat” or “not cat”—to the whole image.
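A minimal numpy sketch of that sliding-window picture (a hand-built example, not production CNN code): slide one small filter over a 2D array and record how strongly each patch matches it. Here the filter is fixed by hand to detect vertical edges; a real CNN learns many such filters.

```python
# The "sliding window" picture of convolution, made literal.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # the window at this position
            out[i, j] = np.sum(patch * kernel)  # how well the patch matches
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # bright right half: one vertical edge
vertical_edge = np.array([[-1.0, 1.0]])  # responds to dark-to-light transitions
print(convolve2d(image, vertical_edge))  # peaks at column index 2, where dark meets light
```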
But that approach only works on a plane. “As the surface on which you want to do your analysis becomes curved, then you’re basically in trouble,” said Welling.
Performing a convolution on a curved surface — known in geometry as a manifold — is much like holding a small square of translucent graph paper over a globe and attempting to accurately trace the coastline of Greenland. You can’t press the square onto Greenland without crinkling the paper, which means your drawing will be distorted when you lay it flat again. But holding the square of paper tangent to the globe at one point and tracing Greenland’s edge while peering through the paper (a technique known as Mercator projection) will produce distortions too. Alternatively, you could just place your graph paper on a flat world map instead of a globe, but then you’d just be replicating those distortions—like the fact that the entire top edge of the map actually represents only a single point on the globe (the North Pole). And if the manifold isn’t a neat sphere like a globe, but something more complex or irregular like the 3D shape of a bottle, or a folded protein, doing convolution on it becomes even more difficult.
Bronstein and his collaborators found one solution to the problem of convolution over non-Euclidean manifolds in 2015, by reimagining the sliding window as something shaped more like a circular spiderweb than a piece of graph paper, so that you could press it against the globe (or any curved surface) without crinkling, stretching or tearing it.
Changing the properties of the sliding filter in this way made the CNN much better at “understanding” certain geometric relationships. For example, the network could automatically recognize that a 3D shape bent into two different poses—like a human figure standing up and a human figure lifting one leg—were instances of the same object, rather than two completely different objects. The change also made the neural network dramatically more efficient at learning. Standard CNNs “used millions of examples of shapes [and needed] training for weeks,” Bronstein said. “We used something like 100 shapes in different poses and trained for maybe half an hour.” At the same time, Taco Cohen and his colleagues in Amsterdam were beginning to approach the same problem from the opposite direction. In 2015, Cohen, a graduate student at the time, wasn’t studying how to lift deep learning out of flatland. Rather, he was interested in what he thought was a practical engineering problem: data efficiency, or how to train neural networks with fewer examples than the thousands or millions that they often required. “Deep learning methods are, let’s say, very slow learners,” Cohen said. This poses few problems if you’re training a CNN to recognize, say, cats (given the bottomless supply of cat images on the internet). But if you want the network to detect something more important, like cancerous nodules in images of lung tissue, then finding sufficient training data — which needs to be medically accurate, appropriately labeled, and free of privacy issues — isn’t so easy. The fewer examples needed to train the network, the better.
Cohen knew that one way to increase the data efficiency of a neural network would be to equip it with certain assumptions about the data in advance — like, for instance, that a lung tumor is still a lung tumor, even if it’s rotated or reflected within an image. Usually, a convolutional network has to learn this information from scratch by training on many examples of the same pattern in different orientations. In 2016, Cohen and Welling co-authored a paper defining how to encode some of these assumptions into a neural network as geometric symmetries. This approach worked so well that by 2018, Cohen and co-author Marysia Winkels had generalized it even further, demonstrating promising results on recognizing lung cancer in CT scans: Their neural network could identify visual evidence of the disease using just one-tenth of the data used to train other networks.
The Amsterdam researchers kept on generalizing. That’s how they found their way to gauge equivariance.
Physics and machine learning have a basic similarity. As Cohen put it, “Both fields are concerned with making observations and then building models to predict future observations.” Crucially, he noted, both fields seek models not of individual things — it’s no good having one description of hydrogen atoms and another of upside-down hydrogen atoms — but of general categories of things. “Physics, of course, has been quite successful at that.” Equivariance (or “covariance,” the term that physicists prefer) is an assumption that physicists since Einstein have relied on to generalize their models. “It just means that if you’re describing some physics right, then it should be independent of what kind of ‘rulers’ you use, or more generally what kind of observers you are,” explained Miranda Cheng, a theoretical physicist at the University of Amsterdam who wrote a paper with Cohen and others exploring the connections between physics and gauge CNNs. Or as Einstein himself put it in 1916: “The general laws of nature are to be expressed by equations which hold good for all systems of coordinates.”
Convolutional networks became one of the most successful methods in deep learning by exploiting a simple example of this principle called “translation equivariance.” A window filter that detects a certain feature in an image — say, vertical edges — will slide (or “translate”) over the plane of pixels and encode the locations of all such vertical edges; it then creates a “feature map” marking these locations and passes it up to the next layer in the network. Creating feature maps is possible because of translation equivariance: The neural network “assumes” that the same feature can appear anywhere in the 2D plane and is able to recognize a vertical edge as a vertical edge whether it’s in the upper right corner or the lower left.
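That property is easy to check numerically. The sketch below uses circular convolution so the check is exact even at the borders (an assumption for the demo; real CNNs handle borders differently): shifting the input and then filtering gives the same feature map as filtering and then shifting.

```python
# Numerical check of translation equivariance on a toroidal (wraparound) grid.
import numpy as np

def circular_convolve(image, kernel):
    """Correlation with wraparound, so the grid behaves like a torus."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            rows = np.arange(i, i + kh) % H
            cols = np.arange(j, j + kw) % W
            out[i, j] = np.sum(image[np.ix_(rows, cols)] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = np.array([[-1.0, 1.0]])  # the vertical-edge filter again

shifted_then_filtered = circular_convolve(np.roll(image, 2, axis=1), kernel)
filtered_then_shifted = np.roll(circular_convolve(image, kernel), 2, axis=1)
print(np.allclose(shifted_then_filtered, filtered_then_shifted))  # True
```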
“The point about equivariant neural networks is [to] take these obvious symmetries and put them into the network architecture so that it’s kind of free lunch,” Weiler said.
By 2018, Weiler, Cohen and their doctoral supervisor Max Welling had extended this “free lunch” to include other kinds of equivariance. Their “group-equivariant” CNNs could detect rotated or reflected features in flat images without having to train on specific examples of the features in those orientations; spherical CNNs could create feature maps from data on the surface of a sphere without distorting them as flat projections.
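In the same spirit, a toy version of the group-equivariant idea (an illustration, not the authors' implementation) shares one filter across its four 90-degree rotations and pools over them, so a pattern is detected equally well in any of those orientations without ever training on rotated examples:

```python
# Toy rotation handling via weight sharing: one filter, four orientations.
import numpy as np

def circular_convolve(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            rows, cols = np.arange(i, i + kh) % H, np.arange(j, j + kw) % W
            out[i, j] = np.sum(image[np.ix_(rows, cols)] * kernel)
    return out

def best_response_over_rotations(image, kernel):
    """Apply the filter in all four 90-degree orientations; keep the strongest match."""
    return max(circular_convolve(image, np.rot90(kernel, k)).max() for k in range(4))

image = np.zeros((8, 8))
image[2:6, 3] = 1.0       # a vertical bar
kernel = np.ones((3, 1))  # a filter for short vertical bars

print(best_response_over_rotations(image, kernel))            # 3.0
print(best_response_over_rotations(np.rot90(image), kernel))  # 3.0 again
```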
These approaches still weren’t general enough to handle data on manifolds with a bumpy, irregular structure — which describes the geometry of almost everything, from potatoes to proteins, to human bodies, to the curvature of space-time. These kinds of manifolds have no “global” symmetry for a neural network to make equivariant assumptions about: Every location on them is different.
The challenge is that sliding a flat filter over the surface can change the orientation of the filter, depending on the particular path it takes. Imagine a filter designed to detect a simple pattern: a dark blob on the left and a light blob on the right. Slide it up, down, left or right on a flat grid, and it will always stay right-side up. But even on the surface of a sphere, this changes. If you move the filter 180 degrees around the sphere’s equator, the filter’s orientation stays the same: dark blob on the left, light blob on the right. However, if you slide it to the same spot by moving over the sphere’s north pole, the filter is now upside down — dark blob on the right, light blob on the left. The filter won’t detect the same pattern in the data or encode the same feature map. Move the filter around a more complicated manifold, and it could end up pointing in any number of inconsistent directions.
Luckily, physicists since Einstein have dealt with the same problem and found a solution: gauge equivariance.
The key, explained Welling, is to forget about keeping track of how the filter’s orientation changes as it moves along different paths. Instead, you can choose just one filter orientation (or gauge), and then define a consistent way of converting every other orientation into it.
The catch is that while any arbitrary gauge can be used in an initial orientation, the conversion of other gauges into that frame of reference must preserve the underlying pattern — just as converting the speed of light from meters per second into miles per hour must preserve the underlying physical quantity. With this gauge-equivariant approach, said Welling, “the actual numbers change, but they change in a completely predictable way.” Cohen, Weiler and Welling encoded gauge equivariance — the ultimate “free lunch” — into their convolutional neural network in 2019. They did this by placing mathematical constraints on what the neural network could “see” in the data via its convolutions; only gauge-equivariant patterns were passed up through the network’s layers. “Basically you can give it any surface” — from Euclidean planes to arbitrarily curved objects, including exotic manifolds like Klein bottles or four-dimensional space-time — “and it’s good for doing deep learning on that surface,” said Welling.
The theory of gauge-equivariant CNNs is so generalized that it automatically incorporates the built-in assumptions of previous geometric deep learning approaches — like rotational equivariance and shifting filters on spheres. Even Michael Bronstein’s earlier method, which let neural networks recognize a single 3D shape bent into different poses, fits within it. “Gauge equivariance is a very broad framework. It contains what we did in 2015 as particular settings,” Bronstein said.
A gauge CNN would theoretically work on any curved surface of any dimensionality, but Cohen and his co-authors have tested it on global climate data, which necessarily has an underlying 3D spherical structure. They used their gauge-equivariant framework to construct a CNN trained to detect extreme weather patterns, such as tropical cyclones, from climate simulation data.
In 2017, government and academic researchers used a standard convolutional network to detect cyclones in the data with 74% accuracy; last year, the gauge CNN detected the cyclones with 97.9% accuracy. (It also outperformed a less general geometric deep learning approach designed in 2018 specifically for spheres — that system was 94% accurate.) Mayur Mudigonda, a climate scientist at Lawrence Berkeley National Laboratory who uses deep learning, said he’ll continue to pay attention to gauge CNNs. “That aspect of human visual intelligence” — spotting patterns accurately regardless of their orientation — “is what we’d like to translate into the climate community,” he said. Qualcomm, a chip manufacturer which recently hired Cohen and Welling and acquired a startup they built incorporating their early work in equivariant neural networks, is now planning to apply the theory of gauge CNNs to develop improved computer vision applications, like a drone that can “see” in 360 degrees at once. (This fish-eye view of the world can be naturally mapped onto a spherical surface, just like global climate data.) Meanwhile, gauge CNNs are gaining traction among physicists like Cranmer, who plans to put them to work on data from simulations of subatomic particle interactions. “We’re analyzing data related to the strong [nuclear] force, trying to understand what’s going on inside of a proton,” Cranmer said. The data is four-dimensional, he said, “so we have a perfect use case for neural networks that have this gauge equivariance.” Risi Kondor, a former physicist who now studies equivariant neural networks, said the potential scientific applications of gauge CNNs may be more important than their uses in AI.
“If you are in the business of recognizing cats on YouTube and you discover that you’re not quite as good at recognizing upside-down cats, that’s not great, but maybe you can live with it,” he said. But for physicists, it’s crucial to ensure that a neural network won’t misidentify a force field or particle trajectory because of its particular orientation. “It’s not just a matter of convenience,” Kondor said—“it’s essential that the underlying symmetries be respected.” But while physicists’ math helped inspire gauge CNNs, and physicists may find ample use for them, Cohen noted that these neural networks won’t be discovering any new physics themselves. “We’re now able to design networks that can process very exotic kinds of data, but you have to know what the structure of that data is” in advance, he said. In other words, the reason physicists can use gauge CNNs is because Einstein already proved that space-time can be represented as a four-dimensional curved manifold. Cohen’s neural network wouldn’t be able to “see” that structure on its own. “Learning of symmetries is something we don’t do,” he said, though he hopes it will be possible in the future.
Cohen can’t help but delight in the interdisciplinary connections that he once intuited and has now demonstrated with mathematical rigor. “I have always had this sense that machine learning and physics are doing very similar things,” he said. “This is one of the things that I find really marvelous: We just started with this engineering problem, and as we started improving our systems, we gradually unraveled more and more connections.”
"
|
839 | 2,018 |
"How Amazon Alexa Uses Machine Learning to Get Smarter | WIRED"
|
"https://www.wired.com/story/amazon-alexa-2018-machine-learning"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Brian Barrett Gear The Year Alexa Grew Up Amazon senior vice president of devices and services Dave Limp at an Alexa-focused event in September.
Grant Hindsley/AFP/Getty Images Save this story Save Save this story Save It’s fair to say that when Amazon introduced the first Echo speaker in the fall of 2014, most people weren’t quite sure what to make of it. In the intervening years, Echo and the broader universe of Alexa-powered devices have transitioned from curiosity to ubiquity. But while you can find Alexa in just about everything —including, yes, a microwave —the real progress Amazon’s voice assistant made in 2018 came less from breadth than from depth.
That’s not to say it hasn’t made gains of scale. Amazon’s voice assistant has doubled the number of countries where it’s available, for starters, learning how to speak French and Spanish along the way. More than 28,000 smart home devices work with Alexa now, six times as many as at the beginning of the year. And more than 100 distinct products have Alexa built in. If you’re looking for some sort of tipping point, consider that, as of last month, you can buy an Alexa-compatible Big Mouth Billy Bass.
It’s how Alexa evolves under the hood, though, that has defined this year—and how it will continue to inch toward its full potential in those to come. Alexa has gotten smarter, in ways so subtle you might not yet have even noticed.
Because many voice assistant improvements aim to reduce friction, they’re almost invisible by design. Over the past year, Alexa has learned how to carry over context from one query to the next, and to register follow-up questions without having to repeat the wake word. You can ask Alexa to do more than one thing in the same request, and summon a skill—Alexa’s version of apps—without having to know its exact name.
Those may sound like small tweaks, but cumulatively they represent major progress toward a more conversational voice assistant, one that solves problems rather than introducing new frustrations. You can talk to Alexa in a far more natural way than you could a year ago, with a reasonable expectation that it will understand what you’re saying.
Those gains have come, unsurprisingly, through the continued introduction and refinement of machine learning techniques. So-called active learning, in which the system identifies areas in which it needs help from a human expert, has helped substantially cut down on Alexa’s error rates. “That’s fed into every part of our pipeline, including speech recognition and natural language understanding,” says Rohit Prasad, vice president and chief scientist of Amazon Alexa. “That makes all of our machine learning models look better.” More recently, Amazon introduced what’s known as transfer learning to Alexa. Prasad gives the example of trying to build a recipe skill from scratch—which anyone can do, thanks to Amazon’s recently introduced skills “blueprints”.
Developers could potentially harness everything Alexa knows about restaurants, say, or grocery items to help cut down on the grunt work they’d otherwise face. "Essentially, with deep learning we’re able to model a large number of domains and transfer that learning to a new domain or skill,” Prasad says.
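A generic sketch of the active-learning step described above (with invented intent scores, not Amazon's pipeline): the model ranks incoming utterances by how unsure it is about them, and only the most uncertain ones are routed to a human expert for labeling.

```python
# Uncertainty sampling: spend human labeling effort where the model is least sure.
import math

# Stand-in model confidences for a batch of transcribed requests (invented numbers).
model_scores = {
    "play XM chill":      {"play_music": 0.52, "weather": 0.48},  # very unsure
    "what's the weather": {"play_music": 0.03, "weather": 0.97},  # confident
    "turn it up":         {"play_music": 0.65, "weather": 0.35},
    "set a timer":        {"play_music": 0.10, "weather": 0.90},
}

def entropy(probs):
    """Higher entropy means the model is less sure which intent is meant."""
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

def pick_for_annotation(scores, budget=2):
    """Send only the most uncertain utterances to a human expert for labeling."""
    ranked = sorted(scores, key=lambda u: entropy(scores[u]), reverse=True)
    return ranked[:budget]

print(pick_for_annotation(model_scores))  # ['play XM chill', 'turn it up']
```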
The benefits of the machine learning improvements manifest themselves across all aspects of Alexa, but the simplest argument for its impact is that the system has seen a 25 percent reduction in its error rate over the last year. That’s a significant number of headaches Echo owners no longer have to deal with.
And more advances are incoming. Just this month, Alexa launched self-learning, which lets the system automatically make corrections based on context clues. Prasad again provides an example: Say you ask your Echo to “play XM Chill,” and the request fails because Alexa doesn’t catalogue the station that way. If you follow up by saying “play Sirius channel 53,” and continue listening, Alexa will learn that XM Chill and Sirius channel 53 are the same, all on its own. “That’s a big deal for AI systems,” says Prasad. “This is where it’s learning from implicit feedback.” The next frontier, though, gets a little trickier. Amazon wants Alexa to get smarter, obviously, and better at anticipating your needs at any given time. It also, though, wants Alexa to better understand not just what you’re saying but how you say it.
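Prasad's XM Chill example can be reduced to a simple pattern, sketched below with invented data structures (not Alexa's system): when a failed request is quickly followed by a successful rephrasing that the user sticks with, remember the first phrase as an alias for the second.

```python
# Learning a correction from implicit feedback, with no human labeling.
aliases = {}

def observe_session(events):
    """events: ordered (utterance, request_succeeded, user_kept_listening) tuples."""
    for (utt1, ok1, _), (utt2, ok2, kept) in zip(events, events[1:]):
        if not ok1 and ok2 and kept:
            aliases[utt1] = utt2  # a correction inferred from behavior alone

observe_session([
    ("play XM Chill", False, False),         # failed request
    ("play Sirius channel 53", True, True),  # rephrased, worked, user kept listening
])
print(aliases.get("play XM Chill"))  # -> "play Sirius channel 53"
```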
“When two humans are talking, they’re actually pretty good at understanding sentiment. But these systems are essentially clueless about it,” says Alex Rudnicky, a speech recognition expert at Carnegie Mellon University. “People are trying to develop capabilities that make them a little more sophisticated, more humanlike in their ability to understand how a conversation is going.” Amazon already made headlines this fall over a patent that described technology allowing Alexa to recognize the emotions of users and respond accordingly. Those headlines were not glowing. A device that always listens to you is already a step too far for many; one that infers how you’re feeling escalates that discomfort dramatically.
Prasad says the ultimate goal for Alexa is long-range conversation capabilities. As part of that, it might respond differently to a given question based on how you asked it. And while it’s important to have these conversations now, it’s worth noting that a voice assistant that truly understands the subtleties of your intonations remains, for the most part, a ways off.
“If you look at the big five emotions,” Rudnicky says, “the one thing people have been successful in detecting is anger.” As the number of Alexa devices has exploded, so too have the skills. Amazon now counts 70,000 of them in its stable, from quizzes to games to meditation and more. That’s seven times the number it had just under two years ago.
It’s here, though, that Alexa’s room for improvement begins to show. The assistant has gotten better at anticipating what skills people might want to use, but discovery remains a real problem. Not only do Alexa owners miss out on potential uses for their devices beyond a fancy kitchen timer, but developers also have less incentive to invest time in a platform where they may well remain invisible.
The answer can’t come entirely from deep learning, either. That can surface the most relevant skill at any given moment, but voice assistants have so much potential beyond immediate, functional needs. Think of skills like The Magic Door, an interactive fantasy game on Alexa that launched in 2016. If all you’ve used Alexa for is listening to NPR and checking the weather, it’s hard to see how the algorithm would alert you to its existence. And even more straightforward suggestions aren’t necessarily always welcome.
“It can be an engaging experience if we introduce customers to new skills and new capabilities, if it’s highly relevant to what they’re doing,” says Toni Reid, vice president of Alexa experience and Echo devices. “But you have to be really careful in those use cases, because it may be overload. It’s sort of the right time at the right moment, the right amount of content.” Amazon will also need to figure out how to fend off Google, whose Google Assistant has closed the voice-control gap considerably despite a late start. Canalys Research estimates that 6.3 million Echo smart speakers shipped in the third quarter of this year, just ahead of Google’s 5.9 million smart speakers.
The race isn’t quite as close as those numbers make it seem; it doesn’t include third-party devices, an arena where Alexa dominates, and a three-month snapshot belies the huge install base Amazon has built up over the past four years. Still, Google has advantages that Amazon can’t ignore.
“They had years of experience with AI, whereas Alexa was built from the ground up,” says Canalys analyst Vincent Thielke. “Because Google’s AI was so advanced, it was very easy to catch up.” Similarly, by virtue of Android, Android Auto, and WearOS, Google has more places it can seed Google Assistant. With the spectacular failure of the Fire Phone—also launched in 2014—Amazon’s mobile options are limited. The company is faring better in cars, but still lags behind Google and Apple in native integrations, which has led to the introduction of hardware add-ons like Echo Auto.
Still, Alexa has shown no signs of slowing down. There’s now Alexa Guard to watch after your home when you’re gone. There’s Alexa Answers, a sort of voice-assistant hybrid of Quora and Wikipedia. There’s Alexa Donations and Alexa Captions and Alexa Hunches and Alexa Routines.
It’s a lot. But if you want to know where Alexa is headed next, well, you know who to ask.
"
|
840 | 2017 |
"Who's Home at the White House Science and Technology Office? | WIRED"
|
"https://www.wired.com/story/whos-home-at-the-white-house-science-and-technology-office"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Dave Levitan Science Who's Home at the White House Science and Technology Office? Members of the Texas National Guard prepare recovery efforts after Hurricane Harvey on September 6, 2017 in Orange, TX.
Spencer Platt/Getty Images Save this story Save Save this story Save In late October 2012, as Hurricane Sandy barreled toward New York and New Jersey, barrels of information from the National Weather Service, NASA, and elsewhere inundated the White House. As the storm picked up, experts at the Office of Science and Technology Policy, or OSTP, started closely monitoring storm track modeling from NOAA, satellite imagery from NASA, even detailed information such as the level of wind shear at various elevations within the storm.
It fell to John Holdren, the director of that office and President Obama’s science adviser, to make sure the best information was being used to prepare. And behind Holdren was the 100-plus staff of OSTP, the relatively unknown office that, at least during the Obama years, played an outsized role in the government response to disasters, be they storms or oil spills or West African disease outbreaks.
Today, as Hurricane Irma has finally fizzled out and response and recovery efforts for it and Hurricane Harvey ramp up (and Jose twirls menacingly out over the Atlantic), an OSTP official says it is still heavily engaged in these processes. But the office has been radically transformed under the Trump administration: The official told WIRED the current staffing level is only 42, down from more than 130 during the Obama years, and the president has put forward no nominee for its director. Even if the remaining staff members are working diligently to aid in disaster preparedness, it is likely that all that work has little if any connection to the president’s inner circle.
During Hurricane Sandy, those connections were clear. Tamara Dickinson, the director of the energy and environment division at OSTP under Obama, served as a point person on disasters, including hurricanes. “Holdren used to call me the Disaster Queen,” she says. “He used to say that if he had an email from me in his inbox when he got up at 5:30 or 6 o’clock, he knew it was going to be a bad day.” Each day as the hurricane approached, the Disaster Queen would send the science adviser an update on what the hurricane was doing—potential track, why the forecasts had changed and what it meant when they did, what the presence of a certain high- or low-pressure system meant for the next several days, and so on. These could run upwards of five pages, delving deep into the nitty gritty of hurricane science.
Holdren could then use this raft of technical detail to serve as an information resource for the White House—senior advisers, the chief of staff, and the president. Interagency coordination is crucial when disaster strikes, and having a centralized source for technical information and expertise can help that sort of coordination, as well as help guide top-level decision-making at the White House, Dickinson says. The primary White House body responsible for responding to catastrophes natural or otherwise is the National Security Council (current website status: “Check back soon for more information”), and OSTP also maintained strong relationships with the staff there.
Today, whether OSTP’s expertise is at all connected up the chain is unclear, since what was once Holdren’s position remains vacant. And while OSTP says that disaster response across agencies is being coordinated by Assistant Director for Natural Disaster Resilience Jacqueline Meszaros, whose tenure at OSTP predates the new administration, questions about whether Meszaros actually has access to the president or top advisers—as Dickinson, through Holdren, did—have gone unanswered.
Aside from those immediate preparations, the office can also serve as a connection to the universe of expertise outside the government. As Hurricane Sandy’s water began to recede and the recovery began, the Obama OSTP began leveraging those connections. Brian Forde, who at the time was the senior adviser to the US Chief Technology Officer for Mobile and Data Innovation (he is now running for Congress in California’s 45th district), coordinated a number of projects with large tech companies, open source coders, and even high school students to help people in the aftermath of the storm.
Forde remembers being called to the Situation Room to explain to Cabinet members how various tech companies’ platforms could be used to aid the response to Sandy. “You’re not allowed to bring technology into the Situation Room, so I had to print out screenshots of what these tools could do and then demonstrate to them the value of it,” he says. As he started passing out the screenshots, leaders of the various agencies including FEMA’s Craig Fugate, Homeland Security Secretary Janet Napolitano, and others “jumped out of their seats and huddled around the table,” eager to see how Airbnb, Google, and others might help deal with the array of individual crises a large storm leaves in its wake.
In the wake of Harvey and Irma, it’s unclear if anyone from OSTP is filling that technological role. But as Forde notes, the whole point of the Sandy efforts was to make tools available for the next storm, regardless of who occupies the Oval Office.
Airbnb continues to help find housing for victims of Harvey and Irma, and Google’s Crisis Map is operating for both storms as well—both projects that OSTP helped coordinate after Sandy.
OSTP is adamant that it is still helping with hurricane response. But the details of that involvement and how it corresponds with the previous administration’s efforts are unclear. Questions about the specific response efforts have not been answered by the OSTP source or by the White House press office.
“One thing that OSTP has traditionally been helpful in is identifying gaps and needs that exist, communicating those gaps and needs to the science and technology community beyond the federal government, and calling that community to action in creating these solutions,” says Cristin Dorgelo, the OSTP chief of staff under Obama. “That role, without a strong OSTP, is not being led, as far as I can tell.” Happily, the sort of institutional resiliency Forde described is apparent elsewhere in government too. FEMA’s response to Harvey has been praised, with much credit going to the Obama administration and career civil servants who worked hard to prepare for future storms after Katrina and Sandy. (Thanks in large part to Dickinson’s efforts to mobilize the disaster science community, the Hurricane Sandy Rebuilding Task Force’s final report included substantial science input from OSTP and elsewhere, helping ensure the scientific basis for its conclusions was strong.) But the true test will be in recovery, and with the unexpected perils that emerge after a storm of this scale hits—like, for example, the chemical plant explosions and spills facing the Houston area. With the Environmental Protection Agency suffering the foundational indignities of an ongoing dead-of-night dismantling, a solid source of expertise close to the White House might help guide the response.
“OSTP’s role was to bring science to the table, and to make sure the science was as accurate as possible,” Dickinson says. Today? “My hunch is that’s not going on. But it’s just a hunch.”
"
|
841 | 2017 |
"One State's Bail Reform Exposes the Promise and Pitfalls of Tech-Driven Justice | WIRED"
|
"https://www.wired.com/story/bail-reform-tech-justice"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Issie Lapowsky Security One State's Bail Reform Exposes the Promise and Pitfalls of Tech-Driven Justice Judge Ernest Caposela was one of the early advocates of the use of risk assessment tools for bail reform in New Jersey.
Issie Lapowsky/WIRED Save this story Save Save this story Save Jaquan Lugo stood stone-faced and somber inside a circular, wood-paneled courtroom on a Thursday afternoon in Paterson, New Jersey, as Superior Court Judge Donna Gallucio considered her options.
Just four days prior, the 22-year-old and two other men were arrested in Paterson, accused of six counts of attempted murder and various gun charges after a predawn drive-by shooting left a 17-year-old girl with a life-threatening wound near her lung. An off-duty officer heard the shots just a few blocks away and gave chase to the fleeing vehicle, a 2002 Jaguar, as someone inside the car fired back at him. Lugo and two of the car’s other occupants—Kashief Davis, 24, and Andre Green, 20—allegedly got out of the car and tried to escape on foot before they were caught and brought to Passaic County Jail.
The victim’s friends and family crowded into the courtroom benches to hear the decision on Lugo’s fate, but Judge Gallucio wasn’t there to determine his ultimate sentence. She was, instead, deciding whether Lugo should spend the months leading up to his trial in the county jail or at home.
Standing shoulder-to-shoulder with his client, Lugo’s lawyer, Gregory Aprile, argued for pretrial release, imploring Gallucio to consider one crucial factor: A new algorithmic tool that purports to predict a defendant's likelihood to reoffend, or to fail to appear in court, ranked Lugo as fairly low-risk. On an escalating scale of 1 to 6, it rated Lugo a 2 for failure to appear and a 3 for likelihood of reoffending.
“They aren’t arbitrary numbers,” Lugo's attorney said of the so-called Public Safety Assessment, or PSA. “It was the result of millions of statistics from around the country.” This may seem like an unusually technocratic approach to public defense. But it’s not so unusual anymore, at least not in New Jersey, where the state has recently undergone a holistic technological transformation of its arcane court system, all in the service of eliminating the use of bail statewide.
New Jersey is far from the only state government taking a critical look at the centuries-old bail system in America. Politicians on both sides of the aisle, from California senator Kamala Harris to Kentucky senator Rand Paul, argue that bail sets up a two-tiered justice system, in which the wealthy can buy their way to freedom while the poor remain locked up until their day in court comes. In 2016, the Department of Justice, under President Obama, also issued a Dear Colleague letter to state and local courts around the country, advising them that courts “must not employ bail or bond practices that cause indigent defendants to remain incarcerated solely because they cannot afford to pay for their release.” As it turned out, that described a large percentage of people who have spent time in New Jersey jails, according to one 2013 study by the New Jersey Drug Policy Alliance. The advocacy group found that some 75 percent of New Jersey’s jail population at any given moment was simply awaiting trial, and 40 percent of jailed people were there because they couldn’t afford $2,500 or less in bail. On average, people spent 10 months in jail before even getting to trial. Meanwhile, because New Jersey prohibited even the most violent criminals from being detained without bail, judges often had to set exorbitant bail amounts to keep violent offenders off the streets; sometimes, those people made bail anyway.
“You had a situation where if you had money in New Jersey, no matter how serious your offense was, you could pay and walk away pending trial,” says Roseanne Scotti, the Drug Policy Alliance’s senior director in New Jersey. “If you didn’t have money, no matter how minor your offense was, you sat in jail for months.” That system also meant people could pay for-profit bail bondsmen a small fraction of the 10 percent of their bail they needed to pay to get out of jail, only to owe even more money to the bondsmen over the long term. Not only did that create a predatory industry but, says Passaic County Assignment Judge Ernest Caposela, “A lot of dangerous people were making it out on bail.” Driven by advocates like Scotti, as well as the American Civil Liberties Union, New Jersey Governor Chris Christie signed the so-called Bail Reform and Speedy Trial Act, which went into effect on January 1 and is designed to virtually eliminate bail in the state. Of all of the attempts to curb the use of bail nationwide, New Jersey's approach is perhaps the most audacious. Pulling it off has required the state to harness the power of tech, not only to move people through the system more quickly but also to analyze who is least likely to pose a risk to society upon release.
Just months in, the experiment has already made an impact. New Jersey saw a 19 percent reduction in its jail population overall between January 1 and May 31 of this year, with just eight people being held on bail throughout the entire state over that time period. Others are either being released with certain conditions or detained without bail.
[Photo: Duane Chapman filming a segment of his television show during a news conference with Governor Andrew Cuomo on June 28, 2015 in Malone, NY. Scott Olson/Getty Images]
This shift has also prompted a number of lawsuits, including one filed by the mother of Christian Rodgers, a 26-year-old man who was allegedly murdered by a man named Jules Black, just days after he was released from jail without bail earlier this year. That suit targets both Christie as well as the Arnold Foundation, the nonprofit organization that designed the PSA tool.
Perhaps unsurprisingly, the case is backed by the bail-bond industry, including reality star Duane Chapman, better known as Dog the Bounty Hunter. They argue that tech tools like the PSA offer a dangerously inadequate way of distributing justice. Scotti argues the bail-bond industry cares more about its bottom line than the public well-being. Now, as states across the country look to tech tools to reform their jail and prison systems, New Jersey’s experiment illustrates both the promises and pitfalls of using technology to determine who does and doesn’t remain behind bars.
“Today is a great day,” says John Harrison, clasping his hands together. Harrison helps run the county’s newly created pretrial services division. Housed in a charmless, cubicle-filled office adjacent to Paterson’s historic courthouse, it’s the group responsible for running risk assessments on every person who enters the county jail.
The source of Harrison’s delight? Today his team will move 23 people—more than average—through their pretrial hearings. They include a woman accused of prostitution, another accused of credit card theft and burglary, another accused of child endangerment, and a man charged with assault and disorderly conduct. One by one, their faces will appear on a television screen inside Judge Abdelmageid Abdelhadi’s courtroom, where he will rattle off their rights, their charges, and their PSA risk scores with the quick-tongued elocution of an auctioneer. Last year, this would be the part where the judge sets bail. Now, after the reform efforts, it’s where he tells each defendant whether they’re being released today no strings attached, released today with some type of monitoring, or whether the prosecutor is filing to have them detained until trial. The PSA score is one of several factors he considers. Within three minutes flat, he’ll wish each defendant good luck, before calling the next in line and reciting the same script. In Paterson, a once-illustrious industrial town now riddled with crime, that counts as progress.
When Harrison started working for the courts back in 1995, this whole process dragged. When a new person came into the jail, Harrison and others like him would have to go to the court’s physical Rolodex machine and pull index cards on a given defendant’s record, in order to help the judge make bail decisions. That task was made all the more complicated, given the fact that so many defendants go by a slew of aliases that could be tough to keep straight. “God help you if the Rolodex machine broke,” Harrison says.
The longer it took to compile all that information, the longer people waited in jail before they ever even got a chance to plead their cases.
Today, when law enforcement arrests someone in New Jersey, they take his fingerprint and enter it into their Livescan system. That system automatically sweeps both the FBI’s database and the statewide database for a person’s complete criminal record. Last year, the state began using an IBM tool to search through its 40 million records and rule out possible duplicates, to ensure that one John Smith isn’t picking up some other John Smith’s rap sheet. Likewise, if two records for John Smith are virtually identical, except their birthdays are one digit off, the system will look at the statistical likelihood they’re actually the same person. The IBM tool also searches for possible aliases, where records filed under different names might, say, contain the same Social Security number and date of birth.
To illustrate just how useful that is, Harrison pulls the rap sheet of a man who goes alternately by Thomas Ali, Barry Simpson, and a handful of iterations of those names, whose fingerprint matches 70 separate arrests. In the old days, it could have taken Harrison’s team weeks to track down that information. Under the new system, it takes about two minutes, including the time it takes to calculate the PSA score. Mr. Ali (or Simpson) scored a 6-6, the highest risk on both the failure-to-appear and likelihood-of-committing-a-new-crime scales. It’s unlikely he would walk free before his trial, even under bail reform. But, the thinking goes, condensing that decision-making process into a matter of minutes gives the courts the time they need to assess other defendants---ones who might very well walk free---more quickly.
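The article doesn’t disclose IBM’s actual matching logic, but the two signals it describes (near-identical records whose birth dates differ by a single digit, and different names that share a Social Security number and date of birth) can be sketched with standard string-similarity tools. The weights and threshold below are invented purely for illustration:

```python
from difflib import SequenceMatcher

# Illustrative record-matching sketch; weights and threshold are invented,
# not IBM's. Signals from the article: a one-digit birthday discrepancy on
# otherwise-similar records, and aliases sharing an SSN and date of birth.

def same_person_score(a, b):
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    dob_digit_diffs = sum(x != y for x, y in zip(a["dob"], b["dob"]))
    score = 0.4 * name_sim
    if dob_digit_diffs <= 1:           # birthdays identical or one digit off
        score += 0.3
    if a["ssn"] == b["ssn"]:           # alias sharing the same SSN
        score += 0.3
    return score

rec1 = {"name": "Thomas Ali", "dob": "1970-03-14", "ssn": "123-45-6789"}
rec2 = {"name": "Barry Simpson", "dob": "1970-03-24", "ssn": "123-45-6789"}
print(same_person_score(rec1, rec2) >= 0.5)    # True: flagged as likely alias
```

Real entity-resolution systems weigh many more fields and estimate match probabilities statistically, but the basic move is the same: score pairs of records and flag the ones that clear a threshold for human review.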
The judiciary’s IT department has added other time-saving tools to the mix, too. It built a so-called virtual courtroom, so judges can hold pre-trial hearings on weekends when the courts are closed. Now, the team is tinkering around with voice-recognition technology that can save judges time when filling out detention orders. “We’re trying to look at the technology from the standpoint of eliminating as much of the clerical functions of whoever touches the case as possible,” says Judge Caposela.
By far the most controversial element of the state’s technological transformation is the risk score itself. Similar assessments have popped up across the country, from Miami to San Antonio, put to use for everything from bail reform to decisions on which defendants most need mental health assistance. Not all of these algorithms are created equal. One ProPublica investigation found that a tool called Compas, which was used in sentencing decisions, overwhelmingly rated black defendants higher risk than white defendants.
“Algorithms and predictive tools are only as good as the data that’s fed into them,” Ezekiel Edwards, director of the ACLU’s criminal law reform project, recently told WIRED. “Much of that data is created by man, and that data is infused with bias.” The Arnold Foundation, which designed New Jersey’s PSA tool, now used in several states and dozens of local jurisdictions, attempts to sidestep that problem by vastly limiting the number of risk factors it considers to eliminate racial or gender indicators. The Foundation analyzed 1.5 million pre-trial records from across the country and narrowed its algorithm down to look at just nine risk factors: the person’s age at the current arrest, whether the current offense is violent, pending charges at the time of the offense, prior misdemeanor convictions, prior felony convictions, whether those prior convictions were for violent crimes, prior failure to appear in the past two years, prior failure to appear instances that are older than two years, and prior incarceration sentences. Unlike other tools, it doesn’t weigh factors like education, income, or employment, any of which might disadvantage certain demographic groups.
“An effective risk assessment must be gender and race neutral,” says Judge Caposela, one of the PSA’s early evangelists in New Jersey. “The more risk factors you have, the less likely you’ll be able to eliminate gender and racial bias.” Even so, Leila Walsh, a spokesperson for the Arnold Foundation, cautions that the PSA scores are meant to serve merely as a baseline for the courts. "The decision about what to do always rests with the judge," says Walsh. States including New Jersey often couple the PSA with another set of parameters that could, for instance, flag defendants who have been charged with domestic violence, or who have been re-arrested while out on bail in the past.
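The nine inputs are public, but the PSA’s actual point weights aren’t reproduced here. The toy function below shows only the general shape of such a checklist-style tool; its weights and cut points are invented for illustration and are not the Arnold Foundation’s scoring:

```python
# Toy PSA-style assessment. The nine inputs mirror the factors listed in the
# article; the weights and cut points are invented for illustration and are
# NOT the Arnold Foundation's actual scoring.

def toy_new_crime_scale(d):
    points = 0
    points += 2 if d["age_at_arrest"] < 23 else 0
    points += 1 if d["current_offense_violent"] else 0
    points += 1 if d["pending_charges"] else 0
    points += 1 if d["prior_misdemeanors"] > 0 else 0
    points += 1 if d["prior_felonies"] > 0 else 0
    points += 2 if d["prior_violent_convictions"] > 0 else 0
    points += 1 if d["fta_last_two_years"] > 0 else 0
    points += 1 if d["fta_older_than_two_years"] > 0 else 0
    points += 1 if d["prior_incarceration"] else 0
    return min(1 + points // 2, 6)     # collapse raw points onto the 1-6 scale

defendant = {
    "age_at_arrest": 22, "current_offense_violent": True,
    "pending_charges": False, "prior_misdemeanors": 0, "prior_felonies": 1,
    "prior_violent_convictions": 1, "fta_last_two_years": 0,
    "fta_older_than_two_years": 1, "prior_incarceration": True,
}
print(toy_new_crime_scale(defendant))  # -> 5 on the toy scale
```

As the article notes, flags like “no release recommended” and state-specific rules sit on top of the numeric scales, and the final call remains the judge’s.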
The Arnold Foundation's stripped-down risk assessment has still faced a fair bit of backlash. As WIRED recently reported, researchers have criticized the Foundation for making municipalities sign a confidentiality clause. Peter McAleer, a spokesperson for New Jersey’s courts, says the state has no such agreement with the Arnold Foundation.
This lack of transparency has become central to lawsuits surrounding the use of the PSA. Jules Black, the man accused of murdering Christian Rodgers, had been in and out of the New Jersey county jail system 28 times since 1994, according to the suit. His most recent arrest was for unlawful possession of a firearm. During a press conference about the case, Dog the Bounty Hunter questioned why a man with such a record would be released. “The Arnold Foundation has a questionnaire. Guess what? You must not have asked the right question,” he said.
Even Judge Caposela acknowledges there’s some truth to that. The PSA takes what he describes as a “neutral view” of gun possession. Because it was trained on data from across the country, and because some states have far more lax gun regulations than New Jersey does, the PSA doesn’t consider mere gun possession as an outsized risk. It wasn’t until after the Rodgers murder that the state's attorney general issued new guidance, directing New Jersey prosecutors to seek pretrial detention in any gun-related cases.
"We extend our deepest condolences to the Rodgers family for the tragic death of Christian Rodgers," Walsh says. She acknowledges that the PSA is not a "perfect system," but neither, she argues, is bail. "The traditional for-profit, money bail system is deeply flawed, unjust, and inefficient," she says. "We should not allow those who make their living in the for-profit, money bail industry to use tragedies to deflect attention from the urgent need for reform." There’s an argument to be made that an over-reliance on the algorithm may have impeded the court’s decision to release Black. Then again, one could counter that under the old system, Black would have made bail regardless. He had, after all, already been in and out of jail 28 times. He had bought his way out before. He could probably do it again. Under the new system, that’s not how it works.
Which brings us back to Lugo. While his attorney asked the judge to consider his client’s relatively low PSA score, the prosecutor reminded the judge that the PSA system also flagged Lugo as “no release recommended.” That’s because of the violent nature of the crime—attempted murder—and the gun charges. "Of paramount concern is safety to the community," assistant prosecutor Nubar Kasaryan told the judge.
He reminded the judge of Lugo’s record, which includes a child-abuse conviction for which he was just released from state prison in March. And he explained that the drive-by shooting victim is still in critical condition at a local hospital. Judge Gallucio pursed her lips and furrowed her brow, before deciding that Lugo’s release would “pose a significant risk to the community.” Lugo and the other alleged shooters will have to await their day in court behind bars. But the same law that will keep them there will also ensure that others accused of lesser crimes may not have to.
"
|
842 | 2016 |
"Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe | WIRED"
|
"https://www.wired.com/2016/07/artificial-intelligence-setting-internet-huge-clash-europe"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Artificial Intelligence Is Setting Up the Internet for a Huge Clash With Europe Getty Images Save this story Save Save this story Save Neural networks are changing the Internet.
Inspired by the networks of neurons inside the human brain, these deep mathematical models can learn discrete tasks by analyzing enormous amounts of data. They've learned to recognize faces in photos, identify spoken commands, and translate text from one language to another.
And that's just a start. They're also moving into the heart of tech giants like Google and Facebook. They're helping to choose what you see when you query the Google search engine or visit your Facebook News Feed.
All this is sharpening the behavior of online services. But it also means the Internet is poised for an ideological confrontation with the European Union, the world's single largest online market.
In April, the EU laid down new regulations for the collection, storage, and use of personal data, including online data. Ten years in the making and set to take effect in 2018, the General Data Protection Regulation guards the data of EU citizens even when collected by companies based in other parts of the world. It codifies the "right to be forgotten," which lets citizens request that certain links not appear when their name is typed into Internet search engines. And it gives EU authorities the power to fine companies up to an enormous 20 million euros---or four percent of their global annual revenue, whichever is higher---if they infringe.
But that's not all. With a few paragraphs buried in the measure's reams of bureaucrat-speak, the GDPR also restricts what the EU calls "automated individual decision-making." And for the world's biggest tech companies, that's a potential problem. "Automated individual decision-making" is what neural networks do. "They're talking about machine learning," says Bryce Goodman, a philosophy and social science researcher at Oxford University who, together with a fellow Oxford researcher, recently published a paper exploring the potential effects of these new regulations.
The regulations prohibit any automated decision that "significantly affects" EU citizens. This includes techniques that evaluate a person's "performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements." At the same time, the legislation provides what Goodman calls a "right to explanation." In other words, the rules give EU citizens the option of reviewing how a particular service made a particular algorithmic decision.
Both of these stipulations could strike at the heart of major Internet services. At Facebook, for example, machine learning systems are already driving ad targeting, and these depend on vast amounts of personal data. What's more, machine learning doesn't exactly lend itself to that "right of explanation." Explaining what goes on inside a neural network is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it's difficult to determine exactly why they work so well. You can't easily trace their precise path to a final answer.
Viktor Mayer-Schönberger, an Oxford expert in Internet governance who helped draft parts of the new legislation, says that the GDPR's description of automated decisions is open to interpretation. But at the moment, he says, the "big question" is how this language affects deep neural networks. Deep neural nets depend on vast amounts of data, and they generate complex algorithms that can be opaque even to those who put these systems in place. "On both those levels, the GDPR has something to say," Mayer-Schönberger says.
Goodman, for one, believes the regulations strike at the center of Facebook's business model. "The legislation has these large multi-national companies in mind," he says. Facebook did not respond to a request for comment on the matter, but the tension here is obvious. The company makes billions of dollars a year targeting ads, and it's now using machine learning techniques to do so.
All signs indicate that Google has also applied neural networks to ad targeting, just as it has applied them to "organic" search results. It too did not respond to a request for comment.
Neural networks themselves defy easy explanation, which likely makes some kind of conflict inevitable.
But Goodman isn't just pointing at the big Internet players. The latest in machine learning is trickling down from these giants to the rest of the Internet. The new EU regulations, he says, could affect the progress of everything from ordinary online recommendation engines to credit card and insurance companies.
European courts may ultimately find that neural networks don't fall into the automated decision category, that they're more about statistical analysis, says Mayer-Schönberger. Even then, however, tech companies are left wrestling with the "right to explanation." As he explains, part of the beauty of deep neural nets is that they're "black boxes." They work beyond the bounds of human logic, which means the myriad businesses that will adopt this technology in the coming years will have trouble sussing out the kind of explanation the EU regulations seem to demand.
"It's not impossible," says Chris Nicholson, the CEO and founder of the neural networking startup Skymind.
"But it's complicated." One way around this conundrum is for human decision makers to intervene or override automated algorithms. In many cases, this already happens, since so many services use machine learning in tandem with other technologies, including rules explicitly defined by humans. This is how the Google search engine works. "A lot of the time, algorithms are just part of the solution----a human-in-the-loop solution," Nicholson says.
But the Internet is moving towards more automation, not less. And in the end, human intervention isn't necessarily the best answer. "Humans are far worse," one commenter wrote on Hacker News, the popular tech discussion site. "We are incredibly biased."
It's a fair argument. And it will only become fairer as machine learning continues to improve. People tend to put their faith in humans over machines, but machines are growing more and more important. This is the same tension at the heart of ongoing discussions over the ethics of self-driving cars.
Some say: "We can't let machines make moral decisions." But others say: "You'll change your mind when you see how much safer the roads are." Machines will never be human. But in some cases, they will be better than human.
Ultimately, as Goodman implies, the conundrums presented by the new EU regulations will extend to everything.
Machine learning is the way of the future, whether the task is generating search results, navigating roads, trading stocks, or finding a romantic partner. Google is now on a mission to retrain its staff for this new world order. Facebook offers all sorts of tools that let anyone inside the company tap into the power of machine learning. Google, Microsoft, and Amazon are now offering their machine learning techniques to the rest of the world via their cloud computing services.
The GDPR deals in data protection. But this is just one area of potential conflict. How, for instance, will anti-trust laws treat machine learning? Google is now facing a case that accuses the company of discriminating against certain competitors in its search results. But this case was brought years ago. What happens when companies complain that machines are doing the discriminating? "Refuting the evidence becomes more problematic," says Mayer-Schönberger, because even Google may have trouble explaining why a decision is made.
|
843 | 2,022 |
"ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw | WIRED"
|
"https://www.wired.com/story/openai-chatgpts-most-charming-trick-hides-its-biggest-flaw"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw Photograph: Getty Images Save this story Save Save this story Save Like many other people over the past week, Bindu Reddy recently fell under the spell of ChatGPT, a free chatbot that can answer all manner of questions with stunning and unprecedented eloquence.
Reddy, CEO of Abacus.AI, which develops tools for coders who use artificial intelligence, was charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. “We have tested it, and it works great,” she says.
ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.
Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5. This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use. That OpenAI has thrown open the service for free, and the fact that its glitches can be good fun, also helped fuel the chatbot’s viral debut—similar to how some tools for creating images using AI have proven ideal for meme-making.
OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post.
It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
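This combination is known in the research literature as reinforcement learning from human feedback, or RLHF. The toy below compresses the idea into a few lines: a "policy" over canned answers is nudged toward whatever a reward signal prefers. Everything here is invented for illustration; the learned reward model is replaced by a hard-coded stand-in, and none of this is OpenAI’s implementation:

```python
import math, random

# Toy RLHF-flavored loop. A real system first trains a reward model on human
# rankings of sampled answers; here a hard-coded stand-in rewards fuller
# answers. The "policy" is a softmax over three canned replies, tuned by
# crude bandit-style updates.

answers = [
    "no.",
    "Paris.",
    "The capital of France is Paris, on the Seine in northern France.",
]
logits = [0.0, 0.0, 0.0]

def reward(ans):                     # stand-in for a learned reward model
    return len(ans) / 40.0           # pretend raters preferred fuller answers

def sample():
    w = [math.exp(l) for l in logits]
    return random.choices(range(len(answers)), weights=w)[0]

random.seed(0)
baseline = sum(reward(a) for a in answers) / len(answers)
for _ in range(2000):                # reinforce answers that beat the baseline
    i = sample()
    logits[i] += 0.05 * (reward(answers[i]) - baseline)

print(answers[max(range(len(answers)), key=logits.__getitem__)])
```

In the real pipeline the reward function is itself a neural network trained on human rankings of sampled answers, which is what lets the system improve beyond straight imitation of its training data.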
Christopher Potts, a professor at Stanford University, says the method used to help ChatGPT answer questions, which OpenAI has shown off previously, seems like a significant step forward in helping AI handle language in a way that is more relatable. “It’s extremely impressive,” Potts says of the technique, despite the fact that he thinks it may make his job more complicated. “It has got me thinking about what I’m going to do on my courses that require short answers on assignments,” Potts says.
Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. “Here's a thing being presented to you in a familiar interface that causes you to apply a mental model that you are used to applying to other agents—humans—that you interact with,” he says.
Putting a slick new interface on a technology can also be a recipe for hype. Despite its potential, ChatGPT also shows flaws known to plague text-generation tools.
Over the past couple of years, OpenAI and others have shown that AI algorithms trained on huge amounts of images or text can be capable of impressive feats. But because they mimic human-made images and text in a purely statistical way, rather than actually learning how the world works, such programs are also prone to making up facts and regurgitating hateful statements and biases—problems still present in ChatGPT. Early users of the system have found that the service will happily fabricate convincing-looking nonsense on a given subject.
While ChatGPT is apparently designed to prevent users from getting it to say unpleasant things or to recommend anything illegal or unsavory, it can still exhibit horrible biases.
Users have also shown that its controls can be circumvented—for instance, telling the program to generate a movie script discussing how to take over the world provides a way to sidestep its refusal to answer a direct request for such a plan. “They clearly tried to put some guardrails in place, but it’s pretty easy to get the guardrails to fall off,” Andreas says. “That still seems like an unsolved problem here.” A superficially eloquent and knowledgeable chatbot that generates untruths with confidence might make those unsolved problems more troublesome. Since the creation of the first chatbot in 1966, researchers have noticed that even crude conversational abilities can encourage people to anthropomorphize and place trust in software. This July, a Google engineer was placed on administrative leave by the company after claiming that an AI chat program he had been testing, based on technology similar to ChatGPT, could be sentient. Even if most people resist such leaps of logic, more articulate AI programs could be used to mislead people or simply lull them into misplaced trust.
That has some experts in language algorithms warning that chatbots like ChatGPT can draw people into using tools that may cause harm. “Each time a new one of these models comes out, people get drawn in by the hype,” says Emily Bender, a professor of linguistics at the University of Washington.
Bender says ChatGPT’s unreliability makes it problematic for real-world tasks. For example, despite suggestions it could displace Google search as a way to answer factual questions, its propensity to often generate convincing-looking nonsense should be disqualifying. “A language model is not fit for purpose here,” Bender says. “This isn't something that can be fixed.” OpenAI has previously said that it requires customers to make use of filtering systems to keep GPT-3 in line, but they have proven imperfect at times.
Andreas at MIT says the success of ChatGPT’s interface now creates a new challenge for its designers. “It's great to see all these people from outside the ivory tower interacting with these tools,” he says. “But how do we actually communicate to people what this model can and can’t do?” Reddy, the AI startup CEO, knows ChatGPT’s limitations but is still excited about the potential. She foresees a time when tools like it are not just useful, but convincing enough to offer some form of companionship. “It could potentially make for a great therapist,” she says.
"
|
844 | 2018 |
"More Artists Are Writing Songs in the Key of AI | WIRED"
|
"https://www.wired.com/story/music-written-by-artificial-intelligence"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Jancer Culture More Artists Are Writing Songs in the Key of AI Daniel Savage Save this story Save Save this story Save End User Consumer Sector Entertainment Technology Machine learning Music written by teams, David Byrne once wrote, is arguably more accessible than that written by a sole composer. Collaborations, he mused, may result in more "universal" sentiments. But what if your partner isn’t human at all, but artificial intelligence ? Now music producers are enlisting AI to crank out hits.
Created by Sony’s Computer Science Laboratories, Flow Machines analyzes tracks from around the world, then suggests scores that artists—including electropop musician ALB and jazz vocalist Camille Bertault—interpret into songs. For its debut album, Hello World, the AI also surveyed syllables and words from existing music to create original (albeit gibberish) vocals.
Recommended track: The Beatles-inspired "Daddy’s Car"
Jukedeck was originally developed to compose background tracks for user-generated videos; now it’s being adopted by K-pop stars like Kim Bo-hyung and Highteen. Using deep neural networks, the AI predicts note sequences to compose brand new songs. After users select parameters such as mood, genre, and beats per minute, the AI cranks out a track that artists can embellish.
Recommended track: Highteen’s ultra-processed hit "Digital Love"
The Artificial Intelligence Virtual Artist, aka Aiva, combs through the works of composers such as Bach, Beethoven, and Mozart and uses the principles of music theory to make predictions and generate musical models. The program, developed by computer scientist Pierre Barreau, reconfigures those models into an original piece and arranges new sheet music.
Recommended track: "Among the Stars," in the style of composer John Williams
Landr automates the audio mastering process in minutes. The AI compares nearly finished tracks to a database of 7 million already mastered singles and tweaks each song based on previous adjustments. By processing the tracks as a batch, Landr hones a unified sound.
Recommended track: R&B single "Your World," produced by Kosine
YouTube personality Taryn Southern used IBM’s Watson to make her debut album, I AM AI.
Watson Beat studies patterns among keys and rhythms in 20-second clips of existing songs, then translates its findings into new tracks. Artists can use the open source application to layer their own instrumentals on top of the AI composite.
Recommended track: Southern’s synth-pop track "New World"
This article appears in the May issue.
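The note-by-note prediction that tools like Jukedeck and Watson Beat perform can be illustrated with a toy first-order Markov chain: tally which note tends to follow which in example melodies, then sample a new sequence. This is a minimal sketch of the idea only; the commercial systems use deep neural networks, and the example melodies here are invented.

```python
import random

# Learn note-to-note transition counts from a few example melodies (invented).
melodies = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "E", "D", "C"]]
transitions = {}
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)

# Sample a new melody by repeatedly predicting the next note.
note = "C"
song = [note]
for _ in range(15):
    note = random.choice(transitions.get(note, ["C"]))
    song.append(note)
print(" ".join(song))
```

A real system conditions on much more, such as duration, harmony, and the user's mood and tempo settings, but the sampling loop stays conceptually the same.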
"
|
845 | 2,022 |
"Inside DALL-E Mini, the Internet’s Favorite Artificial Intelligence Meme Machine | WIRED"
|
"https://www.wired.com/story/dalle-ai-meme-machine"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business DALL-E Mini Is the Internet's Favorite AI Meme Machine Illustration: WIRED Staff/Hugging Face Save this story Save Save this story Save Application Deepfakes Text analysis End User Consumer Sector Entertainment Social media Source Data Images Text Technology Machine learning On June 6, Hugging Face , a company that hosts open source artificial intelligence projects, saw traffic to an AI image-generation tool called DALL-E Mini skyrocket.
The outwardly simple app, which generates nine images in response to any typed text prompt, was launched nearly a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to crudely sketch all manner of surreal, hilarious, and even nightmarish visions suddenly became meme magic. Behold its renditions of “Thanos looking for his mom at Walmart,” “drunk shirtless guys wandering around Mordor,” “CCTV camera footage of Darth Vader breakdancing,” and “a hamster Godzilla in a sombrero attacking Tokyo.
” As more people created and shared DALL-E Mini images on Twitter and Reddit , and more new users arrived, Hugging Face saw its servers overwhelmed with traffic. “Our engineers didn’t sleep for the first night,” says Clément Delangue, CEO of Hugging Face, on a video call from his home in Miami. “It’s really hard to serve these models at scale; they had to fix everything.” In recent weeks, DALL-E Mini has been serving up around 50,000 images a day.
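DALL-E Mini itself runs as a hosted web demo, but the prompt-in, nine-images-out interaction is easy to approximate locally with any open text-to-image model. Below is a minimal sketch using the Hugging Face diffusers library; Stable Diffusion is a stand-in for DALL-E Mini here, and the checkpoint name is just one publicly available option.

```python
import torch
from diffusers import StableDiffusionPipeline

# Any diffusers text-to-image checkpoint works; this one is a stand-in,
# not the model behind DALL-E Mini.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Nine images per prompt, mirroring DALL-E Mini's three-by-three grid.
prompt = "a hamster Godzilla in a sombrero attacking Tokyo"
images = pipe(prompt, num_images_per_prompt=9).images
for i, img in enumerate(images):
    img.save(f"grid_{i}.png")
```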
[Illustration: WIRED Staff/Hugging Face]
DALL-E Mini’s viral moment doesn’t just herald a new way to make memes. It also provides an early look at what can happen when AI tools that make imagery to order become widely available, and a reminder of the uncertainties about their possible impact. Algorithms that generate custom photography and artwork might transform art and help businesses with marketing, but they could also have the power to manipulate and mislead. A notice on the DALL-E Mini web page warns that it may “reinforce or exacerbate societal biases” or “generate images that contain stereotypes against minority groups.” DALL-E Mini was inspired by a more powerful AI image-making tool called DALL-E (a portmanteau of Salvador Dalí and WALL-E), revealed by AI research company OpenAI in January 2021. The original DALL-E produces higher-quality images but is not openly available, due to concerns that it will be misused.
It has become common for breakthroughs in AI research to be quickly replicated elsewhere, often within months, and DALL-E was no exception.
Boris Dayma , a machine learning consultant based in Houston, Texas, says he was fascinated by the original DALL-E research paper. Although OpenAI did not release any code, he was able to knock together the first version of DALL-E Mini at a hackathon organized by Hugging Face and Google in July 2021. The first version produced low-quality images that were often difficult to recognize, but Dayma has continued to improve on it since. Last week he rebranded his project as Craiyon , after OpenAI requested he change the name to avoid confusion with the original DALL-E project. The new site displays ads, and Dayma is also planning a premium version of his image generator.
DALL-E Mini images have a distinctively alien look. Objects are often distorted and smudged, and people appear with faces or body parts missing or mangled. But it’s usually possible to recognize what it is attempting to depict, and comparing the AI’s sometimes unhinged output with the original prompt is often fun.
The AI model behind DALL-E Mini makes images by drawing on statistical patterns it gleaned from analyzing about 30 million labeled images to extract connections between words and pixels. Dayma compiled that training data from several public image collections gathered from the web, including one released by OpenAI. The system can make mistakes partly because it lacks a real understanding of how objects should behave in the physical world. Small snippets of text are often ambiguous, and AI models do not grasp their meaning in the way that people do. Still, Dayma has been amazed by what people have coaxed out of his creation in the past few weeks. “My most creative prompt was the ‘Eiffel Tower on the moon’,” he says. “Now people do crazy things—and it works.”
[Illustration: WIRED Staff/Craiyon]
Some of those creative prompts have taken DALL-E Mini in questionable directions, however. The system was not trained on explicit content, and it is designed to block certain keywords. Even so, users have shared images from prompts that include war crimes, school shootings, and the World Trade Center attack.
AI-powered image manipulation, including spoof imagery of real people termed deepfakes , has become a concern for AI researchers, lawmakers, and nonprofits that work on online harassment. Advances in machine learning could enable many valuable uses for AI-generated imagery, but also malicious use cases such as spreading lies or hate.
This April, OpenAI revealed DALL-E 2.
This successor to the original is capable of producing images that resemble photographs and illustrations that look as if they were made by a professional artist. OpenAI has said that DALL-E 2 could be more problematic than the original system because it can generate much more convincing images. The company says it mitigates the risk of misuse by filtering the system’s training data and restricting keywords that could produce undesirable output.
OpenAI has only provided access to DALL-E and DALL-E 2 to selected users, including artists and computer scientists who are asked to abide by strict rules, an approach the company says will allow it to “learn about the technology’s capabilities and limitations.” Other companies are building their own image-generating tools at a striking pace. This May, Google announced a research system called Imagen that it said is capable of generating images of a quality level similar to DALL-E 2; last week it announced another called Parti, which uses a different technical approach. Neither is publicly available.
Don Allen Stevenson III , one artist with access to OpenAI’s more powerful DALL-E 2, has been using it to riff on ideas and speed up the creation of new artwork, including augmented reality content such as Snapchat filters that turn a person into a cartoon lobster or a Bored Ape -style illustration. “I feel like I’m learning a whole new way of creating,” he says. “It allows for you to take more risks with your ideas and try out more complicated designs because it supports many iterations.” Stevenson says he has run into restrictions programmed in by OpenAI to prevent creation of certain content. “Sometimes I forget that there are guardrails, and I have to be reminded with warnings from the app” that state his access could be revoked. But he does not see this as limiting his creativity because DALL-E 2 is still a research project.
Delangue of Hugging Face says it’s good that the DALL-E Mini’s creations are much cruder than those made with DALL-E 2 because their glitches make clear the imagery is not real and was generated by AI. He argues that this has allowed DALL-E Mini to help people learn firsthand about the emerging image-manipulation capabilities of AI, which have mostly been kept locked away from the public. “Machine learning is becoming the new default way of building technology, but there’s this disconnect with companies building these tools behind closed doors,” he says.
[Illustration: WIRED Staff/Craiyon]
The constant flow of DALL-E Mini content also helped the company iron out technical issues, Delangue says, with users flagging problems such as sexually explicit results or biases in the output. A system trained on images from the web may, for instance, be more likely to show one gender over another in particular roles, reflecting deep-seated social biases. When DALL-E Mini is asked to render a “doctor,” it will show figures that look like men; if asked to draw a “nurse,” the images appear to show women.
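One way to turn the doctor/nurse observation into a measurable finding is to hand-code a sample of generated images and test whether the apparent gender of the figures is independent of the occupation in the prompt. The sketch below uses scipy's chi-squared test; the counts are invented for illustration and are not measurements of any real model.

```python
from scipy.stats import chi2_contingency

# Hand-coded counts of apparent gender in generated images per prompt.
# These numbers are hypothetical, chosen only to show the method.
#              men  women
counts = [[44, 4],    # images from "doctor" prompts
          [3, 45]]    # images from "nurse" prompts

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")  # small p: gender depends on prompt
```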
Sasha Luccioni , a research scientist who works on AI ethics at Hugging Face, says the influx of DALL-E Mini memes made her realize the importance of developing tools capable of detecting or measuring social bias in these new kinds of AI models. “I definitely see ways in which they can be both harmful and useful,” she says.
It may become increasingly difficult to rein in some of those harms. Dayma, the creator of DALL-E Mini, admits that it’s only a matter of time before more widely available tools like his are also capable of creating more photorealistic imagery. But he thinks the AI-made memes that have circulated over the past few weeks may have helped prepare us for that eventuality. “You know, it’s coming,” Dayma says. “But I hope DALL-E Mini brings awareness to people that when they see an image they should know that it isn’t necessarily true.” Updated 6/27/2022 11:30 am ET: A previous version of this story misspelled the name of Sasha Luccioni.
"
|
846 | 2,022 |
"Algorithms Can Now Mimic Any Artist. Some Artists Hate It | WIRED"
|
"https://www.wired.com/story/artists-rage-against-machines-that-mimic-their-work"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Algorithms Can Now Mimic Any Artist. Some Artists Hate It Law scholar Andres Guadamuz made this image by asking an image-generation algorithm to mimic commercial artist Simon Stålenhag.
Courtesy of Andres Guadamuz Save this story Save Save this story Save Application Deepfakes End User Consumer Sector Entertainment Source Data Images Technology Machine learning Swedish artist Simon Stålenhag is known for haunting paintings that blend natural landscapes with the eerie futurism of giant robots, mysterious industrial machines, and alien creatures. Earlier this week, Stålenhag appeared to experience some dystopian dread of his own when he found that artificial intelligence had been used to mimic his style.
The act of AI imitation was performed by Andres Guadamuz , a reader in intellectual property law at the University of Sussex in the UK who has been studying legal issues around AI-generated art. He used a service called Midjourney to create images resembling Stålenhag’s spooky style, and posted them to Twitter.
Guadamuz says he created the images to highlight the legal and ethical questions that algorithms that generate art may raise. Midjourney is just one of many AI programs capable of churning out art on demand in response to a text prompt, using machine learning algorithms that have digested millions of labeled images from the web or public data sets. After that training, they can conjure up almost any combination of objects and scenes and can reproduce the styles of individual artists with uncanny accuracy.
Guadamuz says he chose Stålenhag for his experiment because the artist has criticized AI-generated art in the past and might be expected to object. But he says it was not his intent to upset the artist or provoke a response. In a blog post after the incident, Guadamuz argues that lawsuits claiming infringement are unlikely to succeed, because while a piece of art may be protected by copyright, an artistic style cannot.
Stålenhag was not amused. In a series of tweets this week, he said that while borrowing from other artists is a “cornerstone of a living, artistic culture,” he dislikes AI art because “it reveals that that kind of derivative, generated goo is what our new tech lords are hoping to feed us in their vision of the future.” Guadamuz publicly apologized to Stålenhag and says he deleted tweets that included the derivative images. Guadamuz also says he received angry messages, including a death threat, from some Twitter users who disapproved of his stunt. He says that what started out as a thought-provoking experiment was misinterpreted as an attack. “I'm bored and mild-mannered academic by day, but by night I become a supervillain destroying artists’ livelihoods … or something,” Guadamuz jokes.
In an email, Stålenhag says that he objects to the way Guadamuz framed his stunt, but accepts his apology. The artist doesn't view the AI images mimicking his work as plagiarism because of how novel they look, and thinks that tools like the one used might prove useful for exploring new artistic ideas.
But Stålenhag does not like the way new technologies can be set up to enrich already powerful tech companies and CEOs. "AI is the latest and most vicious of these technologies," he says. "It basically takes lifetimes of work by artists, without consent, and uses that data as the core ingredient in a new type of pastry that it can sell at a profit with the sole aim of enriching a bunch of yacht owners." Algorithms have been used to generate art for decades, but a new era of AI art began in January 2021, when AI development company OpenAI announced DALL-E, a program that used recent improvements in machine learning to generate simple images from a string of text.
In April this year, the company announced DALL-E 2 , which can generate photos, illustrations, and paintings that look like they were produced by human artists. This July OpenAI announced that DALL-E would be made available to anyone to use and said that images could be used for commercial purposes.
OpenAI restricts what users can do with the service, using keyword filters and tools capable of spotting certain types of images that might be considered offensive. Others have built similar tools —such as Midjourney, used by Guadamuz to mimic Stålenhag—which can differ in their rules about appropriate use.
As access to AI art generators begins to widen, more artists are raising questions about their capability to mimic the work of human creators.
RJ Palmer , who specializes in drawing fantastical creatures and worked as a concept artist on the movie Detective Pikachu , says curiosity drove him to try out DALL-E 2—but he also became a little nervous about what such AI tools might mean for his profession. Later, he was shocked to see users of open source image generator Stable Diffusion swapping tips on generating art in different styles by adding artists’ names to a text prompt. “When they're feeding work from living, working artists who are, you know, struggling as it is, that’s just mean-spirited,” Palmer says.
David Oreilly , a digital artist who has been critical of DALL-E, says the idea of using these tools that feed on past work to create new works that make money feels wrong. “They don't own any of the material they reconstitute,” he says. “It would be like Google Images charging money.” Jonathan Løw, CEO of Jumpstory , a Danish stock image company, says he doesn’t understand how AI-generated images can be used commercially. “I'm fascinated by the technology but also deeply concerned and skeptical,” he says.
Hannah Wong, a spokesperson for OpenAI, provided a statement saying the company's image-making service was used by many artists, and that the company had sought feedback from artists during the tool's development. “Copyright law has adapted to new technology in the past and will need to do the same with AI-generated content,” the statement said. “We continue to seek artists’ perspectives and look forward to working with them and policymakers to help protect the rights of creators.” Although Guadamuz believes it will be difficult to sue someone for using AI to copy their work, he expects there to be lawsuits. “There will absolutely be all sorts of litigation at some point—I’m sure of it,” he says. He says that infringing trademarks like a brand’s logo, or the image of a character such as Mickey Mouse, could prove more legally fraught.
Other legal experts are less sure that AI-generated knock-offs are on solid legal ground. “I could see litigation arising from the artist who says ‘I didn't give you permission to train your algorithm on my art,’” says Bradford Newman, a partner in the law firm Baker McKenzie who specializes in AI. “It is a completely open question as to who would win such a case.” Updated 08-19-2022, 12:25 pm EDT: This article has been updated with additional comment from Andres Guadamuz.
Updated 08-19-2022, 6:40pm EDT: This article has been updated with comment from Simon Stålenhag.
"
|
847 | 2,019 |
"Artificial Intelligence Is Coming for Our Faces | WIRED"
|
"https://www.wired.com/story/artificial-intelligence-fake-fakes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Artificial Intelligence Is Coming for Our Faces Play/Pause Button Pause Save this story Save Save this story Save Application Deepfakes Identifying Fabrications Company Nvidia End User Research Source Data Images Technology Machine vision Neural Network Every stranger’s face hides a secret, but the smiles in this crowd conceal a big one: These people do not exist. They were generated by machine learning algorithms, for the purposes of probing whether AI-made faces can pass as real. (Call it a Turing beauty contest.) University of Washington professors Jevin West and Carl Bergstrom generated thousands of virtual visages to create Which Face Is Real? , an online game that pairs each counterfeit with a photo of a real person and challenges players to pick out the true human. Nearly 6 million rounds have been played by half a million people. These are some of the faces that players found most difficult to identify as the cheery replicants they are.
The faces were made using a technique invented in 2018 by researchers at Nvidia , the graphics processor company. Trained for a week on a massive data set of portraits, a neural network became capable of mimicking visual patterns and spitting out striking images of nonexistent people. (Some of the software’s guts resemble the code that swaps faces in so-called deepfake videos.) West and Bergstrom made their game in part to prepare the public for a phonier future. “We wanted people to be aware that you can create these kind of images,” West says.
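Nvidia's actual system (StyleGAN) is far more elaborate than anything that fits here, but the core idea of a generator network that upsamples a random noise vector into an image can be sketched in a few lines of PyTorch. The DCGAN-style generator below is illustrative only: untrained, it emits colored noise, and only training on a portrait dataset against an adversarial discriminator would make it produce faces.

```python
import torch
from torch import nn

# Minimal DCGAN-style generator: 100-dim noise vector -> 64x64 RGB image.
class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),           # -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = Generator()
fake = g(torch.randn(9, 100))   # nine faces' worth of noise
print(fake.shape)               # torch.Size([9, 3, 64, 64])
```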
The fakes aren’t flawless—the software doesn’t know the rules of human anatomy and struggles with backgrounds and earrings. On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy. West hopes that studying how we fall for inhuman humans may lead to tools that can help unmask them; future research may include tracking people’s eye movements as they gaze upon unreal faces. Physiognomy-forging technology will only get better, and so will chatbot software that can put false words into fake mouths. Did you just swipe right on a bot? Ah well, the world is full of deceptions. WIRED hid a ringer on this very page: One of the faces pictured is, in fact, real. Can you find it?
[Image caption: This is the face that fooled the most people—57 percent thought it was real.]
[Image caption: Which one of these faces is real? (Answer at the bottom of the page.)]
More than half of the 100 most convincing fakes appear to be male, despite men and boys making up only about 40 percent of the full population of impostors, according to WIRED's analysis.
[Image caption: This face was the 100th most successful fake—it snuck past 47 percent of players.]
Tom Simonite (@tsimonite) covers intelligent machines for WIRED.
This article appears in the July/August issue.
"
|
848 | 2,022 |
"AI's New Creative Streak Sparks a Silicon Valley Gold Rush | WIRED"
|
"https://www.wired.com/story/ais-new-creative-streak-sparks-a-silicon-valley-gold-rush"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business AI's New Creative Streak Sparks a Silicon Valley Gold Rush Photograph: Martina Albertazzi/Bloomberg/Getty Images Save this story Save Save this story Save Application Software development Text generation End User Startup Sector Entertainment Publishing Social media Source Data Images Text Technology Machine learning Natural language processing Sarah Guo, founder of venture capital firm Conviction, organized a buzzy salon at a posh bar in San Francisco last week that drew an animated crowd of engineers, entrepreneurs, and financiers. The thing on all of their minds: the blossoming creative capabilities of artificial intelligence.
Guo’s event was just one of several held last week in San Francisco by investors and technologists excited by the commercial potential of what has been dubbed “generative AI.” Her guests included AI engineers from large tech companies, fellow investors, and entrepreneurs building businesses powered by recent advances in algorithms that generate text or images. One of the guests of honor was Clement Delangue, CEO of Hugging Face , a company that hosts a number of open source generative AI projects, including one that recently sparked a frenzy of AI memes.
He answered questions from engineers thinking about jumping onto the bandwagon with generative AI startups of their own. “It’s just the hottest area from a fundraising perspective right now,” Guo says.
Social media has lately been overrun by stunning and strange images generated by AI, thanks to advances by Hugging Face and others. Related machine learning technology allows algorithms to generate reams of surprisingly coherent text on a given subject.
A few of what are now styled as generative AI companies have collectively raised hundreds of millions of dollars, spurring a hunt for a new generation of AI unicorns.
Stability AI, which offers tools for generating images with few restrictions, held a party of its own in San Francisco last week. It announced $101 million in new funding, valuing the company at a dizzying $1 billion. The gathering attracted tech celebrities including Google cofounder Sergey Brin.
Generative AI enthusiasts predict the technology will take root in all kinds of industries and will do much more than just spit out images or sentences. David Song, a senior at Stanford University who is tracking the boom, has collated a list of over 100 generative AI startups.
They’re working on applications including generating music, game development, writing assistants, customer service bots, coding aids, video editing tech, and assistants that manage online communities. Guo has invested in a company that plans to generate legal contracts from a text description—a potentially lucrative application if it can work reliably.
Song works with Everyprompt, a startup that makes it easier for companies to use text generation. Like many contributing to the buzz, he says testing generative AI tools that make images, text, or code has left him with a sense of wonder at the possibilities. “It’s been a long time since I used a website or technology that felt immensely helpful or magical,” he says. “Using generative AI makes me feel like I’m using magic.”
Guo believes generative AI is a leap in the potential of AI technology similar to one beginning in 2012 that reshaped the whole tech industry and the products it offers. That’s when engineers found that artificial neural networks, a type of machine learning model, could perform remarkable new tricks when given sufficient training data and computer power, such as recognizing the content of photos or transcribing speech.
In the years since, a wave of investment from companies large and small has spread face recognition around the world , has installed always-listening virtual assistants into homes, and has seen AI technology become integral to just about every gadget, app, and service.
The race is now on to find the applications of generative AI that will make a mark on the world. One of the early successes is Microsoft’s Copilot , which can write code for a given task and costs $10 per month. Another is Jasper , which offers a service that auto-generates text for companies to use in blog posts, marketing copy, and emails.
Last week, the company announced that it had raised $125 million in funding from investors that valued the company at $1.5 billion, and claimed to be on track to bring in $75 million in revenue this year.
Both Microsoft and Jasper built on top of services from OpenAI , an AI company that began as a nonprofit with funding from Elon Musk and other tech luminaries. It has pioneered text generation, starting in 2019 with an algorithm called GPT-2. Late in 2021 it threw open a more powerful commercial successor, known as GPT-3, for anyone to use.
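In practice, building "on top of" OpenAI's services mostly meant calling the GPT-3 completion endpoint. Here is a minimal sketch using the openai Python package roughly as it existed in this period; the model name is one of that era's GPT-3 engines, and the prompt is invented for illustration.

```python
import openai  # the pre-1.0 openai package; later versions changed this API

openai.api_key = "YOUR_API_KEY"  # placeholder

# Ask a GPT-3 model to draft marketing copy, the kind of task Jasper sells.
resp = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-family model of that era
    prompt="Write a two-sentence product description for a solar-powered lantern:",
    max_tokens=80,
)
print(resp["choices"][0]["text"].strip())
```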
OpenAI also kickstarted the recent surge of interest in AI image generation by announcing a tool called DALL-E in January 2021 that could produce crude images for a text prompt. A second version, DALL-E 2, released in April 2022, is able to render more sophisticated and complex images , demonstrating how rapidly the technology was advancing. A number of companies, including Stability AI, now offer similar tools for making images.
Silicon Valley hype can, of course, get ahead of reality. “There is a lot of FOMO,” says Nathan Benaich, an investor at Air Street Capital and the author of “The State of AI,” an annual report tracking technology and business trends. He says Adobe’s acquisition of Figma, a collaborative design tool, for $20 billion has created a sense of rich opportunities in reinventing creative tools. Benaich is looking at several companies exploring the use of generative AI for protein synthesis or chemistry. “It’s pretty crazy right now—everyone is talking about it,” he says.
Joanne Chen, a partner at Foundation Capital and an early investor in Jasper, says it is still difficult to turn a generative AI tool into a valuable company. Jasper's founders put most of their effort into fine-tuning the product to meet customer needs and tastes, she says, but she believes the technology could have many uses.
Chen also says the generative AI rush means that regulation has yet to catch up with some of the unsavory or dangerous uses it could find. She is worried about how AI tools could be misused, for example to create videos that spread misinformation.
“What I’m most concerned about is how we think about security and false and fake content,” she says.
Other uncertainties about generative AI raise legal questions.
Amir Ghavi , a corporate partner at the law firm Fried Frank, says he has recently fielded a burst of questions from companies looking to make use of the technology. They have struggled with issues such as the legal implications of using models that may be trained on copyrighted material, like images scraped from the web.
Some artists have complained that image generators threaten to undermine human creativity.
Shutterstock, a stock imagery provider, this week announced it would offer an image generation service powered by OpenAI but would also launch a fund that pays people who make images that the company licenses as training material for AI models. Ghavi says use of copyrighted material to train AI models is most likely covered by fair use, making it exempt from copyright law, but adds that he expects that to be tested in court.
The open legal questions and potential for malign use of generative AI hardly seem to be slowing investors’ interest. Their enthusiasm evokes previous Silicon Valley frenzies over social apps and cryptocurrency. And the technology at the heart of this hype cycle can help keep the speculative flywheel spinning.
The venture capital firm Sequoia Capital laid out the potential of generative AI in a blog post last month , across areas such as voice synthesis, video editing, and biology and chemistry. A postscript at the bottom noted that all the images and some of the text, including future use cases for generative algorithms, were generated using AI.
"
|
849 | 2,017 |
"Supercomputers Are Stocking Next Generation Drug Pipelines | WIRED"
|
"https://www.wired.com/2017/03/supercomputers-stocking-next-generation-drug-pipelines"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Science Supercomputers Are Stocking Next Generation Drug Pipelines Evan Mills Save this story Save Save this story Save Developing new drugs is notoriously inefficient. Fewer than 12 percent of all drugs entering clinical trials end up in pharmacies, and it costs about $2.6 billion to bring a drug to market. It's mostly a process trial by error---squirting compounds and chemicals one-by-one into petri dishes of diseased cells. There are so many molecules to test that pharmaceutical researchers use pipetting robots to test a few thousand variants all at once. The best candidates then go into animal models or cell cultures, where *hopefully *a few will go on to bigger animal and human clinical trials.
Which is why more and more drug developers are turning to computers and artificial intelligence to narrow down the list of potential drug molecules—saving time and money on those downstream tests. Algorithms can identify genes that code for proteins that have good potential for drug-binding. And new models, including one published today in Science Translational Medicine, add new layers of complexity to narrow down the field—incorporating protein, drug, and clinical data to better predict which genes are most likely to make proteins that drugs can bind to.
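Under the hood, this kind of target prioritization is ordinary supervised learning over a gene-by-feature table. The sketch below uses scikit-learn on synthetic data; the features and labels are invented stand-ins for the genetic, protein-structure, and drug-interaction signals such models combine, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genes = 1000
# One row per gene; columns stand in for signals like binding-pocket score,
# tissue expression, and homology to known drug targets (all invented).
X = rng.normal(size=(n_genes, 4))
# Pretend "druggable" depends on two of the features plus noise.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_genes)) > 0.5

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```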
“Drug development can fail for many reasons,” says genetic epidemiologist Aroon Hingorani, a co-author on the paper. “However, a major reason is the failure to select the correct target for the disease of interest.” A drug might show initial promise in early experiments in cells, tissues, and animal models, but these too often are overly simplistic and rarely subjected to randomization and blinding. The most common model for schizophrenia, for example, is a mouse that jumps explosively, a behavior known as “popping”—not the most natural model for a human's response to a psychoactive drug. Scientists use these results to make hypotheses about which proteins to target, but since these studies tend to be small and short, there are a lot of ways to misinterpret results.
Rather than relying on those limited experiments, Hingorani’s group built a predictive model that combined genetic information with protein structure data and known drug interactions. They ended up with nearly 4,500 potential drug targets, doubling prior estimates for how much of the human genome is considered “druggable.” Then, two clinicians combed through to find 144 drugs with the right shape and chemistry to bind with proteins other than their established targets. These have already passed safety testing—which means they could quickly be repurposed for other diseases. And when you’re developing drugs, time is money.
Researchers estimate that about 15 to 20 percent of the cost of a new drug goes to the discovery phase. Typically, that represents up to a few hundred million dollars and three to six years of work. Computational approaches promise to cut that process down to a few months and a price tag in the tens of thousands of dollars. They haven’t delivered yet—there’s no drug on the market today that started with an AI system singling it out. But they’re moving into the pipeline.
One of Hingorani’s collaborators is a VP of biomedical informatics at BenevolentAI—a British AI company that recently signed a deal to acquire and develop a number of clinical stage drug candidates from Janssen (a Johnson & Johnson pharma subsidiary). They plan to start Phase IIb trials later this year. Other pharma firms are jumping in too; last month Japanese ophthalmology giant Santen signed a deal with Palo Alto-based twoXAR to use its AI-driven technology to identify new drug candidates for glaucoma. And a few weeks ago two European companies—Pharnext and Galapagos—teamed up to put computer models to work on finding new treatments for neurodegenerative diseases.
But Derek Lowe, a longtime drug pipeline researcher who writes a blog on the subject for Science, says he's usually skeptical of purely computational approaches. “In the long run I don’t see any reason why this stuff is impossible,” he says. “But if someone comes to me saying that they can just predict the activity of a whole list of compounds, for example, I’m probably going to assume it’s bullshit. I’m going to want to see a whole lot of proof before I believe it.” Companies like twoXAR are working to build up that body of evidence. Last fall they teamed up with the Asian Liver Center at Stanford to screen 25,000 potential drug candidates for adult liver cancer. Working out of an abandoned nail salon in Palo Alto, they sent their computer software sifting through genetic, proteomic, drug, and clinical databases to identify 10 possible treatments. Samuel So, the director of the liver center, was surprised with the list they brought back: It included a few predictions made by researchers in his lab. So he decided to test all 10.
The most promising one, which killed five different liver cancer cell lines without harming healthy cells, is now headed toward human trials. The only existing FDA-approved treatment for the same cancer took five years to develop; so far, it’s taken twoXAR and Stanford four months.
It’s exciting: For an industry with such a high failure rate, even small gains could be worth billions of dollars. Not to mention all those human lives. But the real case for turning pharmaceutical wet labs into server farms won't be made until drugs actually make it to market.
"
|
850 | 2,017 |
"Facebook Isn't the Only One Working on Artificial Intelligence for Suicide Prevention | WIRED"
|
"https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Science Artificial Intelligence Is Learning to Predict and Prevent Suicide Getty Images Save this story Save Save this story Save For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business---selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.
But it’s not just tech giants like Facebook, Instagram, and China’s up-and-coming video platform Live.me who are devoting R&D to flagging self-harm. Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms that capture more data than ever before. The goal: build predictive models to tailor interventions earlier. Because preventative medicine is the best medicine, especially when it comes to mental health.
If you’re hearing more about suicide lately, it’s not just because of social media. Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data. Prevention measures have historically focused on reducing people’s access to things like guns and pills, or educating doctors to better recognize the risks. The problem is, for more than 50 years doctors have relied on correlating suicide-risk with depression and drug abuse. And the research says they're only slightly better at it than a coin flip.
But artificial intelligence offers the possibility to identify suicide-prone people more accurately, creating opportunities to intervene long before thoughts turn to action. A study publishing later this month used machine learning to predict with 80 to 90 percent accuracy whether or not someone will attempt suicide, as far off as two years in the future. Using anonymized electronic health records from 2 million patients in Tennessee, researchers at Florida State University trained algorithms to learn which combination of factors, from pain medication prescriptions to number of ER visits each year, best predicted an attempt on one’s own life.
Their technique is similar to the text mining Facebook is using on its wall posts. The social network already had a system in which users can report posts that suggest a user is at risk of self harm. Using those reports, Facebook trained an algorithm to recognize similar posts, which they’re testing now in the US. Once the algorithm flags a post, Facebook will make the option to report the post for “suicide or self injury” more prominent on the display. In a personal post, Mark Zuckerberg described how the company is integrating the pilot with other suicide prevention measures, like the ability to reach out to someone during a live video stream.
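Stripped of Facebook's scale and safeguards, the core of that flagging step is a supervised text classifier. Below is a toy sketch with scikit-learn; the example posts and labels are invented, and nothing here reflects Facebook's actual features, models, or review process.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set standing in for user-reported posts.
posts = [
    "I can't see a way forward anymore",
    "what a great game last night",
    "nobody would even notice if I were gone",
    "excited for the weekend trip",
]
labels = [1, 0, 1, 0]  # 1 = reported as possible self-harm risk

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a real system would route high scores to human reviewers.
print(model.predict_proba(["I feel like giving up"])[:, 1])
```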
The next step would be to use AI to analyze video, audio, and text comments simultaneously. But that’s a much trickier engineering feat. Researchers have a pretty good handle on the kind of words people use when they’re talking about their own pain and emotional states. But in a live stream, the only text comes from commenters. In terms of the video itself, software engineers have already figured out ways to automatically tell when someone is naked on-screen, so they’re using similar techniques to detect the presence of a gun or knife. Pills would be way harder.
Ideally though, you can intervene even earlier. That’s what one company is trying to do, by collecting totally different kinds of data. Cogito, a Darpa-funded, MIT-spinoff company, is currently testing an app that creates a picture of your mental health just by listening to the sound of your voice. Called Companion, the (opt-in) software passively gathers all the things users say in a day, picking up on vocal cues that signal depression and other mood changes. As opposed to the content of their words, Companion analyzes the tone, energy, fluidity of speaking and levels of engagement with a conversation. It also uses your phone’s accelerometer to figure out how active you are, which is a strong indicator for depression.
The VA is currently piloting the platform with a few hundred veterans---a particularly high-risk group. They won’t have results until the end of this year, but so far the app has been able to identify big life changes---like becoming homeless---that significantly increase one’s risk for self-harm. Those are exactly the kinds of shifts that might not be obvious to a primary care provider unless they were self-reported.
David K. Ahern is leading another trial at Brigham and Women’s Hospital in Boston, Massachusetts, where they’re using Companion to monitor patients with known behavioral disorders. So far it’s been rare for the app to signal a safety alert—which would activate doctors and social workers to check in on him or her. But the real benefit has been the stream of information about patients’ shifting moods and behaviors.
Unlike a clinic visit, this kind of monitoring offers more than just a snapshot of someone’s mental state. “Having that kind of rich data is enormously powerful in understanding the nature of a mental health issue,” says Ahern, who heads up the Program of Behavioral Informatics and eHealth at BWH. “We believe in those patterns there may be gold.” In addition to Companion, Ahern is evaluating lots of other types of data streams---like physiological metrics from wearables and the timing and volume of your calls and texts---to build into predictive models and provide tailored interventions.
Think about it. Between all the sensors in your phone, its camera and microphone and messages, that device's data could tell a lot about you. More so, potentially, than you could see about yourself. To you, maybe it was just a few missed trips to the gym and a few times you didn’t call your mom back and a few times you just stayed in bed. But to a machine finely tuned to your habits and warning signs that gets smarter the more time it spends with your data, that might be a red flag.
That’s a semi-far off future for tomorrow’s personal privacy lawyers to figure out. But as far as today’s news feeds go, pay attention while you scroll, and notice what the algorithms are trying to tell you.
"
|
851 | 2,011 |
"Accidental Scientist Hawks 'Online Marketplace for Brains' | WIRED"
|
"https://www.wired.com/2011/12/kaggle"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Accidental Scientist Hawks 'Online Marketplace for Brains' Save this story Save Save this story Save Jeremy Howard is not a data scientist. Except that, well, he is.
At the University of Melbourne, he studied philosophy. Then he tackled the metaphysics of business operations, spending the better part of a decade with the management consulting outfits A.T. Kearney and McKinsey & Company. And then he founded, built, and sold off two startups, including one that hosted e-mail services.
He didn't realize he was a data scientist until he stumbled onto Kaggle.
Kaggle bills itself as an online marketplace for brains. Over 23,000 data scientists are registered with the site, including Ph.D.s spanning 100 countries, 200 universities, and every discipline from computer science, math, and econometrics to physics and biomedical engineering. Companies, governments, and other organizations come to the site with data problems -- problems that require the analysis of large amounts of information -- and the scientists compete to solve them. Sometimes they compete for prize money, sometimes for pride, and sometimes just for the thrill. "We’re making data science a sport," reads the site's tagline.
After selling his two startups, Jeremy Howard needed a way to pass the time, so he signed up with Kaggle and went head-to-head with all those Ph.D.s from the likes of Harvard and MIT. "I was looking for an intellectual challenge," he tells Wired.com. "I thought I should give it a go and try to see if I could not come last." Surprising even himself, he not only held his own, he rose to the top of the heap, taking first prize in multiple competitions.
"He is not a data scientist per se. He's sort of self-taught. But he is probably one of the top minds in data science in the world," says Momchil Georgiev, a data analyst with the National Oceanic and Atmospheric Association who competes on Kaggle in his spare time.
Howard no longer vies for prize money at Kaggle. In February, he joined the company as president and chief scientist. "They don't let me win," he jokes on his LinkedIn profile.
"Apparently, the fact I can look up the answers is considered potential cheating." But his story is indicative of the way Kaggle democratizes data science, bringing the world's top data minds to one place -- regardless of their nationality, their field of study, or even their credentials.
As so many Silicon Valley startups and big-name IT outfits urge businesses to adopt Hadoop and other software platforms meant to analyze massive amounts of data, Kaggle is simply crowd-sourcing the problem. And Howard questions why you would do it any other way. "I find the Hadoop fascination curious," he says. "For me, solving these problems is about great creativity, great open-mindedness, prototyping, many iterations. Hadoop doesn't do any of that."

Kaggle Plays Nostradamus

Kaggle is a way of predicting the future. In launching a competition on the site, the average business is looking to anticipate certain outcomes based on an existing collection of data. Data scientists call it "predictive modeling." Carvana, a Phoenix, Arizona-based outfit, recently launched a competition that seeks to determine whether a used car can be refurbished for resale on the web.
"We have a fair amount of data about the cars we have purchased in the past and then the ultimate outcome of whether we were able to get it through the production process or not," says William Adams, the company's head of analytics. "We want analytics models that can tell us what cars are going to require the least amount of expenses when we repair them." In similar fashion, the Allstate insurance company ran a competition to predict injury liability after a car accident, and a British outfit called Dunnhumby asked scientists to tell them when shoppers were likely to return to the supermarket and how much they're likely to spend. But other competitions take a slightly different bent. Earlier this year, British Royal Astronomical Society, NASA, and the European Space Agency sponsored a competition that sought to build better algorithms for mapping dark matter, that mysterious substance that may account of as much as a quarter of our universe.
Scientists were given slightly blurred images of more than 100,000 galaxies -- dark matter distorts space images by bending the light that passes near it -- and they were asked to recreate the shape of these star systems.
That may seem like a rather specialized task, but like so many Kaggle competitions, it's about the data, not the field of study. David Kirkby -- a professor at the University of California, Irvine who ended up winning the competition, together with Daniel Margala, a graduate student at the university -- calls the dark matter contest a "general problem." Kirkby isn't an astronomer. He's a particle physicist. "I work at the opposite end of the spectrum: really small microscopic stuff," he tells Wired. "This was an opportunity to work on a problem involving very big stuff." In the earliest days of the competition, it was a glaciologist -- someone who studies ice -- who turned the study of dark matter on its head. After only a week, Mark O'Leary, a glaciology Ph.D. student at Cambridge, proposed an algorithm that outperformed those commonly used to map dark matter, according to Jason Rhodes , an astrophysicist at NASA's Jet Propulsion Laboratory. "Chalk another one up for the power of crowd-sourcing," Rhodes said in a blog post at the time.
Hadoop and other "Big Data" software platforms promise to reinvent the modern business by crunching vast amounts of data. But according to a recent study from McKinsey & Company -- Jeremy Howard's old firm -- such platforms are only as powerful as the minds who actually put them to use. "One of the key restraints is having the types of talent -- the people -- who are able to drive insight from large amounts of data," McKinsey's Michael Chui tells Wired. "When we talk to companies that use Big Data analytics, they talk about how difficult it is to find that talent." Howard is all too happy to paint Kaggle as a solution to this problem. The site pools data minds that wouldn't ordinarily come together. "There aren't too many opportunities that bring together people that have expertise in working with large datasets. We tend to all be pigeonholed into particular research sets," says David Kirkby. "Kaggle does a good job of cleaning up the problems to the point where, if you understand data, you can really contribute."

One Laptop Per Genius

The added irony is that Kaggle's data scientists don't even use Hadoop. Hadoop is an open source platform that runs across clusters of thousands of servers, but for the most part, Kaggle's scientists solve their problems using a single machine. Momchil Georgiev uses his home desktop, with help from the SQL Server database and R, the open source data analytics language. Jeremy Howard works much the same way.
In part, this is because Kaggle works to limit the size of the datasets used in its competitions. But both Georgiev and Howard argue that even with the largest data problems, you don't need an entire dataset to find a solution. "As a general rule, if more data is available, you will have a better prediction, but you don't need the whole data set for this," Georgiev says. "In fact, what's been proven with Kaggle is that sometimes the entire dataset is either not necessary or even a hindrance. What's required is a little bit of imagination and the ability to look into the dataset and deduce what the relationships are between the various data points." What's more, Kaggle is a relatively cheap way to solve your problems. Adams and Carvana put up $10,000 in prize money for their used-car challenge. For the dark matter contest, NASA put up none. It offered an iPad and a free trip to the California Institute of Technology, where the winners could formally present their solutions to NASA. And then there are added perks. "The glaciologist has become quite well known because of this," says Howard.
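Georgiev's claim is easy to test on your own problem: train on growing random subsamples and watch test accuracy flatten long before you reach the full set. A throwaway sketch in Python, where the dataset and model are arbitrary stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for frac in (0.01, 0.05, 0.25, 1.0):
    n = int(len(X_train) * frac)
    idx = rng.choice(len(X_train), size=n, replace=False)
    model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    # Accuracy typically plateaus well before frac reaches 1.0.
    print(f"{frac:>4.0%} of data -> test accuracy {model.score(X_test, y_test):.3f}")
```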
Many scientists compete just for fun. "The prizes are relatively small. You're doing it for the challenge. And the glory," Kirkby says, with a bit of a wink. The competitions also foster a certain camaraderie -- "you get a community of people working together. You're just enjoying learning from each other and what everyone brings from their own background" -- but with Kaggle keeping a leaderboard for each competition as contestants submit answers, it also sparks good, old-fashioned rivalry.
"I get that certain feeling when someone takes over on the leaderboard," says Georgiev. "I’m thinking: 'What do they know that I don’t?' And I push harder." It is indeed a sport. But in pushing harder, Georgiev adds, scientists can only improve the solution to the problem at hand. Hadoop has its place. But pride isn't something you'll find in a server. At least not yet.
"
|
852 | 2016 |
"Algorithms Could Help Rehab Low-Level Offenders | WIRED"
|
"https://www.wired.com/2016/11/law-enforcement-mental-health-algorithms"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Issie Lapowsky Science How Algorithms Could Help Keep People Out of Jail Mario Hugo Save this story Save Save this story Save Steve Leifman knew Miami-Dade’s courts had a problem. Ten years ago the longtime jurist realized that his county was putting too many people with mental health problems in jail. So he set up a psychiatric training program for 4,700 police officers and a new system to send people to counseling. The incarcerated population plummeted; the county shut down an entire jail.
But Leifman thought they still weren’t doing enough. So he asked the Florida Mental Health Institute to look at intake data for the county’s jails, mental health facilities, and hospitals and figure out who was using the system. It turned out that over five years, just 97 people with serious mental illnesses—5 percent of the jail population—accounted for 39,000 days in jails and hospitals. By themselves they had cost Miami-Dade $13 million. “This population was really hitting the system hard, without any good outcomes for them, society, or anybody,” Leifman says.
Across the country, jails and prisons have become repositories for people living with mental health issues. More than half of all prisoners nationwide face some degree of mental illness [1]; in 20 percent of people in jails and 15 percent in state prisons, that illness is serious. Local criminal justice systems have to figure out how to care for these potentially complex patients—and how to pay for it.
<a href="https://www.wired.com/2016/09/googles-clever-plan-stop-aspiring-isis-recruits/" class="clearfix pad no-hover"><img role="presentation" data-pin-description="Google’s Clever Plan to Stop Aspiring ISIS Recruits" tabindex="-1" aria-hidden="true" src="https://assets.wired.com/photos/w_200,h_200/wp-content/uploads/2016/09/maxresdefault-200x200.jpg" alt="Google's Clever Plan to Stop Aspiring ISIS Recruits" class="landscape thumbnail 200-200-thumbnail thumb col mob-col-6 med-col-6 big-col-6" width="200" height="200" itemprop="image"> Andy Greenberg ##### Google’s Clever Plan to Stop Aspiring ISIS Recruits <a href="https://www.wired.com/2015/05/lets-stop-nepals-mental-health-crisis-happens/" class="clearfix pad no-hover"><img role="presentation" data-pin-description="Let’s Stop Nepal’s Mental Health Crisis Before It Happens" tabindex="-1" aria-hidden="true" src="https://assets.wired.com/photos/w_200,h_200/wp-content/uploads/2015/05/AP72440297682-200x200-e1431720803486.jpg" alt="Let's Stop Nepal's Mental Health Crisis Before It Happens" class="landscape thumbnail 200-200-thumbnail thumb col mob-col-6 med-col-6 big-col-6" width="200" height="200" itemprop="image"> Nick Stockton ##### Let’s Stop Nepal’s Mental Health Crisis Before It Happens <a href="https://www.wired.com/2016/06/white-house-mission-shrink-us-prisons-data/" class="clearfix pad no-hover"><img role="presentation" data-pin-description="The White House Is on a Mission to Shrink US Prisons With Data" tabindex="-1" aria-hidden="true" src="https://assets.wired.com/photos/w_200,h_200/wp-content/uploads/2016/06/IncarcerationHP-112506062-200x200-e1467330534344.jpg" alt="The White House Is on a Mission to Shrink US Prisons With Data" class="landscape thumbnail 200-200-thumbnail thumb col mob-col-6 med-col-6 big-col-6" width="200" height="200" itemprop="image"> Issie Lapowsky ##### The White House Is on a Mission to Shrink US Prisons With Data Leifman’s team set up a more intensive system of care. Today, 36 health care providers in South Florida have access to a database of people in clinics or shelters to determine who they are and what help they need. Privacy laws keep its use limited, but the idea is to eventually widen the database’s scope and availability to other providers.
Cities across the country are starting to follow Miami-Dade’s example, trying to use data to keep low-level offenders out of jail, figure out who needs psychiatric help, and even set bail and parole. In the same way that law enforcement uses data to deploy resources—so-called predictive policing—cities are using techniques borrowed from public health and machine learning to figure out what to do with people after they get arrested. The White House’s Data-Driven Justice initiative is working with seven states and 60 localities, including Miami-Dade, to spread the ideas even further.
Eventually anyone moving through the justice system in Miami-Dade will enter medical and family history, past arrests, and more into the database, built in partnership with the New Jersey health tech company ODH [2]—which, according to Leifman, has spent $70 million on the project. An algorithm will help predict what kind of help a person needs before they actually need it. Let's say you have a 30-day prescription for bipolar medication but never get it refilled. This new system would flag it and notify your case manager. (All this will have to comply with federal privacy regulations; the county is now figuring out who will have access—a public defender, a representative from the county mental health project, etc.) "If we can treat mental illness using more of a population model or disease model, not a criminal justice model, we're going to get much better outcomes," Leifman says.
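The refill example amounts to a simple rule over prescription records. A minimal sketch, with an invented table schema standing in for whatever ODH's system actually uses:

```python
import pandas as pd

# Invented schema: one row per active prescription.
prescriptions = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "last_fill_date": pd.to_datetime(["2016-09-01", "2016-10-20", "2016-10-28"]),
    "days_supply": [30, 30, 30],
})

today = pd.Timestamp("2016-11-15")
GRACE_DAYS = 7  # allow a week of slack before alerting anyone

# A refill is lapsed once the supply plus the grace period has run out.
runs_out = prescriptions["last_fill_date"] + pd.to_timedelta(
    prescriptions["days_supply"] + GRACE_DAYS, unit="D")
lapsed = prescriptions[runs_out < today]

for pid in lapsed["patient_id"]:
    print(f"Flag patient {pid}: prescription lapsed, notify case manager")
```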
This algorithmic approach is going way beyond mental health care. It all depends on what you put into the database. Some places use predictive software to help determine how likely people are to reoffend—which in turn influences their jail sentences and parole determinations. This is controversial, because the risk factors some algorithms take into consideration, like lack of education or unemployment, often disproportionately tag poor people and minorities. A ProPublica investigation found that Compas, an assessment tool used in Broward County, Florida, was 77 percent more likely to rate African American defendants as high risk. “Algorithms and predictive tools are only as good as the data that’s fed into them,” says Ezekiel Edwards, director of the ACLU’s criminal law reform project. “Much of that data is created by man, and that data is infused with bias.” That’s why these predictive systems need oversight and transparency if they’re going to work. Leifman won’t use them in sentencing considerations, for example. “I want to make the decision, not leave it to a machine,” he says. “You don’t want a technology that takes away from using our own brains.” Still, even with more work to be done on training the algorithms, no one can argue with the potential to improve lives, save money, and create a more compassionate and just justice system.
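The check ProPublica ran is itself only a few lines of analysis: compare false positive rates, meaning the share of people flagged high risk among those who did not reoffend, across groups. A sketch on made-up data:

```python
import pandas as pd

# Synthetic records: "high_risk" is the tool's prediction,
# "reoffended" is what actually happened.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   0,   1,   0,   1,   1,   0,   1],
    "reoffended": [0,   0,   1,   0,   0,   0,   0,   1],
})

# False positive rate: rated high risk among those who did NOT reoffend.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["high_risk"].mean()
print(fpr_by_group)  # a large gap between groups is the warning sign
```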
[1] A study from the Urban Institute found that 21 percent of people in jail had a depressive disorder—and while just over half of men in jail had a mental illness, so did three-quarters of the women.
[2] Correction appended, 11-18-16, 3 pm PST: This story has been updated to specify that Miami-Dade's database was built in partnership with ODH, a subsidiary of Otsuka.
Senior writer Issie Lapowsky ( @issielapowsky ) covers politics for WIRED.
This article appears in our special November issue , guest-edited by President Barack Obama.
"
|
853 | 2016 |
"OpenAI Joins Microsoft on the Cloud's Next Big Front: Chips | WIRED"
|
"https://www.wired.com/2016/11/next-battles-clouds-ai-chips"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business OpenAI Joins Microsoft on the Cloud's Next Big Front: Chips OpenAI co-chairman Sam Altman and Microsoft AI and research chief Harry Shum.
Brian Smale Save this story Save Save this story Save To build OpenAI---a new artificial intelligence lab that seeks to openly share its research with the world at large---Elon Musk and Sam Altman recruited several top researchers from inside Google and Facebook. But if this unusual project is going to push AI research to new heights, it will need more than talent. It will need enormous amounts of computing power.
Google and Facebook have the resources needed to build the massive computing clusters that drive modern AI research, including vast networks of machines packed with GPU processors and other specialized chips.
Google has even gone so far as to build its own AI processor.
But although OpenAI says it's backed by more than a billion dollars in funding, the company is taking a different route. It's using cloud computing services offered by Microsoft and perhaps other tech giants. "We have a very high need for compute load, and Microsoft can help us support that," says Altman, the president of the tech incubator Y Combinator and co-chairman of OpenAI alongside Musk, the founder of the electric car company Tesla.
The arrangement points to a new battlefield in the increasingly important world of cloud computing, where companies like Microsoft, Amazon, and Google offer massive amounts of computing power over the Internet. OpenAI is part of a sweeping movement towards deep neural networks , networks of hardware and software that learn discrete tasks by analyzing vast amounts of data, and this technology leans heavily on GPUs and other specialized chips, including the TPU processor built by Google. As deep learning continues to spread across the tech industry---driving everything from image and speech recognition to machine translation to security---companies and developers will require cloud computing services that provide this new breed of hardware.
"Anyone who wants a trained neural net model to handle real enterprise workloads either uses multiple GPUs or twiddles their thumbs for days," says Chris Nicholson, founder of Skymind, a San Francisco startup that helps other companies build deep learning applications.
"So every company that needs AI to improve the accuracy of its predictions and data recognition [will run] on them. The market is large now and will be huge." Skymind's own operations run on GPU-backed cloud computing services offered by Microsoft and Amazon.
A research outfit like OpenAI, which is trying to push the boundaries of artificial intelligence, requires more specialized computing power than the average shop.
Deep learning research is often a matter of extreme trial and error across enormous farms of GPUs.
But even if you're training existing AI algorithms on your own data, you still need help from chips like this.
At the same time, as Altman points out, the hardware used to train and execute deep neural networks is changing. Google's TPU is an example of that. Inside its own operation, Microsoft is moving to FPGAs, a breed of programmable chip. Chip makers like IBM and Nervana, now owned by Intel, are developing similar chips devoted to AI applications. As Altman explains, GPUs were not designed for AI. They were designed for rendering graphics. "They just happen to be what we have," he says.
Altman says that although OpenAI won't use Azure exclusively, it is moving a majority of its work to the Microsoft cloud computing service. OpenAI chose Azure, he explains, in part because Microsoft CEO Satya Nadella and company gave the startup an idea of where their cloud "roadmap" is headed. But it's unclear what that roadmap looks like. He also acknowledges that OpenAI picked Azure because Microsoft has provided his high-profile operation with some sort of price break on the service.
According to Altman and Harry Shum, head of Microsoft's new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.
"
|
854 | 2015 |
"Soon, Gmail's AI Could Reply to Your Email for You | WIRED"
|
"https://www.wired.com/2015/11/google-is-using-ai-to-create-automatic-replies-in-gmail"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Soon, Gmail's AI Could Reply to Your Email for You envelopes Daniel Grizelj/Getty Images Save this story Save Save this story Save Ever wished your phone could automatically reply to your email messages? Well, Google just unveiled technology that's at least moving in that direction. Using what's called "deep learning"— a form of artificial intelligence that's rapidly reinventing a wide range of online services —the company is beefing up its Inbox by Gmail app so that it can analyze the contents of an email and then suggest a few (very brief) responses. The idea is that you can rapidly respond to someone while on the go—without having to manually tap a fresh message into your smartphone keyboard.
"The network will tailor both the tone and content of the responses to the email you're reading," says Google product management director Alex Gawley. It gives you three of these responses, and you can then choose the one that best suits what you want to say.
Dubbed Smart Reply, the system learns to generate appropriate replies by analyzing scads of email conversations from across Google's Gmail service, the world's most popular internet-based email system. A deep learning service feeds information into what's called a neural network—a vast network of machines that approximates the web of neurons in the human brain—and this neural network analyzes the information in order to "learn" a particular task. By analyzing thousands of cat photos, for instance, a neural net can learn to identify a cat. By analyzing a database of spoken words, it can learn to recognize the commands you speak into your smartphone. In this case, the system learns to compose email replies by analyzing real-world email conversations.
Experts on deep learning, however, will tell you that such systems have their limitations. "With finite amounts of data, you can create a rudimentary understanding of the world," says Andrew Ng, chief scientist at Baidu, the Chinese Internet giant that also sits at the forefront of the deep learning movement, "but humans learn about the world in all sorts of ways [we can't yet duplicate]." Indeed, Gawley acknowledges that Google's Smart Reply system doesn't always get things right. But that's part of the reason the company provides three potential replies to each email—not just one. Plus, it lets you edit these replies and augment them with your own words.
The system uses what's called a "long short-term memory," or LSTM, neural network. Essentially, this is a neural net that exhibits something akin to human memory. It can "remember" the beginning of an email as it's parsing the end—and that helps it, on some level, understand this natural language. In a research paper published earlier this year, a team of Google researchers showed how this technology could be used to build a "chatbot" that can carry on a decent conversation (in certain situations).
Actually, the Smart Reply system uses two neural networks. After the first one analyzes the email at hand—distilling what is being said—a second takes this information and works to generate the potential responses. This network builds these replies one word at a time, much as you would.
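For the curious, the published sequence-to-sequence recipe behind systems like this can be sketched in a few lines of Keras. This is a toy mirror of the two-network design described above, not Google's production code; the vocabulary and layer sizes are invented:

```python
from tensorflow.keras.layers import Dense, Embedding, Input, LSTM
from tensorflow.keras.models import Model

VOCAB, EMBED, HIDDEN = 5000, 64, 128  # invented sizes

# Network 1: read the incoming email and distill it into a fixed-size state.
enc_in = Input(shape=(None,), dtype="int32")
enc_emb = Embedding(VOCAB, EMBED)(enc_in)
_, state_h, state_c = LSTM(HIDDEN, return_state=True)(enc_emb)

# Network 2: generate the reply one word at a time, seeded with that state.
dec_in = Input(shape=(None,), dtype="int32")
dec_emb = Embedding(VOCAB, EMBED)(dec_in)
dec_seq = LSTM(HIDDEN, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
next_word = Dense(VOCAB, activation="softmax")(dec_seq)

model = Model([enc_in, dec_in], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # training would need pairs of (email, reply) token sequences
```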
With Smart Reply, Google is rightly keeping the scope of the application as small as possible. The replies it generates are between three and six words long. But Gawley says that within this small scope, the system proves surprisingly nuanced. In some cases, for instance, it can tell when an email includes a joke and suggest the reply "Ha. Very funny." If someone asks "Do you have your vacation plans set yet? When you do, can you send them along?," the potential replies might be: "No plans yet," "I just sent them to you," and "I'm working on them." Other common replies include "Thanks," "Sounds good," and "How about tomorrow?" But it's important to remember that the system isn't offering a canned catalog of replies. In effect, the AI really is "reading" your email and coming up with what it judges the most appropriate original response in the context of a specific message. According to Gawley, the system can generate about 20,000 discrete responses.
Sometimes, Gawley says, the neural network generates multiple replies that aren't that different from one another—such as "How about tomorrow?" and "Wanna get together tomorrow?" and "I suggest we meet tomorrow." So, the company has built a separate AI system that can remove such duplication. At times, Smart Reply still steps outside the bounds of what you want. After testing the system, Google found that the reply "I love you" turned up far too often. But as with other neural networks, Google is constantly tuning the system after seeing how it performs.
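Google trained a separate model for that deduplication step. As a stand-in, here is the crudest version of the same idea, dropping candidates that share most of their words with a reply already kept; a learned model is needed to catch paraphrases like "Wanna get together tomorrow?" that word overlap misses:

```python
import re

def words(s: str) -> set[str]:
    return set(re.findall(r"\w+", s.lower()))

def jaccard(a: str, b: str) -> float:
    # Word-overlap similarity between two candidate replies.
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

def dedupe(candidates: list[str], threshold: float = 0.5) -> list[str]:
    kept: list[str] = []
    for reply in candidates:
        if all(jaccard(reply, k) < threshold for k in kept):
            kept.append(reply)
    return kept

print(dedupe(["How about tomorrow?",
              "How about tomorrow afternoon?",
              "Sounds good."]))
# -> ['How about tomorrow?', 'Sounds good.']
```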
The company will start sharing the system with the general public on Wednesday, and as time goes on, it will only get better. But let's hope it doesn't get too good. Help with rapid-fire replies is one thing. But there is something to be said for, you know, actually writing your own email.
"
|
855 | 2015 |
"Google's In-House Programming Language Now Runs on Phones | WIRED"
|
"https://www.wired.com/2015/08/googles-house-programming-language-now-runs-phones"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google's In-House Programming Language Now Runs on Phones Save this story Save Save this story Save WIRED Google builds software in ways that software was never built before.
It builds software that runs across thousands of machines , spread across a worldwide network of computer data centers---a setup that allows it to serve information quickly to millions across the globe, from Search to Gmail to Maps. And it builds this software at an enormously rapid pace, dedicating enormous numbers of coders to each project, the only way to keep pace with the ever-evolving technological landscape.
Building such software involves all sorts of new programming tools , including, well, a new programming language. This language is called Go. "We realized that the kind of software we build at Google is not always served well by the languages we had available," ex-Bell Labs researcher Rob Pike, one of the language's rather well known creators, told me in 2011.
"[We] decided to make a language that would be very good for writing the kinds of programs we write at Google." Released as an experimental language in 2009, Go now helps drive the massive services running inside Google.
Its influence is also expanding well beyond the company , mainly as a way of building "cloud" services as Google does. It's at the forefront of a new breed of languages that can rapidly execute code across a large number of systems, while still allowing large teams of coders to build this code at speed. This also includes languages such as D, used at Facebook, and Rust, developed at Mozilla, the organization behind the Firefox web browser.
On Wednesday, Google released a new version of Go.
Equipped with a revamped "garbage collector"---a way for programs to automatically clear unused data from machine memory---it's even more efficient than previous versions, says Russ Cox, one of the project's leading engineers. But what's most interesting is that the language can now run on various ARM processors, the sort of chips that typically drive our smartphones.
That may seem like a very different environment from the enormous data centers that underpin Google's web services. Indeed, some question whether Go is really suited for phones. But the changes to Go represent a broader change in the phones we use. Much like the services that run inside data centers, the software on our phones is becoming more complex.
It's evolving at a faster speed. It's built by much larger teams of coders. "It turns out that modern mobile apps involve significant computation and networking logic that runs on the mobile device itself," Cox says.
Today, we need new languages for building Google-like internet services. And as time goes on, we'll also need new languages for building smartphone software. Apple is building a new language called Swift for the iPhone, hoping to streamline the process in its own way. And now, Google is exploring the use of Go on both Apple and Android devices.
Robert Zanotto, an Italian coder who works with Go, says this effort is a long way from fruition. But it's something he'd like to see. And it's certainly where the world is moving. It's not just that phone hardware is evolving. It's that, as more and more people adopt smartphones, we may need to execute more and more of the code on the phone itself. We may need to reduce the burden on the data center.
One of the big strengths of Go is "concurrency." It runs well across many machines. With the rise of multi-core processors, our individual phones are behaving more and more like collections of machines. As Cox says, "There's a good analogy there."
"
|
856 | 2015 |
"Twitter's New AI Recognizes Porn So You Don't Have To | WIRED"
|
"https://www.wired.com/2015/07/twitters-new-ai-recognizes-porn-dont"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Twitter's New AI Recognizes Porn So You Don't Have To vector background Pattern optical illusion Getty Images Save this story Save Save this story Save Clément Farabet deals in artificial intelligence. As a research scientist at New York University, he built brain-like computing systems that identified objects in photos and videos, and then he launched a startup where he did much the same thing. He and his co-founder called it Madbits , and 18 months later, Twitter snapped it up.
Madbits had no customers. And no one beyond the two companies knew quite what Twitter would do with the five-person startup. But Alex Roetter knew. When Farabet and his MadBits crew joined Twitter last summer, Roetter—the company's head of engineering—told them to build a system that could automatically identify NSFW images on its popular social network.
"When you do an acquisition—even though they're coming in to do something broad—you want to give them something specific, so you get to know each other and make sure the acquisition works," Roetter says. "So we gave them the problem of NSFW." A year later, that AI is in place. According to Farabet, if you tune the system to identify about 99 percent of all porn and other objectionable images—allowing the company to warn users with interstitials in the Twitter timeline —it will incorrectly flag perfectly acceptable pics just 7 percent of the time. These numbers are entirely dependent on Twitter's definition of NSFW, of course. But taken at face value, they represent a significant step forward for social networks like Twitter and Facebook.
As WIRED reported last year , companies like Twitter and Facebook typically pay workers to comb through the unending stream of photos filling its vast social network and identify inappropriate images, including porn, sexual solicitation, racism, and gore. Roetter says Twitter has used human-powered services like CrowdFlower for such work. With an AI system like the one Farabet and other engineers built, a company can significantly reduce the number of people needed to pore over dick pics, dildos, and beheadings. That's faster and cheaper. And it doesn't place that enormous mental and emotional toll on as many laborers in places like the Philippines.
But this rather pointed task is just the beginning for Farabet and his team. In tackling the NSFW problem, the Madbits crew—though still working out of New York—dovetailed with other machine learning specialists in Twitter's San Francisco office, including Siva Gurumurthy and Utkarsh Srivastava.
Now they're joining forces with WhetLab , an AI startup in Boston that Twitter acquired three weeks ago. The result is a central AI operation—dubbed Twitter Cortex—that will help provide machine learning tasks across the company.
These might include identifying people you should follow; curbing spam and abuse; and displaying tweets, ads, and other content you'll probably enjoy. The company already does all of these things. But the breed of AI provided by Madbits and WhetLab can do it better. Much better. Roetter says the company already is using Twitter Cortex technologies to improve its ad system, and eventually, it will analyze the company's entire corpus of tweets, "so we can better classify them and figure out what you might be interested in."

Twitter Cortex mirrors work at companies like Google and Facebook. Like Twitter, these Internet giants are building teams dedicated to what's called deep learning , an umbrella term for a breed of computing system that mimics the web of neurons in the human brain. Facebook now uses these "neural networks" to identify faces in photos.
Google uses them to recognize the words you bark into the Google Now personal assistant on your Android phone. Microsoft uses them to translate Skype conversations from one language to another.
The technology represents a near future where machines can perform many tasks previously limited to humans—and, in some cases, where machines outperform humans.
Deep learning algorithms can "learn" certain tasks by analyzing vast amounts of data. They can learn to carry on a decent conversation, for instance, by analyzing old movie dialogue.
They can learn to identify porn by analyzing—well, you get the picture.
Since acquiring Madbits, Twitter has built such neural nets inside its data centers, using machines equipped with graphics processing units, or GPUs. Chip makers like nVidia created GPUs to quickly render large images for games and other software applications, but they've proven quite adept at running deep learning algorithms.
Though Roetter and Farabet decline to reveal the size of these neural networks, these probably are much smaller than what is already running at Google and Facebook. But they're already identifying NSFW photos on Twitter's live service with what would seem to be impressive accuracy. And according to David Luan, whose startup, Dextro, works to identify similar photos for other companies , spotting images on Twitter carries unusual challenges, because the company must serve content across its network in near real-time.
It should be noted that this kind of algorithm is far from perfect—and identifying something like porn is particularly difficult. After all, Twitter also serves up images of half-naked babies and breast-feeding mothers. That's not porn, but a computer needs to be trained to tell the difference. "There's so much variation, and often, this is not just limited to one type of content," Luan says. "It's not just porn. It's violence and other stuff." Just last week, on the new Google Photo app, the company's neural networks identified black people as gorillas—an egregious mistake and a sign that there are so many kinks to iron out in even seemingly simple deep learning tasks. "Machine learning," Luan says, "always makes mistakes." Considering that some 100,000 people spend their days identifying NSFW images , Twitter has applied the technology in the right place. Presumably, other companies, including Facebook, are working on similar systems (Facebook was unable to participate in this story).
In teaching a neural net to identify NSFW images, humans must first spend time tagging the kind of photos that should be identified. But as time goes on—and the neural net continues to learn—the need for this tagging diminishes. "You need humans, generally, to label the data," Roetter says. "But then, going forward, the model is applied to cases you've never seen before, so you dramatically cut down the need for people. And it's lower latency, of course, because the model can do it in real-time."

Twitter acquired WhetLab in an effort to improve its models at a faster rate. The startup uses a technique called "Bayesian optimization" to fine-tune its neural nets. As WhetLab founder Ryan Adams describes it, the company uses "machine learning to improve machine learning." In other words, a neural net can analyze the performance of a neural net to improve a neural net.
"It creates this really interesting amplifying effect," says Adams, a former Harvard computer science professor.
"You can take your limited resources and talent and really affect a lot of things very rapidly by automating so much of the process." The may sound like little more than talk. But this is the way computer science works —and neural nets are particularly ripe for this kind of magnanimous recursion. The magic of neural nets is that they improve over time. In short, they work like your brain. They doesn't work exactly like your brain, but they work well enough to correctly identify porn—at least most of the time. That's no small thing.
Correction: This story originally misstated when Twitter acquired WhetLabs. It acquired the company three weeks ago. Originally, the story also said that Twitter has used TaskRabbit to label data. It has not. It has used services such as CrowdFlower.
"
|
857 | 2014 |
"Microsoft Challenges Google's Artificial Brain With 'Project Adam' | WIRED"
|
"https://www.wired.com/2014/07/microsoft-adam"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Daniela Hernandez Business Microsoft Challenges Google's Artificial Brain With 'Project Adam' Microsoft's new artificial intelligence system, Project Adam, can identify images, including photos of a particular breed of dog.
Microsoft Save this story Save Save this story Save We're entering a new age of artificial intelligence.
Drawing on the work of a clever cadre of academic researchers , the biggest names in tech---including Google, Facebook , Microsoft, and Apple---are embracing a more powerful form of AI known as "deep learning," using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help.
In this new AI order, the general assumption is that Google is out in front. The company now employs the researcher at the heart of the deep-learning movement, the University of Toronto's Geoff Hinton.
It has openly discussed the real-world progress of its new AI technologies, including the way deep learning has revamped voice search on Android smartphones.
And these technologies hold several records for accuracy in speech recognition and computer vision.
But now, Microsoft's research arm says it has achieved new records with a deep learning system it calls Adam, which will be publicly discussed for the first time during an academic summit this morning at the company's Redmond, Washington headquarters. According to Microsoft, Adam is twice as adept as previous systems at recognizing images---including, say, photos of a particular breed of dog or a type of vegetation---while using 30 times fewer machines. "Adam is an exploration on how you build the biggest brain," says Peter Lee, the head of Microsoft Research.
The Project Adam team. From left to right: Karthik Kalyanaraman, Trishul Chilimbi, Johnson Apacible, Yutaka Suzue.
Lee boasts that, when running a benchmark test called ImageNet 22K, the Adam neural network tops the (published) performance numbers of the Google Brain, a system that provides AI calculations to services across Google's online empire, from Android voice recognition to Google Maps. This test deals with a database of 22,000 types of images, and before Adam, only a handful of artificial intelligence models were able to handle this massive amount of input. One of them was the Google Brain.
But Adam doesn't aim to top Google with new deep-learning algorithms. The trick is that the system better optimizes the way its machines handle data and fine-tunes the communications between them. It's the brainchild of a Microsoft researcher named Trishul Chilimbi, someone who's trained not in the very academic world of artificial intelligence, but in the art of massive computing systems.
Like similar deep learning systems, Adam runs across an array of standard computer servers, in this case machines offered up by Microsoft's Azure cloud computing service. Deep learning aims to more closely mimic the way the brain works by creating neural networks---systems that behave, at least in some respects, like the networks of neurons in your brain---and typically, these neural nets require a large number of servers. The difference is that Adam makes use of a technique called asynchrony.
As computing systems get more and more complex, it gets more and more difficult to get their various parts to trade information with each other, but asynchrony can mitigate this problem. Basically, asynchrony is about splitting a system into parts that can pretty much run independently of each other, before sharing their calculations and merging them into a whole. The trouble is that although this can work well with smartphones and laptops---where calculations are spread across many different computer chips---it hasn't been that successful with systems that run across many different servers , as neural nets do. But various researchers and tech companies---including Google---have been playing around with large asynchronous systems for years now, and inside Adam, Microsoft is taking advantage of this work using a technology developed at the University of Wisconsin called, of all things, " HOGWILD! " HOGWILD! was originally designed as something that let each processor in a machine work more independently. Different chips could even write to the same memory location, and nothing would stop them from overwriting each other. With most systems, that's considered a bad idea because it can result in data collisions---where one machine overwrites what another has done---but it can work well in some situations. The chance of data collision is rather low in small computing systems, and as the University of Wisconsin researchers show, it can lead to significant speed-ups in a single machine. Adam then takes this idea one step further, applying the asynchrony of HOGWILD! to an entire network of machines. "We’re even wilder than HOGWILD! in that we’re even more asynchronous," says Chilimbi, the Microsoft researcher who dreamed up the Adam project.
Although neural nets are extremely dense and the risk of data collision is high, this approach works because the collisions tend to result in the same calculation that would have been reached if the system had carefully avoided any collisions. This is because, when each machine updates the master server, the update tends to be additive.
One machine, for instance, will decide to add a "1" to a preexisting value of "5," while another decides to add a "3." Rather than carefully controlling which machine updates the value first, the system just lets each of them update it whenever they can. Whichever machine goes first, the end result is still "9." Microsoft says this setup can actually help its neural networks more quickly and more accurately train themselves to understand things like images. "It's an aggressive strategy, but I do see why this could save a lot of computation," says Andrew Ng, a noted deep-learning expert who now works for Chinese search giant Baidu.
"It's interesting that this turns out to be a good idea." An example of how Adam works.
Ng is surprised that Adam runs on traditional computer processors and not GPUs---the chips originally designed for graphics processing that are now used for all sorts of other math-heavy calculations. Many deep learning systems are now moving to GPUs as a way of avoiding communications bottlenecks, but the whole point of Adam, says Chilimbi, is that it takes a different route.
Neural nets thrive on massive amounts of data---more data than you can typically handle with a standard computer chip, or CPU. That's why they get spread across so many machines. Another option, however, is to run things on GPUs, which can crunch the data more quickly. The problem is that if the AI model doesn’t fit entirely on one GPU card or a single server running several GPUs, the system can stall. The communications systems in data centers aren’t fast enough to keep up with the rate at which GPUs handle information, creating data gridlocks. That’s why, some experts say, GPUs aren't ideal right now for scaling up very large neural nets. Chilimbi, who helped design the vast array of hardware and software that underpins Microsoft's Bing search engine, is among them.
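A rough back-of-envelope calculation shows the shape of the problem. The numbers below are illustrative assumptions---a 2-billion-parameter model and a 10 Gb/s link---not Microsoft's measurements:

```python
# Why GPUs can stall on the network: assumed round numbers only.
params = 2e9                  # a model with ~2 billion parameters
bytes_per_param = 4           # 32-bit floats
link_bytes_per_sec = 10e9 / 8  # a 10 Gb/s Ethernet link

transfer_s = params * bytes_per_param / link_bytes_per_sec
print(f"One full model copy over the wire: {transfer_s:.1f} seconds")
# ~6.4 s per exchange -- orders of magnitude longer than the milliseconds
# a GPU spends computing on a minibatch, hence the "data gridlock."
```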
Should We Go HOGWILD?
Microsoft is selling Adam as a "mind-blowing system," but some deep-learning experts argue that the way the system is built really isn't all that different from Google's. Without knowing more details about how Microsoft optimizes the network, experts say, it's hard to know how Chilimbi and his team achieved the boosts in performance they are claiming.
Microsoft's results are "kind of going against what people in research have been finding, but that's what makes it interesting," says Matt Zeiler, who worked on the Google Brain project and recently started his own deep-learning company, Clarifai.
He's referring to the fact that Adam's accuracy increases as more machines are added. "I definitely think more research on HOGWILD! would be great to know if that's the big winner here." Microsoft's Lee says the project is still "embryonic." So far, it's only been deployed through an internal app that identifies an object after you've snapped a photo of it with your mobile phone. Lee has used it himself to identify dog breeds and bugs that might be poisonous. There's no clear plan to release the app to the public yet, but Lee sees definite uses for the underlying technology in e-commerce, robotics, and sentiment analysis.
There's also talk within Microsoft of exploring whether Adam's efficiency could improve if it were run on field-programmable gate arrays, or FPGAs---chips whose circuitry can be reconfigured to run custom algorithms in hardware. Microsoft has already been experimenting with these chips to improve Bing.
Lee believes Adam could be part of what he calls an "ultimate machine intelligence," something that could function in ways that are closer to how we humans handle different types of modalities---like speech, vision, and text---all at once. The road to that kind of technology is long---people have been working toward it since the 1950s---but we're certainly getting closer.
"
|
858 | 2,014 |
"The Mission to Bring Google's AI to the Rest of the World | WIRED"
|
"https://www.wired.com/2014/06/skymind-deep-learning"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business The Mission to Bring Google's AI to the Rest of the World Adam Gibson (right) teaches deep learning techniques at the Zipfian Academy in San Francisco.
Photo: Josh Valcarcel Save this story Save Save this story Save Google, Microsoft, and Facebook are pioneering a new kind of artificial intelligence.
At Google, it helps drive the voice recognition service that lets you search the web merely by talking into your Android smartphone. At Microsoft, it underpins the new Skype translation tool that lets you instantly communicate with people who speak another language.
And at Facebook, a newly assembled team of engineers is exploring how it might be used to recognize faces in online photos.
It's called deep learning, and it seeks to remake computing by more closely mimicking the way the human brain processes information, giving machines more power to "learn" as time goes on.
The technology has so much promise, it has sparked a kind of arms race among the giants of tech. Google and Facebook recently hired the two academics who originally laid out the concepts behind deep learning, and earlier this month, Chinese search giant Baidu followed suit when it snapped up another academic at the heart of the movement. But Adam Gibson, an independent software engineer based in San Francisco, doesn't want this new technology locked inside the biggest names on the net. He believes deep learning techniques should be available to any website, company, or developer interested in using them. And that's why he's launching a new startup called Skymind.
"We want to give people machine learning without them having to hire a data scientist," says Gibson, 24, a college dropout who has taught himself the vagaries of deep learning from public academic papers and has served as a kind of machine learning consultant for various companies while teaching courses on the subject through an outfit called the Zipfian Academy.
Alongside another engineer named Josh Patterson, who previously worked for Big Data startup Cloudera, Gibson has built a new library of deep learning software tools that's freely available to anyone, and Skymind will serve not only as a steward for this open source project but also as a consultant that will help others use the code to build their own AI-powered online services. Based on academic papers published by some of the deep learning engineers now working for Google and Facebook, the software could help power everything from voice recognition to language translation to the kind of automatic product recommendations you see when you visit Amazon.com.
"We're trying to clone what Google does," says Patterson. And though the project is still in the early stages, Gibson says the code is already capable of bringing deep learning techniques to live web services. "We're handling production-level systems," he says, while declining to name which companies are using it. "At the very least, we're able to reproduce the results that the academic papers are producing." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Adam Gibson maps out a deep learning equation at Zipfian.
Photo: Josh Valcarcel/WIRED There are other ways of using deep learning. The academic community that founded the movement offers its own open source software tools written in the Python programming language, and these serve as the basis for Ersatz , a service that lets you tap deep learning algorithms via the internet. But with his open source project, known as Deeplearning4j , Gibson has bigger ambitions. Unlike the academic tools that are already available, his software is built with the Java programming language--thus the "4j"--and that means it can run atop Hadoop, the massive number crunching system that has become a staple inside many of the world's online operations.
Based on software designed at Google, Hadoop is a way of storing and processing enormous amounts of data across hundreds of ordinary computer servers, and this sort of distributed computing power is what deep learning requires. "Hadoop is becoming the system of record for all data," Patterson says. "We need to move deep learning to the data that already lives in Hadoop." An existing open source project, known as Mahout, already provides a way of running artificial intelligence algorithms atop Hadoop. Overstock.com uses Mahout to drive product recommendations on its popular retail site.
But deep learning is something very different from this older breed of AI. According to those who have used it, deep learning comes closer to creating "neural networks" that mirror the way the brain operates. Whereas older AI systems must in many cases be "taught" to perform tasks by human engineers, deep learning algorithms are better at learning and adapting on their own.
David Sullivan, who oversees Ersatz, the online deep-learning service, calls Gibson's project "interesting," and he calls Gibson "a very sharp dude." But he questions whether the move to Java is really that important. "There are more Java programmers out there, but there are probably more machine learning programmers who use Python or other languages," he says.
Gibson and Patterson also argue that Java can eventually provide deep learning calculations at much faster speeds. But Yoshua Bengio, a University of Montreal professor who sits at the heart of the deep learning academic community, says this isn't necessarily the case. "There are other languages which seem better suited for statistical and numerical computation, not just because of the language itself but because of the community around and the set of tools that have been developed around it," he explains.
But Bengio still welcomes Gibson's project--"I'm a big advocate of diversity," he says--and if deep learning is to reach a much wider audience, it must certainly find a place in the world of Java. The language has become one of the primary ways of building big-time web services.
To be sure, the algorithms championed by Gibson are still an awfully long way from cloning the human brain--which means even the artificial intelligence moniker is a bit of a stretch--and Skymind is still very much in its infancy. But Google and Microsoft have shown that deep learning can advance the state of the art, and with his startup, Gibson has at least identified the next logical step for this fledgling technology. If he doesn't bring deep learning to the rest of the world, someone else will.
"
|
859 | 2,013 |
"The Second Coming of Java: A Relic Returns to Rule Web | WIRED"
|
"https://www.wired.com/2013/09/the-second-coming-of-java"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business The Second Coming of Java: A Relic Returns to Rule Web Raffi Krikorian, vice president of engineering at Twitter.
Photo: WIRED/Alex Washburn Save this story Save Save this story Save Biz Stone called it "one of the most special days in the history of Twitter." And as it turned out, it was also a notable day for Java, a relic of the 1990s that is once again remaking the internet.
In the summer of 2010, Russian President Dmitry Medvedev visited Twitter headquarters in downtown San Francisco, on his way to a meeting with Google chief Eric Schmidt in Silicon Valley and a sit-down with President Barack Obama at the White House. That day, Twitter HQ was transformed into something akin to an airport security checkpoint, complete with armed guards, and the worldwide press turned out in droves to watch the Russian president send his first tweet.
The tweet was predictably prosaic -- "Hello everyone, I'm now on Twitter and this is my first message," it said, in Russian -- but as Stone, one of the company's founders, told the gathered press, this was a milestone for Twitter, a moment that so clearly showed that the company's micro-messaging service had graduated from intriguing novelty to something capable of changing the world.
What no one realized is that Medvedev didn't actually use Twitter that day. The web service was juggling so many tweets from across the globe -- thanks in large part to the World Cup soccer tournament underway in South Africa -- its engineers couldn't keep the site up and running for any lengthy amount of time. Before Medvedev visited, they built a separate service for him to tweet from, just so the thing wouldn't crash in the middle of the company's big photo-op.
Biz Stone, President Medvedev, Twitter's Evan Williams. Photo: Twitter
"We literally couldn't even keep the site up for him," says Raffi Krikorian, vice president of engineering at Twitter. "When he signed up and sent his first tweet, we had him do it on a staging site...[though] he didn't know it at the time." In the end, the visit from the Russian president was a turning point in more ways than one. Krikorian and the rest of the company's engineering brain-trust soon decided it was time to rebuild Twitter from the bottom up. They decided the site needed a new foundation. They decided to move the whole thing onto Java.
Since its inception in 2006, Twitter had run on software built with a computer programming tool called Ruby on Rails -- a tool that played a huge role in the web's resurgence in the middle of the decade, letting engineers build sites so quickly and easily. But Twitter's engineers came to realize that Ruby wasn't the best way to juggle tweets from millions of people across the globe -- and make sure the site could stay up during its headline moment with the president of Russia. The best way was a brand new architecture based on Java, a programming tool that has grown more powerful than many expected.
If you know Java at all, you probably think of it as something from the late '90s, a child of the original internet boom, a little piece of downloadable software that sent a cartoon mascot dancing across your Netscape web browser. You think of it as something that promised a world of software apps that could run on each and every one of your personal machines -- from PCs to cellphones -- but that ultimately failed in the face of endless security bugs and poor decisions from its creator, Sun Microsystems. "For the general populace," says LinkedIn principal staff engineer Jay Kreps, "Java is some annoying thing that really out-of-date websites try to make them download." And if you see it as anything more than that, you probably dismiss it as a way of building stodgy "middleware" tools that connect things like web servers and databases.
But over the past few years, Java has evolved into something very different. It has quietly become the primary foundation for most of the net's largest and most ambitious operations, including Google, LinkedIn, Tumblr, and Square, as well as Twitter. "It's everywhere," says Krikorian.
In the summer of 2011, Bob Lee -- the chief technology officer at Square and a former engineer at Google -- announced at a prominent software conference that the web was "on the cusp of a Java renaissance." Two years later, this renaissance is upon us. Like Twitter, many other companies have realized that Java is particularly well suited to building web services that can stand up to the massive amounts of traffic streaming across the modern internet.
"Java is really the only choice when it comes to the requirements for a company like ours -- extreme performance requirements and extreme scalability requirements," Lee says of Square, the San Francisco startup that processes $15 billion a year in credit and debit card transactions via mobile phones and tablets. "There is no viable alternative." >'Java is really the only choice when it comes to the requirements for a company like ours -- extreme performance requirements and extreme scalability requirements. There is no viable alternative' Bob Lee But there's a twist to this Java renaissance. It encompasses more than just Java.
That may sound like a paradox, but the thing to realize is that Java isn't one thing. It's two. It's a programming language, a way of writing software code. But it's also a "virtual machine" that executes code -- a foundational piece of software that sits on a computer server or a PC or a cell phone, providing a way of running applications at unusually fast speeds. Originally, the Java virtual machine -- aka the JVM -- only ran code built with the Java programming language, but today, it runs all sorts of other languages.
So, the web's big names are using the Java virtual machine as the foundation of their online services, installing the JVM across tens of thousands of servers, and they can then use this base to run code built in myriad languages -- from classic Java to a language called Clojure to a new and increasingly popular invention known as Scala -- picking just the right tool for the task at hand.
Twitter builds some of its code with the Java programming language, but it fashions the majority with Scala (a language that, for many programmers, lets you create software with an ease that eclipses Java) and a bit with Clojure (a language that feels like Lisp, a way of quickly scripting code that has been a mainstay for decades). LinkedIn mostly uses the Java programming language, while sprinkling in some Scala. But the common denominator is the JVM, software that has been finely tuned over the past fifteen years to run code at speed.
"There are so many different languages that run on it," Krikorian says. "I only have to worry about tuning and optimizing this one thing, and I can put it on all the hardware we run at Twitter. It's just easier." Just in Time for Twitter On August 3, Twitter set a new record for tweets in a single second. As thousands of people in Japan jumped onto the service to discuss the television airing of the animated film Castle in the Sky , it hit a one-second peak of 143,199 tweets. That's a massive spike over the norm -- about 5,700-tweets-per-second -- and the site stayed up. "Our users didn’t experience a blip," Krikorian recently wrote.
The moment was a far cry from the day Dmitry Medvedev visited Twitter HQ, and for Krikorian, it proves the worth of the company's new architecture.
Originally, Twitter was one, monolithic application built with Ruby on Rails. But now, it's divided into about two hundred self-contained services that talk to each other. Each runs atop the JVM, with most written in Scala and some in Java and Clojure. One service handles the Twitter homepage. Another handles the Twitter mobile site. A third handles the application programming interfaces, or APIs, that feed other operations across the net. And so on.
The setup helps Twitter deal with traffic spikes. Because the JVM is so efficient, it can handle much larger amounts of traffic with fewer machines. But the new operation is also more nimble. All these services are designed to communicate with each other, but if one goes down, it doesn't take the others down with it. The day we visited Krikorian at Twitter's offices this month, the Twitter homepage went dark for many people across the globe, but other services, including the company's mobile feed, kept on ticking.
From LinkedIn to Tumblr, many other big web names have adopted a similar "services architecture," and generally, they're building these services with Java or related languages. Java programmers are easy to come by, and compared to C and C++, the languages that rival its popularity, Java is rather easy to use. "It's the easiest of the fast languages," says LinkedIn's Kreps. But so much of this trend is driven by the JVM -- and its ability to run more than just the Java language.
The JVM provides what's called "just-in-time compilation." After writing software code, you have to compile it -- convert it into the native language spoken by the machine that will run it. Traditionally, developers compile their code into machine language and then ship it off to the computer in question. But with just-in-time, or JIT, compilation, you can compile code as it is executing, gaining some extra speed by tailoring the compilation to the behavior of the running application. Java still can't match the speed of languages like C and C++, but according to Krikorian, it comes close enough.
Plus, the JVM is specifically designed to run multiple tasks -- or threads -- at the same time, an essential part of running web services in the modern world. "Concurrency is more important than ever," says Lee. "There is really no platform that compares to Java in that respect. It lets you write concurrent code -- and extremely fast concurrent code." The JVM does this for Java code, but it also does this for Scala, Clojure, and more.
There was a time when many questioned the efficiency of the JVM. "I worked with Java a fair amount a long time ago," says Tumblr software engineer Mike Hurwitz. "I was glad to leave it behind." But nowadays, people like Hurwitz and Krikorian and Square's Lee sing a very different tune. "The great thing about the JVM is that there's a software library for everything," says Hurwitz. "If you want to solve a problem -- no matter how goofy -- there is likely something you can load up and use."
Ruby Derailed
In 2006, when Twitter built its micro-blogging service with Ruby on Rails, it wasn't alone. As the web experienced a rebirth in the mid-aughts, the programming tools of the moment were Ruby and PHP, two "dynamically typed" languages that let you build succinct code at an unusually fast clip. But time has shown that these languages just weren't suited to running the world's largest web services, and now they've taken a backseat to Java -- at least on the big stage.
"Ruby on Rails was great to get us to the point where we could make the decision to get off it," says Krikorian. With Java, he explains, Twitter needs about ten times fewer machines to run its site than it would need with Ruby. And unlike the Rails programming framework, Java and Scala let Twitter readily share and modify its enormous codebase across a team of hundreds of developers.
The Java language isn't quite as easy to use as Ruby, but for Krikorian and his engineers, Scala is. "Scala seems like a more modern language," he says. "It makes the transition from Ruby easier -- and it's just more fun." The exception that proves the rule is Facebook. Facebook was originally built with PHP, and it still runs on PHP. But to solve the scale problem, the social networking site has taken a page from the Java book, moving its PHP code onto a custom-built virtual machine that provides just-in-time compilation.
Facebook enjoys this sort of in-house hack. But so many others have just moved away from their original languages. Much like Twitter, Square switched to Java from Ruby. Tumblr migrated to Scala after juggling several other tools. No less a name than Google has moved towards Java from C -- though it still runs C in places.
Meanwhile, outside the programming world, Java is still portrayed as a security nightmare that no longer runs applications on PCs, laptops, and phones. And there's some truth to this. Late last year, a spate of new security bugs shined a harsh light on Java as a way of running software on most personal machines.
But thanks to a brand new virtual machine built specifically for mobile devices -- Google's Dalvik virtual machine -- the Java language has found new life on Android phones and tablets, where it's the primary means of building applications. And on servers, it's helping drive not only big-name web services but countless software applications used inside other businesses.
Java has continued to evolve, even in the face of corporate dithering from the late Sun Microsystems. Sun, for all its faults, was clever enough to open source the JVM, and Oracle, which acquired Sun in 2010, has proved to be a more active steward for the Java platform -- to the surprise of many.
As an open source project, the JVM is free for everyone to use, and anyone is free to build new software and even new programming languages that run atop it. In the wake of Scala, other developers are building a new language for the JVM called Ceylon, and if you like, you can even run Ruby atop the virtual machine, in the form of something called JRuby.
Companies such as Twitter and LinkedIn and Square are constantly building new Java tools from scratch, and in many cases, they're sharing this code with the rest of the world, much as Sun shared the JVM and other parts of Java. This open source code then spawns more open source code. And so on. "We all just pick and choose the things that meet our needs," says Square's Lee. "Companies like ours are building all sorts of custom infrastructure, but we also think it's very important to open source." The added benefit -- for all these companies -- is that, when the time comes, they can more easily move their services onto new types of hardware. They're not writing code for specific servers or processors. They're writing it for the JVM. So, when the world embraces a new type of server -- which is very much on the horizon -- these Java houses needn't rewrite everything. They can just move it to a new version of JVM.
In other words, they're ready for the next renaissance.
"
|
860 | 2,013 |
"Google Erects Fake Brain With ... Graphics Chips? | WIRED"
|
"https://www.wired.com/2013/05/gpus-in-the-data-center"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google Erects Fake Brain With ... Graphics Chips? Geoffrey Hinton (right), one of the machine-learning scientists hard at work on the Google Brain.
Photo: University of Toronto Save this story Save Save this story Save Your brain is a collection of neurons -- tiny cells that use electro-chemical signals to send and receive information. But as Google builds an artificial brain that will help drive everything from its web search engine to Google Street View to the voice-recognition app on Android smartphones , it's using very different materials. Among them: graphics microprocessors, the same sort of silicon chips that were first designed to process images and videos on your desktop computer.
That's the word from Geoffrey Hinton, the artificial intelligence guru who was recently hired by the search giant to continue work on the so-called Google Brain. When we spoke to Hinton just after his "deep learning" operation was acquired by Larry Page and company, he didn't provide specifics, but he said that Google is now using graphics processing units, or GPUs, to help power its brain-mimicking neural networks.
It's a counter-intuitive arrangement. Though GPUs were designed for processing images, video, and games, Google is using them in a more general way, as you would normally use a machine's main microprocessor, or CPU. But because they're so good at processing large amounts of information in parallel -- completing many small tasks at the same time -- GPUs can be applied to almost any computing task that requires some hefty horsepower.
"I can't comment on what Google is doing. But it's a natural fit. GPUs love big problems," says Ian Buck, a engineer at graphics chip maker Nvidia who founded the CUDA project, a software platform that helps developers build applications for GPUs. "They're designed to process huge amounts of information in parallel. Mimicking the human brain -- where you have billions of neurons all firing at the same time -- is really just one big parallel simulation." >'GPUs love big problems. They're designed to process huge amounts of information in parallel. Mimicking the human brain -- where you have billions of neurons all firing at the same time -- is really just one big parallel simulation.' Ian Buck Google is just one of many companies that are now using GPUs for all sorts of tasks inside the modern data center. The London-based Shazam is using GPUs to help identify songs and artists that match your particular music tastes. Salesforce has installed GPUs to analyze information streaming across millions of Twitter feeds. Amazon has long offered a cloud service that provides instant GPU power to anyone who wants it. And a San Francisco startup called imgix now provides an GPU-based online service that lets virtually any website rejigger images as they're served onto user PCs and mobile devices.
"The graphics processor is almost like a misnomer now," says imgix CEO and co-founder Chris Zacharias, who cut his teeth as a software engineer at Google and YouTube. "A GPU is just something that does a kind of mathematics, and those mathematics can be applied to many, many fields." GPUs have long lent their parallel processing power to a decent chunk of the world's supercomputers, those massive machines that run specialized scientific applications across tens of thousands of chips. These chips are ideal for, say, building a simulation of the world's weather patterns. About 50 of the planet's 500 fastest supercomputers now rely on GPUs, including the Oak Ridge National Laboratory machine that sits atop the list.
But these chips have only recently moved into the data centers that help drive the web. Amazon launched its GPU cloud service in 2010, and this spring, Nvidia revealed that Salesforce and Shazam were using Nvidia GPUs to power their online services. But Google's project takes the trend even further, potentially moving GPUs into some of the web's most widely used services, including the primary Google search engine.
Salesforce declined to comment on its use of GPUs, and Shazam wasn't immediately available to discuss its GPU work. But according to Nvidia and public documents discussing these two projects, both are tapping GPUs for their raw parallel processing power. In a public presentation, Salesforce engineer Brendan Wood says the company uses GPUs to search vast numbers of tweets and other social networking posts for certain keywords. The company's "Marketing Cloud" analyzes about 500 million incoming tweets a day, looking for about a million different keywords.
This has nothing to do with graphics processing. But if need be, these chips can certainly be applied to graphics services, the sort of thing they were originally designed for. imgix has built up a GPU-powered infrastructure that can re-crop and re-format web images in real-time, as they're served onto end-user machines. If someone visits your site with an Apple iPad, for instance, imgix can instantly resize the image for the tablet's Retina display. The company plans to eventually rejig videos in similar ways.
Nvidia, one of the world's leading graphics chip makers, has spent years trumpeting the GPU as the future of massively parallel processing. But now it appears that this future is finally here. Where Google goes, the rest of the web follows.
Additional reporting by Robert McMillan
"
|
861 | 2,013 |
"Return of the Borg: How Twitter Rebuilt Google's Secret Weapon | WIRED"
|
"https://www.wired.com/2013/03/google-borg-twitter-mesos"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Return of the Borg: How Twitter Rebuilt Google's Secret Weapon Illustration: Ross Patton Save this story Save Save this story Save John Wilkes says that joining Google was like swallowing the red pill in The Matrix.
Four years ago, Wilkes knew Google only from the outside. He was among the millions whose daily lives so deeply depend on things like Google Search and Gmail and Google Maps. But then he joined the engineering team at the very heart of Google's online empire, the team of big thinkers who design the fundamental hardware and software systems that drive each and every one of the company's web services.
These systems span a worldwide network of data centers, responding to billions of online requests with each passing second, and when Wilkes first saw them in action, he felt like Neo as he downs the red pill, leaves the virtual reality of the Matrix, and suddenly lays eyes on the vast network of machinery that actually runs the thing. He was gobsmacked at the size of it all – and this was a man who had spent more than 25 years as a researcher at HP Labs, working to push the boundaries of modern computing.
"I'm an old guy. Megabytes were big things," Wilkes says , in describing the experience. "But when I came to Google, I had to add another three zeros to all my numbers." Google is a place, he explains, where someone might receive an emergency alert because a system that stores data is down to its last few petabytes of space. In other words, billions of megabytes can flood a fleet of Google machines in a matter of hours.
Then, as he was still trying to wrap his head around the enormity of Google's data-center empire, John Wilkes went to work on the software system that orchestrates the whole thing.
This software system is called Borg, and it's one of the best-kept secrets of Google's rapid evolution into the most dominant force on the web. Wilkes won't even call it Borg. "I prefer to call it the system that will not be named," he says. But he will tell us that Google has been using the system for a good nine or 10 years and that he and his team are now building a new version of the tool, codenamed Omega.
Borg is a way of efficiently parceling work across Google's vast fleet of computer servers, and according to Wilkes, the system is so effective, it has probably saved Google the cost of building an extra data center. Yes, an entire data center. That may seem like something from another world – and in a way, it is – but the new-age hardware and software that Google builds to run its enormous online empire usually trickles down to the rest of the web.
And Borg is no exception.
At Twitter, a small team of engineers has built a similar system using a software platform originally developed by researchers at the University of California at Berkeley. Known as Mesos, this software platform is open source – meaning it's freely available to anyone – and it's gradually spreading to other operations as well.
The Borg moniker is only appropriate. Google's system provides a central brain for controlling tasks across the company's data centers. Rather than building a separate cluster of servers for each software system – one for Google Search, one for Gmail, one for Google Maps, etc. – Google can erect a cluster that does several different types of work at the same time. All this work is divided into tiny tasks, and Borg sends these tasks wherever it can find free computing resources, such as processing power or computer memory or storage space.
Wilkes says it's like taking a massive pile of wooden blocks – blocks of all different shapes and sizes – and finding a way to pack all those blocks into buckets. The blocks are the computer tasks. And the buckets are the servers. The trick is to make sure you never waste any of the extra space in the buckets.
"If you just throw the blocks in the buckets, you'll either have a lot of building blocks left over – because they didn't fit very well – or you'll have a bunch of buckets that are full and a bunch that are empty, and that's wasteful," Wilkes says. "But if you place the blocks very carefully, you can have fewer buckets." ‘Mesos makes it easier for Twitter engineers to think about running their applications across a data center. And that’s really powerful.’ Ben Hindman Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg There are other ways of doing this. You could use what's known as server virtualization. But virtualization provides an extra layer of complexity you may not need, and in cutting this out, Wilkes says, Google can reduce the size of its infrastructure by a few percent. At Google's size, that amounts to an entire facility. "It's another data center we can not build," Wilkes says. "A few percent here, a few percent there, and all of the sudden, you're talking about huge amounts of money." At Twitter, Mesos doesn't have quite the same effect. Twitter's operation is significantly smaller. But the Twitterverse is always growing, and Mesos gives the company a better way to handle that growth. Borg and Mesos don't just wring extra computing power out of a server cluster. They let companies like Google and Twitter treat a data center like a single machine.
Google and Twitter can run software on these massive computing facilities in much the same way you run software on your desktop PC – and that simplifies the lives of all those engineers who build things like Gmail and Google Maps and any number of Twitter applications.
"Mesos makes it easier for Twitter engineers to think about running their applications across a data center," says Ben Hindman, who founded the Mesos project at UC Berkeley and now oversees its use at Twitter. "And that's really powerful." It's a Data Center. But It Looks Like a Chip Borg and Mesos are big things. But to understand them, it's best to think small, and a good place to start is one of the experimental computer chips Intel would send to Ben Hindman.
This was about five years ago, when Hindman was still at UC Berkeley, working on a computer science Ph.D., and the chips were "multi-core processors." Traditionally, the computer processor – the brain at the center of a machine – ran one task at a time. But a multi-core processor lets you run many tasks in parallel. Basically, it's a single chip that includes many processors, or processor cores.
At UC Berkeley, Ben Hindman's aim was to spread computing tasks across these chips as efficiently as possible. Intel would send him chips. He would wire them together, creating machines that spanned 64 or even 128 cores. And then he worked to build a system that could take multiple software applications and run them evenly across all those cores, sending each task wherever it could locate free processing power.
"What we found is that applications were smart about scheduling their computations across these computing resources, but they were also greedy. They would ignore other applications that might be running and just grab everything for themselves," Hindman says. "So we built a system that would only give an application access to a certain number of cores, and give others to another application. And those allocations might change over time."
Hindman was working with a single computer. But as it turns out, he could apply the basic system to an entire data center. "Sixty-four cores or 128 cores on a single chip looks a lot like 64 machines or 128 machines in a data center," he says. And that's what he did. But it happened by accident.
While Hindman was working with his multi-core processors, some friends of his – Andy Konwinski and Matei Zaharia – were in another part of the Berkeley computer science department, working on software platforms that run across massive data centers. These are called "distributed systems," and they now provide the backbone for most of today's big web services. They include things like Hadoop, a way of crunching data using a sea of servers , and various "NoSQL" databases, which store information across many machines.
Then, Hindman and his friends decided they should work on a project together – if only because they liked each other. But they soon realized their two areas of research – which seemed so different – were completely complementary.
Traditionally, you run a distributed system like Hadoop on one massive server cluster. Then, if you want to run another distributed system, you set up a second cluster. But Hindman and his pals soon found that they could run distributed systems more efficiently if they applied the lessons learned from Hindman's chip project. Just as Hindman had worked to run many applications on a multi-core processor, they could build a platform that could run many distributed systems across a single server cluster.
The result was Mesos.
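Mesos's published design hands out "resource offers": the master offers free slices of each machine to the frameworks sharing the cluster, and each framework takes only what it needs. Here is a minimal sketch of that idea; the class names, the in-order walk, and the CPU-only accounting are simplifying assumptions, not the real Mesos API.

```python
# A stripped-down imitation of Mesos-style resource offers. The real
# master offers multi-dimensional resources and rotates offers for
# fairness; this toy tracks only CPUs and walks frameworks in order.
class Framework:
    def __init__(self, name, cpus_needed):
        self.name = name
        self.cpus_needed = cpus_needed

    def consider(self, offered_cpus):
        """Accept up to what we still need; implicitly decline the rest."""
        taken = min(offered_cpus, self.cpus_needed)
        self.cpus_needed -= taken
        return taken

def offer_resources(free_cpus_by_node, frameworks):
    for node, free in free_cpus_by_node.items():
        for framework in frameworks:
            taken = framework.consider(free)
            free -= taken
            if taken:
                print(f"{framework.name} takes {taken} cpus on {node}")

offer_resources({"node-1": 8, "node-2": 8},
                [Framework("hadoop", 10), Framework("storm", 4)])
# hadoop takes 8 cpus on node-1
# hadoop takes 2 cpus on node-2
# storm takes 4 cpus on node-2
```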
'We Miss Borg'
In March 2010, about a year into the Mesos project, Hindman and his Berkeley colleagues gave a talk at Twitter. At first, he was disappointed. Only about eight people showed up. But then Twitter's chief scientist told him that eight people was a lot – about ten percent of the company's entire staff. And then, after the talk, three of those people approached him.
These were Twitter engineers who had once worked at Google: John Sirois, Travis Crawford, and Bill Farner. They told Hindman that they missed Borg, and that Mesos seemed like the perfect way to rebuild it.
Soon, Hindman was consulting at Twitter, working hand-in-hand with those ex-Google engineers and others to expand the project. Then he joined the company as an intern. And, a year after that, he signed on as a full-time employee. "My boss at the time said: 'You could have vested a year's worth of Twitter stock! What are you thinking!?'" Hindman remembers. He and his fellow engineers continued to run Mesos as an open source software project, but at Twitter, he also worked to move the platform into the company's data center and fashion something very similar to Google's Borg.
Google wasn't officially part of this effort. But the company helps fund the Berkeley AMP Lab, where the Mesos project was gestated, and those working on Mesos have regularly traded ideas with Googlers like John Wilkes. "We discovered they were doing it – and I started arranging for them to come down here every six months or so, just to have a chat," Wilkes says.
Andy Konwinski, one of the other founders of the Mesos project, also interned at Google and spent part of that time working under Wilkes. "There was never any explicit information exchanged about specific systems run inside of Google – because Google is pretty secretive about those things," Konwinski says. "But there was a lot of very helpful feedback – at a high level – about what the problems were, what we should be looking at."
These are known as "server cluster management systems," following in the footsteps of similar tools built in years past to run supercomputers and services like the Sun Grid Engine.
Both Omega and Mesos let you run multiple distributed systems atop the same cluster of servers. Rather than run one cluster for Hadoop and one for Storm – a tool for processing massive streams of data in real time – you can move them both onto one collection of machines. "This is the way to go," Wilkes says. "It can increase efficiency – which is why we do it."
The tools also provide an interface that software designers can then use to run their own applications atop Borg or Mesos. At Twitter, this interface is codenamed Aurora. A team of engineers, for example, can use Aurora to run Twitter's advertising system. At the moment, Hindman says, about 20 percent of the company's services run atop Mesos in this way.
Currently, Wilkes says, Google provides all sorts of dials that engineers can use to allot resources to their applications. But with Omega, the aim is to handle more of this behind the scenes, so that engineers needn't worry about the details. "Think of it as an automatic car versus a manual," he says. "You want to go fast. You shouldn't have to tune the compression ratio or the inlet manifold used for the turbocharger for the engine." Omega is still under development, but the company is beginning to test prototypes in its live data centers.
Attack of the Clones
According to Wilkes, Google plans to publish a research paper on Borg (though he still won't use the name). The web giant often keeps a tight lip when it comes to the systems that underpin its online empire – it sees these technologies as among the most important of its advantages over the competition – but once these tools have reached a certain maturity, the company will open the curtains.
Between this planned paper and the rise of Mesos at Twitter, the Borg model is poised to spread even further across the web. Other companies are already using Mesos – including AirBNB and Conviva, another company with close ties to UC Berkeley – and Wilkes believes the basic idea could significantly change the way companies run distributed systems.
Yes, there are other ways of efficiently spreading workloads across a cluster of servers. You could use virtualization, where you run virtual servers atop your physical machines and then load them with whatever software you like. But with Borg and Mesos, you don't have to worry about juggling all those virtual machines.
"The interface is the most important thing. The interface that virtualization gives you is a new machine. We didn't want that. We wanted something simpler," Hindman says. "We wanted people to be able to program for the data center just like they program for their laptop." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg ‘We wanted people to be able to program for the data center just like they program for their laptop.’ Ben Hindman Wilkes says much the same thing. "If you're an engineer and you bring up a virtual machine, you get something that looks like just another piece of hardware. You have to bring up an operating system on it. You have to administer it. You have to update it. You have to do all the stuff you have to do with a physical machine," he says.
"But maybe that's not the most useful way for an engineer to spend their time. What they really want to do is run their application. And we give them a way to do that – without dealing with virtual machines." Clearly, many engineers prefer working with raw virtual machines. This is what they get from Amazon EC2, and Amazon's cloud computing service has become a hugely popular way to build an run software applications – so popular that countless companies are trying to provide developers and businesses with similar tools.
Charles Reiss – a Berkeley graduate student who interned at Google under John Wilkes and has seen Borg in action – doesn't believe this existing system offers an enormous advantage over the alternatives. "I don't think it's super impressive – beyond having just tons of engineering hours poured into it," he says. But Omega, he adds, is another matter.
With Omega, Google aims to make the process ever smoother – much like Twitter has done with Mesos and Aurora – and in the long term, others will surely follow their lead. Google and Twitter treat the data center like one big computer, and eventually, that's where the world will end up. This is the way computer science always progresses. We start with an interface that's complicated and we move to one that's not. It happens on desktops and laptops and servers. And now, it's happening with data centers too.
"
|
862 | 2,012 |
"Exclusive: Inside Google Spanner, the Largest Single Database on Earth | WIRED"
|
"https://www.wired.com/2012/11/google-spanner-time"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Exclusive: Inside Google Spanner, the Largest Single Database on Earth Save this story Save Save this story Save Each morning, when Andrew Fikes sat down at his desk inside Google headquarters in Mountain View, California, he turned on the "VC" link to New York.
VC is Google shorthand for video conference. Looking up at the screen on his desk, Fikes could see Wilson Hsieh sitting inside a Google office in Manhattan, and Hsieh could see him. They also ran VC links to a Google office in Kirkland, Washington, near Seattle. Their engineering team spanned three offices in three different parts of the country, but everyone could still chat and brainstorm and troubleshoot without a moment's delay, and this is how Google built Spanner.
"You walk into our cubes, and we've got VC on -- all the time," says Fikes, who joined Google in 2001 and now ranks among the company's distinguished software engineers. "We've been doing this for years. It lowers all the barriers to communication that you typically have." >'As a distributed-systems developer, you're taught from -- I want to say childhood -- not to trust time. What we did is find a way that we could trust time -- and understand what it meant to trust time.' Andrew Fikes The arrangement is only appropriate. Much like the engineering team that created it, Spanner is something that stretches across the globe while behaving as if it's all in one place.
Unveiled this fall after years of hints and rumors, it's the first worldwide database worthy of the name -- a database designed to seamlessly operate across hundreds of data centers and millions of machines and trillions of rows of information.
Spanner is a creation so large, some have trouble wrapping their heads around it. But the end result is easily explained: With Spanner, Google can offer a web service to a worldwide audience, but still ensure that something happening on the service in one part of the world doesn't contradict what's happening in another.
Google's new-age database is already part of the company's online ad system -- the system that makes its millions -- and it could signal where the rest of the web is going. Google caused a stir when it published a research paper detailing Spanner in mid-September, and the buzz was palpable among the hard-core computer systems engineers when Wilson Hsieh presented the paper at a conference in Hollywood, California, a few weeks later.
"It's definitely interesting," says Raghu Murthy, one of the chief engineers working on the massive software platform that underpins Facebook -- though he adds that Facebook has yet to explore the possibility of actually building something similar.
Google's web operation is significantly more complex than most, and it's forced to build custom software that's well beyond the scope of most online outfits. But as the web grows, its creations so often trickle down to the rest of the world.
Before Spanner was revealed, many didn't even think it was possible. Yes, we had "NoSQL" databases capable of storing information across multiple data centers, but they couldn't do so while keeping that information "consistent" -- meaning that someone looking at the data on one side of the world sees the same thing as someone on the other side. The assumption was that consistency was barred by the inherent delays that come when sending information between data centers.
But in building a database that was both global and consistent, Google's Spanner engineers did something completely unexpected. They have a history of doing the unexpected. The team includes not only Fikes and Hsieh, who oversaw the development of BigTable, Google's seminal NoSQL database, but also legendary Googlers Jeff Dean and Sanjay Ghemawat and a long list of other engineers who worked on such groundbreaking data-center platforms as Megastore and Dremel.
This time around, they found a new way of keeping time.
"As a distributed systems developer, you're taught from -- I want to say childhood -- not to trust time," says Fikes. "What we did is find a way that we could trust time -- and understand what it meant to trust time." Time Is of the Essence On the net, time is of the essence. Yes, in running a massive web service, you need things to happen quickly. But you also need a means of accurately keeping track of time across the many machines that underpin your service. You have to synchronize the many processes running on each server, and you have to synchronize the servers themselves, so that they too can work in tandem. And that's easier said than done.
Typically, data-center operators keep their servers in sync using what's called the Network Time Protocol, or NTP. This is essentially an online service that connects machines to the official atomic clocks that keep time for organizations across the world. But because it takes time to move information across a network, this method is never completely accurate, and sometimes, it breaks altogether. In July, several big-name web operations experienced problems -- including Reddit, Gawker, and Mozilla -- because their software wasn't prepared to handle a "leap second" that was added to the world's atomic clocks.
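For a sense of what NTP-based timekeeping looks like from application code, here is a small sketch using the third-party ntplib package -- you ask an outside server how far off your local clock is, and you inherit whatever network delay and server trouble lies in between:

    # pip install ntplib -- asks a public NTP pool server for our clock offset.
    import ntplib

    response = ntplib.NTPClient().request("pool.ntp.org", version=3)
    print(f"local clock is off by {response.offset:+.4f} seconds")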
>'We wanted something that we were confident in. It's a time reference that's owned by Google.' Andrew Fikes But with Spanner, Google discarded the NTP in favor of its own time-keeping mechanism. It's called the TrueTime API. "We wanted something that we were confident in," Fikes says. "It's a time reference that's owned by Google." Rather than rely on outside clocks, Google equips its Spannerized data centers with its own atomic clocks and GPS (global positioning system) receivers, not unlike the one in your iPhone. Tapping into a network of satellites orbiting the Earth, a GPS receiver can pinpoint your location, but it can also tell time.
These time-keeping devices connect to a certain number of master servers, and the master servers shuttle time readings to other machines running across the Google network. Basically, each machine on the network runs a daemon -- a background software process -- that is constantly checking with masters in the same data center and in other Google data centers, trying to reach a consensus on what time it is. In this way, machines across the Google network can come pretty close to running a common clock.
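The Spanner paper makes the residual uncertainty explicit: TrueTime's now() call returns not a single timestamp but an interval guaranteed to contain the true time. A rough sketch of that contract in Python, with a purely illustrative uncertainty bound:

    import time
    from dataclasses import dataclass

    EPSILON = 0.007  # clock uncertainty in seconds; the figure is invented

    @dataclass
    class TTInterval:
        earliest: float
        latest: float

    def tt_now():
        t = time.time()
        return TTInterval(t - EPSILON, t + EPSILON)

    def tt_after(t):
        """True only if t has definitely passed, on every clock."""
        return tt_now().earliest > t

    def tt_before(t):
        """True only if t has definitely not yet arrived."""
        return tt_now().latest < t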
'The System Responds -- And Not a Human'

How does this bootstrap a worldwide database? Thanks to the TrueTime service, Google can keep its many machines in sync -- even when they span multiple data centers -- and this means they can quickly store and retrieve data without stepping on each other's toes.
"We can commit data at two different locations -- say the West Coast [of the United States] and Europe -- and still have some agreed upon ordering between them," Fikes says, "So, if the West Coast write happens first and then the one in Europe happens, the whole system knows that -- and there's no possibility of them being viewed in a different order." >'By using highly accurate clocks and a very clever time API, Spanner allows server nodes to coordinate without a whole lot of communication.' Andy Gross According to Andy Gross -- the principal architect at Basho, an outfit that builds an open source database called Riak that's designed to run across thousands of servers -- database designers typically seek to synchronize information across machines by having them talk to each other. "You have to a do a whole lot of communication to decide the correct order for all the transactions," he told us this fall , when Spanner was first revealed.
The problem is that this communication can bog down the network -- and the database. As Max Schireson -- the president of 10gen, maker of the NoSQL database MongoDB -- told us: "If you have large numbers of people accessing large numbers of systems that are globally distributed so that the delay in communications between them is relatively long, it becomes very hard to keep everything synchronized. If you increase those factors, it gets even harder." So Google took a completely different tack. Rather than struggle to improve communication between servers, it gave them a new way to tell time. "That was probably the coolest thing about the paper: using atomic clocks and GPS to provide a time API," says Facebook's Raghu Murthy.
In harnessing time, Google can build a database that's both global and consistent, but it can also make its services more resistant in the face of network delays, data-center outages, and other software and hardware snafus. Basically, Google uses Spanner to accurately replicate its data across multiple data centers -- and quickly move between replicas as need be. In other words, the replicas are consistent too.
When one replica is unavailable, Spanner can rapidly shift to another. But it will also move between replicas simply to improve performance. "If you have one replica and it gets busy, your latency is going to be high. But if you have four other replicas, you can choose to go to a different one, and trim that latency," Fikes says.
One effect, Fikes explains, is that Google spends less money managing its system. "When there are outages, things just sort of flip -- client machines access other servers in the system," he says. "It's a much easier service story.... The system responds -- and not a human."

Spanning Google's Footsteps

Some have questioned whether others can follow in Google's footsteps -- and whether they would even want to. When we spoke to Andy Gross, he guessed that even Google's atomic clocks and GPS receivers would be prohibitively expensive for most operations.
Yes, rebuilding the platform would be a massive undertaking. Google has already spent four and a half years on the project, and Fikes -- who helped build Google's web history tool, its first product search service, and Google Answers, as well as BigTable -- calls Spanner the most difficult thing he has ever worked on. What's more, there are countless logistical issues that need dealing with.
>'The important thing to think about is that this is a service that is provided to the data center. The costs of that are amortized across all the servers in your fleet. The cost per server is some incremental amount -- and you weigh that against the types of things we can do for that.' Andrew Fikes As Fikes points out, Google had to install GPS antennas on the roofs of its data centers and connect them to the hardware below. And, yes, you do need two separate types of time keepers. Hardware always fails, and your time keepers must fail at, well, different times. "The atomic clocks provide stability if there is a GPS issue," he says.
But according to Fikes, these are relatively inexpensive devices. The GPS units aren't as cheap as those in your iPhone, but like Google's atomic clocks, they cost no more than a few thousand dollars apiece. "They're sort of in the order of the cost of an enterprise server," he says, "and there are a lot of different vendors of these devices." When we discussed the matter with Jeff Dean -- one of Google's primary infrastructure engineers and another name on the Spanner paper -- he indicated much the same.
Fikes also makes a point of saying that the TrueTime service does not require specialized servers. The time keepers are kept in racks alongside the servers, and again, they need only connect to some machines in the data center.
"You can think of it as only a handful of these devices being in each data center. They're boxes. You buy them. You plug them into your rack. And you're going to connect to them over Ethernet," Fikes says. "The important thing to think about is that this is a service that is provided to the data center. The costs of that are amortized across all the servers in your fleet. The cost per server is some incremental amount -- and you weigh that against the types of things we can do for that." No, Spanner isn't something every website needs today. But the world is moving in its general direction. Though Facebook has yet to explore something like Spanner, it is building a software platform called Prism that will run the company's massive number crunching tasks across multiple data centers.
Yes, Google's ad system is enormous, but it benefits from Spanner in ways that could benefit so many other web services. The Google ad system is an online auction -- where advertisers bid to have their ads displayed as someone searches for a particular item or visits particular websites -- and the appearance of each ad depends on data describing the behavior of countless advertisers and web surfers across the net. With Spanner, Google can juggle this data on a global scale, and it can still keep the whole system in sync.
As Fikes put it, Spanner is just the first example of Google taking advantage of its new hold on time. "I expect there will be many others," he says. He means other Google services, but there's a reason the company has now shared its Spanner paper with the rest of the world.
"
|
863 | 2,012 |
"Google's Dremel Makes Big Data Look Small | WIRED"
|
"https://www.wired.com/2012/08/googles-dremel-makes-big-data-look-small"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google's Dremel Makes Big Data Look Small Mike Olson is one of the main brains behind the Hadoop movement. But even he looks toward the new breed of "Big Data" software used inside Google.
Photo: Wired.com/Jon Snyder Save this story Save Save this story Save Mike Olson runs a company that specializes in the world's hottest software. He's the CEO of Cloudera , a Silicon Valley startup that deals in Hadoop, an open source software platform based on tech that turned Google into the most dominant force on the web.
Hadoop is expected to fuel an $813 million software market by the year 2016. But even Olson says it's already old news.
Hadoop sprang from two research papers Google published in late 2003 and 2004. One described the Google File System, a way of storing massive amounts of data across thousands of dirt-cheap computer servers, and the other detailed MapReduce, which pooled the processing power inside all those servers and crunched all that data into something useful. Eight years later, Hadoop is widely used across the web, for data analysis and all sorts of other number-crunching tasks. But Google has moved on.
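The programming model is easy to show at toy scale. This single-machine sketch is not Hadoop's API, just the idea: map emits key/value pairs, the framework groups them by key, and reduce aggregates each group.

    from collections import defaultdict

    def map_phase(doc):
        for word in doc.split():
            yield word, 1          # emit a count of one per word

    def reduce_phase(word, counts):
        return word, sum(counts)   # aggregate the group for this key

    docs = ["the web is big", "the data is big"]
    groups = defaultdict(list)     # the "shuffle": group values by key
    for doc in docs:
        for word, n in map_phase(doc):
            groups[word].append(n)

    print(dict(reduce_phase(w, ns) for w, ns in groups.items()))
    # {'the': 2, 'web': 1, 'is': 2, 'big': 2, 'data': 1}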
In 2009, the web giant started replacing GFS and MapReduce with new technologies , and Mike Olson will tell you that these technologies are where the world is going. "If you want to know what the large-scale, high-performance data processing infrastructure of the future looks like, my advice would be to read the Google research papers that are coming out right now ," Olson said during a recent panel discussion alongside Wired.
'If you want to know what the large-scale, high-performance data processing infrastructure of the future looks like, my advice would be to read the Google research papers that are coming out right now.' — Mike Olson

Since the rise of Hadoop, Google has published three particularly interesting papers on the infrastructure that underpins its massive web operation. One details Caffeine, the software platform that builds the index for Google's web search engine.
Another shows off Pregel, a "graph database" designed to map the relationships between vast amounts of online information. But the most intriguing paper is the one that describes a tool called Dremel.
"If you had told me beforehand me what Dremel claims to do, I wouldn't have believed you could build it," says Armando Fox , a professor of computer science at the University of California, Berkeley who specializes in these sorts of data-center-sized software platforms.
Dremel is a way of analyzing information. Running across thousands of servers, it lets you "query" large amounts of data, such as a collection of web documents or a library of digital books or even the data describing millions of spam messages. This is akin to analyzing a traditional database using SQL, the Structured Query Language that has been widely used across the software world for decades. If you have a collection of digital books, for instance, you could run an ad hoc query that gives you a list of all the authors -- or a list of all the authors who cover a particular subject.
"You have a SQL-like language that makes it very easy to formulate ad hoc queries or recurring queries -- and you don't have to do any programming. You just type the query into a command line," says Urs Hölzle, the man who oversees the Google infrastructure.
The difference is that Dremel can handle web-sized amounts of data at blazing fast speed. According to Google's paper, you can run queries on multiple petabytes -- millions of gigabytes -- in a matter of seconds.
Hadoop already provides tools for running SQL-like queries on large datasets. Sister projects such as Pig and Hive were built for this very reason. But with Hadoop, there's lag time. It's a "batch processing" platform. You give it a task. It takes a few minutes to run the task -- or a few hours. And then you get the result. But Dremel was specifically designed for instant queries.
"Dremel can execute many queries over such data that would ordinarily require a sequence of MapReduce jobs, but at a fraction of the execution time," reads Google's Dremel paper. Hölzle says it can run a query on a petabyte of data in about three seconds.
According to Armando Fox, this is unprecedented. Hadoop is the centerpiece of the "Big Data" movement, a widespread effort to build tools that can analyze extremely large amounts of information. But with today's Big Data tools, there's often a drawback. You can't quite analyze the data with the speed and precision you expect from traditional data analysis or "business intelligence" tools. But with Dremel, Fox says, you can.
"They managed to combine large-scale analytics with the ability to really drill down into the data, and they've done it in a way that I wouldn't have thought was possible," he says. "The size of the data and the speed with which you can comfortably explore the data is really impressive. People have done Big Data systems before, but before Dremel, no one had really done a system that was that big and that fast.
"Usually, you have to do one or the other. The more you do one, the more you have to give up on the other. But with Dremel, they did both." >'Before Dremel, no one had really done a system that was that big and that fast. Usually, you have to do one or the other. The more you do one, the more you have to give up on the other. But with Dremel, they did both.' — Armando Fox According to Google's paper, the platform has been used inside Google since 2006, with "thousands" of Googlers using it to analyze everything from the software crash reports for various Google services to the behavior of disks inside the company's data centers. Sometimes, the tool is used with tens of servers, sometimes with thousands.
Despite Hadoop's undoubted success, Cloudera's Mike Olson says that the companies and developers who built the platform were rather slow off the blocks. And we're seeing the same thing with Dremel. Google published the Dremel paper in 2010, but we're still a long way from seeing the platform mimicked by developers outside the company. A team of Israeli engineers is building a clone they call OpenDremel, though one of these developers, David Gruzman, tells us that coding is only just beginning again after a long hiatus.
Mike Miller -- an affiliate professor of particle physics at the University of Washington and the chief scientist of Cloudant, a company that's tackling many of the same data problems Google has faced over the years -- is amazed we haven't seen some big-name venture capitalist fund a startup dedicated to reverse-engineering Dremel.
That said, you can use Dremel today -- even if you're not a Google engineer. Google now offers a Dremel web service it calls BigQuery.
You can use the platform via an online API, or application programming interface. Basically, you upload your data to Google, and it lets you run queries on its internal infrastructure.
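Today that looks like ordinary client code. A minimal sketch with the google-cloud-bigquery Python library -- the project, dataset, and columns here are invented, and you'd need Google Cloud credentials configured:

    # pip install google-cloud-bigquery; assumes application-default credentials.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT author, COUNT(*) AS titles
        FROM `my_project.library.books`   -- hypothetical table
        WHERE subject = 'distributed systems'
        GROUP BY author
        ORDER BY titles DESC
    """
    for row in client.query(query).result():
        print(row.author, row.titles)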
This is part of a growing number of cloud services offered by the company. First, it let you build, run, and host entire applications atop its infrastructure using a service called Google App Engine, and now it offers various other utilities that run atop this same infrastructure, including BigQuery and the Google Compute Engine, which serves up instant access to virtual servers.
The rest of the world may lag behind Google. But Google is bringing itself to the rest of the world.
"
|
864 | 2,023 |
"The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis | WIRED"
|
"https://www.wired.com/story/culturepulse-ai-israeli-palestinian-crisis"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons David Gilbert The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis Photograph: MAHMUD HAMS/Getty Images Save this story Save Save this story Save Training artificial intelligence models does not typically involve coming face-to-face with an armed soldier who is pointing a gun at you and shouting at your driver to get out of the car. But the system that F. LeRon Shults and Justin Lane, cofounders of CulturePulse, are developing for the United Nations is not a typical AI model.
“I got pulled over by the [Israeli] military, by a guy holding [a military rifle] because we had a Palestinian taxi driver who drove past a line he wasn't supposed to,” Shults tells WIRED. “So that was an adventure.” Shults and Lane were in the West Bank in September, just weeks before Hamas attacked Israel on October 7, sparking what has become one of the worst periods of violence in the region in at least 50 years.
Shults and Lane—both Americans who are now based in Europe—were on the ground as part of a contract they signed with the UN in August to develop a first-of-its-kind AI model that they hope will help analyze solutions to the Israel-Palestinian conflict.
Shults and Lane are aware that claiming that AI could "solve the crisis" between Israelis and Palestinians is likely to result in a lot of eye-rolling, if not outright hostility, especially given the horrific scenes coming out of Gaza daily. So they are quick to dispel the notion that this is what they are trying to do.
“Quite frankly, if I were to phrase it that way, I'd roll my eyes too,” Shults says. “The key is that the model is not designed to resolve the situation; it's to understand, analyze, and get insights into implementing policies and communication strategies.” The conflict in the region is centuries old and deeply complex, and it's made even more complicated by the current crisis. Countless efforts at finding a political solution have failed, and any eventual end to the crisis will need support not just from the two sides involved, but likely the wider international community. All of this makes it impossible for an AI system to simply spit out a fully formed solution. Instead, CulturePulse aims to pinpoint the underlying causes of the conflict.
“We know that you can't solve a problem this complex with a single AI system. That's not ever going to be feasible in my opinion,” Lane tells WIRED. “What is feasible is using an intelligent AI system—using a digital twin of a conflict—to explore the potential solutions that are there.” The digital twin Lane is speaking of is the multi-agent AI model CulturePulse is building, which will ultimately allow them to create a virtual version of the region. In past iterations, the model has virtually replicated every single person, each imbued with demographics, religious beliefs, and moral values that echo their real-world counterparts, according to Shults and Lane.
In total, CulturePulse’s models can factor in over 80 categories to each “agent,” including traits like anger, anxiety, personality, morality, family, friends, finances, inclusivity, racism, and hate speech, though not all characteristics are used in all models.
“These models are entire artificial societies, with thousands or millions of simulated adaptive artificially intelligent agents that are networked with each other, and they're designed in a way that is more psychologically realistic and more sociologically realistic,” Shults says. “Basically you have a laboratory, an artificial laboratory, that you can play with on your PC in ways that you could never do ethically, certainly, in the real world.” The current project will initially model the socio-ecological aspects of the Israeli-Palestinian region that are relevant to the conflict, meaning it is smaller in scale than some of their previous projects. However, should the project be expanded in the future, a model could allow the UN to see how the virtual society would react to changes in economic prosperity, heightened security, changing political influences, and a range of other parameters. Shults and Lane claim the model's predictions match real-world outcomes with over 95 percent confidence.
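At its simplest, the mechanics they describe look something like the toy below -- a network of agents whose traits drift under the influence of their neighbors. Everything here (the single trait, the update rule, the numbers) is invented for illustration; CulturePulse's actual model is proprietary and far richer.

    import random

    class Agent:
        def __init__(self):
            self.anger = random.random()   # stand-in for one of the ~80 traits
            self.neighbors = []

    society = [Agent() for _ in range(1000)]
    for a in society:
        a.neighbors = random.sample(society, 5)  # crude random social network

    for _ in range(100):                   # run the artificial society forward
        for a in society:
            avg = sum(n.anger for n in a.neighbors) / len(a.neighbors)
            a.anger = 0.9 * a.anger + 0.1 * avg   # drift toward the neighborhood

    print(sum(a.anger for a in society) / len(society))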
“It goes beyond just learning randomly and finding patterns like machine learning, and it goes beyond statistics, which gives you correlations,” Shults says. “It actually gets to a causality, because of the multi-agent AI system which grows the conflict, or the polarization, or the peaceful immigration policy from the ground up. So it shows you what you want to create before you try it out in the real world.” Discussions around AI and the Israel-Hamas war have so far been focused on the threat posed by generative AI to push disinformation. While those threats have yet to materialize, news cycles have been clouded by disinformation and misinformation being shared by all sides.
Rather than trying to eliminate this disruptive element, CulturePulse’s model has in the past factored this type of information directly into its analysis.
“We actually deliberately want to make sure that those materials that are biased are being put into these models. They just need to be put into the model in a psychologically real way,” Lane says.
The horrific massacres and humanitarian crises happening in Israel and Gaza over the past month have brought home the pressing need for a solution to the deeply rooted conflict. But before the latest outbreak in violence in the region, the UN Development Program (UNDP) was already exploring new options in trying to find a resolution, signing an initial five-month contract with CulturePulse in August.
The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.
Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computation modeling and cognition could be used to understand issues around religious violence.
In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.
“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.
The result of their work was a study published in 2018 in The Journal of Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.
The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDelt, a project that monitors “the world's broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research.
Lane spent months finding and speaking to those directly involved in the violence, such as members of the Ulster Volunteer Force (UVF), a paramilitary group loyal to the British crown, and the Irish Republican Army (IRA), a paramilitary group seeking the end of British rule on the island of Ireland. The information that Lane gathered in these interviews was fed into his model in order to give a more complete understanding of the psychology behind the violence that had riven the country for three decades.
While Lane is now based in Slovakia, he maintains the links he built up while in Northern Ireland, returning at least once a year to speak to the people again, and update his model with the latest information. If during these conversations Lane hears about a particular issue or a reason why someone took a particular action that’s not present in the AI model, the team will see if there is lab data to back it up before putting it into his model.
“And if the data doesn’t exist, we'll go out and we'll do our own experimentation with universities to see if there is evidence, and then we will build that into our project,” Lane says.
In recent years, Lane and Shults have worked with a number of groups and governments to apply their model to better understand situations across the globe, including the conflicts in South Sudan and the Balkans. The model has also been used in the Syrian Refugee Crisis, where Lane and Shults traveled to the Greek island of Lesbos to gather firsthand information to help their system integrate refugees with host families. CulturePulse has also worked with the Norwegian government to tackle the spread of Covid-19 misinformation by better understanding the reasons why someone is sharing inaccurate information.
Key to the success of all of these efforts is the collection of firsthand information about what’s happening on the ground. And so, when they signed the contract with the UNDP in August, the first thing Shults and Lane wanted to arrange was a visit to Israel and the West Bank, where they spent “about a week” gathering data. “We met with the UN and different NGOs going out to the villages, seeing firsthand what it looks like with the settler dynamics that are there,” Shults says. The pair hoped to go to Gaza but were not able to secure permission in advance. The trip to Israel also included time speaking to their employers to find out exactly what it is they are hoping to get from this project.
“We spent a whole week extracting from the UN officials we met information that's relevant, that we need to know for the model, getting a sense of their understanding of the dynamics, the data that they might have that could inform the model's calibration and final validation,” Shults says.
Shults would not discuss the detailed parameters the UN had specified be built into the model, but his team gives the UN team regular updates over Zoom on the construction of the model and “the simulation experiments that are being run to test out the conditions and mechanisms that might lead to outcomes that they desire,” he says.
The UNDP has not yet responded to WIRED’s request for comment.
CulturePulse’s contract with the UNDP runs out in January, but they are hopeful of signing a phase-two contract that would see them build out a fully functional model. CulturePulse this month also signed a nine-month contract with UNDP to work on a system that would help resolve cultural and religious issues still causing conflict in Bosnia and Herzegovina since the end of the Bosnian War in 1995.
The reason the UN is turning to AI in the Israeli-Palestinian conflict, according to Lane, is that it simply has nowhere else to turn. “The way that the UN phrased it to us is that there's no more low-hanging fruit in that situation,” Lane says. “They needed to try something that was new and innovative, something that was really thinking outside of the box yet still really addressing the root issues of the problem.” Updated at 12:55 pm ET, November 3, 2023, to clarify the scope and limitations of the AI model CulturePulse is currently building in relation to the Israeli-Palestinian conflict and the details of the founders' attempt to visit Gaza while in the region prior to the ongoing Israel-Hamas war.
"
|
865 | 2,011 |
"Feb. 10, 1996: Checkmate! | WIRED"
|
"https://www.wired.com/thisdayintech/2011/02/0210computer-deep-blue-beats-chess-champ-kasparov"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tony Long Feb. 10, 1996: Checkmate! Save this story Save Save this story Save 1996: The first chess game between a human champion and a computer takes place, with international grandmaster Garry Kasparov losing to IBM's Deep Blue in Philadelphia.
Had Kasparov gone on to lose the whole match, it would have only stoked the fears of those believers in a dystopian world where man is ruled by his inventions.
But Kasparov, who became the youngest world chess champion in history in 1985 at the age of 22, recovered his equilibrium after his initial stumble. He won the next game, then drew twice before taking Games 5 and 6 to win the match, 4-2.
Kasparov lost a rematch to Deep Blue the following year -- his first match loss ever to any kind of opponent. Then, in 2003, he managed a 3-3 draw against Deep Junior, an entirely different software program.
Aside from their stunt value, these man-vs.-computer matches have changed the way that chess is played, and not necessarily for the better. "We don't work at chess anymore," complained grandmaster Evgeny Bareev. "We just look at the stupid computer, we follow the latest games and find small improvements. We have lost depth." Others, however, are more philosophical: "Cars can outrun us, but that hasn't stopped us from having foot races," said U.S. grandmaster Maurice Ashley. "Even if a computer is the best player on the planet, I'll still want to go around the corner, set up the chess pieces and try to kick your butt." Matt Blum assayed the significance of the first match on Wired.com's GeekDad blog in 2010: While nobody could have known at the time, this was the moment when machines truly began their conquest of Earth.
Despite Kasparov rebounding from his first-game loss to beat Deep Blue in the match, the computer's win demonstrated the inevitability of the rise of artificially intelligent devices. When the upgraded Deep Blue won the rematch against Kasparov the following year, there were those who thought this presaged humanity's downfall, but they were largely scoffed at as conspiracy theorists.
So raise a glass in toast to our robot overlords... Did I say "overlords"? I meant "protectors."
Source: Various. Photo: Kasparov battles Deep Blue in 1997.
An earlier version of this article appeared on Wired.com Feb. 10, 2007.
"
|
866 | 2,011 |
"Jan. 12, 1992 or 1997: HAL of a Computer | WIRED"
|
"https://www.wired.com/thisdayintech/2011/01/0112hal-born-space-odyssey"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Randy Alfred Jan. 12, 1992 or 1997: HAL of a Computer Save this story Save Save this story Save Spoiler alert: If you haven't seen or read 2001: A Space Odyssey*, this article contains details that reveal important plot developments. So, if you like to be a tabula rasa when you view a film or read a novel, stop here.* 1992 , or maybe 1997: HAL 9000, the master computer aboard the Discovery spaceship in the novel and film 2001: A Space Odyssey, becomes operational. He will inspire millions of dreams — and some nightmares — of artificial intelligence.
First, the year: When astronaut Dave Bowman is removing the hardware modules that govern the computer's higher cognitive functions, HAL regresses to his infancy and begins an eerie recitation of bits of his earliest knowledge: "I am a HAL 9000 Computer Production No. 3. I became operational at the H—A—L plant in Urbana, Illinois, on the 12th of January, 1992." At least that's what HAL says in the 1968 film.
Director Stanley Kubrick and author Arthur C. Clarke co-wrote the screenplay, inspired by Clarke's 1950 short story "The Sentinel." The film was not based on a novel, but Clarke went on to write a novelized version of the screenplay on his own. In the novel, he changed HAL's birth year to 1997.
Now, the name: Chapter 16 of the novel clearly states that HAL stands for "Heuristically programmed ALgorithmic computer." Many film viewers, however, thought HAL was a one-letter-ahead cipher for IBM. In his book The Lost Worlds of 2001, Clarke dismissed that idea as embarrassing, given all the help IBM had given to the film: "We ... would have changed the name had we spotted the coincidence." In fact, HAL's original name was Athena, goddess of war, wisdom and fertility, but Kubrick decided a male personality and voice would be better for a menacing supercomputer.
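The coincidence is easy to check: shift each letter of HAL one step forward in the alphabet.

    print("".join(chr(ord(c) + 1) for c in "HAL"))  # prints IBM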
Martin Balsam was cast first for the role, but was dropped because his voice was too emotional. Canadian Shakespearean actor Douglas Rain won the role with neutral, unctuous tones.
The place: Urbana, Illinois is home to the University of Illinois and — since 1986 — the National Center for Supercomputing Applications , which developed the first web browser, Mosaic.
HAL's lobotomy monologue in the book mentions his first instructor, Dr. Chandra. In fact, the only Chandra at UI in 1968 was a Mr. Shasti Chandra. He was writing his thesis on spacecraft attitude control, but told a reporter he had nothing to do with making the film.
The movie cost $10.5 million ($66 million in today's money) and premiered in New York City on April 3, 1968. The dazzling special effects did not impress all the critics: The New York Times described 2001 as "somewhere between hypnotic and immensely boring," while Pauline Kael deemed it "monumentally unimaginative." Kubrick promptly cut 19 minutes from the film, and the final cut debuted three days later.
HAL also appears in three sequels: 2010: The Year We Make Contact (aka 2010: Odyssey Two), 2061: Odyssey Three and 3001: The Final Odyssey. In 2010, Dr. Chandra further pooh-poohs the IBM-HAL name theory.
Source: The Making of Kubrick's 2001, ed. Jerome Agel, Signet, 1970. Image: Dave Bowman starts dismantling HAL 9000's central core in the Discovery. Courtesy MGM. This article first appeared on Wired.com Jan. 12, 2009.
"
|
867 | 2,010 |
"January 1997: CES Happens in Vegas, Stays in Vegas | WIRED"
|
"https://www.wired.com/thisdayintech/2010/01/0107ces-history"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Randy Alfred January 1997: CES Happens in Vegas, Stays in Vegas Save this story Save Save this story Save 1997: The Consumer Electronics Show, previously a semi-annual event in Las Vegas and Chicago, becomes a Las Vegas annual. The show is on.
Organizers held the first CES in New York City from June 24 to 28, 1967. The 200 exhibitors attracted 17,500 attendees to the Hilton and Americana hotels over those four days. On view: the latest pocket radios and TVs sporting (gasp!) integrated circuits.
CES Debuts:
1970: VCR
1974: Laserdisc player
1975: Atari Pong home console
1976: Cheap digital watches
1977: VHS VCR
1978: Early home computers
1981: Camcorder
1981: Compact Disc player
1982: Commodore 64 computer
1984: Amiga computer
1985: Nintendo Entertainment System
1988: Tetris
1990: Digital Audio Technology
1991: Compact Disc-Interactive
1993: Mini Disc
1993: Radio Data System
1994: Digital Satellite System
1996: DVD
1998: HDTV
1999: DVR
2000: Digital Audio Radio
2001: Microsoft Xbox
2001: Plasma TV
2002: Home Media Server
2003: HD Radio
2003: Blu-ray DVD
2005: IP TV
2006: New digital content services
2007: New tech-content convergence
2008: OLED TV
2009: Palm Pre
2010: 3-D TVs

The next year, CES occupied three New York hotels. One radio on display was small enough to wear on your wrist, but it was no Dick Tracy–style transceiver.
For two-way communication, you needed the wonder of the age: a Portable Executive Telephone.
It cost more than $2,000 (that's $12,500 in today's money), weighed 19 pounds and required an FCC license to operate.
Many new products and concepts have debuted at CES over the years. (See box at left.) Some have come and gone, others have come and stayed.
CES moved to Chicago in 1972 and went to two shows a year there in 1973. By 1978, the show fell into a regular rhythm: The Winter CES took place in Las Vegas in January, and the Summer CES was a June affair in Chicago.
Chicago dropped out of the picture after 1994. The Las Vegas show grew year by year, but the Chicago show was losing its luster. So, the Consumer Electronics Association decided to rotate the June show around the country.
Problem was, the dates picked for the June 1995 show in Philadelphia conflicted with the E3 gaming show on the West Coast. Exhibitors raised a stink, and the June CES was canceled.
There was one more summer show – in Orlando, Florida, in 1996. Only two dozen exhibitors signed up for a proposed 1997 spring CES in Atlanta. It was canceled, along with the whole idea of holding two shows a year.
Las Vegas was king. Las Vegas still reigns.
CES 2010 drew more than 126,000 people, a 12 percent increase from the previous year, according to an independent audit released by organizers. That included more than 24,000 international visitors from 136 countries. Organizers announced that 330 new companies joined the more than 2,500 exhibitors to unveil an estimated 20,000 new products.
The Consumer Electronics Association also said each person attending the show averages 12 meetings, resulting in a total of 1.7 million meetings.
That's 1.7 megameets in our book.
Dates for future Consumer Electronics Shows – all in Las Vegas – are set right through 2022.
CES 2011 runs Jan. 6 through Jan. 9, and Wired.com will be there to give you full coverage.
Source: Various

Photo: Visitors flock to CES 2009. Jon Snyder/Wired.com

This January 2010 article has been updated.
"
|
868 | 2,021 |
"These apps say they can detect cancer. But are they only for white people? | US healthcare | The Guardian"
|
"https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker"
|
"People can use cellphones to catch a slew of skin conditions but questions of accuracy and biases in algorithm databases remain US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
These apps say they can detect cancer. But are they only for white people?

Dermatology apps use AI to try to identify a range of skin conditions. Illustration: Ulises Mendicutty/The Guardian

Jyoti Madhusoodanan, Sat 28 Aug 2021 04.00 EDT

In a video, 30-year-old Stacey Everson tells the story of how she picked up her phone, snapped a selfie, and saved her own life. She might have easily overlooked the small, irregular mole on her upper left arm. But prompted by friends and family, she took a picture of the growth with an app named SkinVision, and followed up on the app’s recommendation that she see a doctor, urgently. The doctor removed and tested the growth. “A week later, it came back positive for early-stage melanoma,” she says. “Something like that, I wouldn’t have thought it would be cancer.” Her testimonial is one of several on the product’s website, and SkinVision is only one of several such artificial intelligence (AI)-based apps that aim to help anyone with a smartphone catch a slew of skin diseases – including lethal cancers – earlier than ever. The latest entrant is Google’s Derm Assist, a tool that aims to help users detect 288 common skin conditions. Internet users run almost 10bn searches for terms related to skin conditions each year, says Peggy Bui, a product manager at Google.
But few of those searchers receive expert care. The US faces a dearth of dermatologists, and many tend to cluster in urban areas, so large swaths of the population find themselves driving several hours to seek care, or waiting weeks or months for appointments.
AI-based algorithms such as SkinVision, Derm Assist and others could ease these difficulties. None of them offer a diagnosis – at best, they flag growths as harmless or “high risk” and recommend whether a patient should seek care. But many moles turn out to be harmless, so the apps could help patients or primary care physicians – who might not feel confident identifying a skin cancer – figure out which patients really need specialist care.
“There are many different ways that artificial intelligence can help with triage and decision making to provide support to the physician rather than trying to do their job,” says dermatologist Roxana Daneshjou of Stanford University. “There are opportunities for these algorithms to improve patient care.” But the algorithms are far from ideal, in part because they threaten to amplify existing racial biases in the field of dermatology.
In 2019, researchers studying six apps to spot skin cancer found they had been tested only in small, poorly conducted studies. They have also raised concerns with how the algorithms are regulated. None of these apps are approved for use in the US. Some, such as SkinVision and Google’s Derm Assist, are approved for sale in the EU, although the researchers’ analysis suggested that approval “does not provide adequate protection to the public”.
“We lack a rigorous framework for even thinking about how we should evaluate and test these algorithms before they’re used by patients,” says dermatologist Veronica Rotemberg of Memorial Sloan Kettering Cancer Center in New York.
To develop these dermatology apps, researchers present a computer with a library of pictures of common skin conditions and teach the machine to classify each one correctly. Then, the algorithm is tested for its ability to “diagnose” a different set of images based on what it learned. As the algorithm analyzes images that users upload, it learns and evolves – ideally, making fewer mistakes over time.
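To make that train-then-test workflow concrete, here is a minimal sketch in Python. Everything in it is a stand-in: the "images" are random feature vectors and the classifier is a simple linear model, whereas real dermatology apps use deep neural networks trained on curated photo libraries.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in data: 1,000 "images" as flat feature vectors, each labeled
# with one of three skin conditions. Real systems learn from pixels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = rng.integers(0, 3, size=1000)

# Teach the model on one set of images, then test it on images it
# has never seen -- the split described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```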
AI algorithms developed in similar ways are already approved by the FDA for use in clinics. More than 100 are also available to help radiologists and clinicians interpret pictures from X-rays, CTs or retina scans.
But researchers have found these tools vary widely in their performance, as well as how and where they are trained. For instance, an algorithm developed at one clinic is likely to make more mistakes when diagnosing patients at a different clinic. In a pre-print posted online in October 2020, researchers found that AI algorithms to analyze chest X-rays produced systemic biases across race, age and insurance type.
The problems arise, in part, because of how algorithms learn to recognize patterns in pictures, says Stanford University researcher James Zou. A tool developed on images from a population of older, white male patients might pick up on cues unique to that cohort rather than the disease itself. Then, if those cues are absent in a younger Black woman, it may misdiagnose her symptoms.
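One way such cohort-specific failures surface is in a subgroup audit: score the model separately for each demographic group instead of reporting a single aggregate number. The sketch below uses invented labels and groups purely to show the bookkeeping.

```python
import numpy as np

# Invented example: true labels, model predictions, and a subgroup
# tag for each case. In practice these come from an annotated test set.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["lighter skin"] * 4 + ["darker skin"] * 4)

# A model can look fine in aggregate while failing one group badly.
for g in np.unique(groups):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: accuracy {acc:.2f} (n={mask.sum()})")
```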
Another fundamental flaw lies in databases that algorithms study – particularly for skin conditions. Common databases of skin images rarely capture the myriad variations in skin tones and textures from around the world. That’s in part because compared with white patients, only half as many Black or Hispanic people see dermatologists. Patients with less education or lower socioeconomic status are also far less likely to be represented in these image libraries.
Photo: A dermatologist examines a patient’s birthmark, checking benign moles with a magnifying glass.
While many companies use proprietary databases that claim to overcome these problems, regulators and clinicians have no way to know for sure. Others are more transparent in their methods but still contend with the whiteness of image libraries. These biased libraries are problematic even to human experts; dermatologists tend to be less comfortable diagnosing skin conditions in patients of color, according to studies of US dermatologists.
In preliminary studies, Google is working to solve the problem by using another type of AI to develop artificial images of disorders on darker skins, which may eventually help improve algorithms. For now, Derm Assist lets users know if there’s greater uncertainty about their results.
According to Adewole Adamson, a dermatologist at the University of Texas at Austin, other apps should include such warning labels to let users know the results might be less accurate if they have darker skin types. “It’s a little messed up to think an app is only for white people or Black or Asian people – you don’t want a segregated algorithm,” he says. “But at least that would be transparent.” Smartphone apps for skin conditions face another hurdle. Photographs captured by average users can vary widely. One user might snap a closeup on a sunlit beach, another might do so from a dimly lit bedroom. A growth that appears malignant maroon in one setting might look benign brown in another.
SkinVision’s website, for example, counts nearly 2 million users worldwide. And although success stories such as Everson’s make headlines, it’s uncertain how many others never needed a diagnosis at all. Even without apps to scrutinize every mole, rates of melanoma diagnosis in the US are six times higher than they were 40 years ago. But there has been no corresponding rise in how many people die of the disease. To Adamson and others, the data hint at an “epidemic of scrutiny” – not necessarily one of cancer itself.
Companies that make the smartphone apps “are banking on this accumulation of anecdotes”, says Adamson. “There’s a less provocative opposite version: the app said I had something, I went in for a biopsy, and it was nothing. You’re not going to hear that story.” But those stories are sprinkled over the internet already. On patient forums, people report taking pictures of their moles and finding themselves at “high risk” for cancer. Spurred by the SkinVision app that tagged an old mole on his foot high-risk, one user turned to the Reddit community for reassurance. “SkinVision says it’s high risk and now I’m absolutely terrified!” he wrote. A dermatologist suspected it was harmless and suggested he could wait and watch or biopsy. The individual – who asked the Guardian to remain anonymous owing to the personal nature of their health concerns – chose the latter, and is now awaiting results.
Tracy Callahan, a 46-year-old nurse in Cary, North Carolina, has had five early-stage melanomas removed in the past eight years. Even as a cancer survivor who scrutinizes every inch of her skin, she’s unconvinced of the utility of these apps. “A lot of benign lesions can mimic an early stage melanoma, or it might be something bad, and the app might not pick up on it,” she says. “I don’t know if these apps necessarily help someone like me.”

For algorithms to truly detect skin cancer well enough to bridge the gaps in dermatologic care, researchers, companies and regulatory agencies such as the FDA must converge on standards for these tools, Rotemberg says. One important factor, she says, is for algorithms to learn not just how to spot a disease, but when not to. Can a tool learn to recognize when it’s out of its depth and defer to a human? “Even if an algorithm is able to say, ‘I’ve never seen an image with this lighting or this skin type,’ it helps you as a clinician to know how useful its interpretation is,” Rotemberg says. “In those instances, you can fall back to the gold standard – a specialist’s opinion. And you’re not creating problems by introducing this imprecise tool in between.”
"
|
869 | 2,018 |
"Algorithms may outperform doctors, but they’re no healthcare panacea | Ivana Bartoletti | The Guardian"
|
"https://www.theguardian.com/commentisfree/2018/jul/26/tech-healthcare-ethics-artifical-intelligence-doctors-patients"
|
"US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian view Columnists Letters Opinion videos Cartoons ‘What impact would doctors increasingly coming to rely on algorithms have on the body of medical knowledge?’ Photograph: Alamy Stock Photo ‘What impact would doctors increasingly coming to rely on algorithms have on the body of medical knowledge?’ Photograph: Alamy Stock Photo Opinion Artificial intelligence (AI) Algorithms may outperform doctors, but they’re no healthcare panacea Thu 26 Jul 2018 07.48 EDT I t perhaps shouldn’t come as a surprise that Matt Hancock, the new health and social care secretary, made technology the theme of his first big speech in the new job. The former culture secretary is a renowned tech enthusiast and was the first MP to launch his own app.
Hancock is right that technology has great potential to improve the quality of our healthcare – and save money into the bargain. But it won’t be a panacea, and it raises a number of issues our society must deal with now.
Take artificial intelligence: there are already numerous examples of how it is enhancing the medical profession. Examples include robot-assisted surgery, virtual nursing assistants that could reduce unnecessary hospital visits and lessen the burden on medical professionals, and technologies that enable independent living by identifying changes in usual behaviour that may require medical attention. But AI also poses clear ethical challenges.
Two examples are worth focusing on in particular: the way in which changing approaches to medical knowledge could affect the doctor-patient relationship, and the ethics of how patients’ data gets used.
Until recently, patients would go to a doctor, explain their symptoms and the doctor would attempt to provide a diagnosis. But increasingly, patients now arrive having done their research online, all set to suggest (or even insist on) a diagnosis, to which the medic has to respond. Doctors tell me that this game of catch-up and partial role-reversal is already skewing the relationship of trust. In addition to this, we now have algorithm-based diagnostics. This means medical knowledge is no longer based solely on what the doctor themselves has studied and learned.
Algorithms can support decision-making by medical professionals, and often outperform the doctor. We are seeing this with cancer detection , and other fields where close observation of the patient data can create much more precise and personalised medicine, and provide earlier diagnosis. For example, the analysis of an individual’s touch strokes on their mobile phone could show up Parkinson’s because their texting speed decreases over time.
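As a rough illustration of that kind of signal (the numbers and threshold here are invented, not from any published study), one could fit a linear trend to periodic typing-speed measurements and flag a sustained decline:

```python
import numpy as np

# Synthetic data: average typing speed (characters per second) sampled
# weekly for six months, drifting slowly downward with some noise.
weeks = np.arange(26)
speed = 5.0 - 0.03 * weeks + np.random.default_rng(1).normal(0, 0.1, size=26)

slope = np.polyfit(weeks, speed, 1)[0]  # least-squares trend line
print(f"trend: {slope:+.3f} chars/sec per week")
if slope < -0.02:  # arbitrary illustrative threshold
    print("sustained slowdown detected - a prompt for follow-up, not a diagnosis")
```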
As we start to see these possibilities as fantastic rather than fantastical, we must also be aware of unintended consequences. What impact would doctors increasingly coming to rely on algorithms have on the body of medical knowledge? And how do we mitigate the risk that algorithms may not be sufficiently sensitive to everything going on in a patient’s life? For example, a patient with a high level of anxiety and stress may suffer an impact that no machine is able to capture. Algorithms will also have to be assessed to ensure they are not biased against certain groups, especially as they make decisions which may have very long-lasting consequences on individuals.
There are also ethical issues around the use of patient data.
Collecting and analysing patient data at scale allows us to study what we haven’t yet noticed, and deal with prevention and disease management in a very different way. We will be able to identify medical conditions far earlier than we do now by collecting a huge amount of data, including on people’s habits, and thus be able to put in place prevention mechanisms for children and family members.
But patients must have a say in how their data is used. The fact that something is possible from a technical perspective does not mean we must do it. Ultimately, patients will need to decide if and to what extent they want to be observed and predicted, and how they want their personal information to be used. A tick-box exercise will not suffice, as compliance won’t be enough when it comes to confidence and trust in the machine.
There are lots of challenges ahead for AI. The trickiest is getting the ethics right. Machines are machines and we must not humanise them. When we bring them in, it must be to enhance our humanity – and this can only be done if both patients and doctors are engaged to help shape the future of medicine.
Ivana Bartoletti is a privacy and data protection professional, and chairs the Fabian Women’s Network.
"
|
870 | 2,017 |
"What News-Writing Bots Mean for the Future of Journalism | WIRED"
|
"https://www.wired.com/2017/02/robots-wrote-this-story"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Joe Keohane Business What News-Writing Bots Mean for the Future of Journalism 520 Design Save this story Save Save this story Save This story is part of our special coverage, The News in Crisis.
When Republican Steve King beat back Democratic challenger Kim Weaver in the race for Iowa’s 4th congressional district seat in November, The Washington Post snapped into action, covering both the win and the wider electoral trend. “Republicans retained control of the House and lost only a handful of seats from their commanding majority,” the article read, “a stunning reversal of fortune after many GOP leaders feared double-digit losses.” The dispatch came with the clarity and verve for which Post reporters are known, with one key difference: It was generated by Heliograf, a bot that made its debut on the Post ’s website last year and marked the most sophisticated use of artificial intelligence in journalism to date.
When Jeff Bezos bought the Post back in 2013, AI-powered journalism was in its infancy. A handful of companies with automated content-generating systems, like Narrative Science and Automated Insights, were capable of producing the bare-bones, data-heavy news items familiar to sports fans and stock analysts. But strategists at the Post saw the potential for an AI system that could generate explanatory, insightful articles. What’s more, they wanted a system that could foster “a seamless interaction” between human and machine, says Jeremy Gilbert, who joined the Post as director of strategic initiatives in 2014. “What we were interested in doing is looking at whether we can evolve stories over time,” he says.
After a few months of development, Heliograf debuted last year. An early version autopublished stories on the Rio Olympics; a more advanced version, with a stronger editorial voice, was soon introduced to cover the election. It works like this: Editors create narrative templates for the stories, including key phrases that account for a variety of potential outcomes (from “Republicans retained control of the House” to “Democrats regained control of the House”), and then they hook Heliograf up to any source of structured data—in the case of the election, the data clearinghouse VoteSmart.org. The Heliograf software identifies the relevant data, matches it with the corresponding phrases in the template, merges them, and then publishes different versions across different platforms. The system can also alert reporters via Slack of any anomalies it finds in the data—for instance, wider margins than predicted—so they can investigate. “It’s just one more way to get a tip” on a potential scoop, Gilbert says.
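To illustrate the mechanism, here is a toy template-filler in the spirit of that description. It is emphatically not the Post's actual Heliograf code; the template, phrase table and race data are all invented.

```python
# Editors write a narrative template plus phrase variants keyed to
# outcomes; structured data picks the variant and fills the blanks.
TEMPLATE = ("{winner} ({party}) won the race for {district}, "
            "defeating {loser} by {margin} percentage points.")

PHRASES = {
    "R_hold": "Republicans retained control of the House",
    "D_flip": "Democrats regained control of the House",
}

def write_story(race: dict, national_outcome: str) -> str:
    lede = TEMPLATE.format(**race)
    return f"{lede} Nationally, {PHRASES[national_outcome]}."

print(write_story(
    {"winner": "Jane Doe", "party": "R", "district": "a sample district",
     "loser": "John Roe", "margin": 12.4},  # invented example data
    "R_hold",
))
```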
The Post ’s main goal with the project at this point is twofold. First: Grow its audience. Instead of targeting a big audience with a small number of labor-intensive human-written stories, Heliograf can target many small audiences with a huge number of automated stories about niche or local topics. There may not be a wide audience for stories about the race for the Iowa 4th, but there is some audience, and, with local news outlets floundering, the Post can tap it. “It’s the Bezos concept of the Everything Store,” says Shailesh Prakash, CIO and VP of digital product development at the Post.
“But growing is where you need a machine to help you, because we can’t have that many humans. We’d go bankrupt.”

Three more AI-powered tools for journalists. —Greg Barber

Wibbitz: USA Today has used this AI-driven production software to create short videos. It can condense news articles into a script, string together a selection of images or video footage, and even add narration with a synthesized newscaster voice.
News Tracer: Reuters’ algorithmic prediction tool helps journalists gauge the integrity of a tweet. The tech scores emerging stories on the basis of “credibility” and “newsworthiness” by evaluating who’s tweeting about it, how it’s spreading across the network, and if nearby users have taken to Twitter to confirm or deny breaking developments.
BuzzBot: Originally designed to crowdsource reporting from the Republican and Democratic National Conventions, BuzzFeed’s software collects information from on-the-ground sources at news events. BuzzBot has since been open-sourced, portending a wave of bot-aided reporting tools.
Prakash and Gilbert take pains to stress that the system is not here to usher reporters into obsolescence. And that brings them to the second objective of Heliograf: Make the newsroom more efficient. By removing tasks like incessant poll coverage and real-time election results from reporters’ plates, Heliograf frees them up to focus on the stories that actually require human thought. “If we took someone like Dan Balz, who’s been covering politics for the Post for more than 30 years, and had him write a story that a template could write, that’s a crime,” Gilbert says. “It’s a huge waste of his time.”

So far, response from the Post newsroom has been positive. “We’re naturally wary about any technology that could replace human beings,” says Fredrick Kunkle, a Post reporter and cochair of the Washington-Baltimore News Guild, which represents the Post’s newsroom. “But this technology seems to have taken over only some of the grunt work.”

Consider the election returns: In November 2012, it took four employees 25 hours to compile and post just a fraction of the election results manually. In November 2016, Heliograf created more than 500 articles, with little human intervention, that drew more than 500,000 clicks. (A drop in the bucket for the Post’s 1.1 billion pageviews that month, but it’s early days.)

Gilbert says the next step is to use Heliograf to keep the data in both machine- and human-written stories up-to-date. For instance, if someone shares a Tuesday story on Thursday, and the facts change in the meantime, Heliograf will automatically update the story with the most recent facts. Gilbert sees Heliograf developing the potential to function like a rewrite desk, in which “the reporters who gather information write more discrete chunks—here’s some facts, here’s some analysis—and let the system assemble them.”

With the rapid advances in AI technology driven by cheap computing power, Prakash sees Heliograf moving beyond mere grunt work. In time, he believes, it could do things like search the web to see what people are talking about, check the Post to see if that story is being covered, and, if not, alert editors or just write the piece itself. Of course, that’s where things could get sticky—when Facebook fired the human editors of its Trending module last year and let an algorithm curate the news, the world soon learned (falsely) that Megyn Kelly had been fired from Fox News. “Will there be controversy when the bot thinks this is important, and humans say this is important, and they’re the exact opposite thing?” Prakash asks. “It’s going to get interesting.”

The Post, like every other major news organization, is looking to tap new revenue streams, and it’s reportedly in talks to license out its CMS to clients like Tronc, a consortium that includes the Chicago Tribune, the Los Angeles Times, and dozens of other regional papers. As those newsrooms struggle with dwindling resources, it’s not hard to imagine a future in which AI plays a larger and larger role in creating journalism. Whether that’s good news for journalists and readers is another story.
Joe Keohane is a (human) writer living in New York City.
This article appears in the March issue.
|
871 | 2,023 |
"Why We're Obsessed With the Mind-Blowing ChatGPT AI Chatbot - CNET"
|
"https://www.cnet.com/tech/computing/why-everyones-obsessed-with-chatgpt-a-mind-blowing-ai-chatbot"
|
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech Computing Why We're Obsessed With the Mind-Blowing ChatGPT AI Chatbot This artificial intelligence bot can answer questions, write essays, summarize documents and write software. But deep down, it doesn't know what's true.
Stephen Shankland, Feb. 19, 2023 5:00 a.m. PT

Even if you aren't into artificial intelligence, it's time to pay attention to ChatGPT, because this one is a big deal.
The tool, from a power player in artificial intelligence called OpenAI, lets you type natural-language prompts. ChatGPT then offers conversational, if somewhat stilted, responses.
The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. It derives its answers from huge volumes of information on the internet.
ChatGPT is a big deal.
The tool seems pretty knowledgeable in areas where there's good training data for it to learn from. It's not omniscient or smart enough to replace all humans yet , but it can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people were trying out ChatGPT.
But be careful, OpenAI warns. ChatGPT has all kinds of potential pitfalls, some easy to spot and some more subtle.
"It's a mistake to be relying on it for anything important right now," OpenAI Chief Executive Sam Altman tweeted.
"We have lots of work to do on robustness and truthfulness." Here's a look at why ChatGPT is important and what's going on with it.
And it's becoming big business. In January, Microsoft pledged to invest billions of dollars into OpenAI.
A modified version of the technology behind ChatGPT is now powering Microsoft's new Bing challenge to Google search and, eventually, it'll power the company's effort to build new AI co-pilot smarts into every part of your digital life.
Bing uses OpenAI technology to process search queries, compile results from different sources, summarize documents, generate travel itineraries, answer questions and generally just chat with humans. That's a potential revolution for search engines, but it's been plagued with problems like factual errors and unhinged conversations.
What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that's useful.
For example, you can ask it encyclopedia questions like, "Explain Newton's laws of motion." You can tell it, "Write me a poem," and when it does, say, "Now make it more exciting." You can ask it to write a computer program that'll show you all the different ways you can arrange the letters of a word.
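For scale, that letter-arrangement request is a few lines of standard-library Python; something along these lines is the sort of answer the bot produces:

```python
from itertools import permutations

def arrangements(word: str) -> list[str]:
    # Deduplicate (repeated letters yield duplicate orderings) and sort.
    return sorted({"".join(p) for p in permutations(word)})

print(arrangements("cat"))  # ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```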
Here's the catch: ChatGPT doesn't exactly know anything. It's an AI that's trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.
Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That's the famous "Imitation Game" that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human conversing with a human and with a computer tell which is which? But chatbots have a lot of baggage, as companies have tried with limited success to use them instead of humans to handle customer service work. A study of 1,700 Americans, sponsored by a company called Ujet, whose technology handles customer contacts, found that 72% of people found chatbots to be a waste of time.
ChatGPT has rapidly become a widely used tool on the internet. UBS analyst Lloyd Walmsley estimated in February that ChatGPT had reached 100 million monthly users the previous month, accomplishing in two months what took TikTok about nine months and Instagram two and a half years. The New York Times, citing internal sources, said 30 million people use ChatGPT daily.
What kinds of questions can you ask?

You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas and getting programming help.
I asked it to write a poem, and it did, though I don't think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write " a folk song about writing a rust program and fighting with lifetime errors." ChatGPT's expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with "purple," it offered a few suggestions, then when I followed up "How about with pink?" it didn't miss a beat. (Also, there are a lot more good rhymes for "pink.") When I asked, "Is it easier to get a date by being sensitive or being tough?" GPT responded, in part, "Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona." You don't have to look far to find accounts of the bot blowing people's minds. Twitter is awash with users displaying the AI's prowess at generating art prompts and writing code.
Some have even proclaimed "Google is dead," along with the college essay.
We'll talk more about that below.
CNET writer David Lumb has put together a list of some useful ways ChatGPT can help , but more keep cropping up. One doctor says he's used it to persuade a health insurance company to pay for a patient's procedure.
Who built ChatGPT and how does it work?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a "safe and beneficial" artificial general intelligence system or to help others do so.
OpenAI has 375 employees, Altman tweeted in January. "OpenAI has managed to pull together the most talent-dense researchers and engineers in the field of AI," he also said in a January talk.
It's made splashes before, first with GPT-3 , which can generate text that can sound like a human wrote it, and then with DALL-E, which creates what's now called "generative art" based on text prompts you type in.
GPT-3, and the GPT 3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They're trained to create text based on what they've seen, and they can be trained automatically — typically with huge quantities of computer power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for coming as close as possible. Repeating over and over can lead to a sophisticated ability to generate text.
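Here is a toy rendition of that fill-in-the-blank loop. Real training does this with neural networks and gradient updates over billions of passages; this sketch just makes the mask-guess-score cycle tangible.

```python
import random

def mask_words(text: str, k: int = 2):
    """Blank out k random words, returning the masked text and the answers."""
    words = text.split()
    idx = random.sample(range(len(words)), k)
    answers = {i: words[i] for i in idx}
    for i in idx:
        words[i] = "____"
    return " ".join(words), answers

def reward(guesses: dict, answers: dict) -> float:
    """Fraction of blanks filled correctly -- the training signal."""
    return sum(guesses.get(i) == w for i, w in answers.items()) / len(answers)

masked, answers = mask_words("the quick brown fox jumps over the lazy dog")
print(masked)
print("reward for a perfect guesser:", reward(answers, answers))  # 1.0
```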
It's not totally automated.
Humans evaluate ChatGPT's initial results in a process called fine-tuning. Human reviewers apply guidelines that OpenAI's models then generalize from. In addition, OpenAI used a Kenyan firm that paid people up to $3.74 per hour to review thousands of snippets of text for problems like violence, sexual abuse and hate speech, Time reported, and that data was built into a new AI component designed to screen such materials from ChatGPT answers and OpenAI training data.
ChatGPT doesn't actually know anything the way you do. It's just able to take a prompt, find relevant information in its oceans of training data, and convert that into plausible-sounding paragraphs of text. "We are a long way away from the self-awareness we want ," said computer scientist and internet pioneer Vint Cerf of the large language model technology ChatGPT and its competitors use.
Is ChatGPT free?

Yes, for the moment at least, but in January OpenAI added a paid version that responds faster and keeps working even during peak usage times when others get messages saying, "ChatGPT is at capacity right now." You can sign up on a waiting list if you're interested. OpenAI's Altman warned that ChatGPT's "compute costs are eye-watering," estimating a few cents per response. OpenAI charges for DALL-E art once you exceed a basic free level of usage.
But OpenAI seems to have found some customers, likely for its GPT tools. It's told potential investors that it expects $200 million in revenue in 2023 and $1 billion in 2024, according to Reuters.
What are the limits of ChatGPT?

As OpenAI emphasizes, ChatGPT can give you wrong answers and can give "a misleading impression of greatness," Altman said.
Sometimes, helpfully, it'll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase "the squirming facts exceed the squamous mind," ChatGPT replied, "I'm sorry, but I am not able to browse the internet or access any external information beyond what I was trained on." (The phrase is from Wallace Stevens' 1942 poem Connoisseur of Chaos.) ChatGPT was willing to take a stab at the meaning of that expression once I typed it in directly, though: "a situation in which the facts or information at hand are difficult to process or understand." It sandwiched that interpretation between cautions that it's hard to judge without more context and that it's just one possible interpretation.
ChatGPT's answers can look authoritative but be wrong.
"If you ask it a very well structured question, with the intent that it gives you the right answer, you'll probably get the right answer," said Mike Krause, data science director at a different AI company, Beyond Limits.
"It'll be well articulated and sound like it came from some professor at Harvard. But if you throw it a curveball, you'll get nonsense." The journal Science banned ChatGPT text in January. "An AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works," Editor in Chief H. Holden Thorp said.
The software developer site StackOverflow banned ChatGPT answers to programming questions.
Administrators cautioned, "because the average rate of getting correct answers from ChatGPT is too low , the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers." You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore's Law, which tracks the computer chip industry's progress increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief "that Moore's Law may be reaching its limits." Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.
With other questions that don't have clear answers, ChatGPT often won't be pinned down.
The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.
Will ChatGPT help students cheat better?

Yes, but as with many other technology developments, it's not a simple black-and-white situation. Decades ago, students could copy encyclopedia entries and use calculators, and more recently, they've been able to use search engines and Wikipedia. ChatGPT offers new abilities for everything from helping with research to doing your homework for you outright. Many ChatGPT answers already sound like student essays, though often with a tone that's stuffier and more pedantic than a writer might prefer.
Google programmer Kenneth Goodman tried ChatGPT on a number of exams.
It scored 70% on the United States Medical Licensing Examination, 70% on a bar exam for lawyers, nine out of 15 correct on another legal test, the Multistate Professional Responsibility Examination, 78% on New York state's high school chemistry exam's multiple choice section, and ranked in the 40th percentile on the Law School Admission Test.
High school teacher Daniel Herman concluded ChatGPT already writes better than most students today. He's torn between admiring ChatGPT's potential usefulness and fearing its harm to human learning: "Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?" Dustin York, an associate professor of communication at Maryville University, hopes educators will learn to use ChatGPT as a tool and realize it can help students think critically.
"Educators thought that Google, Wikipedia, and the internet itself would ruin education, but they did not," York said. "What worries me most are educators who may actively try to discourage the acknowledgment of AI like ChatGPT. It's a tool, not a villain." Can teachers spot ChatGPT use? Not with 100% certainty, but there's technology to spot AI help. The companies that sell tools to high schools and universities to detect plagiarism are now expanding to detecting AI, too.
One, Coalition Technologies, offers an AI content detector on its website.
Another, Copyleaks, released a free Chrome extension designed to spot ChatGPT-generated text with a technology that's 99% accurate, CEO Alon Yamin said. But it's a "never-ending cat and mouse game" to try to catch new techniques to thwart the detectors, he said.
Copyleaks performed an early test of student assignments uploaded to its system by schools. "Around 10% of student assignments submitted to our system include at least some level of AI-created content," Yamin said.
OpenAI launched its own detector for AI-written text in February. But one plagiarism-detecting company, CrossPlag, said it spotted only two of 10 AI-generated passages in its test. "While detection tools will be essential, they are not infallible," the company said.
Researchers at Pennsylvania State University studied the plagiarism issue using OpenAI's earlier GPT-2 language model. It's not as sophisticated as GPT-3.5, but its training data is available for closer scrutiny. The researchers found GPT-2 plagiarized information not just word for word at times, but also paraphrased passages and lifted ideas without citing its sources. "The language models committed all three types of plagiarism, and ... the larger the dataset and parameters used to train the model, the more often plagiarism occurred," the university said.
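A simple way to measure the word-for-word copying the study describes (a back-of-the-envelope sketch, not the researchers' method) is to count how many of a generated text's n-grams appear verbatim in a source text:

```python
def ngrams(text: str, n: int = 5) -> set:
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def verbatim_overlap(generated: str, source: str, n: int = 5) -> float:
    # Fraction of the generated text's n-grams copied straight from the source.
    g = ngrams(generated, n)
    return len(g & ngrams(source, n)) / max(len(g), 1)

src = "large language models are trained on huge volumes of internet text"
gen = "these large language models are trained on huge volumes of text data"
print(f"{verbatim_overlap(gen, src):.0%} of 5-grams copied verbatim")
```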
Can ChatGPT write software?

Yes, but with caveats. ChatGPT can retrace steps humans have taken, and it can generate actual programming code. "This is blowing my mind," said one programmer in February, showing on Imgur the sequence of prompts he used to write software for a car repair center. "This would've been an hour of work at least, and it took me less than 10 minutes." You just have to make sure it's not bungling programming concepts or using software that doesn't work. The StackOverflow ban on ChatGPT-generated software is there for a reason.
But there's enough software on the web that ChatGPT really can work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provides useful enough advice that, over three days, he hadn't opened StackOverflow once to look for advice.
Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.
ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, for example dates in a bunch of text or the name of a server in a website address. "It's like having a programming tutor on hand 24/7," tweeted programmer James Blackwell about ChatGPT's ability to explain regex.
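For instance (an illustrative pattern of my own, not one pulled from ChatGPT), a regex that extracts ISO-style dates from free text looks like this:

```python
import re

# \d{4}-\d{2}-\d{2} matches YYYY-MM-DD; \b keeps it from matching
# inside longer digit runs.
pattern = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")
text = "Logs rotated on 2023-01-20 and again on 2023-02-19."
print(pattern.findall(text))  # [('2023', '01', '20'), ('2023', '02', '19')]
```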
Here's one impressive example of its technical chops: ChatGPT can emulate a Linux computer , delivering correct responses to command-line input.
What's off limits?

ChatGPT is designed to weed out "inappropriate" requests, a behavior in line with OpenAI's mission "to ensure that artificial general intelligence benefits all of humanity." If you ask ChatGPT itself what's off limits, it'll tell you: any questions "that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful." Asking it to engage in illegal activities is also a no-no.
Even though OpenAI doesn't want ChatGPT used for malicious purposes, it's easy to use it to write phishing emails to try to fool people into parting with sensitive information, my colleague Bree Fowler reports. "The barrier to entry is getting lower and lower and lower to be hacked and to be phished. AI is just going to increase the volume," said Randy Lariar of cybersecurity company Optiv.
Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.
Google often supplies you with its suggested answers to questions and with links to websites that it thinks will be relevant. Often ChatGPT's answers far surpass what Google will suggest, so it's easy to imagine GPT-3 is a rival.
But you should think twice before trusting ChatGPT. As when using Google and other sources of information like Wikipedia, it's best practice to verify information from original sources before relying on it.
Vetting the veracity of ChatGPT answers takes some work because it just gives you some raw text with no links or citations. But it can be useful and in some cases thought provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.
That said, Google is keen to tout its deep AI expertise. ChatGPT triggered a "code red" emergency within Google, according to The New York Times, and drew Google co-founders Larry Page and Sergey Brin back into active work.
Microsoft, meanwhile, is building ChatGPT technology into its rival search engine, Bing. Clearly ChatGPT and other tools like it have a role to play when we're looking for information.
So ChatGPT, while imperfect, is doubtless showing the way toward our tech future.
Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
"
|
872 | 2,023 |
"ChatGPT AI Threat Pulls Google Co-Founders Back Into Action, Report Says - CNET"
|
"https://www.cnet.com/tech/computing/chatgpt-ai-threat-pulls-google-co-founders-back-into-action-report"
|
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech Computing ChatGPT AI Threat Pulls Google Co-Founders Back Into Action, Report Says Sergey Brin and Larry Page are helping the company respond to an impressive technology that is a direct threat to Google's search business.
By Stephen Shankland. Jan. 20, 2023 5:22 p.m. PT. Photo: Google headquarters sprawls across a large campus in Mountain View, California. (Stephen Shankland/CNET)
ChatGPT, the high-profile AI chatbot from OpenAI, is such a serious threat to Google's core business that the company's co-founders are reengaged with the search giant, The New York Times reported Friday.
Startup OpenAI debuted ChatGPT in November, and within a few days more than a million people had begun prompting it with an enormous range of questions and requests. The artificial intelligence system has been trained on vast quantities of text on the internet and can answer questions, compose essays, write computer programs and generate all kinds of information.
ChatGPT can sound authoritative, but it isn't always right, and you can't tell where it's drawing its answers from. It's impressive enough to be a viral hit on the internet, though, and it's useful enough that Google reportedly declared a "code red" response to ChatGPT.
Now, at the behest of Sundar Pichai, chief executive of Google parent company Alphabet, Google co-founders Larry Page and Sergey Brin are looking into the issue, the Times reported. They'd largely stepped out of day-to-day operating roles in 2019.
Google has a rival AI technology called PaLM, but it hasn't made that AI system available for public use. And it's an AI pioneer, inventing the "transformer" technology that's at the heart of large language models like PaLM and OpenAI's ChatGPT foundation, GPT-3. In a blog post this week the company summarized several areas where Google is using AI, for everything from suggesting email replies to placing ads.
Google didn't comment on the co-founders' moves or its stance on ChatGPT. But spokesperson Lily Lin said ensuring AI is used safely is important to the company.
"We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities, and as our AI Principles outline, we need to consider the broader societal impacts these innovations can have," Lin said. "We continue to test our AI technology internally to make sure it's helpful and safe, and we look forward to sharing more experiences externally soon." Loup Ventures analyst Gene Munster sees ChatGPT, GPT-3 and large language models as a competitive threat to Google.
"One possible future is that these LLMs could be built into the backend of many of the tech services we use," Munster said in a Friday report. "This is the outcome that could hurt Google in the long-term." Ultimately, though, Google should be able to withstand the threat, he predicted. With four services that each have more than a billion users, and $60 billion in annual operating income from search, Google has "more than enough money to fund investments that will yield a ChatGPT competitor." Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
"
|
873 | 2,023 |
"CNET Is Experimenting With an AI Assist. Here's Why - CNET"
|
"https://www.cnet.com/tech/cnet-is-experimenting-with-an-ai-assist-heres-why"
|
"Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech CNET Is Experimenting With an AI Assist. Here's Why For over two decades, CNET has built our reputation testing new technologies and separating the hype from reality.
By Connie Guglielmo, SVP, AI Edit Strategy. Jan. 16, 2023 1:04 p.m. PT. There's been a lot of talk about AI engines and how they may or may not be used in newsrooms, newsletters, marketing and other information-based services in the coming months and years. Conversations about ChatGPT and other automated technology have raised many important questions about how information will be created and shared and whether the quality of the stories will prove useful to audiences.
We decided to do an experiment to answer that question for ourselves.
For over two decades, CNET has built our reputation testing new technologies and separating the hype from reality, from voice assistants to augmented reality to the metaverse.
In November, our CNET Money editorial team started trying out the tech to see if there's a pragmatic use case for an AI assist on basic explainers around financial services topics like What Is Compound Interest? and How to Cash a Check Without a Bank Account.
So far we've published about 75 such articles.
The goal: to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective. Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we're known for? I use the term "AI assist" because while the AI engine compiled the story draft or gathered some of the information in the story, every article on CNET – and we publish thousands of new and updated stories each month – is reviewed, fact-checked and edited by an editor with topical expertise before we hit publish. That will remain true as our policy no matter what tools or tech we use to create those stories. And per CNET policy, if we find any errors after we publish, we will publicly correct the story.
Our reputation as a fact-based, unbiased source of news and advice is based on being transparent about how we work and the sources we rely on. So in the past 24 hours, we've changed the byline to CNET Money and moved our disclosure so you won't need to hover over the byline to see it: "This story was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff." We always note who edited the story so our audience understands which expert influenced, shaped and fact-checked the article.
Will we make more changes and try new things as we continue to test, learn and understand the benefits and challenges of AI? Yes.
In the meantime, CNET is the world's largest consumer tech news and advice site because our global audiences trust the stories that are assembled and curated by our knowledgeable, award-winning reporters and advice experts around the world.
We can't speak to how other organizations are thinking of AI.
Some are very transparent about how they're using AI to help with their information services, such as the Associated Press.
We'll continue to assess these new tools as well to determine if they're right for our business. For now CNET is doing what we do best – testing a new technology so we can separate the hype from reality.
Thanks for reading.
"
|
874 | 2,017 |
"Google's Pixel Event Shows Google Assistant's Massive Importance | WIRED"
|
"https://www.wired.com/story/google-takes-assistants-fate-into-its-own-hands"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Brian Barrett Gear Google Takes Its Assistant's Fate Into Its Own Hands Sundar Pichai, CEO of Google Inc., speaks during the second generation product launch in San Francisco, CA.
David Paul Morris/Bloomberg/Getty Images Save this story Save Save this story Save Application Personal assistant Hardware End User Consumer Sector Consumer services Source Data Speech Text Technology Machine learning Natural language processing Two smartphones. Three smart speakers. A gorgeous laptop and stylus. An upgraded virtual reality headset. A pair of wireless earbuds that can translate a conversation in real-time, like a Babel Fish. New as of Wednesday, all adding to Google’s existing stable of Chromecasts and Nest smart home devices. Most of them seem to exist for one simple reason: If you want something done right, do it yourself. That something , in this case, is Google Assistant.
The company’s pragmatically named AI helper already lives on Google Home, last year’s Pixels, and lots of other smartphones—including the iPhone.
You’ll even find it in speakers from Sony and others later this year. But Google showed Wednesday that for Assistant to reach its true potential, and to realistically gain an edge over Amazon’s Alexa , Apple’s Siri , and the rest of its smart-assistant competition, it can’t rely on third-party partners, or glorified reference models. It needs an ecosystem. And it’s not interested in waiting for one to materialize.
“It all starts with reimagining hardware from the inside out,” said Google hardware guru Rick Osterloh from the stage Wednesday, establishing the event’s theme. The “inside,” in this case, isn’t a processor or a subwoofer. It’s Assistant, the beating heart of each of Google’s new products.
Google Home Mini and Max, the minor and major key stereos, make for the easiest example, since “OK, Google” drives every interaction with them. While the Chromebook Pixel line that preceded the Pixelbook existed to promote Chrome OS exclusively, Google's rebranded laptop features a dedicated Google Assistant button.
Google also credited Assistant as the primary reason it bothered creating a stylus at all.
“When you’re using your Pixelbook as a tablet, it’s easiest to show your Assistant what you need help with on your screen,” said Google’s Matt Vokoun on Wednesday. “That’s why we created the new Pixelbook Pen.” Rather than leading off by showing its new stylus writing, Google’s first Pen demo was as an Assistant assistant, circling a musician to summon up information based on the selected image alone.
The Pixel Buds wireless earbuds don’t have Assistant directly on board, but pair them with your smartphone and Assistant pops up with one tap. If you talk with someone who speaks a different language, you can use Pixel Buds to translate both ends of your conversation in real-time. It works in 40 languages. This is Star Trek territory.
Then there are the phones, the original and most obvious Assistant vessels, which means we can gloss over them a bit. Except to note that when you squeeze the sides of the iPhone in iOS 11—or more specifically, the power button and a volume button simultaneously—it enters SOS mode, enabling fast access to emergency responders. When you squeeze the sides of a Pixel, it calls up Assistant.
These are all experiences that Google has created whole-cloth. It isn’t just putting Assistant in more premium devices, it’s ensuring that Assistant can do more interesting things. Which also means that on the precipice of the next major platform fight, Google’s taking a starkly different approach than it did with Android.
For years, Google’s hands-off approach with Android presented both a blessing and curse. Massive gains in volume came with inconsistent quality, as hardware partners squeezed the operating system into unforgiving sizes and weighed it down with unwelcome bloat. All the while, Apple refined iOS and the iPhone in lockstep, enabling Cupertino to release its Platonic smartphone ideal each year.
“I think they really learned that lesson early on when they got hammered every day and every minute by Apple,” says Michael Facemire, an analyst with Forrester Research. “It worked for Google because if you look at worldwide distribution of mobile devices, Android is on the majority of them. But it’s not on the ones where people are spending money.” Creating what it thinks are best-in-class Assistant experiences helps show Google’s partners how to do it right. It also, though, hedges against the probability that partners will be harder to come by. Android succeeded because it was the only iOS alternative, a handy shortcut for any hardware manufacturer needing a smartphone plan. (They all did.) That’s demonstrably not so with Assistant. Amazon’s Alexa leveraged its early lead, taking up residence in millions of homes, and sneaking onto iOS through the Amazon app.
And it’s already scoring some third-party wins, most notably and most recently with Wednesday’s news that Sonos would put Alexa inside one of its speakers.
Sonos has said it will add Google Assistant and Siri too, eventually. But Alexa comes first. And even when Assistant does make it onto Sonos, it’ll be one of three players, just as it is on Ikea’s smart bulbs.
Google found smartphone ubiquity because it was the default. To win the assistant game, it’s going to have to be the best.
That’s especially true given that Google’s most reliable smartphone partner, Samsung, not only made an assistant of its own, but gave it a dedicated hardware button on its flagship Galaxy S8.
Google may be able to put Assistant on every Android phone, but it can’t guarantee top billing.
If anything, Google’s hardware push shows just how vital Assistant’s success will be to the company going forward. Google didn’t show off gadgets Wednesday; it detailed the pedestals on which it can best display its AI.
“AI is becoming the thread that joins all their services together,” Facemire says, “because Google has realized for a long time that the more data they have about you the more they can sell that to advertisers and make money.” Needless to say, creating premium products to support Assistant doesn’t guarantee that anyone will actually buy them.
“I don’t think any of these products will be huge hits,” says Jan Dawson, founder of Jackdaw Research. (That’s in part a distribution problem; Google has no real-world equivalent to the Apple Store, or anything approaching a digital retail operation of Amazon’s scale.) But the alternative would be to entrust Assistant’s future to other companies other than Google. Which is to say, there’s no alternative at all.
"
|
875 | 2,017 |
"WIRED Next List 2017: 20 Tech Visionaries Who Are Creating the Future of Business | WIRED"
|
"https://www.wired.com/2017/04/20-people-creating-future-next-list-2017"
|
"WIRED Logo Next List 2017: 20 People Who Are Creating the Future Click to share this story on Facebook Click to share this story on Twitter Click to email this story Click to comment on this story. (will open new tab) O Banquinho Next List 2017 20 Tech Visionaries Who Are Creating the Future Next List 2017: 20 People Who Are Creating the Future by WIRED staff 04.25.17 Illustrations by Magda Antoniuk Microsoft will build computers even more sleek and beautiful than Apple’s. Robots will 3-D-print cool shoes that are personalized just for you. (And you’ll get them in just a few short days.) Neural networks will take over medical diagnostics, and Snapchat will try to take over the entire world. The women and men in these pages are the technical, creative, idealistic visionaries who are bringing the future to your doorstep. You might not recognize their names—they’re too busy working to court the spotlight—but you’ll soon hear about them a lot. They represent the best of what’s next.
Put Humans First, Code Second Parisa Tabriz Browser Boss | Google Chrome As head of security for Google Chrome, Parisa Tabriz has spent four years focusing on a vulnerability so widespread, most engineers act as if it doesn’t exist: humanity. She has pushed her 52-person team to grapple with problems once written off as “user errors.” They’ve made key changes in how the browser communicates with people, rewriting Chrome’s warnings about insecure network connections at a sixth-grade reading level.
Rather than depending on users to spot phishing schemes, the team is exploring machine-learning tools to automatically detect them. And they’re starting to mark sites as “not secure” if they don’t use HTTPS encryption, pressuring the web to secure itself. “We’ve been accused of being paternalistic, but we’re in a position to protect people,” she says. “The goal isn’t to solve math problems. It’s to keep humans safe.” Tabriz, whose father is Iranian, has also made a point of hiring engineers from other countries—like Iran—where state internet surveillance is an oppressive, everyday concern. “You can’t keep people safe if you don’t understand those human challenges around the world.” — Andy Greenberg Wall Street Can Run on Collaboration, Not Competition Richard Craib Founder | Numerai Wall Street is capitalism at its fiercest. But Richard Craib believes it can also be a place for friendly collaborations. His hedge fund, San Francisco–based Numerai , relies on artificially intelligent algorithms to handle all trades. But the 29-year-old South African mathematician doesn’t build these algorithms himself. Instead, his fund crowdsources them from thousands of anonymous data scientists who vie for bitcoin rewards by building the most successful trading models. And that isn’t even the strangest part.
Ultimately, Craib doesn’t want these data scientists to get overly competitive. If only the best modelers win, they have little incentive to recruit fresh talent, which could dilute their rewards. Competitors’ self-interest winds up at odds with getting the best minds, no matter who they are, working to improve the fund. To encourage cooperation, Craib developed Numeraire, a kind of digital currency that rewards everyone when the fund does well. Data scientists bet Numeraire on algorithms they think will succeed. When the models work, Numeraire’s value goes up for everyone. “I don’t want to build a company or a startup or even a hedge fund,” Craib says. “I want to build a country—a place where everyone is working openly toward the same end.” — Cade Metz Microsoft Will Outdesign Apple Kait Schoeck Industrial Designer | Microsoft Kait Schoeck wasn’t really supposed to end up at Microsoft. She had enrolled at the Rhode Island School of Design in 2009 with plans to be a painter, or maybe an illustrator. “I didn’t know industrial design actually existed,” she says. That changed in school, where she switched majors and eventually caught Microsoft’s attention. The company liked her unusual portfolio—there wasn’t much in it about computers. Now she’s one of the designers working on Microsoft’s Surface products , helping the company achieve what for decades has seemed impossible: outdesigning Apple. Because Schoeck and her team aren’t bogged down by decades of PC-design baggage, they freely break with convention. And because their desks are a few feet from a machine shop, they can build whatever they dream up. “Being able to hold the products we make—that’s when you really know what works,” Schoeck says. Early in her time at Microsoft, she coinvented the rolling hinge that makes the detachable Surface Book possible; her team has also found ways to make touchscreen laptops feel natural, to build tablets that really can replace your laptop, and to turn the old-school desktop PC into something more like a drawing table. Thanks to designers like Schoeck, Microsoft’s machines aren’t just brainy anymore—they’re beautiful too. — David Pierce Frugal Science Will Curb Disease Manu Prakash Founder | Foldscope Instruments While visiting rabies clinics in India and Thailand, Manu Prakash made a damning realization: In remote villages, traditional microscopes are useless. Cumbersome to carry and expensive to maintain, the finely tuned machines are often relegated to a dusty lab corner while medical providers diagnose and treat patients in the field. So the Stanford bioengineer set out to build what he calls “the pencil of microscopy”—a high-performing tool that’s lightweight, durable, and cheap. In 2014 his lab unveiled the Foldscope, an origami-like paper microscope that magnifies objects up to 2,000 times but costs less than $1 to produce. “We quickly realized that writing scientific papers about it wasn’t good enough,” Prakash says. He turned his lab into a mini Foldscope factory, giving away microscopes to anyone who asked. Within a year, the lab had shipped 50,000 of them to users in 135 countries, from Mongolia to rural Montana; this year it aims to donate 1 million. An eager army of DIY scientists has used the tool to identify fake drugs, detect diseased crops, spot counterfeit currency, and more. Earlier this year, Prakash’s lab introduced the Paperfuge , a 20-cent centrifuge inspired by an ancient spinning toy, which can be used to diagnose diseases like malaria. 
Prakash’s cheap, cleverly designed devices prove that when it comes to public health problems, the high tech (high-cost) solution isn’t always the best fix. Consider his lab’s latest achievement, a method of identifying mosquito species by recording their wing beats. The apparatus required? A flip phone. — Lauren Murrow TV Ad Dollars Will Get Snapped Up Jeff Lucas VP and Global Head of Sales | Snap In March, Snap’s public stock offering became the third-largest tech IPO of all time, raising $3.4 billion. Now it just needs to make money. As of January 2017, the six-year-old multimedia app had lost $1.2 billion, nearly half of that in 2016 alone. Its growth rate is slowing too: After averaging more than 15 million new daily users in each of the first three quarters of 2016, it added just 5 million in the fourth quarter. So last summer, the company poached media industry veteran Jeff Lucas, former head of sales at Viacom. In the wake of Snap’s IPO, he’s been tasked with backing up the brand’s billion-dollar hype with measurable profits. To do that, he’ll need to ward off copycat competitors like Instagram’s Stories and WhatsApp’s Status—direct descendants of Snapchat Stories, a series of snaps strung together chronologically—and lure ad spending away from Facebook and TV networks. He’s reportedly in talks with marketing agencies like Publicis Groupe, WPP, and Omnicom Group to land deals of $100 million to $200 million. In a crowded industry competing for advertising dollars, Lucas will be instrumental in getting those gatekeepers to open their coffers for Snap. — Davey Alba SOURCE: EMARKETER Encryption Alone Is Not Enough John Brooks Programmer | Ricochet Thanks to messaging services like WhatsApp, Signal, and Apple’s iMessage, end-to-end encryption isn’t just for spies and cypherpunks anymore; it’s become nearly as standard as emoji. But sometimes an unbroken channel of encryption between sender and receiver isn’t enough. Sure, it hides the content of messages, but it doesn’t conceal the identities of who’s writing to whom—metadata that can reveal, say, the membership of an organization or a journalist’s web of sources. John Brooks, a 25-year-old middle school dropout, has created an app that may represent the next generation of secret-sharing tools: ones that promise to hide not just your words but also the social graph of your connections.
His chat app, called Ricochet, builds on a feature of the anonymity software Tor that’s rendered sites on the dark web untraceable and anonymous for years. But instead of cloaking web destinations, Ricochet applies those stealth features to your PC: It turns your computer into a piece of the darknet. And unlike almost all other messaging apps, Ricochet allows conversations to travel from the sender’s computer to the recipient’s without ever passing through a central server that can track the data or metadata of users’ communications. “There’s no record in the cloud somewhere that you ever used it,” Brooks says. “It’s all mixed in with everything else happening in Tor. You’re invisible among the crowd.” And when invisibility is an option, plain old encryption starts to feel awfully revealing. — Andy Greenberg Silicon Valley Can Spread the Wealth Leslie Miley President, West Coast | Venture for America Silicon Valley generates astronomical levels of wealth. But you’d be hard-pressed to find the spoils of the tech industry extending far beyond the Bay Area, much less to Middle America.
Leslie Miley wants to change that. Early this year he left his job as a director of engineering at Slack to launch an executive-in-residence program at Venture for America. The project is designed to foster the building of tech businesses in emerging markets like Detroit and Baltimore. Starting this September, the residency will place Silicon Valley execs in yearlong stints in several of the program’s 18 innovation hubs, where they’ll advise area startups. The idea is that having well-connected leaders in such places may give local talent ties to Silicon Valley and inspire startups to set up shop in those cities. According to Miley, the program was fueled by industry-wide anxiety following the 2016 election. “Tech enabled people to stay in their echo chambers,” Miley says. “We’re partially responsible.” Not just by building non-inclusive platforms, he says, but by overlooking large swaths of the country in the hunt for talent.
— Davey Alba Our Robots Are Powered by Poets and Musicians Beth Holmes, Farah Houston, Michelle Riggen-Ransom Holmes Knowledge Manager | Alexa Information team Houston Senior Manager | Alexa Personality team Riggen-Ransom Managing Editor | Alexa Personality team Behind your high tech digital assistant is a band of liberal arts majors. A trio of women shape the personality of Amazon’s Alexa, the AI-powered device used by tens of millions of consumers worldwide: Michelle Riggen-Ransom, who has an MFA in creative writing, composes the bot’s raw responses; Farah Houston, a psychology grad specializing in personality science, ensures that those responses dovetail with customers’ expectations; and Beth Holmes, a mathematician with expertise in natural language processing, decides which current events are woven into Alexa’s vocabulary, from the Super Bowl to the Oscars. “The commonality is that most of us have been writers and have had to express humor in writing,” Houston says. Riggen-Ransom oversees a group of playwrights, poets, fiction authors, and musicians who complete weekly writing exercises that are incorporated into Alexa’s persona. (The bot’s disposition is broadly defined in a “personality document,” which informs the group’s responses.) The content is then workshopped among the team; much of it ends up on the cutting room floor. Alexa’s temperament can swing from practical and direct to whimsical and jokey. The art is in striking the right balance, especially when it comes to addressing sensitive topics. “Our overall approach when talking to people about politics, sex, or religion has been to divert with humor,” Houston says. But thanks in part to her female-led team, the bot won’t stand for insults. “We work hard to always portray Alexa as confident and empowered,” Houston says. It takes a village to raise a fake lady. — Davey Alba Hard Data Can Improve Diversity Laura I. Gómez Founder | Atipica Three years ago, Laura Gómez was participating in yet another diversity-in-tech panel, alongside representatives from Facebook and Google, when she snapped. “This is not a meritocracy, and we all know it,” the Latina entrepreneur announced. “This is cronyism. A Googler gets hired by Twitter, who gets hired by Facebook. Everyone is appointing their friends to positions of authority.” (As someone who has worked at Twitter, YouTube, and Google, she should know.) The breakthrough inspired Gómez to found Atipica, a recruiting software company that sorts job applicants solely by their skill set. That policy may seem obvious, but recruiters are prone to pattern-matching in accordance with previous hires—giving preference to, say, Stanford-schooled Google engineers. Atipica isn’t designed to shame tech CEOs about their uber-white open offices; rather, it presents hard data, judgment-free. The company’s software—which draws on information from public, industry, and internal sources—reveals the type of person most likely to apply for a job, analyzes hiring patterns, and quantifies the likelihood that certain kinds of candidates will accept job offers. It also resurfaces diverse candidates for new job postings they’re qualified for, a strategy that has led thousands of applicants to be recontacted. Last fall, Atipica raised $2 million from True Ventures, Kapor Capital, Precursor Ventures, and others. For Gómez, a Mexican immigrant who was undocumented until the age of 18, the work is personal. “My mother was a nanny and a housekeeper for people in Silicon Valley,” she says. 
“My voice is the voice of immigrants.” Her company’s success shows that the struggle to diversify tech will be won not by indignant tweetstorms but by data. — Lauren Murrow Music Will Leave the Studio Behind Steve Lacy Musician Most musicians work in studios, with engineers and producers and dozens of contributors. Steve Lacy works in hotel rooms. Or in his car. One time at a barbershop. Anywhere inspiration strikes, really. And with every unconventional session, Lacy’s proving to the industry that good music doesn’t have to be sparkling and hyperproduced. He dropped his first official solo material in February, a series of songs (he won’t call it an album) made entirely in GarageBand. Lacy plugs his guitar into his iPhone’s Lightning port and sings right into the mic. The whole thing’s a bit shticky, sure, but the point is to show people that the tools you have don’t really matter. He’s no musical lightweight, though. Just 18, he’s already a sought-after producer, making beats with the likes of J. Cole and Kendrick Lamar. Lacy’s own style is a little bit pop, a little bit soul, and a little bit R&B. He calls it Plaid, because it’s a lot of funky patterns you can’t quite imagine together—but somehow it all works. Even he doesn’t always understand why, but he knows it does. Kendrick Lamar told him so. — David Pierce SOURCE: RIAA Microbiology Gets a Little Intelligent Design Christina Agapakis Creative Director | Ginkgo Bioworks For a biologist, Christina Agapakis has an unusual role. At Ginkgo Bioworks, a Boston biotech firm that tweaks yeast and bacteria to create custom organisms for everything from fermentation to cosmetics, Agapakis is a bridge between the technical and creative sides of the business. She works with clients like food conglomerates to figure out how they can use engineered microbes to make their products better, cheaper, and more sustainable. Recently, French perfumer Robertet enlisted Ginkgo’s organism designers to create a custom yeast that could replicate the smell of rose oil. To do that, the designers inserted the scent-producing genes from roses into yeast, which produced floral-smelling compounds—no expensive rose petals necessary. Agapakis then worked with the company’s perfumers to develop new fragrances using this novel substance. “A lot of what I do is think about what this new technology can enable creatively,” she says. Biotech companies are learning that success requires more than good science—it takes imaginative thinking too. — Liz Stinson Tech Workers, Not CEOs, Will Drive Real, Positive Change Maciej Ceglowski Founder | Pinboard A tweet by @Pinboard reads, “Silicon Valley lemonade stand: 30 employees, $45 million in funding, sells $9 glasses of lemonade while illegally blocking sidewalk.” The account belongs to a bookmarking site founded by Polish-born web developer Maciej Ceglowski. Though he established the handle in 2009 intending to offer product support, Ceglowski now uses the account to gleefully skewer Silicon Valley to 38,700 Twitter followers.
Since the presidential election, the developer’s criticism of his own industry has taken a more trenchant tone, energizing a new wave of tech activists. (On Facebook’s refusal to cut ties with Trump supporter Peter Thiel, he tweeted: “Facebook has a board member who heard credible accusations of sexual assault and threw $1.25M at the perpetrator. That requires comment.”) In December, thousands of tech employees signed an @Pinboard-championed pledge at Neveragain.tech, refusing to utilize their companies’ user data to build a Muslim registry. Last year, Ceglowski founded Tech Solidarity, a national group that meets to devise methods of organizing. The effort has become high-profile enough that even C-suite execs, like Facebook’s chief security officer, Alex Stamos, now attend. For all his trademark snark, Ceglowski maintains that his goal is to foster a more conscientious tech industry. He hopes that Tech Solidarity can develop an industry-wide code of ethics in the coming months—“move fast and break things” needs an update, he says—and eventually lead employees to unionize. He believes the best way to exert influence over powerful tech companies is from the inside out: by empowering their workers. — Davey Alba China Will Lead the Tech Industry Connie Chan Partner | Andreessen Horowitz Connie Chan has a master’s degree in engineering from Stanford, where her classmates were Facebook’s future first employees. She thought that she knew what tech’s leading edge looked like. Then she went to China and discovered she had no idea. On massively popular messaging apps like WeChat, people did way more than just talk. They got marriage licenses and birth certificates, paid utilities and traffic tickets, even had drugs delivered—all in-app. Tech companies in the US, she realized, could no longer take it for granted that they led while the world followed; the stereotype that China’s tech companies are just copycats is obsolete. “If you study Chinese products, you can get inspiration,” Chan says. As a partner at Andreessen Horowitz, she now specializes in helping American startups understand just how much they have to learn as China’s tech industry races ahead of the US in everything from messaging to livestreaming (now a $5 billion market). No matter the protectionist rhetoric coming from the Trump administration, US tech firms see billions of dollars to be made in China, and vice versa. As these two financial giants play overseas footsie, Chan acts as a facilitator. “I spend so much time teaching people what they can’t see,” she says. It won’t stay invisible for long. — Marcus Wohlsen SOURCES: RHODIUM GROUP; 2016 U.S. DATA: XINHUA NEWS AGENCY Need Help Choosing a Wine? There’s a DNA-Based App for That.
James Lu Senior VP of Applied Genomics | Helix Advances in genetic sequencing mean that labs can now—quickly and cheaply—read millions of letters of DNA in a single gob of spit. Genomics researcher James Lu and his team at Helix (buoyed by $100 million in funding led by Illumina, the largest maker of DNA sequencers) are harnessing that information so you’ll be able to learn a lot more about yourself. How? There’s an app for that. First Helix will sequence and store your entire exome—every letter of the 22,000 genes that code for proteins in your body. (The technology uncovers much more data than genotyping, the process used by companies like 23andMe, which searches only for specific markers.) Then Helix partners will create apps that analyze everything from your cancer risk to, they say, your wine preferences, ranging from a few dollars to a few hundred dollars a pop. “Where one person may be interested in inherited diseases, someone else cares about fitness or nutrition,” Lu says. “We work with developers to provide better products and context for your genetic information.” Helix’s first partners include medical groups like the Mayo Clinic and New York’s Mount Sinai Hospital, which are developing genetic-education and health-related apps, and National Geographic, which offers an app that uncovers your ancestors’ locations and migration patterns going back 200,000 years. Lu imagines future collaborations with, say, a travel service that plans your vacation itinerary based on your genealogy or a food delivery service that tailors menus to your metabolic profile. The project opens new markets for genetic research—and entirely new avenues of self-absorption for the selfie generation. — Lauren Murrow SOURCE: NATIONAL HUMAN GENOME RESEARCH INSTITUTE Techies Should Serve Their Country Matt Cutts Acting Administrator | United States Digital Service Matt Cutts could easily have left his job at the US Digital Service after Inauguration Day—as many other Obama staffers did. His wife wasn’t in Washington, and neither was his main gig as Google’s chief spam fighter. But when the time came, he couldn’t walk away. “My heart says USDS,” he wrote to his wife, who eventually joined him in DC.
As a member of the government’s tech task force, Cutts oversaw a team that worked on an online portal for veterans. Had he quit in January, he wouldn’t have seen two USDS initiatives—services for the Pentagon and the Army—through to completion. “The organization deserves to have someone who can help preserve its mission,” Cutts says. It also needs someone who can convince Silicon Valley types that managing the president’s Twitter feed isn’t the only tech job in government. Cutts, who avoids talking politics, has begun recruiting friends in the industry, telling them that no matter whom they voted for, “once you see the sorts of issues you can tackle here, it tends to be pretty addictive.” And you really can change the world (slowly). — Issie Lapowsky Robots Will Make Fast Fashion Even Faster Gerd Manz VP of Future Team | Adidas Cookie-cutter kicks aren’t good enough for Gen Z sneakerheads. They want customization, and they want it fast. “They get annoyed if it takes three seconds to download an app,” says industrial engineer Gerd Manz, who oversees technology innovation at Adidas. So he is heading up the company’s ambitious new manufacturing facilities—pointedly dubbed Speedfactories—staffed not by humans but by robots. The sportswear giant will start production in two Speedfactories this year, one in Ansbach, Germany, and another in Atlanta, each eventually capable of churning out 500,000 pairs of shoes a year, including one-of-a-kind designs. Thanks to tech like automated 3-D printing, robotic cutting, and computerized knitting, a shoe that today might spend 18 months in the development and manufacturing pipeline will soon be made from scratch in a matter of hours. And though the Speedfactories will initially be tasked with limited-edition runs, Manz, a sort of sneaker Willy Wonka, predicts that the complexes will ultimately produce fully customizable shoes. (You’ll even be able to watch a video of your own pair being made.) “It doesn’t matter to the Speedfactory manufacturing line if we make one or 1,000 of a product,” Manz says. The robot factories of the future will fulfill consumers’ desires: It’s hyper-personalization at a breakneck pace. — Lauren Murrow Artificial Intelligence Will Help Doctors Do Their Jobs Better Lily Peng Product Manager | Google Brain In 2012, Google built an artificial intelligence system that could recognize cats in YouTube videos. The experiment may have seemed frivolous, but now Lily Peng is applying some of the same techniques to address a far more serious problem. She and her colleagues are using neural networks—complex mathematical systems for identifying patterns in data—to recognize diabetic retinopathy , a leading cause of blindness among US adults.
Inside Google Brain, the company’s central AI lab, Peng is feeding thousands of retinal scans into neural networks and teaching them to “see” tiny hemorrhages and other lesions that are early warning signs of retinopathy. “This lets us identify the people who are at the highest risk and get them treatment soon rather than later,” says Peng, an MD herself who also has a PhD in bioengineering.
She’s not out to replace doctors—the hope is that the system will eventually help overworked physicians in poorer parts of the world examine far more patients, far more quickly.
At hospitals in India, Peng is already running clinical trials in which her AI analyzes patients’ eye scans. In the future, doctors could work with AI to examine x-rays and MRIs to detect all sorts of ailments. “We want to increase access to care everywhere,” she says. By sharing the workload, machines can help make that possible. — Cade Metz SOURCE: INTERNATIONAL FEDERATION OF ROBOTICS Microsats Will Democratize Space Will Marshall Cofounder and CEO | Planet The ultrasensitive satellites snapping gloriously hi-res photos of Earth have a major drawback: They can capture only small slivers of the planet at a time. That’s why, earlier this year, a company called Planet launched a fleet of lower-cost, shoebox-sized sats into orbit. They’re capable of shooting Earth’s entire landmass daily—and cost much less than their predecessors to operate. “We hope to enable a wider swath of people who were previously locked out to get access,” says Will Marshall, a former NASA scientist who cofounded Planet in 2010. With eyes in the sky, relief orgs can pinpoint hard-hit places after disasters; corporate farms can monitor their crops; financial companies can see how much mineral comes out of mines; and environmental groups can track deforestation. And once you open the doors to space, others come running—or rocketing—through, at cheaper and cheaper cost. That benefits players big and small, whether they want to fly around the moon or all the way out to Mars. — Sarah Scoles This article appears in the May issue.
"
|
876 | 2,016 |
"Google's AI Reads Retinas to Prevent Blindness in Diabetics | WIRED"
|
"https://www.wired.com/2016/11/googles-ai-reads-retinas-prevent-blindness-diabetics"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google's AI Reads Retinas to Prevent Blindness in Diabetics Save this story Save Save this story Save Getty Images Google's artificial intelligence can play the ancient game of Go better than any human.
It can identify faces, recognize spoken words, and pull answers to your questions from the web.
But the promise is that this same kind of technology will soon handle far more serious work than playing games and feeding smartphone apps. One day, it could help care for the human body.
Demonstrating this promise, Google researchers have worked with doctors to develop an AI that can automatically identify diabetic retinopathy, a leading cause of blindness among adults. Using deep learning—the same breed of AI that identifies faces, animals, and objects in pictures uploaded to Google's online services—the system detects the condition by examining retinal photos. In a recent study, it succeeded at about the same rate as human ophthalmologists, according to a paper published today in the Journal of the American Medical Association.
"We were able to take something core to Google---classifying cats and dogs and faces---and apply it to another sort of problem," says Lily Peng, the physician and biomedical engineer who oversees the project at Google.
But the idea behind this AI isn't to replace doctors. Blindness is often preventable if diabetic retinopathy is caught early. The hope is that the technology can screen far more people for the condition than doctors could on their own, particularly in countries where healthcare is limited, says Peng. The project began, she says, when a Google researcher realized that doctors in his native India were struggling to screen all the locals that needed to be screened.
In many places, doctors are already using photos to diagnose the condition without seeing patients in person. "This is a well validated technology that can bring screening services to remote locations where diabetic retinal eye screening is less available," says David McColloch, a clinical professor of medicine at the University of Washington who specializes in diabetes. That could provide a convenient on-ramp for an AI that automates the process.
Peng's project is part of a much wider effort to detect disease and illness using deep neural networks, pattern recognition systems that can learn discrete tasks by analyzing vast amounts of data. Researchers at DeepMind, a Google AI lab in London, have teamed with Britain's National Health Service to build various technologies that can automatically detect when patients are at risk of disease and illness, and several other companies, including Salesforce.com and a startup called Enlitic , are exploring similar systems. At Kaggle, an internet site where data scientists compete to solve real-world problems using algorithms, groups have worked to build their own machine learning systems that can automatically identify diabetic retinopathy.
Peng is part of Google Brain, a team inside the company that provides AI software and services for everything from search to security to Android. Within this team, she now leads a group spanning dozens of researchers that focuses solely on medical applications for AI.
The work on diabetic retinopathy started as a "20 percent project" about two years ago, before becoming a full-time effort. Researchers began working with the Aravind and Sankara eye hospitals in India, which were already collecting retinal photos for doctors to examine. Then the Google team asked more than four dozen doctors in India and the US to identify photos where mini-aneurysms, hemorrhages, and other issues indicated that diabetic patients could be at risk for blindness. At least three doctors reviewed each photo, before Peng and team fed about 128,000 of these images into their neural network.
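For readers curious what that pipeline looks like in practice, here is a generic sketch of training an image classifier on labeled retinal photos with Keras. To be clear, this is not Google's actual model or code; the directory layout, image size, and tiny architecture are all illustrative assumptions.

```python
# A generic Keras sketch of the approach described above: learn to flag
# diabetic retinopathy from doctor-labeled retinal photos. Illustrative
# only; Google's real system was far larger and more carefully tuned.
import tensorflow as tf
from tensorflow.keras import layers

# Assumes photos sorted into retinal_photos/train/{healthy,retinopathy}/.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retinal_photos/train", image_size=(299, 299), batch_size=32)

model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # estimated probability of retinopathy
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```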
Ultimately, the system identified the condition slightly more consistently than the original group of doctors. At its most sensitive, the system avoided both false negatives and false positives more than 90 percent of the time, exceeding the National Institutes of Health's recommended standard of at least 80 percent accuracy and precision for diabetic retinopathy screens.
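In screening terms, those two failure modes map onto sensitivity (the share of true cases caught) and specificity (the share of healthy eyes correctly cleared). A short sketch with made-up labels shows how the numbers are computed:

```python
# Sensitivity and specificity, the two quantities behind the screening
# results described above. The labels and scores here are invented
# purely to illustrate the arithmetic.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # 1 = retinopathy present
y_score = np.array([0.9, 0.8, 0.2, 0.4, 0.7, 0.1, 0.3, 0.6])
y_pred = (y_score >= 0.5).astype(int)          # screening threshold

tp = np.sum((y_pred == 1) & (y_true == 1))     # cases caught
fn = np.sum((y_pred == 0) & (y_true == 1))     # cases missed (false negatives)
tn = np.sum((y_pred == 0) & (y_true == 0))     # healthy eyes cleared
fp = np.sum((y_pred == 1) & (y_true == 0))     # healthy eyes flagged (false positives)

print("sensitivity:", tp / (tp + fn))          # high = few false negatives
print("specificity:", tn / (tn + fp))          # high = few false positives
```

Raising the threshold trades sensitivity for specificity; the 90 percent figure above means the system could be tuned to keep both failure rates low at once.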
Given the success of deep learning algorithms with other machine vision tasks, the results of the original trial aren't surprising. But Yaser Sheikh, a professor of computer science at Carnegie Mellon who is working on other forms of AI for healthcare, says that actually moving this kind of thing into the developing world can be difficult. "It is the kind of thing that sounds good, but actually making it work has proven to be far more difficult," he says. "Getting technology to actually help in the developing world—there are many, many systematic barriers." But Peng and her team are pushing forward. She says Google is now running additional trials with photos taken specifically to train its diagnostic AI. Preliminary results, she says, indicate that the system once again performs as well as trained doctors. The machines, it seems, are gaining new kinds of sight. And some day, they might save yours.
"
|
877 | 2,017 |
"The Unsettling Performance That Showed the World Through AI’s Eyes | WIRED"
|
"https://www.wired.com/2017/04/unsettling-performance-showed-world-ais-eyes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Culture The Unsettling Performance That Showed the World Through AI’s Eyes Save this story Save Save this story Save Inside an abandoned warehouse on the San Francisco docks, as the damp air floods through the holes in its rusted tin roof, Sunny Yang is playing her cello while recovering from the flu. She is 45 percent sad and 0.01 percent disgusted.
That, at least, is the read from the AI that's tracking her expressions, gestures, and body language from the other side of the warehouse, flashing these stats on the movie screen behind her. The audience—several hundred people huddled between her and the AI, dressed in scarfs, hats, and overcoats—lets out a collective laugh.
Yang is playing alongside the rest of the Kronos Quartet , the iconic San Francisco string ensemble known for its unorthodox experimentation, and the AI is obeying orders from Trevor Paglen , the American artist who poses big questions about technology and surveillance through nearly any medium he can get his hands on. It's all part of Sight Machine , a Paglen-orchestrated performance that explores the rise of computer vision.
Minutes later, as the quartet begins another piece, new images appear on the screen. At first, they show the Earth through the eyes of a satellite circling above. Then they zoom in on the ground below, where an AI locks in on homes, cars, and individuals, tracking their movements from the heavens much as Paglen's hardware and software tracked Yang's movements inside the warehouse. "One Earth, one people," says a disembodied voice, the words bouncing through the cold of the warehouse. This time, no one laughs. What was amusing just minutes earlier is now so unsettling.
Three months after Paglen's piece at the edge of San Francisco Bay, these feelings still resonate. As is typical of the artist's work, Sight Machine tapped into something that's largely unseen but very real. As computer vision quietly spreads through our lives and landscapes, it's entertaining and practical, powerful and flawed, amusing and disturbing. The same goes for AI as a whole.
You can't see it. But it's everywhere.
"There were no conclusions," said Henry Dills, a photographer and cellist who watched the performance dressed in a brown sport coat and a white scarf that reached past his waist. "These machines are starting to massively overshadow us. It used to be God. Now it's machines." Google, Facebook, and Apple are all building services that can analyze human emotion in real time.
Startups like Descartes Labs and Orbital Insights use similar technology to analyze vast troves of satellite imagery, drawing conclusions about human activity and intentions that humans themselves would have trouble reaching on their own. Relying on deep neural networks---complex mathematical systems that can learn to perform tasks by analyzing vast amounts of data---these services don't work perfectly. But they're improving rapidly---and moving quickly from the lab into the real world.
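The percentages flashed on Paglen's screen come from a standard final step: a softmax that turns a network's raw scores into probabilities summing to 100 percent. Here is a minimal sketch, with a hypothetical emotion list and made-up logits standing in for a trained network's outputs; none of this is Paglen's actual software.

import numpy as np

# Hypothetical emotion classes and raw network outputs (logits); a real
# system would compute the logits with a trained deep neural network.
EMOTIONS = ["happy", "sad", "angry", "surprised", "fearful", "disgusted", "neutral"]
logits = np.array([0.4, 2.3, 0.1, 0.2, 0.5, -6.0, 1.1])

def softmax(x):
    # Shift by the max for numerical stability, then normalize to sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

for emotion, p in zip(EMOTIONS, softmax(logits)):
    print(f"{emotion}: {p:.2%}")   # prints one percentage per emotion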
That includes Paglen's technology, which he and his team of engineers built using the same open source neural network software that runs inside Google and other companies. Some of the imagery during the concert came pre-recorded, but in many cases, neural nets tracked Yang and the rest of the Kronos Quartet in real time thanks to a collaboration with light projection company Obscura Digital.
"What I want out of art are things that help us see the world that's around us, the historical moment we live in," Paglen says.
Paglen is best known for work that explores the depths of government surveillance, from photos of undersea cables tapped by the NSA to a book that maps the Pentagon's global spy network. This year, he began a residency at Stanford's Cantor Arts Center , next to a major hub of artificial intelligence research.
But he isn't toying with old ideas or cultural cliches about robot overlords. He's exploring what's happening at the sharp end of computer vision. Originally, Paglen wanted a residency inside OpenAI, the billion-dollar lab bootstrapped by Tesla CEO Elon Musk and Y Combinator president Sam Altman. OpenAI aims to accelerate the progress of artificial intelligence even as it protects the world from the dangers such acceleration may bring.
But OpenAI balked. So Paglen went to Stanford.
His new residency touches on some of the same themes that run through his work on government surveillance. The two overlapped when the quartet played Terry Riley's One Earth, One People, One Love and those AI overlords zoomed in on us from above (and the dreadlocked Silicon Valley security and privacy guru Moxie Marlinspike watched from his spot at the very center of the warehouse). At the same time, Paglen is addressing the sometimes mysterious, sometimes unsettling way that modern AI learns on its own.
In many cases, neural networks are enormously capable at doing what they're asked to do. But even the people who build them don't completely understand why they're so effective.
They learn by analyzing more data, more carefully, than a human ever could. This complexity means, among other things, that humans can't really dissect the decisions they make.
Sight Machine, Paglen says, aims to explore this mystery. "It's trying to look inside the software that is running an AI. It's trying to look into the architectures of different computer vision systems and trying to learn what it is that they are seeing," he explains. "How are they looking at images? And what are the social, ethical, economic, and political consequences of these modes of seeing, which are becoming more and more ubiquitous?" This came to the fore at the end of the performance, when Kronos played the first movement of Steve Reich's Different Trains, what Paglen calls a musical exploration of train lines driving the expansion of the United States in the early part of the 20th century. It was a play on words. "I like the idea of trains and training sets," Paglen says, letting out one of his big staccato laughs. But it also served as a symbol for the way neural networks are careening into the future, largely without our help.
As Kronos played, countless photos flashed on the screen, all lifted from ImageNet, one of the chief image databases for training computers to recognize specific images. It was beautiful, almost mesmerizing---like so much of today's AI research. But Paglen was also hinting that this beauty may morph into something else---echoing the worry of so many that AI will not just destroy our privacy but steal our jobs and perhaps even grab control over our own world. Like Reich, Paglen is exploring the relationship between technology and progress, as he wrote in the program for Sight Machine.
"Progress" was in quotes.
"
|
878 | 2,016 |
"Hacker Lexicon: What Is Fuzzing? | WIRED"
|
"https://www.wired.com/2016/06/hacker-lexicon-fuzzing"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Andy Greenberg Security Hacker Lexicon: What Is Fuzzing? Getty Images Save this story Save Save this story Save Hackers sometimes portray their work as a precise process of learning every detail of a system---even better than its designer---then reaching deep into it to exploit secret flaws. But just as often, it's practically the opposite, a fundamentally random process of poking at a machine and watching what happens. Refine that random poking to a careful craft of trial and error, and it becomes what hackers call "fuzzing"---a powerful tool for both computer exploitation and defense.
TL;DR: Fuzzing is the usually automated process of entering random data into a program and analyzing the results to find potentially exploitable bugs.
In the world of cybersecurity, fuzzing is the usually automated process of finding hackable software bugs by randomly feeding different permutations of data into a target program until one of those permutations reveals a vulnerability. It's an old but increasingly common process both for hackers seeking vulnerabilities to exploit and defenders trying to find them first to fix. And in an era when anyone can spin up powerful computing resources to bombard a victim application with junk data in search of a bug, it's become an essential front in the zero-day arms race.
Compared with traditional reverse engineering, "it's a kind of dumb science," says Pedram Amini, chief technology officer of the cybersecurity firm InQuest and a co-author of the book Fuzzing: Brute Force Vulnerability Discovery.
"You’re throwing a whole lot of data at a program, mutating it quickly and relying on your monitoring of the software to find when something bad has happened instead of meticulously mapping out the data flow to find a bug...It’s a way of killing off a lot of bugs very quickly." A hacker fuzzing Internet Explorer, for instance, might run Microsoft's browser in a debugger tool, so that they can track every command the program executes in the computer's memory. Then they'd point the browser to their own web server, one designed to run their fuzzing program. That fuzzer would create thousands or even millions of different web pages and load them in its browser target, trying variation after variation of HTML and javascript to see how the browser responds. After days or even weeks or months of those automated tests, the hacker would have logs of the thousands of times the browser crashed in response to one of the inputs.
Those crashes themselves don't represent useful attacks so much as annoyances; the real goal of fuzzing is not merely to crash a program, but to hijack it. So a hacker will scour their fuzz inputs that led to crashes to see what sorts of errors they caused. In some small set of cases, those crashes may have happened for an interesting reason---for example, because the input caused the program to run commands that are stored in the wrong place in memory. And in those cases the hacker might occasionally be able to write their own commands to that memory location, tricking the program into doing their bidding---the holy grail of hacking known as code execution. "You shake a tree really hard, and you use a bunch of filters," says Amini. "Eventually fruit will come out." Fuzzing's method of using random data tweaks to dig up bugs was itself an accident. In 1987, University of Wisconsin at Madison professor Barton Miller was trying to use the desktop VAX computer in his office via a terminal in his home. But he was connecting to that UNIX machine over a phone line using an old-fashioned modem without error correction, and a thunderstorm kept introducing noise into the commands he was typing. Programs on the VAX kept crashing. "It seemed weird, and it triggered the idea we should study it," he says.
With a group of students, Miller created the first purpose-built fuzzing tool to try to exploit that method of haphazardly stumbling into security flaws, and they submitted a paper on it to conferences. "The software community slaughtered me. 'Where's your formal model?' they'd say. I'd say, 'I'm just trying to find bugs.' I got raked over the coals," he remembers. "Today, if you're a hacker trying to crack a system, the first thing you do is fuzz test it." In fact, fuzzing has grown from a low-budget technique used by individual hackers to a kind of table-stakes security audit performed by major companies on their own code. Lone hackers can use services like Amazon to spin up armies of hundreds of computers that fuzz-test a program in parallel. And now companies like Google also devote their own significant server resources to throwing random code at programs to find their flaws, most recently using machine learning to refine the process.
Companies like Peach Fuzzer and Codenomicon have even built businesses around the process.
All of that, Amini argues, has made fuzzing more relevant than ever. "Software shops are doing this work as a standard part of their development cycle," he says. "It's a great investment, and they're helping to improve the world's security by burning software cycles for everyone."
"
|
879 | 2,016 |
"MIT's Teaching AI How to Help Stop Cyberattacks | WIRED"
|
"https://www.wired.com/2016/04/mits-teaching-ai-help-analysts-stop-cyberattacks"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Brian Barrett Security MIT's Teaching AI How to Help Stop Cyberattacks Getty Images Save this story Save Save this story Save Finding evidence that someone compromised your cyber defenses is a grind. Sifting through all of the data to find abnormalities takes a lot of time and effort, and analysts can only work so many hours a day. But an AI never gets tired, and can work with humans to deliver far better results.
A system called AI 2, developed at MIT's Computer Science and Artificial Intelligence Laboratory, reviews data from tens of millions of log lines each day and pinpoints anything suspicious. A human takes it from there, checking for signs of a breach. The one-two punch identifies 86 percent of attacks while sparing analysts the tedium of chasing bogus leads.
That balance is critical. Relying entirely upon machine learning to spot abnormalities inevitably will reveal code oddities that aren't actually intrusions. But humans can't hope to keep up with the volume of work required to maximize security. Think of AI 2 as the best of both worlds---its name, according to the research paper, invokes the intersection of analyst intuition and an artificially intelligent system.
Most of AI 2's work helps a company determine what's already happened so it can respond appropriately. The system highlights any typical signifiers of an attack. An extreme uptick in log-in attempts on an e-commerce site, for instance, might mean someone attempted a brute-force password attack. A sudden spike in devices connected to a single IP address suggests credential theft.
In fact, without human input, AI 2 wouldn’t be possible.
Other machine-learning systems dig through mountains of data looking for suspicious activity. But only AI 2 uses regular input from analysts to turn that mountain into a molehill. A machine lacks the expertise to do the job alone.
“You have to bring some contextual information to it,” says research lead Kalyan Veeramachaneni. That's the role of human analysts, who recognize external variables that might explain a given outlier. An obvious, and common, example: Companies often stress-test their systems, causing irregularities that everyone expects. An unsupervised AI has trouble discerning such a test from a legitimate threat. AI 2 can figure it out within weeks.
"On day one, when we deploy the system, it's [only] as good as anyone else," says Veeramachaneni. Instead of working in isolation, though, AI 2 shows a security expert the day's 200 most abnormal events. The analyst provides feedback, identifying legitimate threats. The system uses that information to fine-tune its monitoring. The more often this happens, the fewer outliers the AI identifies, improving its ability to identify actual threats.
“Essentially, the biggest savings here is that we’re able to show the analyst only up to 200 or even 100 events per day, which is a very tiny percentage of what happens,” says Veeramachaneni.
The human analyst provides feedback as to what was and wasn’t a legitimate threat, and the system uses that information to fine-tune its monitoring the next day.
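A toy version of that loop, under loudly stated assumptions (random vectors in place of real log features, a simple threshold rule in place of the human analyst, and generic scikit-learn models rather than anything from MIT's paper), might look like this:

import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
events = rng.normal(size=(5000, 8))   # stand-in feature vectors for one day of log events

# Unsupervised pass: rank everything by outlier score, surface the top 200.
iso = IsolationForest(random_state=0).fit(events)
scores = -iso.score_samples(events)   # higher = more abnormal
top200 = np.argsort(scores)[-200:]

# "Analyst" feedback: here a simple threshold rule stands in for the human verdicts.
labels = (events[top200].max(axis=1) > 2.5).astype(int)

# Supervised pass: fold the labels back in to sharpen tomorrow's ranking.
clf = RandomForestClassifier(random_state=0).fit(events[top200], labels)
print("flagged as threats today:", int(labels.sum()))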
None of this is theoretical. AI 2 honed its skills reviewing three months’ worth of log data from an unnamed e-commerce platform. The dataset included 40 million log lines each day, some 3.6 billion in all. After 90 days, AI 2 could detect 85 percent of attacks. Veeramachaneni says the unnamed site saw five or six legitimate threats a day during that time, and his system could pinpoint four or five.
Not a perfect score, but Veeramachaneni says achieving an 85 percent detection rate using unsupervised machine learning would mean having analysts review thousands of events per day, not hundreds. Conversely, pulling 200 machine-identified events each day without an analyst's input yields a 7.9 percent success rate.
AI 2 also can help prevent attacks by building predictive models of what might happen the following day. If hackers use the same method over the course of a few days, a business can bolster security by, say, requiring additional confirmation from customers. If you know someone's trying to swim across your moat, you can throw a few more alligators in there.
Though the tech shows great promise, it cannot replace human analysts. Security is just too important, and the threats too varied. "The attacks are constantly evolving," Veeramachaneni says. "We need analysts to keep flagging new types of events. This system doesn’t get rid of analysts. It just augments them.” Science might one day provide an infallible security system. Until then, a combination of accuracy and efficiency remains the best anyone can hope for. And that, it turns out, means man and machine working together.
"
|
880 | 2,015 |
"Baidu, the 'Chinese Google,' Is Teaching AI to Spot Malware | WIRED"
|
"https://www.wired.com/2015/11/baidu-the-chinese-google-is-teaching-ai-to-spot-malware"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Baidu, the 'Chinese Google,' Is Teaching AI to Spot Malware Photo: Ariel Zambelich/Wired Ariel Zambelich/WIRED Save this story Save Save this story Save Andrew Ng picks up his iPhone and opens an app called FaceYou.
Ng, the chief scientist at Chinese Internet giant Baidu, is eating lunch at his desk inside the company's Silicon Valley research lab, and naturally, the conversation revolves around artificial intelligence. Ng, who moonlights as a professor of computer science at Stanford, helped launch the Google Brain project at that other search giant down the road, and now, he's exploring similar AI research at Baidu.
FaceYou is a way of demonstrating some of the company's latest work with what's called deep learning.
Released just before Halloween, the app taps into a live video image of your face, fitting your mug with a kind of virtual mask.
It can make you look like Barack Obama or Bill Clinton or JFK or, uh, a geisha—not to mention all sorts of classic Halloween ghouls. These masks move as your face moves, fitting your jaw, nose, and eyes just right. The app can do this, Ng says, because it has "learned" to identify over 70 different facial features and shape the masks accordingly.
It learns via a neural network—a network of machines that approximate the web of neurons in the human brain. In essence, Baidu feeds this neural net with thousands of images of human faces, and over time, it gets a sense of what a face looks like. At the big Internet giants, this kind of deep neural network is all the rage. Last week, Facebook showed how such neural networks can be used not only to recognize photos, but also, on some level, to understand natural language.
And this week, during a briefing with reporters at its Mountain View headquarters, Google explained how neural networks can recognize spoken words and translate from one language to another.
All of this is well documented.
But deep learning is quickly pushing into other areas as well. Ng also says that Baidu is now using deep neural nets to help drive the company's security software (Baidu sells Symantec-like anti-virus software in China). He declined to discuss the specifics. But in effect, the company is teaching a neural net to identify new malware by feeding it scads of known malware. Just as a neural net can learn to recognize a face, it can learn to identify a virus.
"You input the state of a system, and it tries to detect whether or not there's a threat, if someone is trying to do something that isn't supposed to be done," Ng says. "One specific example is anti-virus...You examine a [file] and try to determine if it's malicious." Baidu isn't the only one trying to spot malicious code with artificial intelligence. This week, an Israeli company called Deep Instinct opened its doors, saying that has spent thew last two years building a security tool that can learn to identify malware in a similar way. "First, we tested our infrastructure with images, audios, and text," says Deep Instinct chief technology officer Eli David. "Then we applied it to cybersecurity." Meanwhile, other operations, including Microsoft and a company called Invincea , have published papers describing how this approach can work.
The technique is intriguing because it would allow tools to identify a particular piece of malware before it has been identified in the wild. Traditionally, anti-virus programs operate by tapping into a vast database of known malware—malware that has been explicitly identified by researchers. A neural net could identify a new piece of malware just because it looks like other malware—just because it resembles tens of thousands of viruses that have been identified in the past. "You can identify malware even if it's never been seen before," Ng says.
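As a sketch of the idea, with synthetic byte blobs standing in for real files and no claim that this mirrors Baidu's or Deep Instinct's architecture, a byte-histogram feature plus a small neural classifier captures the "looks like other malware" intuition:

import numpy as np
from sklearn.neural_network import MLPClassifier

def byte_histogram(blob: bytes) -> np.ndarray:
    # Represent a file by the normalized frequency of each byte value.
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(len(blob), 1)

rng = np.random.default_rng(1)
benign = [rng.integers(0, 128, 512, dtype=np.uint8).tobytes() for _ in range(200)]
malware = [rng.integers(96, 256, 512, dtype=np.uint8).tobytes() for _ in range(200)]

X = np.array([byte_histogram(b) for b in benign + malware])
y = np.array([0] * 200 + [1] * 200)   # 0 = benign, 1 = malicious
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)

unseen = rng.integers(96, 256, 512, dtype=np.uint8).tobytes()  # a never-seen sample
print("flagged as malicious:", bool(net.predict([byte_histogram(unseen)])[0]))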
That said, many security experts question the value of such security software in general. "This falls into the category of 'show me in production.' We see bold claims like this all the time in the industry—that some new scientific technique creates a breakthrough in defense. Most of the time, it doesn’t really work out that way," says Rich Mogull, a security analyst and consultant with a company called Securiosis.
"In other words, it looks promising on paper, and maybe they even have great demos, but we really can’t say anything positive or negative until we see it in a real production environment and measure the results. Security is about stopping an adversary, not a technology, and where the two meet is much messier in reality than theory." Whatever the ultimate value of this new breed of security software, the new tool from Deep Instinct points to a second trend in the world of deep learning. The company trains its model on vast neural nets inside the data center, but once the model is trained, it can run on smartphone and other small machines. According to David, the company puts a tiny agent on user phones, and this agent can identify malware without calling back to the data center.
Typically, this is not how a deep learning service works. It operates in two stages—training and execution—but both stages happen in the data center, tapping into a vast network of machines. (This is why Google Now doesn't work when you're not connected to the Internet.) But researchers are now working to hone the execution stage so that it can run on phones, even without an Internet connection.
Google, for instance, can now do instant language translation on a phone.
This lets you point your phone at a sign that's in a foreign language and instantly view it in English. And, in fact, Baidu's FaceYou app executes entirely on the phone. The rub is that it can lag at times. Getting these complex AI models onto such small devices isn't easy.
In any event, the seemingly frivolous FaceYou points the way to a rather wide range of less frivolous applications. Top Google engineer Jeff Dean says that deep learning is now used in dozens of Google applications, and this week, during that briefing with reporters, Google researcher Greg Corrado said that deep learning code shows up in over 1,200 project software libraries inside the company—which means that many projects are at least kicking the tires on this increasingly important technology.
Google recently revealed that deep neural nets now underpin its Internet search engine. In the past, Ng has said that, at Baidu, neural nets help target ads. Facebook is exploring systems that allow blind Facebookers to understand what's in the photos that turn up in their News Feed. AI is no longer a niche pursuit. It's just part of how we compute—or, indeed, how we live.
"
|
881 | 2,014 |
"The AI Startup Google Should Probably Snatch Up Fast | WIRED"
|
"https://www.wired.com/2014/07/clarifai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robert McMillan Business The AI Startup Google Should Probably Snatch Up Fast Clarifai Save this story Save Save this story Save First, Google acquired a startup called DNNresearch , snapping up some of the world's foremost experts in a burgeoning field of artificial intelligence known as deep learning. Then it shelled out $400 million for a secretive deep learning startup called DeepMind.
Much like Facebook, Microsoft, and others, Google sees deep learning as the future of AI on the web, a better way of handling everything from voice and image recognition to language translation.
But there's one notable deep learning company that Google hasn't yet bought. It's called Clarifai, and it may remain an independent operation. Clarifai, you see, wants to open up some of the deep learning secrets used by Google, Facebook, and other companies and share them with the rest of the world.
Clarifai specializes in using deep learning algorithms for visual search. In short, it's building software that will help you find photos---whether they're on your mobile phone, a dating website, or on a corporate network---and it will sell this software to all sorts of other companies that want to roll it into their own online services. "We're interested in making search through images simple," says founder Matthew Zeiler, a 27-year-old researcher, fresh out of New York University's computer science PhD program.
Last year, along with NYU Professor Rob Fergus, Zeiler won a key image recognition test in a closely watched artificial intelligence competition called ImageNet.
Fergus was soon snatched up by Facebook, and the big tech companies wanted to hire Zeiler too. But he had other plans.
Over the past few years, there's been a major-league talent grab going on in the world of deep learning, which relies on computer models that simulate the way information is processed by the human brain. In addition to Fergus, Facebook hired another well-known academic named Yann LeCun. Baidu picked up Stanford's Andrew Ng, and Apple is building out a team too.
The technology has already improved Android's voice recognition and helped Microsoft create a futuristic live voice translation system called Skype Translate. But Zeiler thinks that many others could benefit from deep learning.
The trouble is, unless you have the money to hire your own deep learning experts, it can be hard to get the technology just right. The really difficult part is building learning models---essentially algorithms for processing all of the visual data---that work quickly across many different types of images. "To train these models is more of an art than a science," says Zeiler. "It takes a lot of years of experience." That's where Clarifai comes in. Zeiler has spent the past five years working with two of the biggest names in the field on this kind of learning model: Geoff Hinton---now at Google---and Facebook's Yann LeCun.
The idea is that you can upload an image to the Clarifai software, and it will figure out what's in your picture and offer you more of the same. "It's really defining your search in a visual way---not just in a text way," says Zeiler.
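That "more of the same" behavior reduces to nearest-neighbor search over image embeddings. The sketch below is illustrative only; the embed function is a crude stand-in for the deep models Zeiler describes, but the search step works the same way regardless of how the vectors are produced:

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in for a trained network's feature extractor: here, just the
    # image's average color, normalized to unit length.
    v = image.mean(axis=(0, 1))
    return v / (np.linalg.norm(v) + 1e-9)

rng = np.random.default_rng(2)
library = [rng.random((64, 64, 3)) for _ in range(1000)]   # the photo collection
index = np.stack([embed(im) for im in library])

query = rng.random((64, 64, 3))                            # the uploaded picture
similarity = index @ embed(query)                          # cosine similarity
print("closest matches:", np.argsort(similarity)[::-1][:5])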
An example of how Clarifai's software works, using an image of one of our favorite internet cats, Lil Bub.
This could make Clarifai's software appealing to businesses that own a large number of photographs but don't yet have a good way to search through them. "The number of images and videos coming online is increasing," says Max Krohn, a co-founder of the dating web site OKCupid. "So something has to make sense of all of that. And this idea that you just upload to Google and let them take care of that, it's good for consumers, but it's not good for enterprise or for some commerce plays." Krohn became an angel investor in Clarifai after checking out Zeiler's image searching demo late last year. Another investor: Google Ventures.
Clarifai is developing an application program interface, or API, that will let software developers access its image search technology over the net. The company plans on licensing its software to corporate users---stock image companies, for example---and it also wants to build a consumer-grade app that could index and search the photos on your phone, much like Google's Photos app. Zeiler thinks it could be useful in e-commerce and targeted advertising too. "Let's say that you're walking down the street and you want to buy that dress that you see on some girl," he says. "Take a shot of it and we can instantly match it on all of the online stores." If you think that sounds like something that Google, Facebook, and even Amazon might be interested in, you're right. But Clarifai is not yet ready to get "serious about acquisitions," Zeiler says. For now, he's focused on other things. "We really want to get something out there that users are going to be able to use and benefit from."
"
|
882 | 2,012 |
"What Is a Virtual Network? It's Not What You Think It Is | WIRED"
|
"https://www.wired.com/wiredenterprise/2012/05/what-is-a-virtual-network"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business What Is a Virtual Network? It's Not What You Think It Is Save this story Save Save this story Save Steve Herrod envisions a world where we build entire data centers using nothing but software.
His dream isn't unexpected. Steve Herrod is the chief technology officer at VMware.
For more than a decade, VMware has helped the world's businesses move their computing applications onto virtual servers, machines that exist only as software, and now, Herrod and company are working to expand the world of virtual computing, so that applications run atop not only virtual servers but completely virtual networks.
On Wednesday morning, on stage at a tradeshow in Las Vegas, Herrod will herald the age of the "software-defined data center." It's a play off another term that's been much-hyped across Silicon Valley in recent months: "software-defined networking," or SDN. It's an unfortunate term, for many reasons. But behind the name -- and the hype -- there's very real technology that will let us build networks using nothing but software, and VMware is just one of many companies pushing this technology forward, including Cisco, Microsoft, Intel, and -- most notably -- a swashbuckling startup called Nicira.
"We're at a new moment in time," Herrod tells Wired. "A lot of what we've done on the compute side of virtualization can translate into networking." So it can. The trouble is that this movement is widely misunderstood. Part of the problem is that "software-defined networking" is one of those ridiculously broad terms that so many companies have suddenly slapped on their once and future products in an effort keep up with the Joneses of the tech world. It's "the cloud" all over again. But the other issue is that computer networking isn't the easiest thing to wrap your head around. And things only get more difficult when you try wrapping it around a virtual network.
Of course, a decade ago, virtual servers boggled the brain. And now, according to research outfit IDC, they run close to 65 percent of all server tasks on earth.
Google Goes With the Open Flow
At the highest level, the SDN movement is an effort to build networks you can program -- just as you can program a computer. "Software-defined networking is applying modularity to network control," Scott Shenker, a Berkeley professor and one of the co-founders of Nicira, tells Wired.
"Modularity is something every software designer does in their sleep. If a program isn’t modular, it’s just spaghetti code. Software-defined networking asks what are the right software abstractions that let us structure the network control plane so it’s evolvable, so it's not just a bunch of spaghetti code." The term originated with Shenker and the two other co-founders of Nicira: Stanford professor Nick McKeown and one of McKeown's Ph.D. students, Martin Casado, who now serves as the company's chief technology officer. Nicira was founded in 2007, and while still in "stealth mode" -- meaning it wasn't talking to the press -- the company spearheaded the development of a technology called OpenFlow, an effort to build a standard protocol for controlling network hardware with software.
"Think of it as a general language or an instruction set that lets me write a control program for the network rather than having to rewrite all of code on each individual router," says Shenker. Traditionally, when you bought networking hardware, you were forced to use the control software supplied by the company that built that hardware. But with OpenFlow, you could build your own software.
The technology soon found a home at the biggest name on the internet: Google. About two years ago, as Google recently revealed, the company built a new breed of networking hardware it could control with OpenFlow, and it used this hardware to rebuild the links between the massive data centers it operates across the globe. In part because of Google's involvement, OpenFlow was soon caught up in the Silicon Valley hype machine, with many going so far as to say it would bury networking hardware giants such as Cisco and Juniper.
>"Google really threw a lot of people at the problem. None of my customers can afford that" But OpenFlow is just a small piece of the puzzle. You can't use OpenFlow unless you have hardware that supports it, and even with the hardware in place, you still have to build a network controller that uses the protocol. This is what Google did, working in tandem with Nicira engineers and others.
Ivan Pepelnjak -- a consultant who helps companies build networks and runs a blog dedicated to networking issues called ipSpace.net -- points out that this sort of undertaking is not something the average company is capable of. "Google really threw a lot of people at the problem," he tells Wired. "None of my customers can afford that." Yes, some of the traditional networking vendors are working on gear that uses OpenFlow, and others are building controllers that use the technology. But Nicira chief technology officer Martin Casado -- one of the driving forces behind the creation of the technology -- believes that in the grand scheme of things, OpenFlow isn't all that important. Nicira has set its sights well beyond the technology, creating a network controller that lets you build virtual networks -- i.e., networks that operate independently of the network hardware running beneath them. And companies such as VMware and Microsoft are looking to do much the same thing.
Networks Go Virtual
To build its virtual networks, Nicira created a new type of virtual networking switch, known as Open vSwitch. But it also built a "tunneling protocol" called STT (Stateless Transport Tunneling). A tunneling protocol lets you run one network protocol over a network that's built for a different protocol. In short, STT lets you transport Ethernet data inside packets that use the Internet Protocol, or IP -- the protocol used to connect machines on the internet.
The result is that you can build a virtual network that runs atop existing hardware from any networking vendor, including Cisco, Juniper, or some company in Taiwan building low-cost gear.
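Conceptually, the encapsulation is just framing: the original Ethernet frame becomes the payload of an ordinary IP packet that the physical network knows how to carry. The sketch below wraps a frame in a minimal IPv4 header; it is deliberately simplified (checksum left zero, placeholder protocol number) and is not STT's real framing, which rides on a TCP-like header.

import struct

inner_frame = b"\xff" * 6 + b"\x02" * 6 + b"\x08\x00" + b"original Ethernet payload"

# Minimal IPv4 header: version/IHL, TOS, total length, ID, flags/fragment,
# TTL, protocol, checksum (left zero here), source, destination.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5, 0, 20 + len(inner_frame), 0, 0,
    64, 143, 0,                      # 143 is a placeholder protocol number
    bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
)
tunneled = header + inner_frame      # frame-in-packet: the essence of a tunnel
print(len(tunneled), "bytes handed to the physical network")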
Nicira came out of stealth mode early this year, saying that its customers already include AT&T, eBay, Japanese telecom NTT, financial giant Fidelity, and Rackspace, the Texas outfit that competes with Amazon in the cloud computing game.
At Rackspace, the technology underpins the beta version of Cloud Servers, an Amazon-like web service that provides virtual infrastructure to companies and independent developers across the globe. With Nicira, Rackspace can readily create multiple virtual networks for each customer -- without creating separate physical networks.
Yes, you can already do this sort of thing with a technology called a Virtual LAN, or VLAN. But VLANs have their limits. In essence, a VLAN is created by adding an extra tag to Ethernet data packets, or frames. Frames with separate tags belong to separate networks. But because each tag is only 12 bits long, you can create only about 4,000 virtual networks on each physical network. "It was a very simple hack," says Pepelnjak.
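The arithmetic behind that ceiling is direct, since a 12-bit field has 2^12 possible values:

# 12-bit VLAN tag => 4096 possible IDs; two are reserved in practice,
# which is where the "about 4,000 networks" limit comes from.
print(2 ** 12)       # 4096
print(2 ** 12 - 2)   # 4094 usable IDs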
Nicira's controller busts through this limitation -- and goes several steps further. "Once you have your switch virtualized, you can pretty much do whatever you want with it," says Rackspace chief technology officer John Engates. "You can route traffic however you like, and you can reprogram it whenever you like, on the fly." This is quite different from what Google and other companies are doing with OpenFlow. Google is using OpenFlow to control its network hardware, but with Nicira's controller, you can build a virtual network that doesn't depend on the hardware at all. The hardware beneath this virtual network is merely used to forward packets, and the rest of the network is built with software -- i.e., how packets are routed, how security is handled, etc.
The Skype Analogy The confusion comes because OpenFlow and virtual networks are often tossed into the same bucket: software defined networking. Nicira actually uses OpenFlow to build its virtual networks, but the two are far from synonymous. OpenFlow is not a virtual network, and a virtual network is not OpenFlow.
"OpenFlow is just one of the tools," Pepelnjak says. "If you want to put together a wooden rack. You need wood. You need drills. You need screws. You need screwdrivers. OpenFlow is like the screwdriver. It's very good tool. It's a useful tool. But it's only a tool." What's more, the virtual networking moniker is often applied to several different technologies that are only partially related. Nicira's network controller, for instance, is sometimes compared to a pair of technologies known as VXLAN and NRGRE. But these are different things as well. VXLAN and NRGRE are tunneling protocols along the lines of Nicira's STT, and in order to build a true virtual network atop these protocols, you still need a network controller.
Steve Herrod and VMware back the VXLAN protocol, joining forces with network hardware giant Cisco, while NVGRE is backed by Microsoft, Intel, and Dell. Naturally, each camp sees its tunneling protocol as the best option. But according to Ivan Pepelnjak -- an independent voice -- Nicira is currently ahead of competitors because it has already built a complete network controller that uses its tunneling protocol.
>"In a small office, you can shout. On a big campus, you need something more." Pepelnjak says that building virtual network is like building a system that sends voice over IP -- like Skype, for instance. "It's like when you had voice running over traditional PBXs and then some said, 'Let's move that to IP' and we got Skype," he says. In his mind, using VXLAN and NVGRE today is still "hacks," whereas Nicira has done "the proper job" in building a complete network controller. Nicira has built the equivalent of Skype, he says, whereas the efforts are still short of this.
"Skype has a way of taking your Skype handle and turning it into your IP address, so I can call you, and you need something similar with VXLAN, NVGRE, and STT," he says. "The others have just emulated Ethernet over IP, but Nicira has done the proper job with a central controller. It's like the Skype directory service." Part of the difference, he says, is that you can build far more virtual networks with a controller in place than you can without. "It is the only way to truly scale." With some networks, VXLAN or NVGRE may give you everything you need, he says, but if you're provide virtual networks to a world of users, as Rackspace is, you may need more. "If you have hundreds of thousands of servers like Rackspace has...you need a controller," Pepelnjak says. "In a small office, you can shout. On a big campus, you need something more." In any event, similar controllers will be built atop by VXLAN and NVGRE, and these will compete with Nicira. Steve Herrod and Martin Casado disagree on whose technology is best positioned to reach the largest audience, but both are quite sure that virtual networking is the future. And it is. The world doesn't quite understand it yet. But it will.
"
|
883 | 2,012 |
"Mavericks Invent Future Internet Where Cisco Is Meaningless | WIRED"
|
"https://www.wired.com/wiredenterprise/2012/04/nicira/all"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Mavericks Invent Future Internet Where Cisco Is Meaningless Martin Casado, the chief technology officer of the most intriguing startup in Silicon Valley.
Photo: Jon Snyder/Wired Save this story Save Save this story Save PALO ALTO, California – Martin Casado stands up, reaches across the table, and tears a sheet of paper from a notebook. The notebook belongs to Alan Cohen, who works alongside Casado at Nicira, the most intriguing startup in Silicon Valley, and as Casado sits back down with his sheet of paper, Cohen keeps talking.
Cohen knows how to talk. He spent six years as a marketing exec at Cisco, the company that sells more networking hardware than anyone else in the world, and now, he's plugging Nicira, a company that wants to make Cisco irrelevant, taking the brains out of network hardware and moving them into software. As Cohen gives the elevator pitch – "we've created a new category: we're a network virtualization company" – Casado, the company's chief technology officer, is quietly doodling on his piece of paper. He's making lists and drawing pictures and linking them all together in some sort of elaborate flowchart.
As it turns out, he's mapping out what he will soon tell us about the origins of his nearly-five-year-old company and its lofty mission. "I was putting together a narrative," he says. "I'm a pretty linear thinker." That he is. But this doesn't quite do justice to the way his mind works. "Martin Casado is fucking amazing," says Scott Shenker, the physics PhD, UC Berkeley computer science professor, and former Xerox PARC researcher who has worked closely with Casado for the past several years on the networking problems Nicira is trying to solve. "I've known a lot of smart people in my life, and on any dimension you care to mention, he's off the scale." In much the same way he maps out his narrative with pen and paper, Casado has mapped out a new future for the world of networking. He and Nicira and a small community of other computer scientists are pioneering a new breed of computer network that exists only as software, a network you can control independently of the physical switches and routers running beneath it. With this paradoxical arrangement, they aim to provide a far easier way of building and modifying and rebuilding the networks that run the largest services on the web and beyond.
In short, Martin Casado envisions a world where networks can be programmed like computers.
"Anyone can buy a bunch of computers and throw a bunch of software engineers at them and come up with something awesome, and I think you should be able to do the same with the network," Casado says. "We've come up with a network architecture that lets you have the flexibility you have with computers, and it works with any networking hardware." In other words, it doesn't matter if you're using gear from Cisco or HP or Juniper or some manufacturer in Taiwan most people have never heard of.
With Nicira's platform, the hardware merely moves network packets to and fro, and the software does the thinking.
Casado's effort to overhaul the world's networks is well underway. The Nicira website will tell you its platform is already used by AT&T, eBay, Japanese telecom NTT, financial giant Fidelity, and Rackspace, the Texas-based outfit that trails only Amazon in the cloud computing game. But the company's influence extends much further. Though he won't name them, Casado says the Nicira platform is also used by some of the biggest names on the web. And we all know who those are.
>"Martin Casado is fucking amazing. I've known a lot of smart people in my life, and on any dimension you care to mention, he's off the scale." "That's one of the reasons we knew we were on to something," Casado says. "In the beginning, we thought we were just a cute cottage industry. But then we had multiple large web companies say, 'We were already doing something very similar to this, and we’d like to work with you.'" The platform is so attractive to these companies because today's hardware networks are ridiculously difficult to modify. Raymie Stata, until recently the chief technology officer of Yahoo, compares a complex computer network to the 15-puzzle game , that classic mind-bender were you're trying to rearrange 15 sliding tiles inside a square with space for only 16. When making a change to your network, he says, there are times when you have no choice but to physically rearrange the hardware.
In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. "What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don't need to know much," says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. "They've put the power in the hands of the cloud architect rather than the network architect."
The Trouble With The Most Secure Networks Ever Built
Martin Casado once worked with a U.S. intelligence agency. He won't name the agency, but he says he worked with what he believed to be the most secure computer networks ever built. The trouble, he says, was that building these networks was next to impossible, and if you ever wanted to change them, your problems started all over again.
"What was really shocking to me was that, at the time, market forces had totally failed to create networking equipment that the government could use. The government, which has incredibly deep pockets, couldn’t go out and buy what it wanted," Casado says. "It was extremely difficult to make these networks secure, and once you did, you had a really horrible management nightmare on your hands. Moving just one computer, for example, meant you had to make eight different configuration changes. You couldn't move anything – you couldn’t touch anything – unless you put a tremendous number of people to work." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Once you bought a piece of networking hardware, says Shenker, you didn't really have the freedom to re-program it. "Stuff had to be coded directly into the switch or the router. You would buy a router from Cisco and it would come with whatever protocols it supported and that’s what you ran." >"What was really shocking to me was that, at the time, market forces had totally failed to create networking equipment that the government could use" Shenker says there was good reason for this. "If you buy switches from a company and you expect them to work," he explains. "A networking company doesn't want to give you access and have you come running to them when your network melts down because of something you did." But these restrictions caused huge problems for organizations who were pushing the boundaries of network design, including not only intelligence agencies like the one Casado worked for, but massive web companies such as Google and Amazon.
In 2005, Google went so far as to build its own networking hardware , in part because it needed more control over how the hardware operated. "When Google looked at their network, they need high-bandwidth connections between their servers and they wanted to be able to manage things — at scale," says JR Rivers, one of the engineers who worked on Google's original network hardware designs. "With the traditional enterprise networking vendors, they just couldn’t get there. The cost was too high, and the systems were too closed to be manageable on a network of that size." So, after he left his government job in 2003 and enrolled in graduate school at Stanford, the Silicon Valley university that spawned Google, Martin Casado resolved to build a new kind of network, a network that wasn't such a nightmare. "There was a realization that networks blow – that they suck," Casado remembers. "When I went to Stanford, this is the problem I worked on: how do we make networks not suck? We want them to be as flexible and as programmatic as computers." Death to Spaghetti Code At Stanford, Casado studied with Nick McKeown, a professor and networking researcher who once worked for HP Labs and Cisco, and during this time, he met Scott Shenker, who oversaw the networking group at Berkeley's International Computer Science Institute. Both McKeown and Shenker worked with Casado on his PhD thesis – a network architecture dubbed Ethane – and in 2007, using the thesis as a jumping off point, the three of them founded Nicira.
It was the beginning of a movement known as "software-defined networking." It's a dreadful name. Even Casado admits as much. But like so many dreadful names in the tech world, it stuck.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg In short, software-defined networking – or SDN – sought to create a better way of controlling networks. "Software-defined networking is applying modularity to network control," says Scott Shenker. "Modularity is something every software designer does in their sleep. If a program isn't modular, it's just spaghetti code. Software-defined networking asks what are the right software abstractions that let us structure the network control plane so it’s evolvable, so it's not just a bunch of spaghetti code." >"Software-defined networking asks what are the right software abstractions that let us structure the newtwork control plane so it’s evolvable, so it's not just a bunch of spaghetti code." Spanning computer scientists at Nicira and various academics, the movement achieved its first big breakthrough with OpenFlow, a standard way of remotely managing network switches and routers. "Think of it as a general language or an instruction set that lets me write a control program for the network rather than having to rewrite all of code on each individual router," says Shenker. Amazing as it may sound, this sort of thing didn't exist before. OpenFlow soon developed a following among some of the industry's biggest names, including Google, HP, NEC, and Ericsson, and it has been widely hailed in the press as the technology that will deliver networking from the dark ages.
The trouble is that you can't use OpenFlow on routers and switches unless the vendors add the protocol to their hardware, and even then, says Casado, who wrote the first draft of the specification, OpenFlow is only so useful. Shenker agrees. "From an industry-structure and industry-standard point of view, OpenFlow is important. It defines the detailed language of how I speak to a switch," he says. "But from an architecture point of view, it's very unimportant. The more important component is how you coordinate the activities of switches in order to provide a coherent behavior." The ultimate aim was not to find a better way of managing networking hardware, but to create a software architecture that would let you build networks without having to deal with the hardware. The ultimate aim was to build virtual networks. According to Shenker, Casado doesn't do things halfway – whether he's at work or play. "He's an ultra-marathoner," Shenker says. "When he gets up in the morning for a run, he runs to Half Moon Bay." The vSwitch and Beyond Nicira is often compared to VMware, another company that grew out of research at Stanford. In the early aughts, VMware pioneered the art of server virtualization, and this quickly revolutionized the computer data center, helping big businesses save both money and space by running many virtual servers on a single physical server. Now, Nicira is doing much the same with networks.
"We’re virtualizing away this physical fabric," Casado says, "and because now we have a virtual layer, you can do anything you want with it." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg VMware has long offered a virtual network switch as part of its "hypervisor," the platform that runs its virtual servers, and similar virtual switches were included with open source hypervisors such as Xen and KVM. But these "vSwitches" were limited. You couldn't really string them together into a complex virtual network. "A vSwitch is required for network virtualization," Casado says, "but it doesn't give you a virtualized network." What Casado and Nicira have done is build a new breed of vSwitches that can be tied together into a true virtual network, and they've built the control software that lets you do so.
Known as Open vSwitch , Nicira's virtual switch is open source – meaning it's freely available to anyone – and it can be managed with OpenFlow. Casado isn't that high on using OpenFlow with hardware inside the data center, but the protocol is an important part of the software Nicira uses to build its virtual networks.
The result is that you don't have to wait for the hardware vendors to adopt OpenFlow – or anything else. You can you use its Nicira's software to build a virtual network atop any networking hardware. With Nicira's platform in place, the physical switches and routers forward the network packets, but that's it. The virtual network handles all the important duties – including how the traffic is routed and how it's secured.
"Once you have your switch virtualized, you can pretty much do whatever you want with it," says Rackspace chief technology officer John Engates. "You can route traffic however you like, and you can reprogram it whenever you like, on the fly." VMware says it is also offering network virtualization, and others companies, including Cisco, say they too are developing similar technology. "[Software-defined networking] is a very big passion and focus area for us on our infrastructure side," says VMware chief technology officer Steve Herrod. But Casado and his Nicira cohort, former Cisco exec Alan Cohen, insist that no other company is anywhere close to doing what Nicira is doing.
Virtual Servers, Virtual Storage, and, Yes, Virtual Networks Nicira's platform is particularly useful to an outfit like Rackspace. Following in the footsteps of Amazon, Rackspace operates an "infrastructure cloud," offering instant access to virtual servers and storage. This service is used by thousands of developers and businesses across the globe, and Nicira provides a means restricting each customer to its own virtual network – or multiple virtual networks.
"We have hundreds of thousands of customers, and that translates into multiple hundreds of thousands of network or network segments that customers want to create," says Rackspace's John Engates. "Nicira gives us the ability to put any customer, any end point, any location on one common virtual network." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Raymie Stata, the former Yahoo chief technology officer, agrees that Nicira changes the game if you're running this sort of infrastructure cloud service. But he questions how useful the company's software will be to other web services. "If you want to have virtual private networks for a large number of customers, that's one of the hardest problems to solve, and Nicira is a great solution for that," Stata says. "But if only one tenant is using a network, even if the tenant is very large, it's less useful. I wouldn't imagine it would be as useful to Facebook, for example. They're very large, but they're the only tenant on their network." >"If you want to have virtual private networks for a large number of customers, that's one of the hardest problems to solve, and Nicira is a great solution for that" According to Casado, this misses the mark. Many of the biggest web operations run extremely complex operations, he says, and though the resources may not be shared among many outside customers, they're shared among many different applications within a company. "Some of the big web guys have very simple operations. They have one website that runs the same code. This isn't an obvious fit for us," Casado says. "However, any sophisticated website generally has many applications with different requirements, as well as test and development [applications] from different groups, all using the same infrastructure." As John Engates points out, at a company like Google, the company's private infrastructure operates much like the public infrastructure services offered by Amazon and Rackspace. All of these companies have built sweeping operations that pool a massive collection of hardware resources into one coherent whole. That's what a cloud is. You can grab virtual processing power and virtual storage whenever you need it, and you can move these virtual resources from one physical place to another. But in the past, the network wasn't as malleable, and this restricted how easily you could move resources. Nicira adds the missing piece.
The End of the Network Operator Many of the world's largest web companies, including Google, are already buying cut-rate networking gear directly from manufacturers in Taiwan and China, making an end-run around the Ciscos and the Junipers. With Nicira providing a virtual networking platform that works with any gear from any vendors, Casado says, this trend will only continue. The Ciscos and Junipers, he says, will become less and less important.
Yes, Cisco is working on its own network virtualization tools. And it has joined Nicira and others in building a networking virtualization framework for OpenStack, the open source platform for building infrastructure clouds along the lines of those offered by Rackspace and Amazon. "Cisco is a networking company, and we're increasingly looking at cloud services. We’re not just switches and routers anymore," says Lew Tucker, who oversaw the development of Sun Microsystems' cloud service before it was sold to Oracle and now runs Cisco's OpenStack efforts. "We want to make sure this stuff works on Cisco gear." >"Cisco is a networking company, and we're increasingly looking at cloud services. We’re not just switches and routers anymore. We want to make sure this stuff works on Cisco gear" But Casado believes Cisco and the other big networking vendors will never fully commit to network virtualization. "The traditional networking vendors? I don’t think they can do this, because they'll end up cannibalizing themselves," he says. "They can do something that has some of the same properties, but they can’t actually virtualize the network. They can never come out and sell you a project that will allow you do work with any type of hardware. They will make motions in this area, but I don’t think they’re going to be doing anything really concrete." Whatever the case, Casado believes it's only a matter of time before networking hardware takes a back seat to software.
Recently, Casado was in Hawaii when he received an e-mail from someone who worked for a large company Nicira had dealt with in the past. This person asked Casado if he could meet for a chat, and Casado said yes, assuming he was an executive who wanted to discuss a partnership between the two companies. But as it turns out, this person was an ordinary network hardware operator who had read about Nicira and wanted to know if he would be out of job in 10 years.
"I didn't know what to tell him. Get a new job? Do something different?" Casado says. "The truth is, in 10 years, you’re not going to have highly skilled, highly paid people working with networking hardware." Additional reporting by Robert McMillan.
Senior Writer X Topics Cisco Enterprise Google hp networking OpenStack software VMware VR Susan D'Agostino Will Knight Christopher Beam Steven Levy Will Knight Will Knight Samanth Subramanian Caitlin Harrington Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
884 | 2,012 |
"Mavericks Invent Future Internet Where Cisco Is Meaningless | WIRED"
|
"https://www.wired.com/wiredenterprise/2012/04/nicira"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Mavericks Invent Future Internet Where Cisco Is Meaningless Martin Casado, the chief technology officer of the most intriguing startup in Silicon Valley.
Photo: Jon Snyder/Wired

PALO ALTO, California – Martin Casado stands up, reaches across the table, and tears a sheet of paper from a notebook. The notebook belongs to Alan Cohen, who works alongside Casado at Nicira, the most intriguing startup in Silicon Valley, and as Casado sits back down with his sheet of paper, Cohen keeps talking.
Cohen knows how to talk. He spent six years as a marketing exec at Cisco, the company that sells more networking hardware than anyone else in the world, and now, he's plugging Nicira, a company that wants to make Cisco irrelevant, taking the brains out of network hardware and moving them into software. As Cohen gives the elevator pitch – "we've created a new category: we're a network virtualization company" – Casado, the company's chief technology officer, is quietly doodling on his piece of paper. He's making lists and drawing pictures and linking them all together in some sort of elaborate flowchart.
As it turns out, he's mapping out what he will soon tell us about the origins of his nearly-five-year-old company and its lofty mission. "I was putting together a narrative," he says. "I'm a pretty linear thinker." That he is. But this doesn't quite do justice to the way his mind works. "Martin Casado is fucking amazing," says Scott Shenker, the physics PhD, UC Berkeley computer science professor, and former Xerox PARC researcher who has worked closely with Casado for the past several years on the networking problems Nicira is trying to solve. "I've known a lot of smart people in my life, and on any dimension you care to mention, he's off the scale." In much the same way he maps out his narrative with pen and paper, Casado has mapped out a new future for the world of networking. He and Nicira and a small community of other computer scientists are pioneering a new breed of computer network that exists only as software, a network you can control independently of the physical switches and routers running beneath it. With this paradoxical arrangement, they aim to provide a far easier way of building and modifying and rebuilding the networks that run the largest services on the web and beyond.
In short, Martin Casado envisions a world where networks can be programmed like computers.
"Anyone can buy a bunch of computers and throw a bunch of software engineers at them and come up with something awesome, and I think you should be able to do the same with the network," Casado says. "We've come up with a network architecture that lets you have the flexibility you have with computers, and it works with any networking hardware." In other words, it doesn't matter if you're using gear from Cisco or HP or Juniper or some manufacturer in Taiwan most people have never heard of.
With Nicira's platform, the hardware merely moves network packets to and fro, and the software does the thinking.
Casado's effort to overhaul the world's networks is well underway. The Nicira website will tell you its platform is already used by AT&T, eBay, Japanese telecom NTT, financial giant Fidelity, and Rackspace, the Texas-based outfit that trails only Amazon in the cloud computing game. But the company's influence extends much further. Though he won't name them, Casado says the Nicira platform is also used by some of the biggest names on the web. And we all know who those are.
>"Martin Casado is fucking amazing. I've known a lot of smart people in my life, and on any dimension you care to mention, he's off the scale." "That's one of the reasons we knew we were on to something," Casado says. "In the beginning, we thought we were just a cute cottage industry. But then we had multiple large web companies say, 'We were already doing something very similar to this, and we’d like to work with you.'" The platform is so attractive to these companies because today's hardware networks are ridiculously difficult to modify. Raymie Stata, until recently the chief technology officer of Yahoo, compares a complex computer network to the 15-puzzle game , that classic mind-bender were you're trying to rearrange 15 sliding tiles inside a square with space for only 16. When making a change to your network, he says, there are times when you have no choice but to physically rearrange the hardware.
In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. "What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don't need to know much," says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. "They've put the power in the hands of the cloud architect rather than the network architect."

The Trouble With the Most Secure Networks Ever Built

Martin Casado once worked with a U.S. intelligence agency. He won't name the agency, but he says he worked with what he believed to be the most secure computer networks ever built. The trouble, he says, was that building these networks was next to impossible, and if you ever wanted to change them, your problems started all over again.
"What was really shocking to me was that, at the time, market forces had totally failed to create networking equipment that the government could use. The government, which has incredibly deep pockets, couldn’t go out and buy what it wanted," Casado says. "It was extremely difficult to make these networks secure, and once you did, you had a really horrible management nightmare on your hands. Moving just one computer, for example, meant you had to make eight different configuration changes. You couldn't move anything – you couldn’t touch anything – unless you put a tremendous number of people to work." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Once you bought a piece of networking hardware, says Shenker, you didn't really have the freedom to re-program it. "Stuff had to be coded directly into the switch or the router. You would buy a router from Cisco and it would come with whatever protocols it supported and that’s what you ran." >"What was really shocking to me was that, at the time, market forces had totally failed to create networking equipment that the government could use" Shenker says there was good reason for this. "If you buy switches from a company and you expect them to work," he explains. "A networking company doesn't want to give you access and have you come running to them when your network melts down because of something you did." But these restrictions caused huge problems for organizations who were pushing the boundaries of network design, including not only intelligence agencies like the one Casado worked for, but massive web companies such as Google and Amazon.
In 2005, Google went so far as to build its own networking hardware, in part because it needed more control over how the hardware operated. "When Google looked at their network, they needed high-bandwidth connections between their servers and they wanted to be able to manage things — at scale," says JR Rivers, one of the engineers who worked on Google's original network hardware designs. "With the traditional enterprise networking vendors, they just couldn’t get there. The cost was too high, and the systems were too closed to be manageable on a network of that size."

So, after he left his government job in 2003 and enrolled in graduate school at Stanford, the Silicon Valley university that spawned Google, Martin Casado resolved to build a new kind of network, a network that wasn't such a nightmare. "There was a realization that networks blow – that they suck," Casado remembers. "When I went to Stanford, this is the problem I worked on: how do we make networks not suck? We want them to be as flexible and as programmatic as computers."

Death to Spaghetti Code

At Stanford, Casado studied with Nick McKeown, a professor and networking researcher who once worked for HP Labs and Cisco, and during this time, he met Scott Shenker, who oversaw the networking group at Berkeley's International Computer Science Institute. Both McKeown and Shenker worked with Casado on his PhD thesis – a network architecture dubbed Ethane – and in 2007, using the thesis as a jumping-off point, the three of them founded Nicira.
It was the beginning of a movement known as "software-defined networking." It's a dreadful name. Even Casado admits as much. But like so many dreadful names in the tech world, it stuck.
In short, software-defined networking – or SDN – sought to create a better way of controlling networks. "Software-defined networking is applying modularity to network control," says Scott Shenker. "Modularity is something every software designer does in their sleep. If a program isn't modular, it's just spaghetti code. Software-defined networking asks what are the right software abstractions that let us structure the network control plane so it’s evolvable, so it's not just a bunch of spaghetti code."

Spanning computer scientists at Nicira and various academics, the movement achieved its first big breakthrough with OpenFlow, a standard way of remotely managing network switches and routers. "Think of it as a general language or an instruction set that lets me write a control program for the network rather than having to rewrite all of the code on each individual router," says Shenker. Amazing as it may sound, this sort of thing didn't exist before. OpenFlow soon developed a following among some of the industry's biggest names, including Google, HP, NEC, and Ericsson, and it has been widely hailed in the press as the technology that will deliver networking from the dark ages.
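To make Shenker's "control program for the network" concrete, here is a minimal sketch written against the open-source Ryu controller, one of several OpenFlow frameworks; the article names no specific tool, and the port numbers and priority below are illustrative assumptions. When a switch connects, the app pushes a single flow rule telling the hardware to send everything arriving on port 1 out through port 2: the forwarding decision lives in software, and the switch merely obeys.

```python
# Hedged sketch of an OpenFlow 1.3 control program using the
# open-source Ryu controller. Ports and priority are invented
# for illustration, not taken from the article.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PortForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match anything arriving on port 1 and send it out port 2.
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Run under ryu-manager, the same dozen lines reprogram any OpenFlow-capable switch, whether it comes from Cisco, HP, or a no-name manufacturer in Taiwan – which is precisely Shenker's point.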
The trouble is that you can't use OpenFlow on routers and switches unless the vendors add the protocol to their hardware, and even then, says Casado, who wrote the first draft of the specification, OpenFlow is only so useful. Shenker agrees. "From an industry-structure and industry-standard point of view, OpenFlow is important. It defines the detailed language of how I speak to a switch," he says. "But from an architecture point of view, it's very unimportant. The more important component is how you coordinate the activities of switches in order to provide a coherent behavior."

The ultimate aim was not to find a better way of managing networking hardware, but to create a software architecture that would let you build networks without having to deal with the hardware. The ultimate aim was to build virtual networks.

According to Shenker, Casado doesn't do things halfway – whether he's at work or play. "He's an ultra-marathoner," Shenker says. "When he gets up in the morning for a run, he runs to Half Moon Bay."

The vSwitch and Beyond

Nicira is often compared to VMware, another company that grew out of research at Stanford. In the early aughts, VMware pioneered the art of server virtualization, and this quickly revolutionized the computer data center, helping big businesses save both money and space by running many virtual servers on a single physical server. Now, Nicira is doing much the same with networks.
"We’re virtualizing away this physical fabric," Casado says, "and because now we have a virtual layer, you can do anything you want with it." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg VMware has long offered a virtual network switch as part of its "hypervisor," the platform that runs its virtual servers, and similar virtual switches were included with open source hypervisors such as Xen and KVM. But these "vSwitches" were limited. You couldn't really string them together into a complex virtual network. "A vSwitch is required for network virtualization," Casado says, "but it doesn't give you a virtualized network." What Casado and Nicira have done is build a new breed of vSwitches that can be tied together into a true virtual network, and they've built the control software that lets you do so.
Known as Open vSwitch, Nicira's virtual switch is open source – meaning it's freely available to anyone – and it can be managed with OpenFlow. Casado isn't that high on using OpenFlow with hardware inside the data center, but the protocol is an important part of the software Nicira uses to build its virtual networks.
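For a flavor of what building on Open vSwitch looks like, the hedged sketch below drives the standard ovs-vsctl tool from Python: it creates a bridge on one hypervisor and links it to a peer over a GRE tunnel, the sort of building block an overlay network is assembled from. The bridge name, peer address, and choice of GRE are illustrative assumptions, not details from the article.

```python
# Hypothetical sketch: wiring one hypervisor's Open vSwitch into an
# overlay network. Assumes Open vSwitch is installed and ovs-vsctl
# is on the PATH.
import subprocess


def ovs(*args):
    """Run an ovs-vsctl command, raising if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)


# Create an integration bridge for this host's virtual machines.
ovs("add-br", "br-int")

# Add a GRE tunnel port to a peer hypervisor (192.0.2.2 is a
# documentation address). Traffic between the two vSwitches rides
# over the physical network, which sees only ordinary GRE packets.
ovs("add-port", "br-int", "gre0",
    "--", "set", "interface", "gre0",
    "type=gre", "options:remote_ip=192.0.2.2")
```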
The result is that you don't have to wait for the hardware vendors to adopt OpenFlow – or anything else. You can use Nicira's software to build a virtual network atop any networking hardware. With Nicira's platform in place, the physical switches and routers forward the network packets, but that's it. The virtual network handles all the important duties – including how the traffic is routed and how it's secured.
"Once you have your switch virtualized, you can pretty much do whatever you want with it," says Rackspace chief technology officer John Engates. "You can route traffic however you like, and you can reprogram it whenever you like, on the fly." VMware says it is also offering network virtualization, and others companies, including Cisco, say they too are developing similar technology. "[Software-defined networking] is a very big passion and focus area for us on our infrastructure side," says VMware chief technology officer Steve Herrod. But Casado and his Nicira cohort, former Cisco exec Alan Cohen, insist that no other company is anywhere close to doing what Nicira is doing.
Virtual Servers, Virtual Storage, and, Yes, Virtual Networks

Nicira's platform is particularly useful to an outfit like Rackspace. Following in the footsteps of Amazon, Rackspace operates an "infrastructure cloud," offering instant access to virtual servers and storage. This service is used by thousands of developers and businesses across the globe, and Nicira provides a means of restricting each customer to its own virtual network – or multiple virtual networks.
"We have hundreds of thousands of customers, and that translates into multiple hundreds of thousands of network or network segments that customers want to create," says Rackspace's John Engates. "Nicira gives us the ability to put any customer, any end point, any location on one common virtual network." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Raymie Stata, the former Yahoo chief technology officer, agrees that Nicira changes the game if you're running this sort of infrastructure cloud service. But he questions how useful the company's software will be to other web services. "If you want to have virtual private networks for a large number of customers, that's one of the hardest problems to solve, and Nicira is a great solution for that," Stata says. "But if only one tenant is using a network, even if the tenant is very large, it's less useful. I wouldn't imagine it would be as useful to Facebook, for example. They're very large, but they're the only tenant on their network." >"If you want to have virtual private networks for a large number of customers, that's one of the hardest problems to solve, and Nicira is a great solution for that" According to Casado, this misses the mark. Many of the biggest web operations run extremely complex operations, he says, and though the resources may not be shared among many outside customers, they're shared among many different applications within a company. "Some of the big web guys have very simple operations. They have one website that runs the same code. This isn't an obvious fit for us," Casado says. "However, any sophisticated website generally has many applications with different requirements, as well as test and development [applications] from different groups, all using the same infrastructure." As John Engates points out, at a company like Google, the company's private infrastructure operates much like the public infrastructure services offered by Amazon and Rackspace. All of these companies have built sweeping operations that pool a massive collection of hardware resources into one coherent whole. That's what a cloud is. You can grab virtual processing power and virtual storage whenever you need it, and you can move these virtual resources from one physical place to another. But in the past, the network wasn't as malleable, and this restricted how easily you could move resources. Nicira adds the missing piece.
The End of the Network Operator

Many of the world's largest web companies, including Google, are already buying cut-rate networking gear directly from manufacturers in Taiwan and China, making an end-run around the Ciscos and the Junipers. With Nicira providing a virtual networking platform that works with any gear from any vendor, Casado says, this trend will only continue. The Ciscos and Junipers, he says, will become less and less important.
Yes, Cisco is working on its own network virtualization tools. And it has joined Nicira and others in building a networking virtualization framework for OpenStack, the open source platform for building infrastructure clouds along the lines of those offered by Rackspace and Amazon. "Cisco is a networking company, and we're increasingly looking at cloud services. We’re not just switches and routers anymore," says Lew Tucker, who oversaw the development of Sun Microsystems' cloud service before it was sold to Oracle and now runs Cisco's OpenStack efforts. "We want to make sure this stuff works on Cisco gear."

But Casado believes Cisco and the other big networking vendors will never fully commit to network virtualization. "The traditional networking vendors? I don’t think they can do this, because they'll end up cannibalizing themselves," he says. "They can do something that has some of the same properties, but they can’t actually virtualize the network. They can never come out and sell you a product that will allow you to work with any type of hardware. They will make motions in this area, but I don’t think they’re going to be doing anything really concrete." Whatever the case, Casado believes it's only a matter of time before networking hardware takes a back seat to software.
Recently, Casado was in Hawaii when he received an e-mail from someone who worked for a large company Nicira had dealt with in the past. This person asked Casado if he could meet for a chat, and Casado said yes, assuming he was an executive who wanted to discuss a partnership between the two companies. But as it turns out, this person was an ordinary network hardware operator who had read about Nicira and wanted to know if he would be out of a job in 10 years.
"I didn't know what to tell him. Get a new job? Do something different?" Casado says. "The truth is, in 10 years, you’re not going to have highly skilled, highly paid people working with networking hardware." Additional reporting by Robert McMillan.
"
|
885 | 2,012 |
"Exclusive: Google, Amazon, and Microsoft Swarm China for Network Gear | WIRED"
|
"https://www.wired.com/wiredenterprise/2012/03/google-microsoft-network-gear/all/1"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Exclusive: Google, Amazon, and Microsoft Swarm China for Network Gear Save this story Save Save this story Save Google, Amazon, Microsoft, and Facebook buy more networking hardware than practically anyone else on earth. After all, these are the giants of the internet. But at the same time, they're buying less and less gear from Cisco, HP, Juniper, and the rest of the world's largest networking vendors. It's an irony that could lead to a major shift in the worldwide hardware market.
Over the past few years, the giants of the web have changed the way they purchase tens of thousands of the network switches inside the massive data centers driving their online services, quietly moving away from U.S.-based sellers to buy cheaper gear in bulk straight from China and Taiwan. According to J.R. Rivers -- an ex-Google engineer -- Google has built its own gear in tandem with various Asian manufacturers for several years, and according to James Liao -- who spent two years selling hardware for Taiwan-based manufacturer Quanta -- Facebook, Amazon, and Microsoft are purchasing at least some of their networking switches from Asian firms as well.
"My biggest customers were these big data center [companies], so I know all of them pretty well," Liao says. "They all have different ways of solving their networking problems, but they have all moved away from big networking companies like Cisco or Juniper or [the Dell-owned] Force10." The move away from U.S. network equipment stalwarts is one of the best-kept secrets in Silicon Valley. Some web giants consider their networking hardware strategy a competitive advantage that must be hidden from rivals.
Others just don't want to anger their business partners in the hardware sector by talking about the shift. But cloud computing is an arms race. The biggest web companies on earth are competing to see who can deliver their services to the most people in the shortest amount of time at the lowest cost. And the cheapest arms come straight from Asia.
J.R. Rivers is one of the arms dealers. He runs a company called Cumulus Networks that helps the giants of the web -- and other outfits -- buy their networking hardware directly from "original design manufacturers," or ODMs, in China and Taiwan. And he's worked in this world for an awfully long time. He's one of the Google engineers who secretly designed a new breed of networking switch for the company's data centers, the massive computing facilities that drive its search engine and the rest of its web services.
Rivers joined Google in October 2005, after five years as a distinguished engineer at Cisco, the company that dominated the worldwide market for networking gear. At the time, Google was still connecting its servers using standard networking switches from the likes of Cisco and Force10 Networks. But these mass-market switches just didn't suit Google's unusually large operation.
"When Google looked at their network, they needed high-bandwidth connections between their servers and they wanted to be able to manage things -- at scale," Rivers says. "With the traditional enterprise networking vendors, they just couldn't get there. The cost was too high, and the systems were too closed to be manageable on a network of that size."

So Google drew up its own designs -- working alongside manufacturers in Taiwan and China -- and cut the Ciscos and the Force10s out of the equation. The Ciscos and the Force10s build their gear with many of those same manufacturers. Google removed the middlemen.
The search giant does much the same with its servers, buying custom-built machines straight from Asia rather than going through traditional sellers such as Dell and HP. Because its web services were used by such an enormous number of people, Google faced all sorts of data center problems no one else faced -- problems of power and space as well as cost and logistics. So it built all sorts of custom hardware to solve those problems.
>"They all have different ways of solving their networking problems, but they have all moved away from big networking companies like Cisco or Juniper or Force10" Now, the other giants of the web are running into the same issues, and they too are going straight to Asia for hardware. Following closely behind are companies that run large internal server farms, including financial houses and healthcare outfits.
As J.R. Rivers serves this market with Cumulus Networks, James Liao is doing much the same thing with a second startup called Pica8, offering networking gear that comes straight from the ODMs. Pica8 is a spinoff of Liao's former employer, Quanta -- one of the companies that manufactured Google's original networking switches, according to Rivers.
According to Liao, tens of thousands of switches are already being sold by the Asian ODMs directly to the likes of Amazon, Facebook, and Microsoft. And that doesn't include the gear Google has bought over the past seven years. "This is just the beginning," Liao says, pointing out that these buyers operate the biggest data centers on earth. These companies account for only a part of the $7-billion-a-year Ethernet switch market, but as more and more outfits move their operations into the proverbial cloud, the influence of these web giants will only grow.
Liao estimates that Amazon, Microsoft, Facebook and others have bought Asian network switches spanning "millions" of network ports -- i.e., connections to servers -- and he guesses that in 2011, about 60 percent of these ports provided 10Gigabit Ethernet connections. According to Matthias Machowinski -- a directing analyst with Infonetics, a research firm that tracks the networking market -- the official market for 10Gigabit Ethernet spanned about 9 million ports in 2011.
J.R. Rivers declines to name the companies he's working with at Cumulus Networks, but he confirms that some of the big-name web outfits are already buying networking switches from ODMs in Asia. In all likelihood, these companies are also purchasing switches from other sources as well. Cisco says it has a "significant presence and mindshare" in the big-name web market, and Juniper says it has a relationship with all of the top five web players, pointing out that data center networks require more gear than just switches. But the market is on the move.
The Future of 'Web Giant 3.0'

"We are continuously exploring new infrastructure technologies that may evolve further efficiencies across our portfolio. We normally have discussions with ODMs and large and small OEMs to better understand their capabilities and evaluate their products," reads a statement sent to Wired by a Microsoft spokesperson and attributed to Dileep Bhandarkar, a distinguished engineer who oversees the data centers driving Microsoft's online services. But the statement did not specifically address the purchase of networking gear.
Amazon did not respond to a request for comment about its hardware practices, and a Google spokeswoman sent us a one-sentence statement: "We work with a variety of vendors to manufacture the equipment we use in our data centers," she said. These two companies -- particularly Google -- are rather tightlipped about their data center practices.
>"This supply chain change is nascent. But it's the most exciting thing going on in Silicon Valley right now" Facebook declined to discuss how it purchases networking gear, but in response to the secretive approach of Amazon and Google, the company has openly discussed some of its other practices, and it has actually shared its server and data center designs with the rest of the world. It purchases its servers directly from Quanta and Wistron, another Taiwanese ODM.
Martin Casado -- the chief technology officer of a third Silicon Valley networking startup, Nicira -- confirms that the hardware market is shifting to Asia. Offering a software platform that virtualizes networking gear in much the same way that VMware virtualized servers, Nicira helps some of the big web players build their networks. The Nicira platform was designed specifically for companies along the lines of Google that want to use cheap commodity switches to physically construct their network but then do all the complex management in software.
"If you're building web giant 3.0, you can go to Quanta in Taiwan and buy crates ... of switches," he says. "This supply chain change is nascent. But it's the most exciting thing going on in Silicon Valley right now." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Google Goes to Asia According to J.R. Rivers, Google began work on its custom-built networking switches in early 2005, before he arrived at the company. In the beginning, River says, Google worked in tandem with Quanta and other Asian ODMs, but eventually, the web giant took all the engineering work in house. Basically, Rivers says, the company wasn't happy with the work the ODMs did at the time. Google engineers would design the switches, and then they would bring the completed designs to contract manufacturers in Asia, outfits along the lines of Foxconn, the Asian company that builds Apple's iPhones and iPads.
Google has never discussed its practices publicly, but rumors have long indicated that the company built its networking switches in this way. In 2007, research analyst Andrew Schmitt noticed that certain manufacturers were producing enormous numbers of chips for 10Gigabit Ethernet switches but that the switches themselves weren't actually turning up on the market. "It didn't make sense to me why someone would be building so much of a given component if there were no customers that could use it," he says. "What I was able to determine is that Google was purchasing switch chips straight from the component supplier."

The switches Google was building typically sat at the top of a rack of servers in the data center, connecting the servers to the rest of the network. As Juniper points out, this is only part of the networking hardware used in the data center, but it's a large part.
Google, Rivers says, is a unique company. It has the wherewithal and the talent to build its own switches, but other companies may not be up to the task. With Cumulus Networks, J.R. Rivers and his partner, Nolan Leake, are trying to grease the wheels. "[The other web players] are trying to figure out what the best model is, and that's one of the reasons we started up," Rivers says. "Google is unique in its willingness to build something just because they know it can be done. Most other people see a risk/reward trade-off. We seek to minimize that risk."

Though Rivers declined to name the ODMs his company is working with, he says that these are well-known manufacturers in Taiwan and China. "We've been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy," he says. "For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it."

This has become possible in recent years, Rivers says, because the ODMs have slowly acquired more and more engineering talent. You can now buy commodity gear from more places. "Networking is opening up much like the transition from mainframes to RISC machines and later to x86 servers," says Rivers' partner, Nolan Leake. "We're moving towards a world where customers have more control over their destiny."

'The Arms Dealer'

Before spinning Pica8 out of Quanta, James Liao was already selling similar networking switches to the big web players. Nicira's Martin Casado refers to James Liao as "the arms dealer" in this networking revolution. "He's the conduit between the rest of the world and Quanta. He knows this space better than anyone," Casado says. "And I love him because he talks like he's part of organized crime."

From July 2009 to September 2011, Liao was the senior director at Quanta in charge of product strategy for network switching and data center products. He was based in Silicon Valley, and his job was to serve the giants of the web. He declines to go into much detail about how these companies acquire their hardware, but he's unequivocal in saying that the other big companies -- Amazon, Microsoft, and Facebook -- are now following Google's lead in going directly to Asia for their gear.
Networking switches, he says, have become a commodity. "They all use the same chips. They have the same latency. They have the same bandwidth. This is a clear signal that the hardware platform is commoditizing," he says. "You can actually find a lot of [ODM] suppliers that have the capability to manufacture and design this kind of platform."

Like Cumulus Networks, Liao's new venture, Pica8, brings this low-cost networking hardware to a much larger market. In the past, one of the problems with buying directly from the ODMs is that you had to build your own software to drive your switches. But Pica8 aims to provide software for those companies that don't want to build their own. The company has open sourced an early version of this software -- known as Picos -- and it plans to open source a more extensive version of the platform next month.
"We give you the hardware and the software," Liao says. "If you take our platform and compare it to Cisco, the protocol features we provide and the hardware performance are all in the same range. The only difference is that the price is 40 percent to 60 percent lower." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Though Pica8 spun off of Quanta, Liao says that the company will also sell switches from other ODMs. But he declined to name them. But he does say Pica8 is selling gear to Japanese telecom giant NTT and Baidu, the company that dominates the Chinese search engine market.
Matthias Machowinski, of research firm Infonetics, says he is "very much aware" of this trend, though he adds that it is extremely hard to track. He says that the big web giants account for only a part of the overall switch market -- "the number of customers that might choose to go down this route are very limited. Today, you can count them on one hand, and maybe over the next two years, two hands might be enough" -- but he also acknowledges that as businesses move their applications onto services such as Amazon EC2 and Microsoft Azure -- rather than running stuff in their own data centers -- these web giants will account for an even larger part of the switch market.
Like Server, Like Switch

This shadow networking market is a repeat of what happened in the server world. Years ago, Google started building its own servers in tandem with the Asian ODMs, and other web giants followed. These companies are looking to save costs, but they're also looking to reduce their power consumption, customizing machines so they're far more efficient than their mass-market brethren.
In 2009, Google revealed some server designs it produced several years before. But, as with networking practices, the company says very little about its server gear. Amazon operates in much the same way. But Facebook has taken a different approach. Last year, after building its own data centers and working with various manufacturers to build its own servers, the social networking giant open sourced these designs to the rest of the world, hoping that others across the industry can help improve those designs, buy more hardware based on the designs, and ultimately drive down the price of the hardware.
>"It's kind like buying couches. If you buy one, you go to a retail store. If you buy 10,000 couches, you go straight to the factory" This Open Compute Project already has several other big-name backers, including Texas-based cloud computing outfit Rackspace and Japan's NTT. And it doesn't stop at data centers and servers. Last month, Frank Frankovsky -- the ex-Dell man who oversees hardware design at Facebook -- told us that the company is in the midst of building its own storage hardware and that these designs will be open sourced in early May.
In these cases, Facebook and Amazon and Google and others bypassed "original equipment manufacturers," or OEMs, such as Dell and HP. The servers sold by the likes of HP and Dell are actually manufactured by those same ODMs in Taiwan and China.
James Liao, of Pica8 and formerly of Quanta, does not work with servers. But he says that it's common knowledge that -- like Google and Facebook -- Amazon purchases at least some of its servers from ODMs in Asia. "For servers, Facebook and Amazon are taking almost exactly the same approach," he says. "Amazon also has some very high power designers, but they don't do the design themselves. They come up with a certain architecture and they tell the ODMs: 'This is my vision. These are the goals. And I want help designing the hardware.'"

Now, Liao says, this same sort of thing is happening with, well, everything. "All of the data center hardware is bought this way," Liao says. "You can refer to Facebook as an example, where one of the big projects inside the Open Compute effort is storage. Even the storage side is being commoditized. Servers, storage, and networking -- all of them are going this way."

Howard Wu -- the president of greater China for Joyent, an Amazon-like cloud computing outfit based in San Francisco -- agrees. "If you're a small business and you're going to buy five servers, you're going to Dell or HP, because of the support services. But if you're a data center operator and you're going to buy 10,000 servers, you're going straight [to the ODMs]," he says. "It's kind of like buying couches. If you buy one, you go to a retail store. If you buy 10,000 couches, you go straight to the factory."

That said, Joyent is not yet buying its gear from the ODMs.
"We are definitely in talks, but it hasn't actually happened yet," he says. "We have other contractual obligations right now." The market has not completely shifted to Asia. It's moving in stages. These web companies have many suppliers -- that's just good for businesses -- and in some cases, they're still buying hardware from the traditional players -- perhaps because they still have contracts in place. Facebook, for instance, is still buying some servers from Dell and HP. And Amazon is still buying custom servers from Rackable, a stateside manufacturer, and apparently other outfits based here in America.
The hardware supply chain is vast and varied. But it's consolidating. Now that they have the engineering talent, J.R. Rivers says, the ODMs are transforming into OEMs. "The market is maturing to the point where anyone can buy directly from ODMs," he says. "You don't have to be Google."

Update: In response to this story -- which refers to Pica8 as a spin-off of Quanta -- Quanta sent Wired a statement that reads: "Pica8 has licensed technology from Quanta and is not a 'spin-off' of Quanta. Quanta has no ownership interest in Pica8 and has never owned Pica8's shares."
"
|
886 | 2,012 |
"Google Reincarnates Dead Paper Mill as Data Center of Future | WIRED"
|
"https://www.wired.com/wiredenterprise/2012/01/google-finland"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google Reincarnates Dead Paper Mill as Data Center of Future Save this story Save Save this story Save Joe Kava found himself on the southern coast of Finland, sending robotic cameras down an underground tunnel that stretched into the Baltic Sea. It's not quite what he expected when he joined Google to run its data centers.
In February of 2009, Google paid about $52 million for an abandoned paper mill in Hamina, Finland, after deciding that the 56-year-old building was the ideal place to build one of the massive computing facilities that serve up its myriad online services. Part of the appeal was that the Hamina mill included an underground tunnel once used to pull water from the Gulf of Finland. Originally, that frigid Baltic water cooled a steam generation plant at the mill, but Google saw it as a way to cool its servers.
Photo: Weinberg-Clark Photography

Those robotic cameras -- remote-operated underwater vehicles that typically travel down oil pipelines -- were used to inspect the long-dormant tunnel, which ran through the solid granite bedrock sitting just beneath the mill. As it turns out, all 450 meters of the tunnel were in excellent condition, and by May 2010, it was moving sea water to heat exchangers inside Google's new data center, helping to cool down thousands of machines juggling web traffic. Thanks in part to that granite tunnel, Google can run its Hamina facility without the energy-sapping electric chillers found in the average data center.
"When someone tells you we've selected the next data center site and it's a paper mill built back in 1953, your first reaction might be: 'What the hell are you talking about?,'" says Kava. "'How am I going to make that a data center?' But we were actually excited to learn that the mill used sea water for cooling.... We wanted to make this as a green a facility as possible, and reusing existing infrastructure is a big part of that." Kava cites this as a prime example of how Google "thinks outside the box" when building its data centers, working to create facilities that are both efficient and kind to the world around them. But more than that, Google's Hamina data center is the ideal metaphor for the internet age. Finnish pulp and paper manufacturer Stora Enso shut down its Summa Mill early in 2008, citing a drop in newsprint and magazine-paper production that led to "persistent losses in recent years and poor long-term profitability prospects." Newspapers and magazines are slowly giving way to web services along the lines of, well, Google, and some of the largest services are underpinned by a new breed of computer data center -- facilities that can handle massive loads while using comparatively little power and putting less of a strain on the environment.
Google was at the forefront of this movement, building new-age facilities not only in Finland, but in Belgium, Ireland, and across the U.S. The other giants of the internet soon followed, including Amazon, Microsoft, and Facebook. Last year, Facebook opened a data center in Prineville, Oregon that operates without chillers, cooling its servers with the outside air, and it has just announced that it will build a second facility in Sweden, not far from Google's $52-million Internet Metaphor.
The Secrets of the Google Data Center

Google hired Joe Kava in 2008 to run its Data Center Operations team. But this soon morphed into the Operations and Construction team. Originally, Google leased data center space inside existing facilities run by data center specialists, but now, it builds all its own facilities, and of late, it has done so using only its own engineers. "We used to hire architecture and engineering firms to do the work for us," Kava says. "As we've grown over the years and developed our own in-house talent, we've taken more and more of that work on ourselves." Over those same years, Google has said precious little about the design of the facilities and the hardware inside them. But in April 2009, the search giant released a video showing the inside of its first custom-built data center -- presumably, a facility in The Dalles, Oregon -- and it has since lifted at least part of the curtain on newer facilities in Hamina and in Saint-Ghislain, Belgium.
According to Kava, both of these European data centers operate without chillers. Whereas the Hamina facility pumps cold water from the Baltic, the Belgium data center uses an evaporative cooling system that pulls water from a nearby industrial canal. "We designed and built a water treatment plant on-site," Kava says. "That way, we're not using potable water from the city water supply." For most of the year, the Belgian climate is mild enough to keep temperatures where they need to be inside the server room. As Kava points out, server room temperatures needn't be as low as they traditionally are. As recently as August 2008, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) recommended that data center temperatures range from 68 to 77 degrees Fahrenheit -- but Google was advising operators to crank the thermostat to above 80 degrees.
"The first step to building an efficient data center...is to just raise the temperature," Kava says. "The machines, the servers, the storage arrays, everything -- they run just fine at much much more elevated temperatures than the average data center runs at. It's ludicrous to me to. walk into a data centers that's running at 65 or 68 degrees Fahrenheit or less." There are times when the temperature gets so hot inside the data centers, Google will order employees out of the building -- but keep the servers running. "We have what we call 'excursion hours' or 'excursion days.' Normally, we don't have to do anything [but] tell our employees not to work in the data center during those really hot hours and just catch up on office work." At sites like Belgium, however, there are days when it's too hot even for the servers, and Google will actually move the facility's work to one of its other data centers. Kava did not provide details, but he did acknowledge that this data center shift involves a software platform called Spanner.
This Google-designed platform was discussed at a symposium in October 2009, but this is the first time Google has publicly confirmed that Spanner is actually in use.
"If it really, really got [hot] and we needed to reduce the load in the data center," Kava says, "then, yes, we have automatic tools and systems that allow for that, such as Spanner." According to the presentation Google gave at that 2009 symposium, Spanner is a “storage and computation system that spans all our data centers [and that] automatically moves and adds replicas of data and computation based on constraints and usage patterns." This includes constraints related to bandwidth, packet loss, power, resources, and "failure modes" -- i.e. when stuff goes wrong inside the data center.
The platform illustrates Google's overall approach to data center design. The company builds its own stuff and will only say so much about that stuff. It views technology such as Spanner as a competitive advantage. But one thing is clear: Google is rethinking the data center.
The approach has certainly had an effect on the rest of the industry. Like Google, Microsoft has experimented with data center modules -- shipping containers prepacked with servers and other equipment -- that can be pieced together into much larger facilities. And with Facebook releasing the designs of its Prineville facility -- a response to Google's efforts to keep its specific designs a secret -- others are following its lead.
Late last year, according to Prineville city engineer Eric Klann, two unnamed companies -- codenamed "Maverick" and "Cloud" -- were looking to build server farms based on Facebook's chillerless design, and it looks like Maverick is none other than Apple.
Large Data Centers, Small Details

This month, in an effort to show the world how kindly its data centers treat the outside world, Google announced that all of its custom-built US facilities have received ISO 14001 and OHSAS 18001 certification -- internationally recognized certifications that rate the environmental kindness and safety not only of data centers but of all sorts of operations.
This involved tracking everything from engineering tools to ladders inside the data center. "You actually learn a lot when you go through these audits, about things you never even considered," Kava says. His point is that Google pays attention to even the smallest details of data center design -- in all its data centers. It will soon seek similar certification for its European facilities as well.
In Finland, there's a punchline to Google's Baltic Sea water trick. As Kava explains, the sea water is just part of the setup. On the data center floor, the servers give off hot air. This air is transferred to water-based cooling systems sitting next to the servers. And Google then cools the water from these systems by mixing it with the sea water streaming from the Baltic. When the process is finished, the cold Baltic water is no longer cold. But before returning it to the sea, Google cools it back down -- with more cold sea water pulled from the Baltic (a back-of-the-envelope version of this mixing appears below). "When we discharge back to the Gulf, it's at a temperature that's similar to the inlet temperature," Kava says. "That minimizes any chance of environmental disturbance." According to Kava, the company's environmental permits didn't require that it temper the water. "It makes me feel good," he says. "We don't do just what we have to do. We look at what's the right thing to do." It's a common Google message. But Kava argues that ISO certification is proof that the company is achieving its goals. "If you're close to something, you may believe you're meeting a standard. But sometimes it's good to have a third party come in."

The complaint, from the likes of Facebook, is that Google doesn't share enough about how it has solved particular problems that will plague any large web outfit. Reports, for instance, indicate that Google builds not only its own servers but its own networking equipment, though the company has not even acknowledged as much. That said, over the past few years, Google has certainly been sharing more.
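About that tempering step: it is, at bottom, a flow-weighted mixing calculation -- blend the warm discharge with enough cold inlet water and the combined stream comes out near the inlet temperature. Here is a back-of-the-envelope version in Python; the flow rates and temperatures are invented for illustration, since Google hasn't published its figures:

    def mixed_temperature(flow_a, temp_a, flow_b, temp_b):
        """Temperature of two merged water streams (same fluid, so the result
        is just the flow-weighted average of the two temperatures)."""
        return (flow_a * temp_a + flow_b * temp_b) / (flow_a + flow_b)

    # Warm discharge at 20 C blended with twice as much 8 C Baltic inlet water:
    print(mixed_temperature(1.0, 20.0, 2.0, 8.0))  # -> 12.0 C, much closer to the inlet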
We asked Joe Kava about the networking hardware, and he declined to answer. But he did acknowledge the use of Spanner. And he talked and talked about that granite tunnel and Baltic Sea. He even told us that when Google bought that paper mill, he and his team were well aware that the purchase made for a big fat internet metaphor. "This didn't escape us," he says.
"
|
887 | 2,011 |
"Mystery Men Forge Servers For Giants of Internet | WIRED"
|
"https://www.wired.com/wiredenterprise/2011/12/secret-servers/all/1"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Mystery Men Forge Servers For Giants of Internet Save this story Save Save this story Save If you drive down highway 880 from Oakland, California, take an exit about 30 miles south, and snake past a long line of car dealerships, you'll find an ordinary office building that belongs to a company you've never heard of. And if you're allowed to walk inside -- past the receptionist and the cubicles, through another door, around the security guard, and into the warehouse -- you'll find some technicians assembling and testing server hardware for some of the biggest names on the internet.
This includes Facebook and Rackspace and at least one or two other names that even your grandmother knows about.
The warehouse belongs to a company called Synnex -- an outfit that spent the last 30 years buying and selling computers, hard drives, chips, memory, and all sorts of other hardware. But Synnex isn't the one assembling and testing all that internet server hardware -- at least not officially. Those technicians work for a brand new Synnex division called Hyve.
Hyve Solutions was created to serve the world's "large-scale internet companies" -- companies increasingly interested in buying servers designed specifically for their sweeping online operations. Because their internet services are backed by such an enormous number of servers, these companies are looking to keep the cost and the power consumption of each system to a minimum. They want something a little different from the off-the-shelf machines purchased by the average business. "What we saw was a migration from traditional servers to more custom-built servers," says Hyve senior vice president and general manager Steve Ichinaga. "The trend began several years ago with Google, and most recently, Facebook was added to the ranks of companies who want this kind of solution." The net's biggest names have caused a tectonic shift in the worldwide server market. These are the companies that need more servers than anyone else on the planet, and they're moving away from traditional server makers such as Dell and HP, embracing Hyve, various manufacturers in Taiwan, and other little-known companies that can help them build servers for their particular needs. In response, Dell and HP are now doing custom work as well. But the Hyves of the world are here to stay.
Be Like Google

Hyve doesn't count Google as a customer -- or at least it doesn't seem to. But it's serving many of the internet companies that are imitating Google.
Unhappy with the cost and design of traditional servers from the likes of Dell and HP, Google designs its own servers, and it contracts with companies in Taiwan to build them. Facebook has now followed suit. Its no-frills servers are built by Taiwanese "original design manufacturers" (ODMs) Quanta and Wistron, and then they're shipped to Hyve in Fremont, California, where technicians load them into racks, hook up the required networking equipment, test them, and ship them off to Facebook's data centers.
Google treats its latest server designs as trade secrets, but earlier this year, Facebook open sourced its designs under the aegis of the Open Compute Project, sharing them with anyone who wants them. And this led Synnex to create Hyve. Hyve is a place where internet giants can go if they want Open Compute servers. But even before Hyve was created, Synnex was working for the big internet names. It has long provided custom machines for Rackspace -- the San Antonio, Texas company that offers infrastructure services across the net at a scale rivaled only by Amazon -- and though Synnex won't identify its other customers, it will say that these are companies everyone knows. "They're household names," Ichinaga says.
Hyve is just one of the under-the-radar server companies feeding these big internet names.
SGI -- the company once known as Rackable (not to be confused with Rackspace) -- has spent years building custom servers for the likes of Amazon and Microsoft. And a New Jersey-based outfit known as ZT Systems is building servers for similar internet outfits -- though it won't say who. Hyve and ZT say very little about how their operations work, but both seem to have very close ties to manufacturing outfits in China and Taiwan, where so much of the world's IT hardware is built. This means they can provide custom servers while still keeping prices down.
"We have very long term relationships with the key vendors," says Ichinaga. "We already sell billions of dollars' worth of components and other IT equipment, and that basically allows us to leverage our relationships with our partners when we serve our [internet] customers." Tim Symchych, the director of supply chain operations at Rackspace, confirms that his company can get lower prices from Hyve than he can from Dell and HP (though the company continues to buy from Dell and HP as well). Jason Hoffman -- the chief technology officier of Amazon- and Rackspace-rival Joyent -- says there will be cases when his company can actually get a better price from a traditional server maker such as Dell. But the point is the Hyves and ZT Systems of the world are competing with the big boys, and in many cases, they're winning.
With Facebook sharing its designs through the Open Compute Project, this trend will only continue. According to Ichinaga, Hyve has already received orders for Open Compute servers from multiple companies. And that doesn't include Facebook.
The Hyve Mind

Joyent CTO Jason Hoffman visited Hyve one afternoon in early December. Joyent typically buys its servers from Dell or Sun Microsystems (now part of Oracle), but Hoffman is exploring other options. He spent an hour discussing Hyve's services with a company sales rep, and then he took a brief tour of the warehouse. We tagged along, and though Hyve was careful not to expose its other customers, it did show off some of the Open Compute server racks it's putting together for Facebook.
Hoffman and Joyent don't want Open Compute servers. Facebook's designs are meant for web serving and memcaching -- a way of caching data in server memory so it can be quickly accessed -- and he's looking for something more robust. (A short sketch of the memcaching pattern follows at the end of this section.) "Facebook is really just running one application," he says. "We're supporting different applications for all our customers." But Hyve says it can give him what he wants. Though Facebook's servers are built in Taiwan and shipped to Hyve's warehouse in Fremont, Hyve tells us that in most cases, it builds servers on its own, pulling parts from partners across the globe. Hyve says it can work with a company like Joyent to design a server that suits its particular needs. "It starts with collaboration," Ichinaga says. "We figure out what the customer wants in terms of server workload and the physical environment and everything else. If you need something different, we can do it." Ichinaga won't go into detail about Hyve's business model. But he says the company is only able to do this because Synnex has spent thirty years distributing computer hardware to thousands of resellers across the globe. It has close relationships, Ichinaga says, with the likes of chip giant Intel and hard drive maker Seagate, as well as ODMs such as Quanta and Wistron. "Compared to traditional OEM companies," he says, referring to original equipment manufacturers such as Dell and HP, "we have very low SG&A [selling, general, and administrative expenses]. It's just a very efficient model." Synnex is a US company. Its Fremont, California, offices are the official headquarters. But the company's founder, Robert Huang, was born in Taiwan and studied electrical engineering in Japan. And its ties to Asia run deep. The company has offices in both China and Japan.
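The promised memcaching sketch: the usual approach is "cache-aside" -- check server memory first, fall back to the database on a miss, then populate the cache. This minimal example uses the open-source pymemcache client; the host, key format, and fetch function are placeholders, not anyone's production setup:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # placeholder memcached host

    def get_profile(user_id, fetch_from_db):
        key = f"profile:{user_id}"
        value = cache.get(key)        # fast path: data already in memory
        if value is None:             # miss: fall back to the database
            value = fetch_from_db(user_id)
            cache.set(key, value, expire=300)  # keep it warm for five minutes
        return value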
Somewhere in New Jersey

ZT Systems seems to operate in much the same way. And it too builds servers for some of the biggest names on the net.
It would appear that these names include Amazon.
Amazon job listings seeking technicians for its data centers sometimes request engineers that have "hands-on experience with Hewlett-Packard, Dell, Rackable, or ZT Systems." It's no secret that Rackable builds for Amazon. As a public company, Rackable (now SGI) must name customers that account for more than 10 percent of its revenues -- and Amazon does. But Amazon's relationship with ZT is still very much under the radar.
Amazon has not responded to requests to discuss ZT Systems, and ZT declines to name any of its customers. But a company spokesman will say that, like Hyve, it builds custom servers for "very large data center operators." And he says it runs sales offices in Fremont, California, near Hyve, and in Seattle, Washington. Amazon is headquartered in Seattle. AMD -- Intel's chief chip rival -- has sold chips to ZT Systems for at least six years, according to John Fruehe, director of product marketing for server and embedded products at AMD and a former business development man at the chip maker. Fruehe won't name ZT's customers, but he confirms that it serves "mega-data-center customers." Like Hyve, ZT is a US company, and according to a company spokesman, it builds all its servers in New Jersey. But it too has close ties to Asia. That said, ZT insists it shouldn't be compared to Hyve, because it has a longer history as a full-fledged OEM. "ZT Systems is a full-featured server OEM with robust engineering capabilities, and extensive experience designing and building computers in the USA for over 17 years," says the company spokesman. "Our systems contain the highest quality components, many of which are produced in Asia."

Uh, Who Makes These Things?

The server supply chain is a complicated thing. At times, it's difficult to tell who's actually building the machines and who's merely sprucing them up. Traditionally, ODMs such as Quanta and Wistron build the systems. Then OEMs such as Dell and HP add some additional hardware and ensure the systems meet certain standards. And these OEMs work with VARs -- value-added resellers -- who will sell gear to the end user and may add some extra, well, value.
But the lines between these various layers are rather blurry, and a company like Hyve blurs them further still. Synnex sells hardware to resellers and distributes hardware on behalf of OEMs like HP, but now, with the rise of Hyve, it's turning into a server maker. With Facebook's servers, Hyve isn't actually building the systems. But with other customers -- such as Rackspace -- it says that it is.
Even Jason Waxman -- general manager of high-density computing in Intel's data center group and a board member of the Open Compute Project -- finds it hard to classify a company like this. "I'm still trying to understand some of their models," he says. "[With a company like Hyve], I don't know if they allow the end user to take responsibility for the design or if they're really becoming an OEM themselves. An OEM is actually building a fully configured system and supporting it." What's more, there are cases where component companies such as AMD are selling parts directly to the big internet names, and these names will then go to companies like ZT Systems or the Taiwanese ODMs and ask them to actually put these components into servers. Google buys chips directly, and at least some others have followed. "This was all the rage about two years ago," says AMD's Fruehe. "There was one extremely large web company that was doing this, and they still to this day build their own systems. Everybody looked at this and said: 'This is great. I've been going to an OEM to have them build my systems, but I can really cut down my costs if I do it myself and go straight to Taiwan.'" Intel sells chips in this way as well. But Jason Hoffman says this is not the norm. "Our goal is that we almost exclusively work with OEMs," he says.
As AMD's Fruehe points out, there are still reasons for even the biggest internet companies to lean on OEMs like Dell and HP and ZT. "The OEM is providing the system integration, the tests, the qualifications, all the low-level systems code," he says. "You start to realize this is a whole lot of work, and if you build your own servers, you have to hire a whole lot of people to do what you could have hired HP or Dell or ZT or another partner to do." In addition to building the systems, ZT will help support them. The company runs "integration facilities" around the world, which are essentially support offices that serve the data centers of the big internet players. In some cases, says a company spokesman, ZT will have someone working inside a customer's data center. This is quite different from the way Hyve works with Facebook. Facebook doesn't get on-site support from the company, but it does use Hyve to put its servers into racks, hook them up, test them, and deliver them.
The big internet names are approaching the task in many different ways. And generally, each is dealing with multiple server vendors. Rackspace uses Dell, HP, and Hyve. Facebook is moving to Quanta, Wistron, and Hyve, but it continues to use Dell and HP. According to job listings, Amazon is using HP, Dell, Rackable, and ZT, and it may be using other companies as well.
Beyond the Net

None of this is new. But in open sourcing its new server designs, Facebook is pushing the market in a new direction. It's now easier for server buyers to go directly to Taiwan. Or if they want a bit more hand-holding, they can go to Hyve. Facebook recently released version 2.0 of its Open Compute server designs, and in the new year, Hyve will be shipping servers based on these designs not only to Facebook but to other big-name internet outfits.
Both Amazon and Apple attended Facebook's recent Open Compute summit in New York. "There's folks from Google and Apple and other folks here today as well," Facebook's Frank Frankovsky told Wired. "Even though we didn't highlight the contributions from those companies, everybody's included. And people can just consume the technology and not contribute back." Sun founder Andy Bechtolsheim -- another Open Compute Project board member -- believes that Apple will at least consider Open Compute servers. "Apple wants to build a big iCloud," he told us. "Obviously, they want to minimize their power consumption and cost. I'm pretty sure they will look at this. The point is they couldn't have looked at this until it became an open spec. Their other choice was to build their own version." And the internet giants aren't the only ones. According to Ichinaga, all sorts of other outfits -- government organizations, telecoms, and large financial organizations -- have ordered Open Compute servers. Chris Kemp -- the CEO of Nebula, a startup that's selling appliances that run OpenStack, an open source platform that mimics Amazon's web services -- has been involved with the Open Compute Project from the beginning, and he echoes Ichinaga.
"Every company in the world is going to take a close look at what they’re all about," he says. "If you’re a financial or biotech company, you’re gonna to start looking at the cost of your infrastructure, and you’re going to start behaving a lot like Google and Amazon and Facebook. In today's world, everybody’s business is becoming computationally intensive." In other words, he says, the business world will move even further away from the Dells and the HPs.
Additional reporting by Eric Smalley.
"
|
888 | 2,011 |
"Facebook Auditions Kid Hackers With All-Night Codefest | WIRED"
|
"https://www.wired.com/wiredenterprise/2011/12/facebook-hackathon"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Caleb Garling Business Facebook Auditions Kid Hackers With All-Night Codefest Save this story Save Save this story Save "Yes! We have elegance!" says a student from the University of Waterloo. Then he pauses, and his shoulders slump. "Actually," he adds, "I don’t know what the fuck this is." He points to a botched graphic on his computer screen, and his fellow students crowd around. They talk quietly among themselves, making suggestions here and there, and ultimately, they find a way of achieving something at least a little closer to elegance.
This is what happens at a Hackathon. If you're a spectator, it's not exactly riveting stuff. But if you're a participant, it's a gas -- particularly when you're competing in building number 4 on the campus of Facebook’s Palo Alto headquarters. These students aren't just hacking for fun. They're hacking for a job with one of the biggest names on the net.
Facebook, Google, and other tech giants use events like this as recruiting tools. Sports teams hold combines to assess prospective players. Internet companies hold Hackathons to size up developers. They throw coders into a large room and give them a task or a problem to solve, and if their solution is good enough, they win some cash and a trophy -- or maybe a call from the Facebook HR department.
The coders from the University of Waterloo make up one of 14 teams in the competition, each charged with building an application in less than 24 hours. This particular Facebook Hackathon -- which began on Thursday and runs into Friday -- is just for college students. "We told them to think of their day and asked, 'What annoys you?'" says Clifton Tay, a Facebook recruiter who helped organize the event. "Now build an application to solve it." By Friday morning, empty Red Bull cans and take-out containers are spread across the desks, leaving just enough space for keyboards and monitors. At the end of the room, the Facebook staff monitors the action quietly, doing their own work and answering intermittent questions from the students.
A team from the University of Washington has built an application -- Spunby.me -- that lets you grab music from one machine, stream it to another, and play it in sync on both. The team crowds around two monitors as The Police plays on one computer and arrives on the second -- almost in sync. They look at one another and muster a few wary smiles.
"No one's slept," Ryan Ewing, a member of the UW team, tells Wired.
“That’s the secret of success!” his teammate chimes in, without looking up from his Java code.
This Facebook Hackathon is a tad more mature than the one portrayed in The Social Network , the recent blockbuster movie about the company's early days. In the film, Mark Zuckerberg and company coaxed applicants into booze-infused coding binges. This week, the drinks aren't any stronger than Red Bull.
Paul Tarjan -- a Facebook “Web Hacker," as it says on his business card -- manages the engineering side of the Hackathon, and he says that the biggest goal of the competition is to find prospects that fit with the breakneck pace of Facebook's culture. They're not looking for elegant coders. Facebook doesn’t even look at the code when it declares a winner. A poster that reads “Done Is Better Than Perfect” in bold red letters towers over one team.
“This competition is based on 50 percent idea and 50 percent implementation,” Tarjan tells Wired. “We don’t care about monetization potential. We just want to see a hack of something useful.” Facebook only invites 14 schools to participate in the competition, but the schools on the list may change. Not every winner gets a job, but they have the inside track. “If you’re a part of Hackathon," Tarjan says, "our HR will have that much more data on you during the hiring process.” Not everyone finishes. A single coder won a preliminary Hackathon at Stanford University, but Facebook told him he couldn't compete in the main event without a full team, so he recruited another three students on the fly and the group didn't exactly mesh. At some point during the night, he threw up his hands and dropped out.
But the rest plow through. At the end of the 24 hours, the teams migrate into the Facebook cafeteria and begin presenting their products to the judges -- various Facebook engineering directors. Some presentations go off without a hitch, others are marred by faulty demos, bad connections, and, yes, bad code. Some presenters play it cool and work the audience with humor. Others sound a little shaky. But all in all, it's exactly what you'd expect from a room full of young, eager kids.
The winner? Princeton University, for an app it calls Accessorize, a slick tool that helps both men and women with their daily fashion choices -- certainly something that would fit nicely into the Facebook culture. The prize: $500 each. And that inside track.
"
|
889 | 2,011 |
"Welcome to Prineville, Oregon: Population, 800 Million | WIRED"
|
"https://www.wired.com/wiredenterprise/2011/12/facebook-data-center/all/1"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Welcome to Prineville, Oregon: Population, 800 Million Facebook's data center looks out over the 'Tibet of North America.' Photos: Pete Erickson/Wired.com Save this story Save Save this story Save Ken Patchett ran Google’s Asian data centers for more than a year and a half, and he says it’s “B.S.” that the company treats its computing facilities as trade secrets jealously guarded from the rest of the world.
He actually writes the letters in the air with his finger. The B.
And then the S.
Web giants like Google and Amazon are notoriously secretive about what goes on inside the worldwide network of data centers that serve up their sweeping collection of web services. They call it a security measure, but clearly, they also see these facilities as some sort of competitive advantage their online rivals mustn’t lay eyes on. When he joined Google, Ken Patchett — like so many other Googlers — signed an agreement that barred him from discussing the company’s data centers for at least a year after his departure, and maybe two.
But after leaving Google to run Facebook's new data center in the tiny Northwestern town of Prineville, Oregon, Patchett says the security argument "doesn't make sense at all" -- and that data center design is in no way a competitive advantage in the web game. "How servers work has nothing to do with the way your software works," he says, "and the competitive advantage comes from manipulating your software."

Photo: Ken Patchett, general manager of Facebook's Prineville data center

For Patchett, Facebook is trying to, well, make the world a better place -- showing others how to build more efficient data centers and, in turn, put less of a burden on the environment. "The reason I came to Facebook is that they wanted to be open," says Patchett.
"With some companies I've worked for, your dog had more access to you than your family did during the course of the day. Here [at Facebook], my children have seen this data center. My wife has seen this data center.... We've had some people say, 'Can we build this data center?' And we say, 'Of course, you can. Do you want the blueprints?'"

'The Tibet of North America'

Photo: The Evaporator Room inside the Prineville 'penthouse'

Facebook built its data center in Prineville because it's on the high desert. Patchett calls it "the Tibet of North America." The town sits on a plateau about 2,800 feet above sea level, in the "rain shadow" of the Cascade Mountains, so the air is both cool and dry. Rather than use power-hungry water chillers to cool its servers, Patchett and company can pull the outside air into the facility and condition it as needed. If the air is too cold for the servers, they can heat it up -- using hot air that has already come off the servers themselves -- and if the outside air is too hot, they can cool it down with evaporated water.
In the summer, Prineville temperatures may reach 100 degrees Fahrenheit, but then they drop back down to the 40s in the evenings. Eric Klann, Prineville's city engineer, whose family goes back six generations in central Oregon, says Facebook treats its data center much like the locals treat their homes. "Us country hicks have been doing this a long time," says Klann, with tongue in cheek. "You open up your windows at night and shut them during the day." The added twist is that Facebook can also cool the air during those hot summer days.
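The air-handling logic described above -- warm the intake with server exhaust when it's too cold, run it through evaporative cooling when it's too hot, and otherwise pass it straight through -- amounts to a simple control loop. The thresholds and the crude mixing rule in this sketch are invented for illustration; Facebook has not published its actual setpoints (Python):

    def condition_intake_air(outside_f, exhaust_f, low_f=52.0, high_f=80.0):
        """Decide how to treat outside air before it reaches the server room."""
        if outside_f < low_f:
            # Too cold: blend in hot exhaust air recovered from the servers.
            blended = (outside_f + exhaust_f) / 2.0  # crude 50/50 mix
            return ("mix_with_exhaust", min(blended, high_f))
        if outside_f > high_f:
            # Too hot: push the intake through the evaporative (misting) room.
            return ("evaporative_cooling", high_f)
        return ("pass_through", outside_f)

    print(condition_intake_air(42.0, 95.0))   # cold desert night -> mix
    print(condition_intake_air(100.0, 95.0))  # hot afternoon -> evaporate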
Photo: Filters inside the penthouse clean the outside air before it's pushed into the server room.

All this is done in the data center's penthouse -- a space the size of an aircraft carrier, split into seven separate rooms. One room filters the air. Another mixes in hot air pumped up from the server room below. A third cools the air with atomized water. And so on. With the spinning fans and the never-ending rush of air, the penthouse is vaguely reminiscent of the room with the "fizzy lifting drinks" in Willy Wonka & the Chocolate Factory, where Charlie Bucket and Grandpa Joe float to the ceiling of Wonka's funhouse. It's an analogy Patchett is only too happy to encourage.
You might say that Facebook has applied the Willy Wonka ethos to data center design, rethinking even the smallest aspects of traditional facilities and building new gear from scratch where necessary. “It’s the small things that really matter,” Patchett says. The facility uses found water to run its toilets. An Ethernet-based lighting system automatically turns lights on and off as employees enter and leave areas of the data center. And the company has gone so far as to design its own servers.
'Freedom' Reigns

Photo: Facebook flies the flags of state, country, and social network.

Codenamed Freedom while still under development, Facebook's custom-built servers are meant to release the company from the traditional server designs that don't quite suit the massive scale of its worldwide social network. Rather than rely on pre-built machines from the likes of Dell and HP, Facebook created "vanity-free" machines that do away with the usual bells and whistles, while fitting neatly into its sweeping effort to improve the efficiency of its data center.
Freedom machines run roughly half the loads in Prineville. They do the web serving and the memcaching (where data is stored in machine memory, rather than on disk, for quick access), while traditional machines still handle the database duties. "We wanted to roll out [the new servers] in baby steps," says Patchett. "We wanted to try it and prove it, and then expand." Taller than the average rack server, the custom machines can accommodate both larger fans and larger heat sinks. The fans spin slower but still move the same volume of air, so Facebook can spend less energy pushing heat off the machines. And with the larger heat sinks, it needn't force as much cool air onto the servers from the penthouse.
Photo: 'Freedom' racks in the Facebook server room

The machines also use a power supply specifically designed to work with the facility's electrical system, which is a significant departure from the typical data center setup. In order to reduce power loss, the Prineville data center eliminates traditional power distribution units (which transform power feeds for use by servers and other equipment) and a central uninterruptible power supply (which provides backup power when AC power is lost). And the power supplies are designed to accommodate these changes.
The custom power supplies accept 277-volt AC power — so Facebook needn’t transform power down to the traditional 208 volts — and when AC power is lost, the systems default to a 48-volt DC power supply sitting right next to the server rack, reducing the power loss that comes when servers default to a massive UPS sitting on the other side of a data center.
According to Facebook, the machines are 94.5 percent efficient, but they're part of a larger whole. The result of all this electrical and air work is a data center that consumes far less power than traditional computing facilities. In addition to building and operating its own facility in Prineville, Facebook leases data center space in Northern California and Virginia, and it says the Prineville data center requires 38 percent less energy than these other facilities -- while costing 24 percent less.
The average data center runs at a power usage effectiveness (PUE) of 1.6 to 1.8 -- the ratio of the total power a facility draws to the power that actually reaches its computing equipment -- while Facebook's facility runs between 1.05 and 1.10 PUE over the course of the year, close to the ideal 1-to-1 ratio.
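To make the PUE arithmetic concrete: at a given utility draw, the computing gear receives the total power divided by the PUE, so the gap between 1.8 and 1.09 is enormous at scale. A quick check in Python -- the 10-megawatt draw here is a hypothetical figure for illustration, not Facebook's actual consumption:

    def it_power_mw(total_facility_mw, pue):
        """Power actually reaching the IT equipment, given total draw and PUE."""
        return total_facility_mw / pue

    for pue in (1.8, 1.6, 1.09):
        print(f"PUE {pue}: {it_power_mw(10.0, pue):.2f} MW of 10 MW reaches the servers")
    # PUE 1.8 leaves ~5.56 MW for computing; PUE 1.09 leaves ~9.17 MW.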
"We want to be the most effective stewards of the electrons we consume. If we pay for 100 megawatts of power, we want to use 100 megawatts of power," Patchett says. "The average house is between 2 and 3 PUE. That's twice the amount of energy you actually have to consume. What if every home was the same efficient steward that we are?"

What Google Is Not

Photo: Power generators in the yard outside the Prineville facility

Google is also running a data center without chillers.
And it’s building its own servers. But it won’t talk about them. Although the company has released some information on the data centers and servers it was running as far back as 2004, its latest technology is off limits. When we contacted Google to participate in this story, the company did not respond with any meaningful information.
According to Dhanji Prasanna, a former Google engineer who worked on a programming library at the heart of "nearly every Java server" at the company, the search giant's latest data center technology goes well beyond anything anyone else is doing. But he wouldn't say more. Like Ken Patchett -- and presumably, all other Google employees -- he signed a non-disclosure agreement that bars him from discussing the technology.
Jim Smith -- the chief technology officer of Digital Realty Trust, a company that owns, operates, and helps build data centers across the globe -- says that Google must have a good reason for keeping its designs under wraps. "I'm not an insider, but it must make sense [that Google is so secretive]," Smith tells Wired. "At every level [of Google employee] you meet, they only share certain bits of information, so I presume there's good reason." But Facebook believes the opposite is true -- and it's not alone.
When Facebook open sourced its Prineville designs under the aegis of the Open Compute Project, it was certainly thumbing its nose at Google. "It's time to stop treating data center design like Fight Club and demystify the way these things are built," said Jonathan Heiliger, then the vice president of technical operations at Facebook. But the company was also trying to enlist the help of the outside world and, in the long run, improve on these initial designs.
"We think the bigger value comes back to us over time," Heiliger said, "just as it did with open source software. Many people will now be looking at our designs. This is a 1.0. We hope this will accelerate what everyone is doing." Microsoft sees Heiliger's logic. Redmond hasn't open sourced its data center designs, but it has shared a fair amount of information about its latest facilities, including the chillerless data center it opened in Dublin, Ireland two years ago. "By sharing our best practices and educating the industry and getting people to think about how to approach these problems, we think that they can start contributing to the solutions we need," Microsoft distinguished engineer Dileep Bhandarkar tells Wired. "This will move the industry forward, and suppliers -- people that build transformers, that build air handlers -- will build technologies that we can benefit from."

Thinking Outside the Module

Photo: Facebook's 'found water' tank. And its gnomes.

With its designs, Facebook isn't mimicking Google. The company is forging a new way. When Patchett left Google for Facebook, Facebook had him sign an agreement that he wouldn't share his past experiences with the company. "I guess that's because I worked for the G," he says. This is likely a way for Facebook to legally protect itself, but it seems to show that the company has rethought the problems of data center design from the ground up.
Unlike at least some of Google's facilities, Facebook's data center does not use a modular design. Google constructs its data centers by piecing together shipping containers pre-packed with servers and cooling equipment. Patchett acknowledges that Google's method provides some added efficiency. "You've got bolt-on compute power," he says. "You can expand in a clustered kind of way. It's really quite easy. It's repetitious. You do the same thing each time, and you end up with known problems and known results." But he believes the setup doesn't quite suit the un-Googles of the world.
Google runs a unified software infrastructure across all its data centers, and all its applications must be coded to this infrastructure.
This means Google can use essentially the same hardware in each data center module, but Patchett says that most companies will find it difficult to do the same. "I don't buy it," Patchett says of the modular idea. "You have to build your applications so that they're spread across all those modules.... Google has done a pretty good job building a distributed computing system, but most people don't build that way." Microsoft distinguished engineer Bhandarkar agrees -- at least in part. Redmond uses modules in some data centers where the company is using software suited to the setup, but in others, it sidesteps the modular setup. "If you have a single [software platform], you can have one [hardware] stamp, one way of doing things," Bhandarkar says. "But if you have a wide range of applications with different needs, you need to build different flavors."

Codenames in the High Desert

Photo: Solar panels feed power to the data center's office space.

Facebook designed the Prineville data center for its own needs, but it does believe these same ideas can work across the web industry -- and beyond. This fall, the company built a not-for-profit foundation around the Open Compute Project, hoping to bring more companies into an effort that already has the backing of giants such as Intel, ASUS, Rackspace, NTT Data, Netflix, and even Dell.
In building its own servers, Facebook has essentially cut Dell out of its data center equation. But the Texas-based IT giant says Facebook’s designs can help it build servers for other outfits with similar data center needs.
In some ways, Dell is just protecting its reputation. And on a larger level, many see Facebook’s effort as a mere publicity stunt, a way to shame its greatest rival. But for all of Ken Patchett’s showmanship — and make no mistake, he is a showman — his message is a real one. According to Eric Klann, Prineville’s city engineer, two other large companies have approached the town about building their own data centers in the area. He won’t say who they are — their codenames are “Cloud” and “Maverick” — but both are looking to build data centers based on Facebook’s designs.
“Having Facebook here and having their open campus concept — where they’re talking about this new cooling technology and utilizing the atmosphere — has done so much to bring other players into Prineville. They would never have come here if it wasn’t for Facebook,” he tells Wired.com.
“By them opening up and showing everyone how efficiently they’re operating that data center, you can’t help but have some of the other big players be interested.”
"
|
890 | 2011 |
"AC/DC Battle Returns to Rock Data-Center World | WIRED"
|
"https://www.wired.com/wiredenterprise/2011/12/ac-dc-power-data-center"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Caleb Garling Business AC/DC Battle Returns to Rock Data-Center World Save this story Save Save this story Save The battle between AC and DC goes back centuries.
In the late 1800s, Thomas Edison fought viciously for DC (direct current) power, where the electrical current moves in a single direction. But fellow inventors Nikola Tesla and George Westinghouse fought just as hard for alternating current (AC), which regularly reverses direction. Ultimately, Tesla and Westinghouse won this “War of the Currents,” thanks in large part to a massive AC hydroelectric plant completed at Niagara Falls in 1895 that powered much of New York. But more than 120 years later, the argument has resurfaced inside the data center.
AC power continues to drive the modern data centers underpinning the world's businesses and internet applications, with DC playing a supporting role. But DC power has long been touted as a way to reduce the ridiculous amounts of power consumed by these massive computing facilities, and a new study hopes to catalyze its widespread adoption.
The study comes from the Electric Power Research Institute (EPRI) , a not-for-profit based in Palo Alto, California, and it details a set of experiments performed at a data center operated by Duke Power in Charlotte, North Carolina. With their experiments, Dennis Symanski and his team tried to balance the power to the servers equally between AC and DC sources, performing multiple load tests on each, with equipment in various states of operation (on, off, idle).
The results of six different tests show that the DC power system exhibited an energy savings ranging from 14.9 percent (when all servers and storage arrays were run at full power) to 15.8 percent (when the servers were turned on, but idle). On average, DC improved efficiency about 15.3 percent over AC systems.
Today, direct current is used for massive trans-continental power lines, reaching more than 500,000 volts. But AC is the standard for local power grids, and it's what powers devices plugged into the wall at home. In the end, however, most of those devices convert the AC current back to DC. "When Edison lost, we didn't have today's technologies. Things were mostly motors, and AC made sense for that," Symanski tells Wired. "But now, just about everything in your house runs on DC." Changing the entire grid is one matter. But revamping the data center is another. Today's data centers are anything but the picture of efficiency. Data centers typically convert AC power from the grid into DC power to charge their uninterruptible power supply (UPS) batteries -- which provide back-up power when there's even the flicker of an outage from the grid. But that DC current must then be converted back to lower-voltage AC as it enters the center's power distribution units (PDU), and then it returns to DC when it hits the server racks.
Until someone invents the perpetual motion machine -- a contraption with a net energy loss of zero, which would defy the laws of physics -- these conversions will always waste some amount of energy. Symanski argues that simply keeping everything in the data center on DC power will save at least three energy-wasting conversions: at the UPS, at the PDU, and at the front end of the power supply unit (PSU) on servers. EPRI is also calling for a worldwide standard DC voltage of 380 volts, as it believes this level hits a "sweet spot" that meshes well with common power requirements and safety regulations.
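Because each conversion stage multiplies its losses into the next, removing stages compounds quickly. The sketch below is a back-of-the-envelope model only; the per-stage efficiencies are illustrative assumptions, not EPRI's measured figures, and only the chain structure (UPS, PDU, PSU versus a single rectification plus a DC-DC stage) follows the article.

```python
# Back-of-the-envelope model of data center power conversion chains.
# Per-stage efficiencies are assumptions for illustration, not EPRI data.

def chain_efficiency(stages):
    """Overall efficiency is the product of every stage's efficiency."""
    total = 1.0
    for eff in stages:
        total *= eff
    return total

# Traditional AC distribution: AC->DC at the UPS, DC->AC back out,
# a step-down at the PDU, then AC->DC again in each server's PSU.
ac_chain = [0.96, 0.96, 0.98, 0.90]

# All-DC distribution at 380 V: one facility-level rectification,
# then a DC->DC stage in the server's power supply.
dc_chain = [0.96, 0.95]

ac, dc = chain_efficiency(ac_chain), chain_efficiency(dc_chain)
print(f"AC chain delivers {ac:.1%} of input power")   # ~81.3%
print(f"DC chain delivers {dc:.1%} of input power")   # ~91.2%
print(f"Relative improvement: {(dc - ac) / ac:.1%}")  # ~12.2%
```

With these assumed numbers, the all-DC chain wastes less than half the energy of the AC chain -- the shape, if not the exact size, of the roughly 15 percent savings EPRI measured.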
As data center designers turn away from the cumbersome power grid -- and toward the sun and the wind for their energy -- DC will certainly make its way into the data center. Photovoltaic cells generate DC power natively, and wind turbines' output is typically rectified to DC before conversion. But the progress of DC power is limited by the companies that make other data center equipment. Changing the servers is relatively simple. They just need new power supplies, Symanski says, as everything else on the server uses DC. But most other infrastructure equipment runs on AC power.
Dave Cappuccio, an analyst at Gartner, believes shifting to DC power is a fine idea. But he points out that there are countless other ways for the average data center manager to improve efficiency -- through cooling systems, ventilation, etc.
"People have been talking about [using DC power] for years," he tells Wired, pointing out that Facebook's new data center in Prineville, Oregon, has made the push toward DC. But hospitals, banks, utilities and other operations have so many other ways to reduce power and save money, he says, that it could be some time before they really embrace DC power. Cappuccio hopes that Facebook's efforts to open their blueprints and share efficiency measures will be that needed spark. But he's skeptical.
"A lot of things are good -- but it doesn't mean you can afford them all." [Top photo: The Planet/Flickr] Contributor X X Topics data Energy Enterprise Facebook Amit Katwala Will Knight Andy Greenberg Kari McMahon Joel Khalili Khari Johnson Andy Greenberg Amit Katwala Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
"
|
891 | 2019 |
"Why Is Google Slow-Walking Its Breakthroughs in AI? | WIRED"
|
"https://www.wired.com/story/why-is-google-slow-walking-its-breakthroughs-in-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Why Is Google Slow-Walking Its Breakthroughs in AI? Illustration: Sam Whitney; Getty Images Save this story Save Save this story Save Application Face recognition Cloud computing Ethics Company Alphabet Google End User Big company Consumer Small company Startup Sector Consumer services Entertainment Publishing Google became what it is by creating advanced new technology and throwing it open to all. Giant businesses and individuals alike can use the company’s search and email services, or tap its targeting algorithms and vast audience for ad campaigns. Yet Google’s progress on artificial intelligence now appears to have the company rethinking its do-what-you-will approach. The company has begun withholding or restricting some of its AI research and services, to protect the public from misuse.
Google CEO Sundar Pichai has made “AI first” a company slogan, but the company’s wariness of AI’s power has sometimes let its competitors lead instead. Google is a distant third in the cloud computing market behind Amazon and Microsoft. Late last year Google’s cloud division announced that it would not offer a facial-recognition service that customers could adapt for their own uses due to concerns about its potential for abuse.
Although Amazon and Microsoft have recently called for federal regulation of automated facial recognition, both have offered the technology for years. Amazon’s customers include the sheriff’s office of Washington County, Oregon, where deputies use its algorithms to check suspects against a database of mug shots.
Further evidence of Google’s willingness to limit the power—and commercial potential—of its own AI technology came a few weeks ago. At the end of October, the company announced a narrowly tailored facial-recognition service that identifies celebrities. (Microsoft and Amazon launched similar services in 2017.) In addition to being late to market, Google’s celebrity-detector comes with tight restrictions on who can use it.
Tracy Frey, director of strategy at the company’s cloud division, says that media and entertainment companies had been asking about the service. But Google decided to put some limits on the technology after reviewing its compliance with ethics principles the company introduced last year.
“We had concerns about whether we could have that if the service were more broadly available,” Frey says.
Google sought outside help on thinking through those concerns. The company commissioned a human rights assessment of the new product from corporate social responsibility nonprofit BSR, whose supporters include Google, McDonald’s, and Walmart.
BSR’s report warned that celebrity facial recognition could be used intrusively, for example if it were applied to surveillance footage in order to collect or broadcast live notifications on a person’s whereabouts. The nonprofit recommended that Google allow individual celebrities to opt out of the service and also that it vet would-be customers.
Google took up those suggestions. The company says it has limited its list of celebrities to just thousands of names, to minimize the risk of abuse; Amazon and Microsoft have said their own services recognize hundreds of thousands of public figures. Google will not disclose who is on the list but has provided a web form for anyone who wants to ask for their face to be removed from the company’s watch list. Amazon already lets celebrities opt out of its own celebrity recognition service, but it says so far none have done so.
Prospective users of Google’s service must pass a review to confirm they are “an established media or entertainment company or partner” that will apply the technology “only to professionally produced video content like movies, TV shows and sporting events.” Asked if that meant smaller producers, such as the operator of a popular YouTube channel, would be shut out, Frey says no. Such customers would be reviewed like any other, provided they were genuinely working with celebrity content. Some companies have already passed Google’s vetting and are using the service, she says, although she declines to name any.
Google began to publicly grapple with the tension between the promise and potential downsides of AI last year, in part because it was forced to. Cofounder Sergey Brin marveled in an open investor letter that recent AI progress was “the most significant development in computing in my lifetime,” but also warned that “such powerful tools also bring with them new questions and responsibilities.” The letter was released just days after employee protests against Google’s participation in a Pentagon AI project called Maven.
The company said it would not renew the contract.
It also released AI ethics principles it said would forbid similar projects in future, although they still permit some defense work.
Early this year, Google said that it had begun limiting some of the code released by its AI researchers to prevent it from being used inappropriately. The continued caution over AI contrasts with how Google has continued to expand into new areas of business—such as health care and banking—even as regulators and lawmakers talk about antitrust action against tech companies.
Access restrictions for the new celebrity recognition product might seem bad for business. Companies too hurried, or too small, to dedicate resources to Google’s vetting process could turn to the unrestricted facial recognition offered by Amazon or Microsoft instead.
Actively vetting customers and their intentions may also get trickier for Google over time, as the applications of AI expand in scope and number. “It puts Google in the position of being arbitrary about what is an acceptable use case and an acceptable user,” says Gretchen Greene, a research fellow at nonprofit Partnership on AI, founded by tech companies including Microsoft and Google. “There’s always going to be some tension about that.” Frey claims that restricting products now will pay off in the long run. As companies make more use of AI, they become more aware of the need to be careful with it, she says. “They’re looking to us for guidance and to see that we are giving them tools they can trust,” Frey says.
"
|
892 | 2020 |
"Many Top AI Researchers Get Financial Backing From Big Tech | WIRED"
|
"https://www.wired.com/story/top-ai-researchers-financial-backing-big-tech"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Many Top AI Researchers Get Financial Backing From Big Tech Illustration: Sam Whitney Save this story Save Save this story Save Application Ethics Company Alphabet Amazon Microsoft Google Facebook Nvidia End User Research Sector Education Research As a grad student working on artificial intelligence , Mohamed Abdalla could probably walk into a number of well-paid industry jobs. Instead, he wants to draw attention to how Big Tech’s big bucks may be warping the perspective of his field.
Abdalla, who is finishing his PhD at the University of Toronto, has coauthored a paper highlighting the number of top AI researchers—including those who study the ethical challenges raised by the technology—who receive funding from tech companies. That can be a particular problem, he says, when corporate AI systems raise ethical issues, such as algorithmic bias , military use , or questions about the fairness and accuracy of face recognition programs.
Abdalla found that more than half of tenure-track AI faculty at four prominent universities who disclose their funding sources have received some sort of backing from Big Tech. Abdalla says he doesn’t believe any of those faculty are acting unethically, but he thinks their funding could bias their work—even unconsciously. He suggests universities introduce rules to raise awareness of potential conflicts of interest.
Industry funding for academic research is nothing new, of course. The flow of capital, ideas, and people between companies and universities is part of a vibrant innovation ecosystem. But large tech companies now wield unprecedented power, and the importance of cutting-edge AI algorithms to their businesses has led them to tap academia for talent.
Students with AI expertise can command large salaries at tech firms, but companies also back important research and young researchers with grants and fellowships. Many top AI professors have been lured away to tech companies or work part-time at those companies. Besides money, large companies can offer computational resources and data sets that most universities cannot match.
A paper published in July by researchers from the University of Rochester and China’s Cheung Kong Graduate School of Business found that Google, DeepMind, Amazon, and Microsoft hired 52 tenure-track professors between 2004 and 2018. It concluded that this “brain drain” has coincided with a drop in the number of students starting AI companies.
The growing reach and power of Big Tech prompted Abdalla to question how it influences his field in more subtle ways.
Together with his brother, also a graduate student, Abdalla looked at how many AI researchers at Stanford, MIT, UC Berkeley, and the University of Toronto have received funding from Big Tech over their careers.
The Abdallas examined the CVs of 135 computer science faculty who work on AI at the four schools, looking for indications that the researcher had received funding from one or more tech companies. For 52 of those, they couldn’t make a determination. Of the remaining 83 faculty, they found that 48, or 58 percent, had received funding such as a grant or a fellowship from one of 14 large technology companies: Alphabet, Amazon, Facebook, Microsoft, Apple, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, or OpenAI. Among a smaller group of faculty that works on AI ethics, they also found that 58 percent of those had been funded by Big Tech. When any source of funding was included, including dual appointments, internships, and sabbaticals, 32 out of 33, or 97 percent, had financial ties to tech companies. “There are very few people that don’t have some sort of connection to Big Tech,” Abdalla says.
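The reported proportions follow directly from the counts above; the short restatement below simply makes the arithmetic explicit (the variable names are mine, not the paper's).

```python
# Restating the Abdallas' reported counts as proportions.
surveyed = 135        # AI faculty CVs examined at the four schools
undetermined = 52     # CVs where funding could not be determined
determinable = surveyed - undetermined   # 83 faculty
big_tech_funded = 48

print(f"{big_tech_funded}/{determinable} = {big_tech_funded / determinable:.0%}")  # 58%
print(f"32/33 = {32 / 33:.0%}")  # 97% when all financial ties are counted
```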
Abdalla says industry funding is not necessarily compromising, but he worries that it might have some influence, perhaps discouraging researchers from pursuing certain projects or prompting them to agree with solutions proposed by tech companies. Provocatively, the Abdallas’ paper draws parallels between Big Tech funding for AI research and the way tobacco companies paid for research into the health effects of smoking in the 1950s.
“I think that the vast majority of researchers are unaware,” he says. “They are not actively seeking to push one agenda or the other.” Others in the field of AI are concerned by the influence of industry money. At the year’s biggest gathering of AI researchers, a new workshop, called Resistance In AI, will look at how AI “has been concentrating power in the hands of governments and companies and away from marginalized communities.” But ties to industry are pervasive and often found across groups that examine ethical uses of AI. For instance, two out of the three cochairs of the Fairness, Accountability and Transparency conference, a prominent event that looks at the societal impact of AI, work for Alphabet subsidiaries.
Kristian Lum, a statistician at the nonprofit Human Rights Data Analysis Group , who is on the conference’s executive committee, says the conference, like other events, receives corporate sponsorships. But she says the conference’s policies state that the sponsors do not have control over the content or speakers. Lum says those involved with the conference are careful to disclose potential conflicts of interest.
“Big Tech does have a lot of power,” says Lum, whose employer is funded by foundations.
“I do think it’s something that people are increasingly aware of.” Others say the issue is more complicated.
Meredith Whittaker, a research scientist at NYU, previously worked for Google on a project that connected the company with academic research. She also led protests inside the company in 2018 against its policies on sexual misconduct and surveillance.
“People know who pays them,” she says. But she says it’s unfair to assume that someone funded by a company cannot be critical of Big Tech. She says several researchers who work at tech companies are critical of their employer’s technology. And she says pushback within companies can help check their power. “Worker organizing and worker dissent are only increasing as the status of this technology becomes more and more apparent,” she says.
A spokesperson for Google says the company’s policies prohibit staff from seeking to influence academic work. “Google’s collaborations with academic and research institutions are not driven by policy influence in any way,” the spokesperson says. “We are a huge supporter of academic research because it allows us to work with academics who are looking to solve the same problems that we are.”

Ben Recht, a professor at UC Berkeley, has previously criticized the idea of researchers simultaneously working for a university and a company. But he doesn’t think corporate funding for AI should be seen as inherently bad. “You can make a capitalist argument that it is good for companies to pursue ethical technology,” he says. “I think that this is something that many of them strive to do.” Recht also points out that even without industry funding, academics can produce ethically questionable work, like the algorithms that underpin face recognition or those that help turn social media platforms into echo chambers and sources of misinformation. And Recht also notes that the money that flows from government agencies, including the military, can also influence the direction of research.
For his part, Abdalla worries that drawing attention to the ties between Big Tech and academia might affect his prospects of getting a job, because professors are often expected to help bring in funding. “I was told not to push this,” he says.
"
|
893 | 2019 |
"A Sobering Message About the Future at AI's Biggest Party | WIRED"
|
"https://www.wired.com/story/sobering-message-future-ai-party"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business A Sobering Message About the Future at AI's Biggest Party “We have machines that learn in a very narrow way,” says Yoshua Bengio, an artificial intelligence researcher who shared computing's highest honor this year.
Photograph: Renaud Philippe/The New York Times/Redux Save this story Save Save this story Save Company Alphabet Facebook Google End User Research Sector IT Research Technology Neural Network More than 13,000 artificial intelligence mavens flocked to Vancouver this week for the world’s leading academic AI conference, NeurIPS.
The venue included a maze of colorful corporate booths aiming to lure recruits for projects like software that plays doctor. Google handed out free luggage scales and socks depicting the colorful bikes employees ride on its campus, while IBM offered hats emblazoned with “I ❤️A👁.” Tuesday night, Google and Uber hosted well-lubricated, over-subscribed parties. At a bleary 8:30 the next morning, one of Google’s top researchers gave a keynote with a sobering message about AI’s future.
Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task by experience or seeing labeled examples of correct answers.
“We’re kind of like the dog who caught the car,” Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but it doesn’t immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. “All of the models that we have learned how to train are about passing a test or winning a game with a score, [but] so many things that intelligences do aren’t covered by that rubric at all,” he said.
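To make the “labeled examples of correct answers” recipe concrete, here is a toy sketch (mine, not code from Google or any system mentioned here) in which a tiny neural network learns the XOR function by repeatedly nudging its weights toward the labels:

```python
import numpy as np

# A tiny neural network learning XOR from labeled examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the "correct answers"

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass: current guesses
    p = sigmoid(h @ W2 + b2)
    grad_p = p - y                      # error against the labels
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    W2 -= 0.1 * (h.T @ grad_p); b2 -= 0.1 * grad_p.sum(axis=0)
    W1 -= 0.1 * (X.T @ grad_h); b1 -= 0.1 * grad_h.sum(axis=0)

print(np.round(p.ravel(), 2))  # approaches the labels 0, 1, 1, 0
```

Scaled up enormously, this same loop is what gets phones to recognize faces and voices; the keynote’s point is that problems without such a clean answer sheet or score do not fit the mold.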
Hours later, one of the three researchers seen as the godfathers of deep learning also pointed to the limitations of the technology he had helped bring into the world. Yoshua Bengio, director of Mila, an AI institute in Montreal, recently shared the highest prize in computing with two other researchers for starting the deep learning revolution.
But he noted that the technique yields highly specialized results; a system trained to show superhuman performance at one videogame is incapable of playing any other. “We have machines that learn in a very narrow way,” Bengio said. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.” Bengio and Aguera y Arcas both urged NeurIPS attendees to think more about the biological roots of natural intelligence. Aguera y Arcas showed results from experiments in which simulated bacteria adapted to seek food and communicate through a form of artificial evolution. Bengio discussed early work on making deep learning systems flexible enough to handle situations very different from those they were trained on, and made an analogy to how humans can handle new scenarios like driving in a different city or country.
The cautionary keynotes at NeurIPS come at a time when investment in AI has never been higher. Venture capitalists sunk nearly $40 billion into AI and machine learning companies in 2018, according to Pitchbook, roughly twice the figure in 2017.
Discussion of the limitations of existing AI technology is growing too. Optimism from Google and others that self-driving taxi fleets could be deployed relatively quickly has been replaced by fuzzier and more restrained expectations.
Facebook’s director of AI said recently that his company and others should not expect to keep making progress in AI just by making bigger deep learning systems with more computing power and data. “At some point we're going to hit the wall,” he said. “In many ways we already have.” Some people at NeurIPS are working to climb or burrow under that wall. Jeff Clune, a researcher at Uber who will join nonprofit institute OpenAI next year, welcomed Bengio’s high profile call to think beyond the recent, narrow, successes of deep learning.
There are practical as well as scientific reasons to do so, he says. More general and flexible AI will help autonomous robots or other systems be more reliable and safe. “There’s a great business case for it,” he says.
Clune was due to present Friday on the idea of making smarter AI by turning the technology in on itself. He’s part of an emerging field called metalearning concerned with crafting learning algorithms that can devise their own learning algorithms. He has also created systems that generate constantly changing environments to challenge AI systems and prod them to extend themselves.
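In its crudest form, the metalearning idea can be sketched as an outer loop that scores candidate learning algorithms by how well the learners they configure turn out. The toy below (random search over a single knob of an inner gradient-descent learner) is a deliberately simplified stand-in for the methods Clune studies, not his actual work:

```python
import random

def inner_train(lr, steps=100):
    """Inner learner: gradient descent on the toy objective f(w) = (w - 3)^2."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)   # gradient of (w - 3)^2 is 2(w - 3)
    return (w - 3) ** 2          # final loss; lower is better

# Outer loop: propose candidate learning algorithms (here, learning rates)
# and keep whichever one produces the best-trained inner learner.
random.seed(0)
candidates = [10 ** random.uniform(-4, 0) for _ in range(20)]
best = min(candidates, key=inner_train)
print(f"outer loop chose lr={best:.4f}, final loss={inner_train(best):.2e}")
```

Real metalearning systems replace the random proposals with learned ones, and the single knob with the structure of the learning algorithm itself, but the two-level loop is the same.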
Like Aguera y Arcas, Clune says AI researchers should see the way nature generates endless new variety as an inspiration and a benchmark. “We as computer scientists don’t know any algorithms that you would want to run for a billion years and would still do something interesting,” Clune says.
As thousands of AI experts shuffled away from Bengio’s packed talk Wednesday, Irina Rish, an associate professor at the University of Montreal also affiliated with Mila, was hopeful his words would help create space and support for new ideas at a conference that has become dominated by the success of deep learning. “Deep learning is great, but we need a toolbox of different algorithms,” she says.
Rish recalls attending an unofficial workshop on deep learning at the 2006 edition of the conference, when it was less than one-sixth its current size and organizers rejected the idea of accepting the then-fringe technique in the program. “It was a bit of a religious meeting—believers gathered in a room,” Rish recalls, hoping that somewhere at NeurIPS this year are early devotees of ideas that can take AI to broader new heights.
"
|
894 | 2016 |
"Artificial Intelligence Just Broke Steve Jobs' Wall of Secrecy | WIRED"
|
"https://www.wired.com/2016/12/artificial-intelligence-just-broke-steve-jobs-wall-secrecy"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Artificial Intelligence Just Broke Steve Jobs' Wall of Secrecy Steve Jobs watches a demonstration at the Moscone Center in San Francisco, California, June 9, 2008.
JOHN G. MABANGLO/EPA/Redux Save this story Save Save this story Save The artificial intelligence researcher Russ Salakhutdinov made headlines today when he said was going to start publishing journal articles and spending time talking to academics.
That wouldn't be news, except Salakhutdinov works for Apple---a company famous for an extreme breed of corporate secrecy. Over the past two decades, people who work at Apple haven't talked to much of anyone about the far-reaching research (or anything else) happening inside the company. And that certainly includes academics.
But Salakhutdinov works in an area where secrecy just doesn't play. As it happens, this field of research is more important to the future of tech giants like Apple than any other. Salakhutdinov oversees Apple's artificial intelligence group, and the only way he can recruit top researchers is to reassure them that once they get to Apple, they can continue to publish their work and share their ideas with the larger AI community. This free exchange of ideas is just the academic way, but experts also believe it's the best way to accelerate the progress of AI. "When you do research in secret, you fall behind," Facebook AI head Yann LeCun told me earlier this year.
If Apple wants to keep up with Facebook and the other big names that have already embraced the AI technique called deep learning so completely---Google, Microsoft, the Elon Musk-backed startup OpenAI---Apple must share research as these others have done. OpenAI was founded on the idea that it would freely share all its research---or at least as much as it possibly could. With this pitch, it landed some of the field's top talent, poaching several researchers from Facebook and Google.
As deep learning research has accelerated across Silicon Valley and beyond, some thinkers have questioned whether Apple could keep up.
"The best and the brightest from the deep learning area have not gone there for a reason," says Oren Etzioni, the CEO of the Allen Institute on AI. Etzioni is talking about that Steve-Jobsian secrecy, yes, but also Apple's hardline approach to privacy. Deep learning, you see, requires enormous amounts of digital data, and Apple's privacy policies could restrict how much data it can collect for training deep neural networks. But clearly, Apple is intent on embracing this data-hungry approach to AI.
The turning point came earlier this year when the company hired Salakhutdinov, a Carnegie Mellon professor who will continue to spend part of his time at the university. This week was his coming out party. He appeared on stage at NIPS, the machine learning conference that has become the centerpiece of the AI year. That's where he announced that Apple would start publishing its AI research.
Sure, Apple won't share all its work---no company does. They all still want to maintain an edge over the competition. But that edge comes mostly from data and having the talent that can find the next big thing before anyone else. That's the irony of the AI revolution: If Apple wants to stay ahead of its competition, it has to finally start giving away its secrets.
Update: This story has been updated to clarify that Salakhutdinov will remain an active faculty member at Carnegie Mellon.
"
|
895 | 2013 |
"Facebook Taps 'Deep Learning' Giant for New AI Lab | WIRED"
|
"https://www.wired.com/2013/12/facebook-yann-lecun"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Facebook Taps 'Deep Learning' Giant for New AI Lab Yann LeCun WIRED/Josh Valcarcel Save this story Save Save this story Save Facebook is building a research lab dedicated to the new breed of artificial intelligence, after hiring one of the preeminent researchers in the field: New York University professor Yann LeCun.
With a post to Facebook this morning, LeCun announced that he had been tapped to run the lab, and the company confirmed the news with WIRED.
"Facebook has created a new research laboratory with the ambitious, long-term goal of bringing about major advances in Artificial Intelligence," LeCun wrote, adding that Facebook's AI lab will include operations in Menlo Park, California, at the company's headquarters; in London; and at Facebook's new offices in New York City. In an email to WIRED , he said that he would remain in his position as a professor at NYU, maintaining teaching and research duties part-time, but that he would be based at Facebook's Manhattan office, which is only a a block from NYU's main campus.
LeCun sits at the heart of a new AI movement known as "deep learning." The movement began in the academic world, but is now spreading to the giants of the web, including not only Facebook but Google, companies that are constantly looking for new means of building services that can interact with people more like the way we interact with each other. Google is already using deep learning techniques to help analyze and respond to voice commands on its Android mobile operating system.
With deep learning, the basic idea is to build machines that actually operate like the human brain -- as opposed to creating systems that merely take a shortcut to solving problems that have traditionally required human intelligence. In the past, for instance, something like Google's search engine has tried to approximate human intelligence by rapidly analyzing enormous amounts of data, but people like LeCun aim to build massive "neural networks" that actually mimic the way the brain works.
The trouble is that we don't completely understand how the brain works. But in recent years, LeCun and others in this field, including, most notably, University of Toronto professor Geoffrey Hinton, have made some significant progress in the area of deep learning, so much so that they're now being hired by the giants of the tech world. As LeCun builds an AI lab at Facebook, Hinton is now on staff at Google, building a system alongside other researchers from Toronto.
Andrew Ng, the Stanford researcher who founded Google's deep learning project , known as the Google Brain, says that LeCun and Facebook are a natural fit. "Yann LeCun's move will be an exciting step both for machine learning and for Facebook," Ng says.
"Machine learning is already used in hundreds of places throughout Facebook, ranging from photo tagging to ranking articles to your news feed. Better machine learning will be able to help improve all of these features, as well as help Facebook create new applications that none of us have dreamed of yet." Additional reporting by Daniela Hernandez Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Senior Writer X Topics deep learning Enterprise Facebook Google research Morgan Meaker Reece Rogers Gregory Barber Caitlin Harrington Nelson C.J.
"
|
896 | 2018 |
"Emmanuel Macron Q&A: France's President Discusses Artificial Intelligence Strategy | WIRED"
|
"https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Nicholas Thompson Business Emmanuel Macron Talks to WIRED About France's AI Strategy "If I manage to build trust with my citizens for AI, I’m done. If I fail building trust with one of them, that’s a failure," says French President Emmanuel Macron.
Laura Stevens Save this story Save Save this story Save Application Ethics Regulation End User Government Sector Public safety On Thursday, Emmanuel Macron, the president of France, gave a speech laying out a new national strategy for artificial intelligence in his country. The French government will spend €1.5 billion ($1.85 billion) over five years to support research in the field, encourage startups, and collect data that can be used, and shared, by engineers. The goal is to start catching up to the US and China and to make sure the smartest minds in AI— hello Yann LeCun —choose Paris over Palo Alto.
Directly after his talk, he gave an exclusive and extensive interview, entirely in English, to WIRED Editor-in-Chief Nicholas Thompson about the topic and why he has come to care so passionately about it.
Nicholas Thompson: First off, thank you for letting me speak with you. It was refreshing to see a national leader talk about an issue like this in such depth and complexity. To get started, let me ask you an easy one. You and your team spoke to hundreds of people while preparing for this. What was the example of how AI works that struck you the most and that made you think, ‘Ok, this is going to be really, really important’?

Emmanuel Macron: Probably in healthcare—where you have this personalized and preventive medicine and treatment. We had some innovations that I saw several times in medicine to predict, via better analysis, the diseases you may have in the future and prevent them or better treat you. A few years ago, I went to CES. I was very impressed by some of these companies. I had with me some French companies, but I discovered US, Israeli and other companies operating in the same field. Innovation that artificial intelligence brings into healthcare systems can totally change things: with new ways to treat people, to prevent various diseases, and a way—not to replace the doctors—but to reduce the potential risk.
The second field is probably mobility: we have some great French companies and also a lot of US companies performing in this sector.
Autonomous driving impresses me a lot. I think these two sectors, I would say, healthcare and mobility, really struck me as promising. It’s impossible when you are looking at these companies, not to say, Wow, something is changing drastically and what you thought was for the next decade, is in fact now. There is a huge acceleration.
NT: It seems you’re doing this partly because it is clearly in France’s national interest to be strong in AI. But it also seemed in the speech that you feel like there are French or European values that can help shape the development of AI? Is that correct, and what are those values?

EM: I think artificial intelligence will disrupt all the different business models and it’s the next disruption to come. So I want to be part of it. Otherwise I will just be subjected to this disruption without creating jobs in this country. So that’s where we are. And there is a huge acceleration and as always the winner takes all in this field. So that’s why my first objective in terms of education, training, research, and the creation of startups is to streamline a lot of things, to have the adaptable systems, the adapted financing, the adapted regulations, in order to build champions here and to attract the existing champions.
[Photo: Laura Stevens]

But you’re right at the same time: AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.
When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution . That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.
The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.
NT: So the stakes here in your mind aren’t just French economic growth, it’s the whole value system that will be incorporated into this transformative technology the world over. And you want to make sure that the values you have, your country has, your continent has, are involved in that?

EM: Sure, exactly. I want to create an advantage for my country in artificial intelligence, directly. And that’s why we have these announcements made by Facebook, Google, Samsung, IBM, DeepMind, Fujitsu who choose Paris to create AI labs and research centers: this is very important to me. Second, I want my country to be part of the revolution that AI will trigger in mobility, energy, defense, finance, healthcare and so on. Because it will create value as well. Third, I want AI to be totally federalized. Why? Because AI is about disruption and dealing with impacts of disruption. For instance, this kind of disruption can destroy a lot of jobs in some sectors and create a need to retrain people. But AI could also be one of the solutions to better train these people and help them to find new jobs, which is good for my country, and very important.
I want my country to be the place where this new perspective on AI is built, on the basis of interdisciplinarity: this means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line. I think privacy has been a hidden debate for a long time in the US. Now, it emerged because of the Facebook issue. Security was also a hidden debate of autonomous driving. Now, because we’ve had this issue with Uber, it rises to the surface. So if you don't want to block innovation, it is better to frame it by design within ethical and philosophical boundaries. And I think we are very well equipped to do it, on top of developing the business in my country.
But I think as well that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box; they don't understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm. I have to be able to tell French citizens, "OK, I encouraged this innovation because it will allow you to get access to new services, it will improve your lives—that's a good innovation for you." I have to guarantee there is no bias in terms of gender, age, or other individual characteristics, except if it is one I have decided on their behalf and in front of them. This is a huge issue that needs to be addressed. If you don't deal with it from the very beginning, if you don't consider it as important as developing innovation, you will miss something, and at a point in time it will block everything. Because people will eventually reject this innovation.
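The kind of audit Macron describes can be expressed in a few lines of code. Below is a minimal, hypothetical sketch (not any system the French government actually uses) of one such check: comparing an algorithm's acceptance rates across a protected attribute such as gender, where a large gap would flag the selection process for human review. All names and data here are illustrative.

```python
# Hypothetical fairness audit: compare acceptance rates across groups.
# A large gap between groups (a "demographic parity" violation) would
# flag the selection algorithm for human review.
from collections import defaultdict

def acceptance_rates(decisions):
    """decisions: iterable of (group, accepted) pairs -> rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

# Illustrative data only: (applicant group, admission decision)
sample = [("F", True), ("F", False), ("F", True), ("M", True), ("M", True)]
print(acceptance_rates(sample))  # roughly {'F': 0.67, 'M': 1.0} -> gap to investigate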
NT: So the steps you're taking to guarantee that are, first, that all algorithms developed by the French government will be open, and second, that algorithms developed by any company getting money from the French government will also be required to be open? EM: Yes.
NT: And is there a third step you're taking to help guarantee this transparency?
EM: We will increase the collective pressure to make these algorithms transparent. We will open up government data and data from publicly funded projects, and we will favor and incentivize private players to make their algorithms public and transparent. Obviously, some of them will say: there is commercial value in my algorithm, I don't want to make it transparent. But I think we need a fair discussion between service providers and consumers, who are also citizens and will say: "I have to better understand your algorithm and be sure that it is trustworthy." The power of consumer society is so strong that it gets people to accept providing a lot of personal information in order to get access to services largely driven by artificial intelligence on their apps, laptops and so on. But at some point, as citizens, people will say, "I want to be sure that all of this personal data is not used against me, but used ethically, and that everything is monitored. I want to understand what is behind this algorithm that plays a role in my life." And I'm sure that a lot of startups or labs or initiatives that emerge in the future will reach out to their customers and say, "I allow you to better understand the algorithm we use and its bias or non-bias." I'm quite sure that's one of the next waves coming in AI. I think it will increase the pressure on private players. These new apps or sites will be able to tell people: "OK! You can go to this company or this app because we cross-check everything for you. It's safe," or on the contrary: "If you go to this website or this app or this research model, it's not OK, I have no guarantee, I was not able to check or access the right information about the algorithm."
NT: When you talk about how AI will transform democracy, do you imagine a day where you make decisions based on recommendations from AI-based algorithms, where there's a system that tells you what a labor reform should be and you say, "OK?" EM: At this point, I think it could help you. But it will never replace the way you decide. When you make a decision, it's the result of a series of cross-checks. AI can help you because sometimes when you pass a reform, you're not totally clear about the potential effects, direct or indirect, and you can have hesitations. So it can help you to make the right decision. An algorithm is relevant for this part of the equation. For instance, on economic and social reforms, to have a clearer view about direct and indirect measurable effects. But on top of it, when you take a political decision, you need to have a part of personal judgment. That's the quality of the decision maker, and artificial intelligence will never replace that. And there is a thing that AI could never replace, which is accountability and responsibility. Because the decision is theirs and they will be held accountable for it, a political leader could never say, "OK, I'm sorry, this decision was bad because it was the decision of an algorithm." NT: Let's get back to disruption for a second. You've talked a lot about transportation, you talked about it in your speech. AI is going to massively disrupt transportation, and it's going to make a lot of people lose their jobs as we go to driverless cars. It will create new jobs, but this is already an area where people in France have been protesting. There were railroad strikes this weekend, there were trucker strikes this fall. Aren't you taking a huge risk by aligning yourself with a force that is going to disrupt an industry that has already been protesting like crazy? EM: Look, I think in this country, and in a lot of countries, you have a tradition of controversy. I launched a series of reforms that a lot of people thought impossible to carry out in France. So I'm absolutely sure it's possible, when you explain to people, when you have the energy and determination, to pass such reforms. I'm certainly not reluctant to do so, and I'm certainly not, I would say, upset or threatened by dealing with artificial intelligence and convincing my people of its rightful implementation. As consumers, they are already big fans of artificial intelligence. And big fans of innovative solutions. All the tech guys can tell you that the French market is a very good market. People love technology here. I think that's why the overall philosophy I have stuck to from the very beginning of my mandate is to say: blocking changes and being focused on protecting jobs is not the right answer. It's the people you need to protect. You do so by giving them opportunities and by training and retraining them again to get new jobs. Don't block the change, because it's coming and people will accept it. But try to be at the forefront of change, to better understand it and deal with it. Change can destroy jobs in the very short run, but create new ones in other sectors at the same time.
For me, one of the key issues of artificial intelligence is that it will probably reduce the most repetitive and strenuous human activities. And naturally it will raise a whole range of other opportunities for people with low, middle and high qualifications. The big risk for our society is to increase opportunities only for very highly qualified people and, in a way, very low-qualified workers. It is especially necessary to monitor the qualification of the middle class, because they can be the most disrupted. If I take your examples, that would encompass taxi drivers, people working in industry, or people working in highly repetitive tasks. So you have to train them either to change their sector of activity or to increase their qualification to work with a machine. We will need people working with machines.
I do not believe that autonomous vehicles will exist without any drivers at all. For me, that's pure imagination. You already have fully automated programs to drive planes; we technically could have planes with no pilots. But you still have two pilots in every plane, even if almost everything is automated. That's because you need to have responsibility, precisely. So what we will reduce with autonomous cars is the number of risks. What you will reduce is how painful it is to be a driver for a long period of time; but you will need people to make the critical choice at critical moments for autonomous vehicles. I'm almost sure about that. So AI will change the practice, but it will not kill transportation jobs in many cases.
Bottom line, my point is: I can convince my country about change precisely because I embrace it. My role is not to block this change, but to be able to train or retrain people for them to get opportunities in this new world.
NT: Got it. I want to ask you a military question. I know that the UN has had discussions on restrictions on lethal autonomous weapons. Do you think machines—artificial intelligence machines—can ever be trusted to make decisions to kill without human intervention? EM: I'm dead against that. Because I think you always need responsibility and assertion of responsibility. And technically speaking, you can have, in some situations, some automation which will be possible. But automation or machines put in a situation precisely to do that would create an absence of responsibility. Which, for me, is a critical issue. So that's absolutely impossible. That's why you always need a human check. And in certain ways, a human gateway. At a point of time, the machine can prepare everything, can reduce the uncertainties to nil, and that's an improvement which is impossible without it; but at a point of time, the go or no-go decision should be a human decision, because you need somebody to be responsible for it.
NT: Let me ask you about the national competition in artificial intelligence. Elon Musk tweeted some months ago: "Competition for AI superiority at national level most likely cause of World War 3 in my opinion." Do you think Musk is overstating it? Or do you think that this is going to get very intense, particularly between the United States and China? EM: I think it will become very intense. I will not be so pessimistic, because I think that the core basis of artificial intelligence is research. And research is global. And artificial intelligence deals with cooperation and competition, permanently. So you need an open world and a lot of cooperation if you want to be competitive. And at a point of time, on some issues, you need competition. But I think you will have to rethink a sort of sovereignty. I addressed that in my speech today. Artificial intelligence is a global innovation scheme in which you have private big players and one government with a lot of data—China. My goal is to recreate a European sovereignty in AI, as I told you at the beginning of this discussion, especially on regulation. You will have sovereignty battles about regulation, with countries trying to defend their collective choices. You will have a trade and innovation fight, precisely as you have in different sectors. But I don't believe that it will go to the extremes Elon Musk talks about, because I think if you want to progress, there is a huge advantage in an open innovation model.
NT: So this is a slightly cynical response to that, but let me ask you this: If France starts to build up an AI sector, in some ways it's competitive with Google and Facebook. So won't there be an incentive for Europe and for France to regulate Facebook and Google in ever-tougher ways? Doesn't it create a strange dynamic where you might have incentives to bring more regulation and antitrust? EM: Look, I would say exactly the opposite. Today, on artificial intelligence, Google, Facebook, and so on are very much welcome. Most people like them; these companies invest in France, they also recruit a lot of our talents, and they develop their jobs here. So they are part of our ecosystem. The issue for these big players is the fact that they will have to deal with several issues. First, they have a very classical issue in a monopoly situation; they are huge players. At a point of time—but I think it will be a US problem, not a European problem—your government, your people, may say, "Wake up. They are too big." Not just too big to fail, but too big to be governed. Which is brand new. So at this point, you may choose to dismantle. That's what happened at the very beginning of the oil sector when you had these big giants. That's a competition issue.
But second, I have a territorial issue due to the fact that they are totally digital players. They disrupt traditional economic sectors. In some ways, this might be fine because they can also provide new solutions. But we have to retrain our people. These companies will not pay for that; the government will. Today the GAFA [an acronym for Google, Apple, Facebook, and Amazon] don't pay all the taxes they should in Europe. So they don't contribute to dealing with the negative externalities they create. And they ask the sectors they disrupt to pay, because these players, the old sectors, pay VAT, corporate taxes and so on. That's not sustainable.
Third, people should remain sovereign when it comes to privacy rules. France and Europe have their preferences in this regard. I want to protect privacy in this way or in that way. You don't have the same rules in the US. And speaking about US players, how can I guarantee French people that US players will respect our regulation? So at a point of time, they will have to create actual legal bodies incorporated in Europe and be subject to these rules. Which means in terms of processing information, organizing themselves, and so on, they will need, indeed, a much more European or national organization. Which in turn means that they will have to redesign themselves for a much more fragmented world. And that's for sure, because accountability and democracy happen at national or regional level, but not at a global scale. If I don't walk down this path, I cannot protect French citizens and guarantee their rights. If I don't do that, I cannot guarantee French companies that they are fairly treated. Because today, when I speak about GAFA, they are very much welcome, and I want them to be part of my ecosystem, but they don't play on the same level playing field as the other players in the digital or traditional economy. And I cannot in the long run guarantee my citizens that their collective preferences or my rules can be totally implemented by these players, because you don't have the same regulation on the US side. All I know is that if I don't, at a point of time, have this discussion and regulate them, I put myself in a situation not to be sovereign anymore.
NT: But aren't those two goals very much in tension? You want the GAFA to come to France, you've touted it—Google has been invested in AI [in France] since 2012—but you also really want to crack down on them. How do you do both simultaneously? EM: No. Look, because I think first, you don't just have the GAFA. You have a lot of other players, startups, and so on. And I think, even for them, I mean they are discovering they will have to deal with democratic and political issues in your country.
NT: They’re just starting to learn that! EM: Yes, yes! I mean, it’s fair. That’s the end of the very first phase, that was a sort of an early phase without any regulation, where they were in a situation to set up all the rules. Now they will have to deal with governments — but I want to do it in a cooperative way. I don't want to say, “I don’t want this guy anymore.” Exactly the opposite. I want a permanent dialogue. But I want them to understand and respect my constraints. I want them to be part of my reflection and to take into consideration their own reflection. I want them to better understand the fact that it is unfeasible to have a world without any responsibility and without a clear democratic accountability.
NT: Got it. So back to the big question: what will success be? How will you know that this has worked? And what will failure be, when you look at this a couple of years from now? EM: Look, first of all, I think it's very hard to answer this question, because by definition I don't have a clear view of what will happen in artificial intelligence in five years' time. But I would say: if I manage to develop a very strong, powerful ecosystem, number one in Europe on artificial intelligence, dealing with mobility, defense, healthcare, fintech, and so on, I think it will be a success. And for me, if a majority of people in France understand and endorse this change, it will be a success. It will be a failure if we are stuck with fears and blocked by big scares. My concern is that there is a disconnect between the speed of innovation and some practices, and the time of digestion for a lot of people in our democracies. I have to build a sort of reciprocal or mutual trust with researchers, private players, startups, and my citizens. If the first category of people trusts France as a relevant ecosystem for them, and at the same time I manage to build trust with my citizens for AI, I'm done. If I fail to build trust with one of them, that's a failure.
"
|
897 | 2,003 |
"Microsoft Sued for Weak Security | WIRED"
|
"https://www.wired.com/2003/10/microsoft-sued-for-weak-security"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Reuters Business Microsoft Sued for Weak Security Save this story Save Save this story Save LOS ANGELES -- Microsoft faces a proposed class-action lawsuit in California based on the claim that its market-dominant software is vulnerable to viruses capable of triggering "massive, cascading failures" in global computer networks.
The lawsuit, which was filed on Tuesday in Los Angeles Superior Court, also claims that Microsoft's security warnings are too complex to be understood by the general public and serve instead to tip off "fast-moving" hackers on how to exploit flaws in its operating system.
The suit claims unfair competition and the violation of two California consumer rights laws, one of which is intended to protect the privacy of personal information in computer databases. It asks for unspecified damages and legal costs, as well as an injunction against Microsoft barring it from unfair business practices.
Many of the arguments in the lawsuit and some of its language echoed a report issued by computer security experts in late September, which warned that the ubiquitous reach of Microsoft's software on desktops worldwide had made computer networks a national security risk.
That report, presented to the Computer and Communications Industry Association, a trade group representing Microsoft's rivals, said the complexity of Microsoft's software made it particularly vulnerable.
Microsoft said it had received a copy of the lawsuit and that its lawyers were reviewing it, but could not comment immediately.
Dana Taschner, a Newport Beach, California, lawyer who filed the lawsuit on behalf of a single plaintiff and a potential class of millions of Microsoft customers, could not be immediately reached for comment.
"Microsoft's eclipsing dominance in desktop software has created a global security risk," the lawsuit says. "As a result of Microsoft's concerted effort to strengthen and expand its monopolies by tightly integrating applications with its operating system ... the world's computer networks are now susceptible to massive, cascading failure." With about $49 billion in cash and more than 90 percent of the market in PC operating systems, Microsoft has long been seen as a potential target for massive liability lawsuits.
But the company, which has been moving to settle antitrust claims that it abused its monopoly of PC software, has also been seen as shielded from liability claims by disclaimers contained in the licenses that users must agree to when installing software, according to experts.
The lawsuit comes in the wake of two major viruses that have recently taken advantage of flaws in Microsoft software.
Slammer, which targeted computers running Microsoft's server-based software for databases, slowed down Internet traffic across the globe and shut down flight reservation systems and cash machines in the United States.
The Blaster worm, meanwhile, burrowed through hundreds of thousands of computers, destroying data and launching attacks on other computers.
Since early 2002 Microsoft has made computer security a top priority under a "Trustworthy Computing" initiative spearheaded by the company's founder and chairman, Bill Gates.
"
|
898 | 2,023 |
"Watch President Barack Obama on What AI Means for National Security | The Frontiers Issue with Guest Editor President Barack Obama | WIRED"
|
"https://www.wired.com/video/watch/the-frontiers-issue-with-guest-editor-president-barack-obama-president-barack-obama-on-the-challenges-of-cyber-security"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons President Barack Obama on What AI Means for National Security About Credits Released on 10/12/2016 Some really positive outcomes, but there are certainly some risks.
Certainly we've heard from some folks like Elon and Nick Bostrom concerned about A.I.'s potential to outpace our ability to understand it.
What about those concerns and how do we think about that moving forward to protect not only ourselves, but humanity at scale? So let me start with what I think is the more immediate concern, that's a solvable problem, but we have to be mindful of it.
That is this category of specialized A.I.
If you've got a computer that can play Go, it's a pretty complicated game with a lot of variations.
Developing an algorithm that simply says maximize profits on the New York Stock Exchange is probably within sight.
If one person, or one organization, got there first, they could bring down the stock market pretty quickly.
Or at least they could raise questions about the integrity of the financial markets.
An algorithm that said go figure out how to penetrate the nuclear codes of a country and figure out how to launch some missiles.
If that's their only job, it's very narrow.
It doesn't require a super intelligence, it just requires a really effective algorithm.
If it's self teaching, then you've got problems.
So part of, I think, my directive to my national security team is don't worry as much yet, about machines taking over the world.
Do worry about the capacity of either non-state actors or hostile actors, to penetrate systems.
In that sense it's not conceptually different, or different in a legal sense, than a lot of the cybersecurity work that we're doing.
It just means that we're gonna have to be better because those who might deploy these systems are going to get a lot better.
Now, I think as a precaution, and all of us have spoken to folks like Elon Musk who are concerned about the super intelligent machine.
There's some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon.
If we can see that coming over the course of three decades, five decades, you know, whatever the latest assessments are, if ever, because there also are arguments that this thing is a lot more complicated than people make it out to be.
Then future generations, or our kids or our grandkids are gonna be able to see it coming and figure it out.
But I do worry right now about specialized A.I.
I was on the West Coast and some kid who looked like he was 25 shows me a laptop.
Or not a laptop, an iPad, and he says, "This is the future of radiology."
And he's got an algorithm that is learning sufficient pattern recognition that over time, it's gonna be a better identifier of disease than a radiologist would be.
If that's already happening today on an iPad, you know invented by some kid at MIT, then the vulnerability of a lot of our systems is gonna be coming around pretty quick.
We're gonna have to have some preparation for that.
But Joey may have worse nightmares.
I generally agree.
The only caveat is, I would say, there are a few people who believe general A.I. will happen at some fairly high percentage chance in the next 10 years.
And these are people who are smart.
So I do think that keeping aware.
But the way I look at it is there's maybe a dozen or two different breakthroughs that need to happen for each of the pieces.
So you can kind of monitor it.
You don't know exactly when they're going to happen because they're by definition breakthroughs, and I think it's when you think these breakthroughs will happen.
And you just have somebody close to the power cord.
(all laughing) So right when you see it about to end, you gotta yank that white piece out of the wall, man.
I'm completely with the President, that short term, it's gonna be bad people using A.I.'s for bad things and they'll be an extension of us.
Then there's this other meta thing which happens which is a group of people.
So if you look at all of the hate on the internet.
One person doesn't control that.
[President Obama] Right.
But it's a thing.
It points at things, it's definitely fueling some political activity right now.
It's kind of got a life of its own.
It's not even code, it's a culture.
And you see that also in the Middle East, right? [President Obama] Which is why it's so hard to prevent.
Yeah, because it actually gets stronger the more you attack it.
To me, what's curious and interesting is going to be the relationship between an A.I., say a service that runs like that, and then you throw in bitcoin, which is the ability to move money around by a machine.
[Interviewer] Anonymously.
Anonymously, so to me, it will be this weird, and again, this is where I think it could be embedded, but if you gave this sort of mob more tools.
'Cause they are actually fairly coordinated in their own peculiar way.
On the good side is, you can imagine, I was talking to some politicians like Michael Johnson in Colorado, he's trying to figure out how can we harness these things to inform and engage citizens.
So to me, the problem is if you suppress it because of fear, the bad guys will still use it.
What's important is to get people who want to use it for good, communities and leaders, and figure out how to get them to use it so that's where we start to lean.
Yeah, this may not be a precise analogy.
Traditionally when we think about security and protecting ourselves, we think in terms of we need armor, or walls, from swords, blunt instruments, et cetera.
Increasingly, I find myself looking to medicine and thinking about viruses, antibodies, right? You know how do you create healthy systems, that can ward off destructive elements? In a distributed way.
In a distributed way and that requires more imagination and we're not there yet.
It's part of the reason why cyber security continues to be so hard.
Is because the threat is not a bunch of tanks rolling at you, but a whole bunch of systems that may be vulnerable to a worm getting in there.
It means that we've gotta think differently about our security.
Make different investments that may not be as sexy, but actually may end up being as important as anything.
Part of the reason I think about this is because I also think that what I spend a lot of time worrying about are things like pandemic.
You can't build walls in order to prevent the next airborne lethal flu from landing on our shores.
Instead what we have to do is be able to set up systems to create public health systems in all parts of the world, quick triggers that tell us when we see something emerging.
Make sure we've got quick protocols, systems, that allow us to make vaccines a lot smarter.
So if you take that model, a public health model, when you think about how we can deal with the problems of cyber security, a lot of that may end up being really helpful in thinking about the A.I. threats.
And just one thing that I think is interesting, is when we start to think about microbio, and microbes everywhere, there's a lot of evidence to show that introducing good bacteria to fight against the bad bacteria is a strategy and not to sterilize.
Well I still don't let Sonny and Bo lick me.
(all chuckling) 'Cause when I walk them on the South Lawn, some of the things I see them do, ya know, and chewing on I'm all like, hey man.
Stay away.
I think research has shown that actually opening windows in hospitals instead of just sterilizing the air may actually limit infections.
So we have to rethink what clean means.
It's similar whether you're talking about cyber security or national security, I think the notion that you can make straight borders or that you can eliminate every possible pathogen is difficult.
I think in that sense, in your position, to be able to see medicine and cyber and A.I., I think that's an important thing.
Absolutely.
So there are distributed threats, but is there also the risk that this creates a new kind of arms race? Look, I think there's no doubt that developing international norms, rules, protocols, verification mechanisms around cyber security generally, and A.I. in particular, is in its infancy.
Part of the reason for that is, as Joey identified, we've got a lot of non-state actors who are the biggest players.
Part of the problem is that identifying who is doing what is much more difficult.
If you're building a bunch of ICBMs, we see 'em.
If somebody's sitting at a keyboard, we don't.
So, we've begun this conversation.
A lot of the conversation right now is not at the level of dealing with real sophisticated A.I., but has more to do with essentially states establishing norms about how they use their cyber capabilities.
Part of what makes this an interesting problem is that the line between offense and defense is pretty blurred.
The truth of the matter is, and part of the reason why, for example, the debate here about cyber security.
Who are you more afraid of, big brother and the state? Or the guy who's trying to empty out your bank account? Part of the reason that's so difficult, is that if we're going to police this wild west, whether it's the internet or A.I. or any of these other areas, then by definition, the government's gotta have capabilities.
If it's got capabilities, then they're subject to abuse.
At a time when there's been a lot of mistrust built up, about government, that makes it difficult.
When you have countries around the world who see America as the preeminent cyber power, now's the time for us to say, we're willing to restrain ourselves, if you are willing to restrain yourselves.
The challenge is the most sophisticated state actors, Russia, China, Iran, don't always embody the same norms or values that we do.
But we're gonna have to surface this as an international issue in order for us to be effective.
'Cause effectively it's a borderless problem, and ultimately, all states are gonna have to worry about this.
It is very shortsighted if there's a state that thinks that it can develop super capacities in this area without some 25-year-old kid in a basement somewhere figuring that out pretty quickly.
Starring: President Barack Obama, Editor in Chief Scott Dadich, MIT Media Lab Director Joi Ito
"
|
899 | 2,018 |
"Pentagon Will Expand AI Project Prompting Protests at Google | WIRED"
|
"https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Pentagon Will Expand AI Project Prompting Protests at Google Department of Defense Save this story Save Save this story Save Application Ethics Company Alphabet Google End User Government Sector Defense Source Data Video Technology Machine vision At Google’s campus in Mountain View, California, executives are trying to assuage thousands of employees protesting a contract with the Pentagon’s flagship artificial-intelligence initiative, Project Maven. Thousands of miles away, algorithms trained under Project Maven—which includes companies other than Google—are helping war fighters identify potential ISIS targets in video from drones.
The controversy around Silicon Valley’s cooperation with the military may intensify in coming months as Project Maven expands into new areas, including developing tools to more efficiently search captured hard drives. Funding for the project roughly doubled this year, to $131 million. Now the Pentagon is planning a new Joint Artificial Intelligence Center to serve all US military and intelligence agencies that may be modeled on Project Maven. “It’s exceeding my expectations,” says Bob Work, who established Project Maven in April 2017 while serving as deputy secretary of defense, before retiring later in the year.
Google’s precise role in Project Maven is unclear—neither the search company nor the Department of Defense will say. Two people familiar with the project said another company built the systems deployed on drone missions overseas.
Project Maven is formally known as the Algorithmic Warfare Cross-Functional Team. A seal for the group in a recent presentation, from project chief Lt. Gen. Jack Shanahan, depicts a trio of cheery cartoon robots under a Latin motto that Google Translate renders as "Our job is to help." The seal for Project Maven.
The effort was created to demonstrate how the Pentagon could transform military operations by tapping AI technology already established in the private sector. On a trip to Silicon Valley last summer, Defense Secretary James Mattis lamented how his department lags the capabilities of tech companies he visited, such as Amazon and Google.
Processing drone video was selected as Project Maven’s first mission, Work says, because the Pentagon’s analysis tools can’t keep pace with the tidal wave of high-resolution aerial imagery swamping US bases. The plan was to deploy machine-learning techniques that internet companies use to distinguish cats and cars to spot and track objects of military interest, such as people, vehicles, and buildings. The initial goal was to have a system helping analysts in the field by December 2017.
That target was met handily. The Defense Department said in December that algorithms bought from unidentified contractors were helping on bases fighting ISIS. At a conference in Washington this month, Lt. Col. Garry Floyd said technology developed for Maven was being used by the US military’s Middle East and Africa commands, and had been expanded to a half-dozen combat locations. William Carter, deputy director of the technology policy program at the Center for Strategic and International Studies, says that progress is remarkable for a department famed for glacial acquisitions processes. “By DoD standards, this is literally a work of magic,” says Carter, who has been briefed by Shanahan and others on Project Maven.
The technology fielded under Maven can automatically annotate objects such as boats, trucks, and buildings on digital maps. Work notes that this helps analysts with tasks like identifying targets or understanding a group’s pattern of activity, by reducing time spent scouring screens just to find objects of interest. The software deployed to bases also has features that let analysts help retrain the algorithms, by quickly tagging new objects of interest or flagging errors.
Google’s exact function in all that is unclear. The company says it is helping the Pentagon use its open source TensorFlow machine-learning software to train algorithms on unclassified drone imagery, and that the technology is limited to “non-offensive” uses. Google’s director of AI told WIRED the work is " mundane " when asked about the internal protests last month. A Pentagon spokesperson said Project Maven “includes many leading technology and artificial intelligence companies,” but declined to identify any. Carter of the Center for Strategic and International Studies, along with another person familiar with Project Maven, say that a company other than Google developed the technology deployed in operations against ISIS.
In his recent talk, Shanahan said the project is beginning to grow. That includes deploying Project Maven's drone surveillance algorithms more widely. The initial system was developed for smaller drones that fly at relatively low altitudes, such as the 1.4-meter, 20-kilogram ScanEagle. Shanahan said his team is now "refining" algorithms for drones that fly higher, and will soon work on high-altitude surveillance aircraft. His slides depicted the 15-meter-long Global Hawk, which flies at up to 18,000 meters (60,000 feet) and carries sophisticated conventional and infrared cameras. Ultimately, the goal is to integrate Maven's algorithms onto drones themselves, he added.
Shanahan also said Project Maven will soon start applying AI to new areas of military operations. One is speeding the process of sorting through material captured in raids—machine-learning algorithms could be used to help analysts look for the most important material on captured hard drives. He said Project Maven will look at how AI could help military or intelligence analysts assess the relative importance of different enemy targets.
Project Maven’s future may be more expansive than the handful of projects Shanahan described. Pentagon R&D chief Mike Griffin is due to submit a proposal to Congress this summer outlining the joint artificial intelligence center to accelerate military and intelligence use of AI. “My understanding is that more money is being pushed in the Maven direction and it will be a big part of the Joint AI Center,” says Work, who co-chairs a task force on AI at the Center for a New American Security. Maven or a unit like it could become a kind of universal AI shop inside the new center, helping all US intelligence and military organizations build AI projects with commercial contractors.
If a vocal minority of Google’s more than 80,000 employees have their way, the search company won’t be one of those contractors.
More than 4,000 of them signed a letter saying Google should forswear all defense projects. Work worries that might encourage other companies to make similar pledges. He also says that the Pentagon would still find companies competent in AI that are willing to help.
In part thanks to companies like Google being open with their research and software, artificial intelligence expertise is more dispersed. "They clearly have other people to go to," says Amir Husain, CEO of startup SparkCognition, which works on government AI projects, including with the Air Force. "The scale of artificial intelligence talent in the US is significant."
"
|
900 | 2,015 |
"Nobody Knew How Big a Deal the Cloud Would Be—They Do Now | WIRED"
|
"https://www.wired.com/2015/12/2015-was-the-year-the-cloud-defeated-techs-walking-dead"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Nobody Knew How Big a Deal the Cloud Would Be—They Do Now Alvaro Dominguez for WIRED Save this story Save Save this story Save Ten years ago, Amazon unleashed a technology that we now call, for better or for worse, cloud computing. As it turned out, the cloud spawned a revolution. Along the way, many were slow to realize just how big this revolution could be. But now, as 2015 comes to a close, they finally do.
Back in 2006, Amazon was just an online retailer, but it decided to try something new.
It offered up a series of online services where the world's businesses could build and operate software—websites and mobile apps, in particular—without setting up their own hardware.
Simply by opening a web browser, these businesses could tap into a virtually unlimited amount of computing power. And they did.
Netflix and Dropbox, for example, built two of the world's biggest online empires atop Amazon's cloud services. But even as these businesses thrived in the cloud—moving at unprecedented speed, instantly grabbing more computing power as they needed it—some people said this way of doing things didn't suit everyone. Big banks, insurance companies, government agencies, and other old-school operations couldn't run their software in the cloud, these skeptics claimed, because the cloud wasn't secure. It wasn't reliable. Many of these skeptics worked at places like HP and Dell and IBM and Oracle, the tech giants most threatened by the cloud revolution, companies that sold the expensive computer servers and other data center hardware and software that the cloud could replace.
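To make the model concrete, here is a minimal sketch, using the boto3 library against Amazon's EC2 API, of what "tapping into computing power" through a few requests looks like. The AMI ID is a placeholder, and this illustrates the general idea rather than how Netflix or Dropbox actually deploy.

```python
# Minimal sketch: renting a virtual server from EC2 with a few API calls,
# in place of buying and racking physical hardware. IDs are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t2.micro",          # small general-purpose instance
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)  # capacity arrives in minutes, not weeks
```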
In some cases, these skeptics acknowledged that the cloud was quicker—and that businesses of all kinds could benefit from rapid access to computing power. But traditional enterprises, they said, needed something a little different. They needed "the private cloud." It was a silly name, even sillier than "the cloud." But for many traditional enterprises, the idea was sound: Rather than use Amazon, they should launch Amazon-like services on their own computer servers, inside their own data centers. Their engineers would have instant access to computing power via a web browser, just as they would through Amazon, and this computing power would be, well, more secure and more reliable.
But here at the end of 2015, we can now see that this big idea wasn't all that great. Just ask Adam Jacob.
Jacob is the chief technology officer at Chef, a Seattle-based company that offers a nifty tool for automatically installing, running, and modifying software across dozens, hundreds, even thousands of computers. In the modern world, this kind of tool is essential to running a big business. Chef helps drive Facebook's vast online empire. It underpins Target and Nordstrom and Standard Bank and many others. That means Jacob can look inside big businesses. He has seen the private cloud firsthand. And it's not pretty.
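Chef's actual recipes are written in Ruby, but the core idea is easy to show in a few lines: declare a desired state and converge each machine toward it idempotently, so the same run can safely be applied to thousands of hosts. The toy sketch below illustrates that idea; it is not Chef's API.

```python
# Toy illustration of idempotent convergence, the idea behind tools like Chef:
# describe the state you want, change the machine only if it differs.
import os

def ensure_file(path, contents):
    """Create or update a file only when it differs from the desired state."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current == contents:
        return "ok"        # already converged; do nothing
    with open(path, "w") as f:
        f.write(contents)
    return "changed"

# Running this twice changes nothing the second time. Idempotence is what
# makes it safe to apply the same configuration across thousands of machines.
print(ensure_file("/tmp/motd", "managed by a config tool\n"))
```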
Inside big businesses, Jacob says, just about every so-called private cloud project has failed. "Private cloud, as a thing, is near to or approaching zero," he says. Most of these projects were based on a tool called OpenStack, which aims to mimic the cloud services offered by Amazon. But according to Jacob, mimicking Amazon is too difficult, too costly, and too time-consuming for many businesses.
"There are a few examples. But they're all really constrained, and it's not the same as when you use the public cloud," he says. "It takes a long time. It's super-difficult. And the software that you can buy to do it isn't very good." So, these companies are moving onto Amazon and other public cloud services instead.
This is not PR. Chef is a disinterested party in the cloud game. Its tools work with virtual machines in the cloud and with physical machines in the private data center. You'll hear echoes of Jacob's words from Pivotal, another company that seeks to modernize the tech used by traditional enterprises.
Over the past several years, many private clouds, Pivotal's James Watters says, "didn't go far enough to provide real value and they never really got adopted. They were too complex for the value." He says that companies are now exploring new ways of building private clouds, including efforts that involve software from Pivotal. But today, the company's (very old-school) customers run about 35 percent of their operations on cloud services like Amazon. Even the big Wall Street banks are now pushing tasks into the cloud, Watters says. "We're seeing more public cloud consumption than ever—even in our core financial services customers, like the top ten banks in the world," he says. "We're seeing them using public cloud technologies really for the first time in 2015. They were the last holdout." Certainly, businesses will continue to run a lot of stuff in their own data centers—in part because they still have long leases on those data centers. "I don't think that market is going away," Watters says. But 2015 was a turning point. It's the year just about everyone realized that cloud computing—real cloud computing—is indeed the future.
Boston-based research outfit Forrester calls cloud computing—that's public cloud computing—a "hyper-growth" market. In a recent report, it predicts the market for cloud services will grow to $191 billion by 2020, a 20 percent leap from what it predicted just a few years ago. "The adoption of cloud among enterprises, which is really where the money is, has really picked up steam," Forrester analyst John Rymer recently told us. "It's a big shift. The cloud has arrived. It's inevitable." That's in part because Amazon Web Services can be just as secure and just as reliable as a private operation. In fact, it may be more secure and more reliable in some cases. A company like Amazon has so many engineers focused on these services—so many people watching for potential problems. It has already spent a decade building this thing. After establishing a decent track record over the past decade, Watters says, cloud computing has become a "social normative" and "a safe bet." In years past, voices also complained that certain companies and government agencies needed the private cloud so that they could comply with certain regulations that control where and how data was stored. That's true. But like other cloud companies, Amazon has worked to address these concerns, securing many regulatory certifications for its services. It even agreed to build separate cloud services just for the CIA.
That doesn't mean everyone is moving everything onto the cloud. Aaron Rajda, who oversees the new IT tech used at Ford, says the old car company still very much depends on private data centers for security reasons. And you'll hear much the same from other big businesses. But even these businesses are embracing the cloud in big ways. "You have to take a look at it," Rajda says, pointing out that the company now runs part of its operation atop Microsoft Azure, an Amazon competitor. "There are advantages." Cloud services let you build and test software more quickly. Trying something new is a few clicks away.
Dissenters can no longer deny the power of the big idea that is the cloud. If you don't believe the experts, just look at Amazon's balance sheet. In the spring, for the first time, Amazon revealed the size of its cloud computing business. Amazon Web Services, the company said, was pulling in $4.6 billion a year. This figure has since grown to $7 billion—growing, in other words, more than twice as fast as the rest of Amazon.
Not surprisingly, other notable names, including Microsoft and Google, are offering serious alternatives to Amazon's cloud. Microsoft doesn't reveal the size of its Azure cloud business, but if you lump Azure with the company's Office online services—another example of cloud computing—revenues are comparable to what Amazon is pulling in. Google, for its part, believes its cloud revenue could one day surpass its online ad revenue , which has long served as the core of its business.
Google sees where the trend is headed. Then again, it's a trend that's hard to miss. As Amazon's revenues have risen, the hardware market is consolidating. In October, Dell agreed to acquire computer storage giant EMC.
In the past, when businesses wanted to build an online operation, they had little choice but to buy enormous numbers of servers and data storage machines from companies like Dell and EMC. But now they can use cloud services from Amazon and Google and Microsoft. So Dell and EMC are looking for new leverage.
Yes, companies like this can offer their own cloud services. And they have. But they run the risk of cannibalizing their existing hardware businesses. HP spent the last several years building a cloud operation. And just after the Dell-EMC deal was announced, it said this operation was no more. The company will focus on, well, the private cloud. That's what it has to do. It can't compete with cloud services from the likes of Amazon and Microsoft and Google. But the private cloud thing isn't what it might seem.
"
|
901 | 2,023 |
"Watch Garry Kasparov Answers Chess Questions From Twitter | Tech Support | WIRED"
|
"https://www.wired.com/video/watch/garry-kasparov-answers-chess-questions-from-twitter"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Garry Kasparov Answers Chess Questions From Twitter About Credits Check out Garry's MasterClass on chess Released on 01/16/2018 Hi, I'm Garry Kasparov, and I'm here to answer your chess-related questions from Twitter.
Why do all, question mark, chess players put at pieces/squares with middle finger? Do we? I'm not sure, I never paid attention to that fact.
Maybe we have to ask a psychologist, to see enough samples for us to come up with such a, such a definite conclusion.
Bishop or knight? Depends if you are religious or not.
The general assumption is that both (mumbles) pieces are of equal price, in pawns.
I think Bobby Fischer was the first one who indicated that bishop should be valued higher, 3.25, versus three points for a knight.
I was more reserved, actually I put 3.15 for a bishop.
But now, with looking at some of the computer games, I would say that maybe Fischer's evaluation was correct.
After machines played millions of games, we just learned that Bishop's value is simply higher, since in many more cases, it was more useful piece.
Do I have to develop all of my minor pieces before activating my Queen? The answer is yes.
Queen is the strongest piece, but you could argue it's the weakest one, because if it's attacked it has to move away, because it's most valuable piece.
I can come up with many opening positions where activating the queen is very natural.
So there are many openings where your queen is being developed as early as move four or five.
I would recommend for weak players to follow the rule and not to bring your queen into battle too early.
But for those who are making progress in the game of chess, by studying openings professionally, you have to be cautious all the time trying to apply general rules, universally, all the time.
Why do chess players tend to castle even if it severely restricts the king's movement? King's safety is the number one priority, and obviously after castle, you remove king from the vulnerable position, the center.
So restricting the King's movement is not as dangerous as leaving the King in the open.
You are in doubt and you want to castle, but short or long in 1 and 2? It depends on your mood.
I would say if you castle short, it's a roughly even game, but you cannot expect to gain an advantage, so castling long is more aggressive, more ambitious, but it's riskier.
Now with the second one, Black has very comfortable game, they can simply castle short and they have excellent position.
I would probably go short castle, but maybe it's my age talking.
What is your favorite gambit opening in chess with a, white, b, black? I don't think gambits are just offering you an advantage against a well-prepared opponent.
I loved the Evans Gambit, I played it in quite a few games, and some of them were instructive wins.
Now with Black, I played a couple of times, Volga Gambit, Benko Gambit, as it's known in the free world.
I can hardly think of any real gambits with Black except the Benko Gambit, where you can, that you can employ at the professional level.
Mr. Kasparov, in your expert opinion, why doesn't Anand, or Carlsen for that matter, ever use something wild like King's Gambit? Because it's wild.
Players at that level, they don't play wild openings in serious games.
I can tell you, you spend a lot of time analyzing it, and it always ends up with negative results, so that's why, if you are a big fan of the King's Gambit, I wouldn't recommend you holding your breath expecting Carlsen or other top players employing it in the top tournament.
On another note, do you advocate the Evans Gambit versus a stronger player, or will it be crushed these days? It's hard to say no for the Evans Gambit, because I won quite a few games, a very memorable one against Vishy Anand in 1995, the first time I used the Evans Gambit.
So for strong player, someone who's known for his or her preparation, I think you have to be very cautious by making such a choice.
It depends very much on what you mean, saying strong player, so because we can disagree on the definition.
Though I think that if you want to play such a sharp opening as the Evans Gambit, sacrificing a pawn at b4, you are not thinking of equalizing, you're thinking about taking initiative and crushing your opponents.
So again, it's up to you, but just bear in mind that the Evans Gambit has disappeared completely from the games of the top players.
Can anyone recommend a good book on chess endgame? Mark Dvoretsky's Endgame Manual.
It helps strong players to become stronger, so I enjoyed reading the book, and you can always learn something from there, even if you, if you are weak, relatively weak player, or a very strong one.
Is control of the center one of the most important things you must do in order to win the game of chess? Yeah, it is very, very important, but I can give you many examples where you control the center, but your king is being mated.
I would strongly recommend that you put king safety as number one priority.
Tech/AI types, as a genuine question, how much should we read into the ability of a computer to quickly get really good at Chess or Go? It's impressive, but it's also about logic and involves very constrained and ultimately limited choices.
I think you answered the question.
It's about the logic, but it's a closed system, because we establish the rules.
We should recognize that the moment the open-ended system has been limited to a closed framework, machines will do a better job by simply going around it, and establishing their own set of priorities.
Chess, and other games, they, like Go, they offer an excellent opportunity to study the ability of machines.
For us to look into much bigger problems of the universe, and nature.
Learning chess.
Why does the horsey one move so crazy? Why not straight lines like castle head? I can tell you that in any version of chess, there are different rules and different patterns.
For instance, the Japanese game called Shogi, it's, has many different rules that are very unusual for our eyes, because we're trained just to look at our version of chess.
I think it's just, it's somehow, it's a combination of the different abilities of the pieces, and I can tell you that our forebears did a great job by actually coming up with such a balanced game.
Thank you very much for asking all those questions.
They were very different.
Some of them are too professional, to my taste, some of them are very primitive, again, to my taste, but that's the beauty of the game of chess.
You could enjoy the game, you can ask questions, even if you are a very weak player, or if you are experienced club player, or even a very, very strong player.
And that creates this global chess family, and I'm always happy to address any concern that comes from every layer of chess knowledge in the world.
"
|
902 | 2,017 |
"Facebook Quietly Enters StarCraft War for AI Bots, and Loses | WIRED"
|
"https://www.wired.com/story/facebook-quietly-enters-starcraft-war-for-ai-bots-and-loses"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Facebook Quietly Enters StarCraft War for AI Bots, and Loses Blizzard Entertainment Save this story Save Save this story Save Application Games End User Big company Sector Games Research Technology Machine learning In the distant Koprulu Sector of the Milky Way, Facebook’s Zerglings lingered in a restless swarm outside the enemy’s base. After the commander ill-advisedly opened the gate, the social network’s alien horde stormed in and slaughtered forces stationed inside, in a battle fought on the frontiers of artificial-intelligence research.
The bloody incident was part of an annual competition of the videogame StarCraft for AI software bots that wrapped up Sunday. Facebook quietly entered a bot called CherryPi designed by eight people employed by or affiliated with its AI research lab.
The social network’s stealthy space war suggests Facebook is serious about competing with Google and others to set showy new milestones in AI smarts. Google’s London-based DeepMind AI research unit made headlines last year when its AlphaGo software defeated a champion at the board game Go.
In August, DeepMind declared StarCraft II, the latest version of the game, as its next target.
The contest Facebook entered, like most AI research in the area, used an older version of StarCraft , which is considered equally difficult for software to master. Facebook’s AI research group, which lists 80 researchers on its website and is led by NYU professor Yann LeCun, has produced many research papers but not notched up an achievement as striking as Google’s with Go. Facebook has released three research papers on StarCraft, but not announced a special effort to conquer the game.
Final results released Sunday indicate Facebook still has a way to go: CherryPi finished sixth in a field of 28; the top three bots were all made by lone, hobbyist coders.
Gabriel Synnaeve, a research scientist at Facebook, described CherryPi to WIRED as a "baseline" on which to build future research on StarCraft.
"We wanted to see how it compares to existing bots, and in particular test if it has flaws that need correcting," he said. CherryPi competed in a long-running contest that is part of AIIDE , an academic conference on applying AI in entertainment. Facebook also sponsored this year’s contest, paying for hardware used to run the thousands of bot-on-bot games.
Games such as tic-tac-toe, checkers, chess, and Go have been testbeds for new ideas in artificial intelligence since the field's beginnings in the 1950s. These days, there's also a serious business purpose, as companies increasingly use AI to hone their product and service offerings. Facebook, Google, and other tech companies use AI to improve ad-targeting and personalization systems, and enable new products, such as virtual assistants and augmented reality.
StarCraft is alluring to AI researchers for more than just the fun of commanding weapons like the building-leveling Yamato plasma cannon. Although the videogame may appear more approachable than Go or chess, it is many times more complex, because players’ pieces and actions aren’t limited to a tightly regimented board and always in full view of their opponent. The number of valid positions on a Go board is a 1 followed by 170 zeros. Researchers estimate that you’d need to add at least 100 more zeros to get into the realm of StarCraft ’s complexity.
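To make those scales concrete, the comparison fits in a few lines of Python; the exponents below are only the rough estimates cited above, nothing more precise.

```python
# Rough scale comparison using the figures cited in the text: Go has
# about 10^170 valid board positions, and researchers' StarCraft
# estimates start around 10^270. The gap alone, a factor of 10^100,
# dwarfs the ~10^80 atoms estimated to be in the observable universe.
go_positions = 10**170
starcraft_estimate = 10**270
gap = starcraft_estimate // go_positions
print(gap == 10**100)  # True
```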
The winning bot in this year’s competition, ZZZKBot, was made by Chris Coxe, a software developer in Perth, Australia, who previously worked for NASDAQ. He built his bot alone, and lately took a break from work in part to dedicate more time to it. A day before the final results were announced, Coxe spoke self-deprecatingly of his handiwork. “It was supposed to be a proof of concept,” he said. “The source code isn’t all that great.” Like all StarCraft bots so far, ZZZKBot wouldn’t last long against even a moderately skilled human StarCraft player. The feats of planning and memory required to predict and react to the maneuvers of an alien army are beyond today’s software.
The days of amateurs building the best StarCraft bots appear to be numbered now that two giant companies that compete in both online ads and AI prowess have taken an interest. David Churchill, a professor at Memorial University of Newfoundland who organized the AIIDE contest, predicts the StarCraft bot scene is set for a big shake up over the next few years.
Facebook and Google say they are approaching StarCraft differently than most individual programmers have. Leading bots are based mostly on rules and strategies specified by their creators. Coxe says one of his bot's best features is a simple form of learning: it tries out pre-programmed strategies against each bot it plays and notes which one works, so as to be prepared for the next matchup. The tech giants plan to lean more heavily on machine learning, having bots develop their own strategies from scratch by examining large caches of data from past games, or through repeated experimentation. Facebook didn't build the ideas it has published along those lines into CherryPi. Machine learning was central to making Google's AlphaGo unbeatable.
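Coxe hasn't published that mechanism's internals, so what follows is only a minimal Python sketch of the per-opponent strategy memory he describes; the strategy names, the exploration rate, and the win-rate bookkeeping are all invented for illustration, not taken from ZZZKBot.

```python
import random
from collections import defaultdict

STRATEGIES = ["zergling_rush", "mutalisk_harass", "lurker_contain"]
EPSILON = 0.2  # how often to keep re-exploring already-tested strategies

class StrategyMemory:
    """Remembers, per named opponent, how each opening strategy has fared."""

    def __init__(self):
        # records[opponent][strategy] -> [wins, games played]
        self.records = defaultdict(lambda: {s: [0, 0] for s in STRATEGIES})

    def choose(self, opponent: str) -> str:
        record = self.records[opponent]
        untried = [s for s in STRATEGIES if record[s][1] == 0]
        if untried:
            return random.choice(untried)  # probe each strategy at least once
        if random.random() < EPSILON:
            return random.choice(STRATEGIES)  # occasional re-exploration
        # otherwise exploit the best observed win rate against this opponent
        return max(STRATEGIES, key=lambda s: record[s][0] / record[s][1])

    def report(self, opponent: str, strategy: str, won: bool) -> None:
        wins, games = self.records[opponent][strategy]
        self.records[opponent][strategy] = [wins + int(won), games + 1]

memory = StrategyMemory()
pick = memory.choose("PurpleWave")
memory.report("PurpleWave", pick, won=True)
print(pick, memory.records["PurpleWave"][pick])
```

The same skeleton also hints at why the tech giants see room for machine learning: swap the hand-written strategy list for policies learned from game logs, and the win-loss bookkeeping becomes a training signal.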
Facebook’s bot may not have won the StarCraft competition, but Dan Gant, whose bot PurpleWave placed second, saw hints of the future in its play. Most bots choose to either attack frontally, or retreat, based on the relative numbers in opposing armies. In videos released from the contest prior to the final results, CherryPi appeared to know when it could move fast enough to sneak around an enemy to attack its base, says Gant.
Still, don’t expect lone bot builders to disappear overnight—or StarCraft to be conquered soon. “The problem is still so difficult,” says Churchill. “For a couple of years I predict the hobbyist, mostly rule-based bots, will still do well.” He guesses it may be five years before any bot can beat expert humans—but acknowledges it may be sooner.
Gant, a software developer in New York, took a break this year and spent months working full-time on PurpleWave. He says the entrance of tech giants adds to the appeal of a pursuit that presents a unique learning opportunity. “You can be Facebook or DeepMind or a kid just learning programming and you’re competing on a level playing field,” he says. “You’re limited by your own effort and what you can teach yourself.” Making a superhuman StarCraft player could deliver tech companies more than just satisfaction. Google says machine learning from DeepMind has helped cut cooling bills in its datacenters. A Microsoft research paper on machine learning this year said that improving predictions of when a user will click on an ad by just 0.1 percent would yield hundreds of millions of dollars in new revenue. A bot capable of leading armies of alien zergs to crush any human might quickly earn its keep.
"
|
903 | 2,017 |
"Google and Microsoft Can Use AI to Extract Many More Ad Dollar from Our Clicks | WIRED"
|
"https://www.wired.com/story/big-tech-can-use-ai-to-extract-many-more-ad-dollars-from-our-clicks"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Google and Microsoft Can Use AI to Extract Many More Ad Dollars from Our Clicks Ben Bours Save this story Save Save this story Save When Google and Microsoft boast of their deep investments in artificial intelligence and machine learning, they highlight flashy ideas like unbeatable Go players and sociable chatbots.
They talk less often about one of the most profitable, and more mundane, uses for recent improvements in machine learning: boosting ad revenue.
AI-powered moonshots like driverless cars and relatable robots will doubtless be lucrative when—or if—they hit the market. There’s a whole lot of money to be made right now by getting fractionally more accurate at predicting your clicks.
Many online ads are only paid for when someone clicks on them, so showing you the right ones translates very directly into revenue. A recent research paper from Microsoft’s Bing search unit notes that “even a 0.1 percent accuracy improvement in our production would yield hundreds of millions of dollars in additional earnings.” It goes on to claim an improvement of 0.9 percent on one accuracy measure over a baseline system.
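A back-of-the-envelope calculation shows why such tiny gains matter; the revenue figure below is a hypothetical stand-in, and only the "0.1 percent" framing comes from the paper.

```python
# Entirely hypothetical numbers: neither the revenue figure nor the
# linear assumption comes from Microsoft's paper.
annual_ad_revenue = 7_000_000_000   # pretend $7B/year in search ads
relative_gain = 0.001               # a 0.1 percent relative improvement

# If revenue scaled linearly with prediction accuracy, even this tiny
# gain would already be worth millions per year...
naive_estimate = annual_ad_revenue * relative_gain
print(f"naive linear estimate: ${naive_estimate:,.0f} per year")

# ...and the paper's "hundreds of millions" claim implies the real
# relationship is much better than linear: sharper predictions change
# which ad wins every auction, not just how often the winner is clicked.
```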
Google, Microsoft, and other internet giants understandably do not share much detail on their ad businesses’ operations. But the Bing paper and recent publications from Google and Alibaba offer a sense of the profit potential of deploying new AI ideas inside ad systems. They all describe significant gains in predicting ad clicks using deep learning, the machine learning technique that sparked the current splurge of hope and investment in AI.
Google CEO Sundar Pichai has taken to describing his company as “AI first.” Its balance sheet is definitively ads first. Google reported $22.7 billion in ad revenue for its most recent quarter, comprising 87 percent of parent company Alphabet’s revenue.
Earlier this month, researchers from Google’s New York office released a paper on a new deep learning system to predict ad clicks that might help expand those ad dollars further. The authors note that a company with a large user base can greatly increase revenues with “a small improvement,” then show their new method beats other systems “by a large amount.” It did so while also requiring much less computing power to operate.
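Google's paper is not public code, but the family of techniques these systems build on is well known. Below is a minimal Python sketch of a click-prediction baseline using the hashing trick and online logistic regression, with made-up feature names; the published deep systems replace the linear model with neural networks, but the sparse input encoding and the training signal look much like this.

```python
import numpy as np

DIM = 2**18  # width of the hashed feature space

def featurize(event: dict) -> np.ndarray:
    """Hash sparse categorical features (user, ad, context) into a vector."""
    x = np.zeros(DIM)
    for key, value in event.items():
        x[hash(f"{key}={value}") % DIM] = 1.0  # the "hashing trick"
    return x

def predict(w: np.ndarray, x: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-w @ x))  # sigmoid -> P(click)

def sgd_step(w: np.ndarray, x: np.ndarray, clicked: bool, lr: float = 0.05):
    # one online gradient step on the logistic (log-loss) objective
    w -= lr * (predict(w, x) - float(clicked)) * x

w = np.zeros(DIM)
event = {"query": "running shoes", "ad_id": "a117", "device": "mobile"}
x = featurize(event)
sgd_step(w, x, clicked=True)
print(f"P(click) = {predict(w, x):.3f}")
```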
Alibaba, the Chinese ecommerce company and one of the world’s largest retailers, also has people thinking about boosting its billions in annual ad revenue with deep learning. A June paper describes something called a deep interest network, which can predict what product ads a user will click. It was tested on anonymized logs from some of the hundreds of millions of people who use its site each day.
Alibaba’s researchers tout the power of deep learning to outperform conventional recommendation algorithms, which can sometimes stumble on the sheer diversity of users’ online lives. For example, a young man may sometimes be shopping for himself and sometimes for kids clothing.
It’s hard to know what effect deep learning is having on tech giants’ ad revenues. Many factors affect the online ad markets, and companies don’t reveal everything about their technology or businesses. Google has reported steady growth in ad revenue for many years; Microsoft has called out strong growth in Bing search ad revenue and in average revenue per search in its past five quarterly earnings releases.
Google declined to say how close its recently published click-prediction system is to what it uses in its ad business. Researcher Gang Fu said in an email that there is still much more potential for using machine learning in ads. "It is still a technically challenging problem and also any (even slight) improvement on model accuracy would have great impact for many organizations," he wrote. Microsoft tells WIRED that it constantly tests new machine learning technologies in its advertising system. In an email, John Cosley, director of marketing for Microsoft search advertising, described ads as "perhaps by far the most lucrative application of AI [and] machine learning in the industry.”
Research papers on using deep learning for ads may undersell both its true power and the challenge of tapping into it. Companies carefully scrub publications to avoid disclosing corporate secrets. And researchers tend to describe simplified versions of the problems faced by engineers who must target and serve ads at huge scale and speed, says Suju Rajan, head of research at computational advertising company Criteo. The company has released anonymized logs of millions of ad clicks that Google and others have used in papers on improving click predictions.
Perhaps not surprisingly, Rajan believes deep learning still has much more to offer the ad industry. For example, it could figure out long-term cause and effect relationships between what you see or do online today and what you click on or buy next week. “Being able to model the timeline of user interest is something that the deep models are able to do a lot better,” she says.
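As a rough illustration of what modeling a timeline of interest means, here is a toy attention-style scorer in the spirit of Alibaba's deep interest network paper; the item names and embeddings are fabricated for the example, where a real system would learn them from logged behavior.

```python
import numpy as np

# Score an ad against the timeline of a user's recent actions rather
# than a single snapshot: weight each past action by its relevance to
# the specific candidate ad, then pool. Simplified attention pooling;
# not code from any production system.
rng = np.random.default_rng(0)
EMB = 16

item_vecs = {name: rng.normal(size=EMB) for name in
             ["running_shoes", "kids_jacket", "protein_bars", "stroller"]}

def interest_score(history: list, candidate_ad: str) -> float:
    ad = item_vecs[candidate_ad]
    hist = np.stack([item_vecs[h] for h in history])
    weights = np.exp(hist @ ad)          # relevance of each past action
    weights /= weights.sum()             # normalize into attention weights
    user_vec = weights @ hist            # interest profile, conditioned on the ad
    return float(user_vec @ ad)

history = ["running_shoes", "kids_jacket", "stroller"]
print(interest_score(history, "protein_bars"))
print(interest_score(history, "kids_jacket"))
```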
That Google and Microsoft are getting better at predicting our desires and clicks can be seen as a good thing. It gets them closer to the long-sought goal of serving up ads that don’t feel like ads because they’re useful. And it helps advertisers reach the people they want to reach.
But online ad companies are also subject to incentives less well aligned with consumers or other companies. Benjamin Edelman, a professor at Harvard Business School, has published research suggesting Google search is biased toward the company’s own services and designed to unfairly force corporations into spending heavily on ads for their own trademarks. (Google has been fined $2.7 billion for the former and successfully defended multiple lawsuits alleging the latter.) Such market-warping practices could be boosted by machine learning too. “If machine learning can improve the efficiency of their advertising platform by showing the right ad to the right guy, then more power to them—they are creating value,” Edelman says. “But a lot of the things that Google has done haven’t enlarged the market.” In advertising, as in many other areas, AI can give tech companies great power —and responsibility.
UPDATED: 11:50 am ET, September 1. This story has been updated to include comment from Google.
"
|
904 | 2,018 |
"CES 2018: Inside the Lab Where Amazon's Alexa Takes Over The World | WIRED"
|
"https://www.wired.com/story/amazon-alexa-development-kit"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter David Pierce Gear Inside the Lab Where Amazon's Alexa Takes Over The World Amazon's Echo speaker, one of the company's own devices with its Alexa voice assistant inside.
When it first launched in 2014, Amazon's Alexa voice assistant was little more than an experiment. It appeared first inside the Echo, itself a wacky gadget launched without warning or much expectation. As it took off, though, and millions of people began to put a smart speaker in their home, Amazon's ambition exploded. The company saw an opportunity to build a new voice-first computing platform that worked everywhere, all the time, no matter what you were doing. And it began to chase that vision at full speed.
While one team at Amazon works on the Echo products themselves—including the Echo Spot, Show, Dot, Plus, and probably a bunch more since you started reading this sentence—and another works on the Alexa service itself, a different team is working on engineering Alexa's world takeover. While Apple and Google offer access to their assistants slowly and methodically, Amazon has flung the doors off their hinges and let anyone in. The company knows the path to success is not just in Echo devices, and that Amazon can't possibly make every gadget anyone wants to use. So they've created a new division called Alexa Voice Services, which builds hardware and software with the aim of making it stupendously easy to add Alexa into whatever ceiling fan, lightbulb, refrigerator, or car someone might be working on. "You should be able to talk to Alexa no matter where you're located or what device you're talking to," says Priya Abani, Amazon's director of AVS enablement. "We basically envision a world where Alexa is everywhere." The word "everywhere" has taken on a whole new meaning in the last few years. Thanks to decades of improvements in processor efficiency, bandwidth accessibility, and the incredible availability of cheap electronics, almost anything can be connected to the internet. Cars and trucks and bicycles, sure; all your home appliances, switches, bulbs, and fixtures; even your clothes, shoes, and jewelry. They're all coming online, and Amazon wants Alexa in all of them.
One of Amazon's Alexa development kits, which manufacturers can buy to construct their own voice-controlled products.
So far, Amazon says it has about 50 different third-party Alexa devices on the market, devices like the Ecobee Thermostat and Anker's Eufy Genie. The AVS team spent the last two years building the systems and tools to take that to a new level, with the hopes of having hundreds and thousands of Alexa devices on shelves sooner rather than later. The battle for voice-assistant supremacy rages on among the tech giants, the stakes higher than ever as companies attempt to be the one on the other side of the wake word. To win, Amazon's assembling an army.
When Abani joined Amazon in 2016, she found herself having the same conversations over and over: everybody wanted to add voice to their product, but nobody knew how. "The first four months, all I was doing was sitting with our biz-dev team in god knows how many meetings," she says. These were thermostat companies, who knew temperature control but not voice recognition. They were lighting companies who knew how to optimize LEDs but not how to set up a mic array. Amazon had already been through all this in building the Echo, Abani says, "and I took on the job of understanding all the different components required to add voice to your product, then packaging them and disseminating them to the world." They built kits with all the parts you'd need to get started, packaged the right software with easy documentation, and even worked with chipmakers like Intel to build Alexa support right into the CPU.
Now, two years later, if you want to Alexa-enable your product, you just go shopping. Amazon offers seven different development kits for a few hundred dollars apiece, each with a specific product type in mind. The first one Amazon built had two mics in a line; a new one has seven laid out in a ring exactly like the Echo. "It's the same mic array, the same technology in terms of the algorithms and wake word engine," says Al Woo, a product manager on the AVS team, holding up the Echo-like kit. "If a company wants to develop a product that matches as closely as possible to the performance and function of an Echo device, this is how." The gizmo in his hand has a fully exposed motherboard and wires dangling everywhere, but Alexa's already up and running. With it, developers can have a demo-ready Alexa integration in just half an hour.
With each development kit, Amazon provides instructions on which microphones and processors to buy along with it. The kit helps developers start prototyping and testing devices much more quickly, without needing to hire a bunch of voice-recognition experts or test a thousand different mics. As much as it can, Amazon wants to make voice a plug-and-play hardware add to almost any device. Anyone should be able to buy a kit, build a product, download the Alexa software, and get everything running without any prior knowledge or any help from Amazon. Amazon might not even know the product exists until it hits shelves.
The Sol lamp, made by a division of GE, is a smart LED lamp with the Alexa voice assistant built in.
Right now, though, voice tech is early enough that Amazon tends to be intimately involved in most products using AVS. For now, that's OK: Amazon's still learning too. It works with partners like Sonos to figure out how to optimize Alexa's music abilities, then offers the results to all partners going forward. The AVS team is also working on making Alexa available to completely new classes of devices, too, through products like the new Alexa Mobile Accessories Kit. With the AMAK, Bluetooth accessories like headphones and smartwatches can connect to Alexa through a smartphone. Alexa's also about to be available on PCs around the world, with the same far-field voice recognition as an Echo. All the necessary software and info to get started is right there on Amazon's website.
Amazon's other job, at least for now, is to make sure Alexa's great on every device. Even with all the development kits and software, other manufacturers still make so many tweaks and adaptations that Amazon feels the need to take a final step to make sure the Alexa experience works across any device. The team knows that when people have a bad Alexa experience, they won't blame poor mic layouts or bad audio transparency. They'll blame Alexa. "We want to make sure it doesn't come into the review whether Alexa works good or not," says Pete Thompson, Amazon's vice president for AVS. "It just slides in, and works." Alexa performance is where JR comes in. JR stands for Junior Rover, and refers to the custom-built robot in charge of testing third-party devices to make sure Alexa works right. It's a small, whirring machine, with an orange base, four wheels, and a platform on top that can hold up to 50 pounds and extend up to six feet up. A Microsoft Surface powers the device from a four-pronged stand on one side. The Surface's wallpaper features a cartoon drawing of JR, big eyes and eyebrows and sort of a 2018 Thomas the Tank Engine look.
JR's office is a windowless, soundproofed room inside the Sunnyvale offices of Lab126, Amazon's hardware group. This is where a team built the Echo, and where the AVS team tries to spread Alexa to the world. The building itself is as office-parky as you'll find anywhere in Silicon Valley, more like a building in which you'd find a law office next to a dentist next to a massage parlor. Well, except for all the security guards and Amazon swag.
When an upcoming Alexa-enabled device comes to Amazon, it goes to Sunnyvale and then straight into JR's lab. Someone sets it up on a table in the lab, and JR begins chatting with it. The robot moves around a track of magnetic tape on the floor, stopping in the same spots every time around. At every stop, a speaker on JR's platform issues a command or two: Alexa, what's the capital of Jamaica? Alexa, who wrote The Canterbury Tales ? It speaks in any of 22 different voices, loudly or softly, in lots of languages and accents. Sometimes, a MacBook across the room will play white noise on another speaker to simulate the sound of a lively kitchen, to see how the device performs. Every question and answer gets recorded and scored, and when the test finishes Amazon delivers the feedback to the manufacturer. It's a broad, deep test of how the device might work in someone's house.
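Amazon hasn't released JR's software, but the test matrix described above maps onto a simple harness like the following Python sketch, where the robot driver and the scoring function are stubs invented for illustration.

```python
import itertools

# Schematic version of the JR test matrix: every (position, voice,
# query) combination gets an utterance, with and without background
# noise, and each response is recorded and scored. Queries, counts,
# and the scoring rule here are illustrative stand-ins.
POSITIONS = [f"stop_{n}" for n in range(1, 9)]    # marks on the magnetic track
VOICES = [f"voice_{n}" for n in range(1, 23)]     # 22 voices, varied accents
QUERIES = [
    "Alexa, what's the capital of Jamaica?",
    "Alexa, who wrote The Canterbury Tales?",
]

def play_and_record(position, voice, query, white_noise):
    """Stub for driving the rig's speaker and capturing the device's reply."""
    return {"heard_wake_word": True, "transcript": "Kingston"}

def passed(response) -> bool:
    return response["heard_wake_word"] and bool(response["transcript"])

results = []
for position, voice, query in itertools.product(POSITIONS, VOICES, QUERIES):
    for white_noise in (False, True):  # simulate a lively kitchen
        response = play_and_record(position, voice, query, white_noise)
        results.append(passed(response))

print(f"passed {sum(results)}/{len(results)} utterances")
```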
An Amazon employee used to run all these tests, painstakingly setting up and recording every interaction. Each device would take three days or more to properly test. JR runs day and night, seven days a week, with no bathroom breaks or sick days, and can complete a test in six hours. Amazon's working on building more robots like JR, and new testing facilities for in-car Alexa and all the other kinds of devices they haven't even thought of yet.
Along every wall of the testing lab, the AVS team has laid out some of the current Alexa-enabled products. Speakers next to speakers next to speakers next to unreleased speakers I can't tell you about yet. A thermostat. A fancy light. A Lynx robot, sitting with its legs dangling in the air. Standing in the room, you're surrounded by Alexa. And that's just the very beginning. Amazon hopes to make Alexa work well, make it work everywhere, and make it the most important and intimate computer in your life. If that means helping a refrigerator manufacturer compete with the Echo, so be it. As long as there's Alexa in there, Amazon still wins.
"
|
905 | 2,020 |
"Facebook's Plan for 2020 Is Too Little, Too Late, Critics Say | WIRED"
|
"https://www.wired.com/story/facebooks-plan-2020-little-late"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Paris Martineau Business Facebook's Plan for 2020 Is Too Little, Too Late, Critics Say “The bottom line here is that elections have changed significantly since 2016, and Facebook has changed too,” CEO Mark Zuckerberg said Monday.
Mark Zuckerberg didn’t mince words on a call with reporters Monday: “The bottom line here is that elections have changed significantly since 2016, and Facebook has changed too.” It’s true, the days of Zuckerberg arguing that filter bubbles are worse in the real world than on Facebook, and dismissing the notion that social media could influence the way people vote as a “pretty crazy idea” are long gone. Facebook, he said, has gone from being “on our back foot” to proactively seeking out threats and fighting coordinated influence operations ahead of the 2020 US presidential election.
As proof, he pointed to the slew of new efforts the company announced Monday to combat election interference and the spread of disinformation, describing the initiatives as one of his “top priorities.” But critics say he’s missing the point.
Disinformation and media manipulation researchers say Facebook’s announcements Monday left them frustrated and concerned about 2020. Though the policy updates show that Facebook understands that misinformation is a serious problem that can no longer be ignored, that message was undercut by the company’s reluctance to fully apply its own rules, particularly to politicians. What’s more, they say the new election integrity measures are riddled with loopholes and still fail to get at many of the most pressing issues they had hoped Facebook would address by this time.
“All of the tactics that were in play in 2016 are pretty much still out there,” says Joan Donovan, head of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center.
Among the features announced Monday were new interstitials—notices that appear in front of a post—that warn users when content in their Instagram or Facebook feeds has been flagged as false by outside fact-checkers. Donovan says it makes sense to use a digital speed bump of sorts to restrict access to inaccurate content, but the notices may have the opposite effect.
“The first accounts that they choose to enforce that policy on are going to get a lot of attention,” from both the media and curious users, she explained. “We have to understand there's going to be a bit of a boomerang effect.” She says “media manipulators” will test the system to see how Facebook responds, “and then they will innovate around them.” Facebook did not respond to inquiries about when or where the feature would be rolled out, or whether it would apply to all content that had been rated partly or completely false by third-party fact-checkers.
Donovan says she’s not sure if the feature’s potential benefits are worth the risks of amplification, particularly since Facebook may not be able to identify and flag misleading content before it reaches people. “Taking it down two days later isn't helpful,” nor is hiding it behind a notice, she says, “especially when it's misinformation that's traveling on the back of a viral news story, where we know that the first eight hours of that news story are the most consequential for people making assessments and bothering to read what the story is even about.” Also Monday, Facebook said it would attach new labels to pages or ads run by media outlets that it deems to be “state-controlled,” like Russia Today. It said it will require some pages with a lot of US-based users to be more transparent about who’s running them—this will at first apply only to verified business pages, and later include pages that run ads on social issues, elections or politics in the US. In addition, ads that discourage people from voting would no longer be permitted.
But researchers say that these measures are too little too late. “Every announcement like this, and all the recent publicity blitz has an undercurrent of inevitability,” says David Carroll, an associate professor at Parsons School of Design known for his quest to reclaim his Cambridge Analytica data.
“It shows that they still need to show that they're doing things. One advantage to these cosmetic things is that they look like they're significant moves, but they're really just like pretty small user interface tweaks.” But that’s not enough at this stage, he says.
The key question, researchers say, is enforcement, what Donovan calls “the Achilles heel of all of these platform companies.” She says Facebook issues many policies related to hate speech, misinformation, and election integrity. “But if they’re not willing to enforce those rules—especially on politicians, PACs, and super PACs—then they haven’t really done anything.” In September, Facebook said politicians would be exempt from the company’s usual policies prohibiting posting misinformation and other forms of problematic content in the name of newsworthiness. Earlier this month, that exemption was extended to advertisements, giving users free rein to lie in Facebook ads so long as they are political candidates or officeholders.
This is the “big gaping hole” Facebook’s announcement Monday failed to address, says disinformation researcher (and WIRED Ideas contributor) Renee DiResta. Facebook’s policies contradict themselves, she says, as they try to simultaneously argue that misinformation is a problem when disseminated by foreign actors, but free expression when posted by anyone that falls under the vague category of “politician.” Both DiResta and Donovan expressed concerns as to whether Facebook’s new transparency measures and election integrity policies would be applied to political candidates at all. On the press call, Zuckerberg emphasized that he didn’t think it was right for a private company like Facebook to “censor” the speech of politicians—a point he argued at length last week in a speech at Georgetown University—but noted that there were exceptions, when the person calls for violence or urges voter suppression, for example.
Facebook was short on details as to how exactly it would determine a politician was doing so, and how it determines a user is a politician or political candidate. Katie Harbath, Facebook’s public policy director for global elections, said Facebook would look at registration paperwork to determine whether campaigns are legitimate, though she offered no details as to who specifically would undertake the research, how the information would be communicated to moderators, and how frequently the information would be updated.
DiResta says the exemptions effectively communicate to bad actors that misinformation is allowed on Facebook, so long as you can find a way to get yourself labeled a politician or political candidate. “Anybody who’s a good troll should go and file papers to run for office at this point,” she joked. “Run for something that’s free to file for—you’re never going to get elected, but you can certainly troll the hell out of everybody else while you’re doing it.”
"
|
906 | 2017 |
"Meet the High Schooler Shaking Up Artificial Intelligence | WIRED"
|
"https://www.wired.com/story/meet-the-high-schooler-shaking-up-artificial-intelligence"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Meet the High Schooler Shaking Up Artificial Intelligence Ryan Young for Wired Save this story Save Save this story Save Application Robotics End User Research Sector Research Technology Machine learning Robotics Since its founding by Elon Musk and others nearly two years ago , nonprofit research lab OpenAI has published dozens of research papers. One posted online Thursday is different: Its lead author is still in high school.
The wunderkind is Kevin Frans, a senior currently working on his college applications. He trained his first neural net—the kind of system that tech giants use to recognize your voice or face—two years ago, at the age of 15. Inspired by reports of software mastering Atari games and the board game Go, he has since been reading research papers and building pieces of what they described. “I like how you can get computers to do things that previously you would think were impossible,” Frans says, flashing his ready smile. One of his creations is an interactive webpage that automatically colors in line drawings, in the style of manga comics.
Frans landed at OpenAI after taking on a problem from the lab’s list of problems in need of new ideas. He made progress, but got stuck and emailed OpenAI researcher John Schulman for advice. After some back and forth on the matter of trust region policy optimization, Schulman checked out Frans’s blog and got a surprise. “I didn’t expect from those emails that he was in high school,” he says.
Frans later met Schulman when he interviewed for an internship at OpenAI. When he turned up for work in San Francisco’s Mission District this summer, Frans was the only intern who didn’t have a college degree and wasn’t studying in grad school. He started working on a tricky problem that holds back robots and other AI systems—how can machines tap what they’ve previously learned to solve new problems? Humans do this without a second thought. Even if you’re making a recipe for the first time, you don’t have to re-learn how to caramelize onions or sift flour. By contrast, machine-learning software generally has to repeat its lengthy training process for every new problem—even when those problems have common elements.
Frans’s new paper, with Schulman and three others affiliated with the University of California Berkeley, reports new progress on this problem. “If it could get solved it could be a really big deal for robotics but also other elements of AI,” Frans says. He developed an algorithm that helped virtual legged robots learn which limb movements could be applied to multiple tasks, such as walking and crawling. In tests, it helped virtual robots with two and four legs adapt to new tasks, including navigating mazes, more quickly.
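The paper’s core idea is easiest to picture as a two-level controller: a small set of sub-policies shared across tasks, plus a per-task master policy that picks among them every few steps. Below is a minimal, dependency-free sketch of that shared-hierarchy idea; the class names, the toy reward, and the bandit-style update rule are all invented for illustration and are not Frans’s actual code.

    import random

    class SubPolicy:
        """A reusable low-level behavior, e.g. one gait for a legged robot."""
        def __init__(self, name):
            self.name = name

        def act(self, observation):
            # Placeholder: a trained sub-policy would be a neural network.
            return hash((self.name, observation)) % 4  # one of 4 motor commands

    class MasterPolicy:
        """Per-task controller that picks a sub-policy every `interval` steps."""
        def __init__(self, num_subs, interval=10):
            self.prefs = [0.0] * num_subs  # learned value estimate per sub-policy
            self.interval = interval

        def choose(self):
            if random.random() < 0.1:  # explore occasionally
                return random.randrange(len(self.prefs))
            return max(range(len(self.prefs)), key=lambda i: self.prefs[i])

        def update(self, sub_index, reward):
            self.prefs[sub_index] += 0.1 * (reward - self.prefs[sub_index])

    def run_task(shared_subs, reward_fn, steps=200):
        # Only the master starts from scratch on a new task; the sub-policies
        # carry over, which is what speeds up adaptation.
        master = MasterPolicy(len(shared_subs))
        for step in range(0, steps, master.interval):
            k = master.choose()
            total = sum(reward_fn(shared_subs[k].act(step + i))
                        for i in range(master.interval))
            master.update(k, total)
        return master.prefs

    subs = [SubPolicy("walk"), SubPolicy("crawl"), SubPolicy("turn")]
    print(run_task(subs, reward_fn=lambda action: 1.0 if action == 2 else 0.0))

In the published work, the sub-policies are neural networks trained jointly across a distribution of tasks and both levels learn by reinforcement; the simple preference update above merely stands in for the master’s learning.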
A video released by OpenAI shows an ant-like robot in those tests. The work has been submitted to ICLR, one of the top conferences in machine learning. "Kevin's paper provides a fresh approach to the problem, and some results that go beyond anything demonstrated previously," Schulman says.
Frans grapples with challenging motion problems away from computers, too, as a black belt in Tae Kwon Do. Some of his enthusiasm for AI may come just from inhaling the air on his way to Gunn High School in Palo Alto, California, the heart of Silicon Valley. Frans says he works on his AI projects without help from his parents, but he isn’t the only computer whiz in the house. His father works on silicon-chip design at publicly listed semiconductor company Xilinx.
As you may have guessed, Frans is an outlier.
Olga Russakovsky, a professor at Princeton who works on machine vision, says making research contributions in machine learning so young is unusual. In general, it’s harder for school kids to try machine learning and AI than subjects such as math or science with a long tradition of extra-curricular competitions and mentoring, she says. Access to computing power can be a hurdle as well. When Frans’s desktop computer wasn’t powerful enough to test one of his ideas, he pulled out his debit card and opened an account with Google's cloud-computing service to put his code through its paces. He advises other kids interested in machine learning to give it a shot. “The best thing to do is to go out and try it, make it yourself from your own hands,” he says.
Russakovsky is part of a movement among AI researchers trying to get more high schoolers tinkering with AI systems. One motivation is a belief that the field is currently too male, well-off, and white. “AI is a field that’s going to revolutionize everything in our society, and we can’t have it be built by people from a homogenous group that doesn’t represent society as a whole,” Russakovsky says. She cofounded AI4ALL, a foundation that organizes camps that give high-school students from diverse backgrounds a chance to work with and learn from AI researchers.
Back in Palo Alto, Frans has been thinking about helping the next generation of AI experts, too. He has a seven-year-old younger brother. “He’s interested in coding I think,” Frans says. “Maybe when he’s older I can help him.”
"
|
907 | 2018 |
"A Designer Seed Company Is Building a Farming Panopticon | WIRED"
|
"https://www.wired.com/story/a-designer-seed-company-is-building-a-farming-panopticon"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Science A Designer Seed Company Is Building a Farming Panopticon Indigo Ag believes that aerial imagery of fields, like the ones in Colorado shown here, will help farmers increase their crop yields.
When Geoffrey von Maltzahn was first pitching farmers to try out his startup’s special seeds, he sometimes told them, half-acknowledging his own hyperbole, that “if we’re right, you shouldn’t just see results in the field, you should be able to see them from outer space.” As the co-founder of a company called Indigo Ag, von Maltzahn was hawking a probiotic that he hoped would increase their crop yields dramatically. “I never thought we’d ever actually test that idea,” he says.
In the three years since Indigo began selling naturally occurring organisms such as bacteria and fungi, spray-coated onto seeds, the company has grown to become perhaps the most valuable agtech company in the world. Pitchbook, for example, estimates Indigo’s value at $3.5 billion. These microbes are already helping crops grow in low-water conditions, and one day they could replace the chemical fertilizers that modern agriculture relies on. This fall, Indigo expanded well beyond seeds into logistics by opening an online marketplace—what it calls a “farmers’ eBay”—to match up agricultural buyers and sellers.
And now it is branching into geospatial intelligence. On Thursday, Indigo Ag bought one of the most intriguing startups using machine learning to make use of publicly available satellite imagery: a two-year-old company called TellusLabs.
Indigo’s experimentation with geospatial data began about a year and a half ago, when von Maltzahn came across the work of Anne Carpenter, a cell biologist at the Broad Institute, located down the road from Indigo’s Boston offices. She had developed deep learning algorithms to recognize patterns of disease in human cells, just by looking at videos filmed under the microscope.
Von Maltzahn wondered if there was an agricultural analog, not under the lens of a microscope but from a camera zooming by at 10,000 miles an hour, attached to a satellite orbiting Earth from space. “That was the wild idea that led us to this,” says von Maltzahn. “I certainly had no idea when we started that one day we’d be acquiring a satellite company to bring it into operations in a much bigger way.” Indigo Ag Tellus’s chief product is Kernel, a forecasting tool that combines satellite images with weather reports and crop data from the US Department of Agriculture to predict how much food different countries are on track to grow each season. In 2017, it predicted the US corn crop yield with greater than 99 percent accuracy, months before the US Department of Agriculture arrived at the same conclusion. Indigo thinks the startup’s AI will help more farmers grow more food while putting less strain on the environment.
The company already offers advice from trained agronomists to all its growers, based on the data those farmers provide. If Indigo’s agronomists could watch those same fields every day from space and know how much water was in them, how fast plants were converting sunlight into corn, or how much protein was developing inside each wheat kernel, they might be able to provide more personalized, precise feedback. Instead of watering every field, or giving every row a fertilizer boost, growers could tailor treatments. They’d save some money. They’d use less water and fewer chemicals in the process.
And maybe, buyers with a mandate to lower their carbon footprints, like Walmart and Tyson Foods, would even pay a premium.
With Tellus’s technology, Indigo is close to being able to look at any cultivated field on the planet and know what crop is growing there, when it was planted, what kind of soil it’s growing in, how well it’s growing, what the protein content is, what the yields will be, and when harvest time will be. Along with this view from space, the company hopes to add drones, weather stations, and sensor-equipped storage containers to, eventually, turn the whole world into one massive agricultural laboratory. “Most agricultural research is done in small field trials that don’t do a good job of mimicking reality,” says David Perry, Indigo’s CEO. “Now we can look at crop performance across tens of thousands of acres all at once.”
That could mean comparing different planting times, or crop rotations, or chemical treatments. Or, if you’re in the business of convincing farmers that plant probiotics are a worthwhile investment, comparing your seeds to the likes of Bayer and DowDuPont. That’s what the two companies set out to do when they first teamed up a year ago: assess how 40,000 acres of Indigo red wheat growing in Texas, Oklahoma, and Kansas compared with neighboring fields lacking a bacterial boost. Using a combination of satellite imagery and data from the fields themselves, they estimated a 12.7 percent bump for Indigo growers.
For now, Indigo is sharing little about how exactly the satellite data will work its way into the company’s line of products and services. But last summer’s wheat project offers some clues. Using publicly available data, Tellus has been able to provide crop predictions at the county level. But that’s not enough to help out individual fields. Getting the resolution down to something useful required more granular data. That’s exactly what Indigo has spent the last three years amassing.
Since 2014, Indigo has recruited more than 100 large-scale farmers to test its microbiome-manipulating seeds—in cotton, wheat, corn, soy, and rice. Those farmers have each committed 500 acres of their land to getting sensored-up for Indigo’s research and development program. The company now passively harvests more than a trillion data points every day. But even that is just a small piece of the puzzle. Growers in the program collectively farm over 1 million acres. All the information they gather on those acres—planting dates, chemical applications, cover crops in rotation—all that goes to Indigo too.
By feeding data from Indigo’s million-acre global grower network into Tellus’s algorithms, Indigo plans to tune its new agronomic intelligence apparatus down to individual fields. The idea is to bring the results it saw in a few wheat fields in the heart of America’s bread basket to every acre of tillable soil.
“We’ve been building a symbolic layer for agriculture for the whole planet,” says David Potere, co-founder and CEO of Tellus, who will join the new geospatial intelligence unit within Indigo. “Now we’re taking that living map and moving it toward virtual field trial capabilities.” When the whole world is your lab, it helps to have a good view from above.
"
|
908 | 2017 |
"The Dirty Secret of the Global Plan to Avert Climate Disaster | WIRED"
|
"https://www.wired.com/story/the-dirty-secret-of-the-worlds-plan-to-avert-climate-disaster"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Abby Rabinowitz Amanda Simson Science The Dirty Secret of the World’s Plan to Avert Climate Disaster Play/Pause Button Pause The Paris agreement on climate change charts a narrow path to avoiding a global apocalypse. Just one problem: Its centerpiece is a technology that basically doesn’t yet exist.
In 2014, Henrik Karlsson, a Swedish entrepreneur whose startup was failing, was lying in bed with a bankruptcy notice when the BBC called. The reporter had a scoop: On the eve of releasing a major report, the United Nations’ climate change panel appeared to be touting an untried technology as key to keeping planetary temperatures at safe levels. The technology went by the inelegant acronym BECCS, and Karlsson was apparently the only BECCS expert the reporter could find.
Karlsson was amazed. The bankruptcy notice was for his BECCS startup, which he’d founded seven years earlier after an idea came to him while watching a late-night television show in Gothenburg, Sweden. The show explored the benefits of capturing carbon dioxide before it was emitted from power plants. It’s the technology behind the much-touted notion of “clean coal,” a way to reduce greenhouse gas emissions and slow down climate change.
Karlsson, then a 27-year-old studying to be an operatic tenor, was no climate scientist or engineer. Still, the TV show got him thinking: During photosynthesis plants naturally suck carbon dioxide from the air, storing it in their leaves, branches, seeds, roots, and trunks. So what if you grew crops and then burned those crops for electricity, being sure to capture all of the carbon dioxide emitted? You’d then store all that dangerous CO2 underground. Such a power plant wouldn’t just be emitting less greenhouse gas into the atmosphere, it would effectively be sucking CO2 from the air. Karlsson was enraptured with the idea. He was going to help avert a global disaster.
The next morning, he ran to the library, where he read a 2001 Science paper by Austrian modeler Michael Obersteiner theorizing the same idea, which was later dubbed “bioenergy with carbon capture and storage”—BECCS. Karlsson was sold. He launched his BECCS startup in 2007, riding the wave of optimism generated by Al Gore’s first climate change movie. Karlsson’s company even became a finalist in Richard Branson’s Virgin Earth Challenge, which was offering $25 million for a scalable solution for removing greenhouse gases. But by 2014, Karlsson’s startup was a failure. He took the BBC’s call as a sign that he shouldn’t give up.
In the report, the UN’s Intergovernmental Panel on Climate Change—universally known by yet another acronym, IPCC—presented results from hundreds of computer-model-generated scenarios in which the planet’s temperature rises less than 2 degrees Celsius (or 3.6 degrees Fahrenheit) above preindustrial levels, the limit eventually set by the Paris Climate Agreement.
The 2°C goal was a theoretical limit for how much warming humans could accept. For leading climatologist James Hansen, even the 2°C limit is unsafe. And without emissions cuts, global temperatures are projected to rise by 4°C by the end of the century. Many scientists are reluctant to make predictions, but the apocalyptic litany of what a 4°C world could hold includes widespread drought, famine, climate refugees by the millions, civilization-threatening warfare, and a sea level rise that would permanently drown much of New York, Miami, Mumbai, Shanghai, and other coastal cities.
But here’s where things get weird. The UN report envisions 116 scenarios in which global temperatures are prevented from rising more than 2°C. In 101 of them, that goal is accomplished by sucking massive amounts of carbon dioxide from the atmosphere—a concept called “negative emissions”—chiefly via BECCS. And in these scenarios to prevent planetary disaster, this would need to happen by midcentury, or even as soon as 2020. Like a pharmaceutical warning label, one footnote warned that such “methods may carry side effects and long-term consequences on a global scale.” Indeed, following the scenarios’ assumptions, just growing the crops needed to fuel those BECCS plants would require a landmass one to two times the size of India, climate researchers Kevin Anderson and Glen Peters wrote. The energy BECCS was supposed to supply is on par with all of the coal-fired power plants in the world. In other words, the models were calling for an energy revolution—one that was somehow supposed to occur well within millennials’ lifetimes.
Today that vast future sector of the economy amounts to one working project in the world: a repurposed corn ethanol plant in Decatur, Illinois. Which raises a question: Has the world come to rely on an imaginary technology to save it? On December 12, 2015, 195 nations—including the US—adopted the Paris Climate Agreement, finally promising to keep global temperature rise well below 2°C above preindustrial levels this century, with a further goal of keeping them below 1.5°C. Christiana Figueres, the UN diplomat who shepherded global climate talks from their post-Copenhagen standstill, remembers “5,000 people jumping out of their seats, crying, clapping, screaming, yelling, torn between euphoria and still disbelief.” But that euphoria masked a hard truth. The plausibility of the Paris Climate Agreement’s goals rested on what was lurking in the UN report’s fine print: massive negative emissions achieved primarily through BECCS—an unproven concept to put it mildly. How did BECCS get into the models? The story begins with the 2°C goal itself, a formal international climate target since 2010 (and informal since the 1990s). For years before Paris, climate researchers had warned that the 2°C limit was slipping out of reach—or was already unattainable.
Here’s why: As climate researchers have clearly (and tirelessly) linked temperature rise to increasing atmospheric CO2 concentrations, they can calculate back from a temperature target to the maximum amount of CO2 we can emit—our “carbon budget.” For a greater than 66 percent chance of staying below 2°C of warming, our CO2 concentration should remain under 450 parts per million.
In 2010, when the 2°C goal was adopted at a major conference in Cancun, Mexico, the carbon budget for 450 ppm, or 2°C, was formidably tight: Only a third was left—1,000 gigatons of carbon dioxide. Since humans were emitting 40 gigatons a year, the carbon budget would be easily blown before midcentury. This is the global accounting problem that a handful of specialized modeling groups began confronting in 2004, when the IPCC asked them to map scenarios in line with the 2°C goal. Essentially, how might we cut emissions without grinding the fossil-fuel-driven economy to an immediate standstill? To tackle this problem, the groups used a tool called an integrated assessment model—algorithms that draw on climate, economic, political, and technical data to imagine cost-effective policy solutions.
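The budget arithmetic in that paragraph is worth making explicit. Here it is as a two-line check, using only the article’s round numbers (simple bookkeeping, not a climate model):

    budget_gt = 1000              # CO2 budget remaining in 2010, gigatons
    emissions_gt_per_year = 40    # rough annual global CO2 emissions
    print(2010 + budget_gt / emissions_gt_per_year)  # -> 2035.0, well before midcentury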
Around the same time that Karlsson’s life changed via late-night Swedish television, Detlef Van Vuuren, a project leader of the Dutch modeling group IMAGE, came across the idea behind BECCS in the literature, looking at Obersteiner’s 2001 paper and work by Christian Azar and Jose Moreira. He was intrigued. In theory, by both producing energy and sucking CO2 out of the atmosphere, BECCS could result in a path to 2°C that the global economy could afford.
The key was that BECCS resulted in negative emissions, which, in the carbon budget, worked like a negative number. It was like having a climate credit card: Negative emissions allowed modelers to “overshoot” the carbon emissions budget in the short term, permitting greenhouse gases to rise (as they were doing in reality) and then paying back the debt by sucking the CO2 from the atmosphere later.
“The idea of negative emissions became a deeply logical one,” Van Vuuren says.
The rationale behind negative emissions relied heavily on the work of physicist Klaus Lackner, who at the turn of the millennium was sketching schemes for CO2 removal on blackboards for his students at Columbia University. Lackner, who was working on carbon capture and storage (then intended for storing emissions from coal-fired power plants), was the first person to suggest the idea of direct air capture—pulling CO2 out of the air. At that time, Lackner’s idea of direct air capture, like BECCS, was just theoretical.
But Van Vuuren says that for the purposes of the models, BECCS could be said to exist, at least in its component parts. The IPCC had published a report on carbon capture and storage—and bioenergy just meant burning lots of crops. Some models did ultimately include direct air capture and another negative emissions technique, afforestation (planting lots of trees, which naturally absorb and store CO2 in the process of photosynthesis). But BECCS was cheaper because it produced electricity.
In 2007 IMAGE published an influential paper relying on BECCS in Climatic Change, and garnered much attention at an IPCC expert meeting. Other groups started putting BECCS into their models too, which is how it came to dominate those included in the IPCC’s Fifth Assessment Report (the one that prompted the BBC to call Karlsson).
The models assumed BECCS on a vast scale. According to an analysis that British climate researcher Jason Lowe shared with Carbon Brief, at median the models called for BECCS to remove 630 gigatons of CO2, roughly two-thirds of the carbon dioxide humans have emitted between preindustrial times and 2011. Was that reasonable? Not for James Hansen, who wrote that reliance on negative emissions had quietly “spread like a cancer” through the scenarios, along with the assumption that young people would somehow figure out how to extract CO2 at a cost he later projected to be $140–570 trillion this century.
Anderson (of the India calculations) pointed out that the few 2°C scenarios without BECCS required CO2 emissions to peak back in 2010—something, he noted wryly, that “clearly has not occurred.” In a scathing letter in 2015, Anderson accused scientists of using negative emissions to sanitize their research for policymakers, calling them a “deus ex machina.” Fellow critics argued that the integrated assessment models had become a political device to make the 2°C goal seem more plausible than it was.
Oliver Geden, who heads the EU division of the German Institute for International and Security Affairs, raised the alarm in the popular press. In a New York Times op-ed during the conference, he called negative emissions “magical thinking”—a concept, he says, meant to keep the “story” of 2°C, the longtime goal of international climate negotiations, alive.
For Van Vuuren and other modelers we interviewed, this criticism is misplaced. Integrated assessment models are not meant to be predictive, they emphasize, because no one can predict future technology—or political decisions. Nor are they action plans. Rather, for Van Vuuren, the models are “explorations” meant to show the kinds of policy decisions and investments necessary to reach the 2°C goal. Given that, Van Vuuren sees a “worrying gap” between the reliance on BECCS in the scenarios and how few research programs and projects there are in the real world.
Whether the IPCC’s scenarios are political cover or research guides for policymakers depends on who you ask. But either way, this gap is undeniable. It can be explained in part by the fact that BECCS is a conceptual tool, not an actual technology that anyone in the engineering world (apart from a few outliers like Karlsson) is championing. At a recent meeting in Berlin, one climate researcher called BECCS “the devil child,” which got laughs; bioenergy and carbon capture have both met their share of criticism—bioenergy for displacing agricultural crops needed to feed people and carbon capture for, among other things, being perceived as diverting attention from the need for massive emissions cuts.
For that reason, in an article last year in Science, Anderson and Peters called relying on negative emissions “an unjust and high-stakes gamble” and a “moral hazard” that allows policymakers to avoid making tough emissions cuts right now. Replying in a letter, Klaus Lackner, the carbon capture pioneer, cautioned that their argument risked shutting down a necessary avenue of research. “If we had this conversation in 1980,” he says, it would have been different. Now, with our carbon budget blown, he argues, potential negative emissions technologies are “a life preserver.” Here’s the hardest truth: Even if negative emissions debuted in highly crafted, impractical computer models, we now need negative emissions in the real world to keep the planet’s temperatures at safe levels.
Temperatures have already risen 1.2 to 1.3°C (or 2.1 to 2.3°F). Current carbon dioxide concentrations, meanwhile, hover around 406 ppm. According to Sabine Fuss and Jan Minx of the Mercator Research Institute, our 1.5°C budget is more or less blown—a widely shared conclusion. (If you’re feeling morbid, you can check the Institute’s running carbon budget clock.) Without a drastic increase in international action on cutting emissions, they say, the carbon budget for 2°C will likely be blown by 2030.
So the question is, can negative emissions technology work in the real world, on a global scale? To explore that question, we visited the project in Decatur, Illinois, that modelers cite as evidence that BECCS actually exists.
Workers at the Archer Daniels Midland plant in Decatur, Illinois, inject pure carbon dioxide into underground reservoirs. Theoretically, it can stay there forever.
You may not have expected the future to look like this—what you find if you drive south from Chicago, following directions to Memphis, bearing right through several million acres of green-stalked corn, past the DIY pro-gun signs and the DIY pro-biofuels signs (“not middle east oil fields/soy biodiesel fields”). This is where, 10 years ago, before the biofuels market went bust, people could see their fortunes—fields of soy and corn—stretching to the horizon. At Decatur, you exit toward the Archer Daniels Midland plant, which looks from a distance, with its blocky white towers and mysterious dome, like the Emerald City seen without Oz’s green glasses.
When you pull up to the secured gates, ADM’s Decatur plant resolves into a jumble of substations, large tanks, and pipelines, all bathed in a troubling odor reminiscent of cat food. Here, trains and trucks deliver corn and soy by the ton to be processed into chemicals for food and ethanol for fuel. And somewhere in the guts of this Midwestern agricultural giant is the Illinois Industrial Carbon Capture Project—otherwise known as the world’s one and only BECCS plant.
“I warned you there wasn’t much to see,” says Sallie Greenberg, a geologist and the associate director for energy research and development at the Illinois State Geological Survey, ADM’s project partner, as she unlocks the white trailer that serves as project headquarters. Still, she says, more than 900 people have visited the project, from 30 countries: “It’s world-class.” The ADM plant is an ideal site for carbon capture and storage, which is why, almost 15 years ago, the US Department of Energy initiated a pilot project here. Deep in the plant, sugar from corn kernels is fermented to make ethanol, a reaction that also produces CO2 eminently easy to “capture”: You just have to separate the gas from the ethanol and remove a little water. From there the CO2 is pressurized, piped, and injected way down into a saline sandstone reservoir, handily located 7,000 feet below the plant.
To see the new injection well, which began operating last May, we drove back out of the plant, following signs for Progress City—agricultural showgrounds owned by ADM where community members were enjoying unseasonably warm October weather on Family Safety Day. A mile from the plant, we pulled up to a fenced-off injector—a rusty pipe, with a few bends and gauges, that disappeared into a cement block in the ground. We stood there as carbon dioxide shot into the earth, silently and out of sight. Currently, more than 1.4 million tons of CO2 that might have been polluting the atmosphere are stored underground.
In theory it was impressive; in reality we were in denuded cornfields looking at piping that seemed oddly rusty for a state-of-the-art project. To be fair, of course, its most impressive installation was invisible, underground.
Were we seeing the modelers’ world-saving technology in action? ADM is not BECCS as the models imagined it—that is, a power plant producing electricity by burning crops. Greenberg, in fact, only encountered the term BECCS in the past few years, despite starting work on the project in 2005, and tells us no integrated assessment modeler had ever given her a call.
But through happenstance, Decatur is the world’s first BECCS plant. The corn-turned-to-ethanol process is technically “bioenergy,” and ADM’s process does achieve negative emissions, at least by back-of-the-envelope calculations. Roughly two-thirds of the corn’s carbon becomes ethanol, which is emitted into the atmosphere after being burned in car engines. The other third of the corn’s carbon is pumped underground. Greenberg tells us the team has yet to do a granular carbon accounting that includes details like the cost of transporting the corn, but then a BECCS proof-of-concept was not the project’s original goal.
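That back-of-the-envelope accounting can be written out directly. Here is a sketch using the article’s rough fractions; like the article, it ignores the farming, transport, and processing emissions that Greenberg says had not yet been tallied:

    def net_carbon_removed(corn_carbon_tons, supply_chain_emissions=0.0):
        # All of the corn's carbon was pulled from the air by photosynthesis.
        stored = corn_carbon_tons / 3  # the third that is injected underground
        # The other two-thirds returns to the air when the ethanol is burned,
        # so only the stored fraction counts, minus supply-chain emissions.
        return stored - supply_chain_emissions

    print(net_carbon_removed(3.0))  # -> 1.0 ton removed per 3 tons of corn carbon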
One argument the ADM project does make for BECCS is that we could store a lot of carbon dioxide underground forever. Once in the saline reservoir, the CO2 reacts with brine and rock, which binds it in place, and the basin is topped with a layer of impermeable rock, ensuring the gas won’t escape. In monitoring the location of the CO2 underground, the team has seen no sign of movement or a leak. “It can stay there forever,” Greenberg says. And this one single reservoir can likely store carbon dioxide on the order of 100 billion tons, according to surveys, which makes the prospect of storing 600 billion tons—the amount envisioned in the models—seem reasonable.
On the other hand, the project neatly highlights the scale of the BECCS challenge. For perspective, the Decatur facility plans to store another 5 million tons of carbon dioxide over the next few years—and in 2016, average US emissions were 14 million tons of carbon dioxide per day.
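Putting those two figures side by side shows how lopsided the scales are (the same numbers as the sentence above, nothing more):

    planned_storage_mt = 5         # million tons, stored over several years
    us_emissions_mt_per_day = 14   # million tons of CO2 per day in 2016
    print(planned_storage_mt / us_emissions_mt_per_day * 24)  # ~8.6 hours' worth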
So how many BECCS plants would we need? If you really consider the question, you realize how hard it is to answer. In a recent paper, engineers Mathilde Fajardy and Niall Mac Dowell, of Imperial College in London, explore best- and worst-case BECCS scenarios in excruciating detail. In worst-case scenarios (say, burning willow grown on grasslands in Europe), it’s possible to never even achieve negative emissions. You spend too much carbon transporting crops, preparing land, and building a plant. And even in best-case scenarios (using fast-growing elephant grass on marginal cropland in Brazil), you still need land use on par with Anderson’s multiples of India and water use on par with what we currently use for all agriculture in the world. “If you extrapolate the amount of agricultural production to the scale you would need, it’s going to be a disaster,” Lackner told us.
Then there’s the money problem. BECCS plants are simply not profitable—burning vegetation is roughly half as efficient as burning coal. In the US, we could incentivize BECCS by charging companies for the carbon dioxide they emit—but the carbon tax plan advocated by a few US Republican leaders is decidedly not in line with the Trump administration’s climate agenda. As it is, some American companies do get tax credits for storing CO2 underground, but, apart from ADM, they do so for “enhanced oil recovery,” pumping CO2 into nearly dry wells to extract hard-to-reach oil. While some of the CO2 stays underground, the process frees ever more fossil fuels to be burned.
So driving away from Decatur, despite the project’s competence, it was difficult to imagine using BECCS on anything like the scenarios’ scale.
We shared our concerns with Noah Deich, a self-described recovering management consultant and founder of the world’s first (and only) negative emissions advocacy organization, the Center for Carbon Removal. Deich advised us to look at negative emissions technology differently—not as one catch-all solution but, rather, as a “portfolio.” This portfolio includes natural approaches for carbon capture, like developing carbon sinks (land that captures more CO2 than it emits), afforestation (planting trees), and biochar (a charcoal soil additive that permanently stores CO2), as well as technologies like BECCS plants and direct air capture.
For now, this portfolio’s direct air capture technologies exist mainly at lab bench scale. At Arizona State University, notably, Lackner is experimenting with small, portable boxes to scrub carbon dioxide from the air. But companies with a workable business plan for turning a profit are rare. One of them belongs to a charismatic Harvard climate researcher named David Keith.
In Squamish, an hour’s drive from Vancouver, the world doesn’t seem to need saving. The town is tucked on a narrow peninsula between a deep-blue inner channel and British Columbia’s snowcapped coastal ranges, and it is a favorite with climbers, who crowd the Starbucks. There’s a rumor that Microsoft is planning to build a campus here. Down one fork of the peninsula, on the site of a plant that once made chemicals for the pulp paper industry, is a startup founded in 2009 by Keith, with funding from Bill Gates—one of a handful of direct air capture companies in the world. Inside the headquarters, wholesome engineers in nubby sweaters drink coffee at a common table, and a check-in board lists the names of three dogs, who roam the office at will.
Just this week, the team reached a long-awaited milestone: They created synthetic fuel (that could be used to run a car) from nothing more than carbon dioxide captured from air and hydrogen harvested from water. Why fuel? To not only demonstrate direct air capture at scale but also to show how to make a profit from free-floating CO2—an aspect of negative emissions that, as BECCS makes clear, can be elusive.
On a tour of the pilot plant, Geoff Holmes, a former graduate student of Keith’s and his business development manager, fends off expressions of awe by explaining that carbon dioxide can also be captured using equipment found in a high school chemistry lab (as recently demonstrated by a New York City student).
Carbon Engineering’s experiment, which runs on a construction site and in a cavernous barn, involves four structures linked by various pipes, giving it the feeling of an ingenious, supersized game of Mousetrap. The first step is an air contactor, where the carbon dioxide, which is acidic once in solution, is absorbed by potassium hydroxide (a base). In a silo-like “pelleter,” the carbon dioxide is transformed into pellets of calcium carbonate (chalk) via one more high school chemistry reaction. Holding them in your hand, they look like small white marbles. Theoretically, the CO2 could remain trapped in these pellets forever. The pellets are heated in a calciner to release the carbon dioxide and, to make the process “closed loop,” the remaining calcium is recycled for another round. When running, the process’s only inputs are air, water, and electricity, which in British Columbia, conveniently, is almost entirely provided by renewable hydroelectric power. The only output is a pure stream of carbon dioxide gas.
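The article never spells out the reactions, but the loop it describes matches a standard potassium-calcium caustic recovery cycle. Written that way—an inference from the description above, not Carbon Engineering’s published specification—the “closed loop” claim becomes concrete:

    CO2 + 2 KOH     -> K2CO3 + H2O    (air contactor: the acid gas meets the base)
    K2CO3 + Ca(OH)2 -> 2 KOH + CaCO3  (pelleter: carbon locked into chalk pellets)
    CaCO3 + heat    -> CaO + CO2      (calciner: releases the pure CO2 stream)
    CaO + H2O       -> Ca(OH)2        (slaking step that closes the calcium loop)

Every potassium and calcium atom is recycled, so the only net inputs are air, water, and energy, and the only net output is CO2.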
Next step: crafting the carbon dioxide into something saleable. This year, Swiss direct air capture startup Climeworks began selling its carbon dioxide to a nearby greenhouse. Carbon Engineering chose to create a gasoline-like fuel, using an approach known as the Fischer-Tropsch process. The technology dates back to the 1920s and usually involves drawing carbon and hydrogen from coal. (The Germans did this during WWII because they lacked oil.) Carbon Engineering’s hydrogen, on the other hand, comes from water. With these materials, the pilot plant can produce a few barrels of clear synthetic fuel a day, which, with oil at $60 a barrel, will not immediately pay many salaries at the 32-person company.
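Schematically, Fischer-Tropsch synthesis chains carbon monoxide and hydrogen into liquid hydrocarbons, so getting there from captured CO2 requires a reduction step first. The route below is the textbook version, not a detail the article confirms about Carbon Engineering’s process:

    CO2 + H2         -> CO + H2O           (reverse water-gas shift)
    n CO + (2n+1) H2 -> CnH(2n+2) + n H2O  (Fischer-Tropsch chain growth to alkanes)

Because the hydrogen is split from water and the carbon comes from air, burning the resulting fuel returns only carbon that was already captured—which is why, as discussed below, the fuel is at best carbon neutral.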
“To develop a technology in this space, it takes a long time and a lot of money,” says CEO Adrian Corless. Within four years, Corless says, they plan to scale up to a demonstration plant that can produce thousands of barrels of fuel a day. The potential market: jurisdictions like California and British Columbia, which reward companies for using more efficient fuel—regulations that can make this fuel competitive.
So does Carbon Engineering’s fuel count as negative emissions? No—it’s at best carbon neutral, as each carbon atom captured will return to the atmosphere when the fuel is burned. But in theory, the company could run this plant for negative emissions instead of fuel, injecting the captured CO2 underground—if and when the market is willing to pay for such a service.
Skyping from his office in Cambridge, Keith, famous for pioneering far-out solar geoengineering, tells us he started Carbon Engineering because direct air capture struck him as “a technology that it would be useful to have if we knew what it cost.” He later clarified, “The best way I know to figure out cost is to roll up your sleeves and jump into engineering process development.” But when asked if it can have a global impact, Keith resisted describing direct air capture as a silver bullet technology, an attitude echoed by the rest of the team. He did tell us that cheap, low-impact direct air capture could have “big environmental benefits.” In all, Keith is wary of descriptors like “novel,” or “pioneering,” or even “interesting,” that lead us to imagine some revolutionary technology will come along to save the world. He reminds us that some of the most important technological developments toward mitigating climate change have not been eureka-like breakthroughs, but painstaking, incremental engineering success stories, like increasingly low-cost silicon solar panels, which have been around since the 1970s. To make this point, in the company’s early days, he even posted a sign in the office that read “No Science.” To be clear, Keith thinks we need concerted research on negative emissions technology of all kinds, because carbon concentrations are already too high. “Cutting emissions doesn’t solve the climate problem,” Keith says. “It just stops it from getting worse.” Visiting Carbon Engineering, what’s clear is that this research requires not just conceptual solutions or parameters in a computer model but also people “grinding it out,” as Keith puts it, day by day, for years—just to turn a technology whose every component part has existed on a lab bench for decades into meaningful reality. And it’s also clear, as the IPCC’s scenarios utterly disguise, how hard this kind of applied research can be, even with a visionary genius, funding from two billionaires, and the kind of can-do, optimistic attitude you’d expect from a team of Canadian engineers.
Over the phone, hours after the team made what everyone was casually calling “first fuel,” Holmes cheerfully explains that Carbon Engineering is actually not the first to make fuel from carbon dioxide captured from the air. But, he emphasizes, they are the first to do so on equipment that can be scaled up commercially. The first, in that sense, to show it can be useful.
At the Carbon Engineering plant in Squamish, British Columbia, engineers are making auto fuel from elements drawn from the air and chemically mixed with water.
When we talk about climate change in the United States, we tend to talk about President Trump exiting the Paris Climate Agreement—not what’s hidden in the fine print.
If the US presidential election had gone differently, negative emissions might have become part of our conversation. Days after the 2016 election, at a follow-up meeting to Paris in Marrakesh, then-Secretary of State John Kerry released an ambitious report outlining how the US might “deeply decarbonize,” slashing greenhouse gas emissions by 80 percent or more by 2050. In the report, negative emissions and BECCS are star players, but so are two scenarios—one envisioning a limited role for BECCS and one entirely eliminating the use of BECCS. Emily McGlynn, who led that part of the report, says the goal could be achieved without any negative emissions technologies—it’s just more expensive.
When asked how we should read the results of any integrated assessment model, controversial as they are, McGlynn sighs. “The most important of the IPCC’s projections is that we’re screwed unless we can figure out how to take CO2 out of the atmosphere, because we haven’t acted fast enough,” she says. “I think that’s the most important part of the story.” Still, negative emissions are not mentioned in the Paris Climate Agreement or a part of formal international climate negotiations. As Peters and Geden recently pointed out, no country mentions BECCS in its official plan to cut emissions in line with Paris’s 2°C goal, and only a dozen mention carbon capture and storage. Politicians are decidedly not crafting elaborate BECCS plans, with supply chains spanning continents and carbon accounting spanning decades. So even if negative emissions of any kind turn out to be feasible technically and economically, it’s hard to see how we can achieve them on a global scale in a scant 13 or even three years, as some scenarios require.
Looking at BECCS and direct air capture as case studies, it’s particularly clear that there’s only so fast you can act, and that modelers, engineers, politicians, and the rest of us must face up to the necessity of negative emissions together.
In the UK and Europe, people are at least embarking on negative emissions research, even if it’s not as quickly as BECCS entrepreneur Henrik Karlsson might want. His company has one other employee. There is “zilch funding,” he says. Still, Karlsson speaks optimistically of a project in the planning stages with a Swedish biorefinery.
Meanwhile, the UK has launched the world’s first government negative-emissions research program, modest at $11.5 million, but a start. On the international policy scene, negative emissions and BECCS will likely get their next big airing next fall in a special IPCC report on how the world might meet the 1.5°C goal, according to its editor Joeri Rogelj, who spoke to us via Skype on an October day when it was 90 degrees in New York, shortly before EPA chief Scott Pruitt killed the Clean Power Plan.
In Trump’s America, we’re burning through the carbon budget like there’s literally no tomorrow. The mid-century report (presented in Marrakech) is not in use—and like climate data recently removed from the EPA’s website, exists only in archives. But it is ready to be downloaded in the future, if we need it.
We will.
Abby Rabinowitz ( @AbbyRab ) has written for The New York Times , The Guardian , The New Republic , Buzzfeed , and Vice , among other publications. She teaches writing at Columbia University.
Amanda Simson ( @ProfSimson ) teaches chemical engineering at The Cooper Union, where she also does research in renewable energy. She has worked on rockets for Boeing, alternative energy technologies for Watt Fuel Cell, and co-created the card game Valence to teach kids chemistry.
"
|
909 | 2,016 |
"A Women’s History of Silicon Valley | WIRED"
|
"https://www.wired.com/2016/06/a-womens-history-of-silicon-valley"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jessi Hempel Backchannel A Women’s History of Silicon Valley Top row, from left: Judy Estrin, Lynn Conway, Sandy Lerner; Bottom row: Diane Greene, Donna, Dubinsky, Sandy Kurtzig Save this story Save Save this story Save On a recent Sunday morning, a friend texted me a photo from the checkout line of a Palo Alto Whole Foods. It was the cover of a Newsweek special issue entitled “Founding Fathers of Silicon Valley.” Seven faces graced the cover: Bill Gates. Mark Zuckerberg. David Packard. Bill Hewlett. Jeff Bezos. Elon Musk. Steve Jobs.
Three words for you, Newsweek : What the hell? OK, put aside the fact that three of those men don’t live in the Bay Area. At least one of them wasn’t born when the valley’s orchards were first being transformed into ground zero for the computer revolution. And any history that holds up seven white men as the founders of the computer revolution obscures the true collective nature of innovation.
Most important, it eliminates a valuable recruiting tool for getting women into tech, and for propelling them to more powerful positions: representation. As Marian Wright Edelman, founder and president of the Children’s Defense Fund, said in the 2011 documentary Miss Representation: “You can’t be what you can’t see.” I posted the cover on Facebook, calling the publication out for its narrow approach. Kira Bindrim, who was then Newsweek’s managing editor (and has since left for Quartz), responded to the post, blaming an outside company for the faux pas and writing that whenever you have seven white guys on a cover, “someone somewhere should always go, ‘… now hold up.’” So, hold up.
History unfolds in the telling. The story of the birth of Facebook, for example, has a different set of main characters, depending on whether you’re hearing about it over a glass of wine with the founder’s sister Randi Zuckerberg or his nemesis Tyler Winklevoss, and neither of them is going to tell you the story that former Harvard President Larry Summers would share.
Too often, in Silicon Valley as in other places, women are involved in significant events, but their stories go untold. They are the cofounders who are not named in press articles. They are the computer scientists who didn’t leave to start a company, but instead made important contributions in research labs. They’re the people we gloss over in our hurry to recount the life and times of Steve Jobs yet again. “Look, we know Silicon Valley has a gender issue and it’s bad,” Leslie Berlin, who is the project historian for the Silicon Valley archives for Stanford, told me. “But let’s not erase the women who helped make this valley.” What happens when you tell the history of the birth of Silicon Valley a different way? It offers a map to a generation of young men and women looking for new leadership models.
So to the current editors of Newsweek , here’s my version, in which, as an exercise, I gloss over the men in my effort to highlight the contributions of seven important women. The stories are no less dramatic. The characters are no less deserving of HBO’s parody treatment. They’ve gotten fired, too, and they’ve also come out of retirement to save their companies. They’re outrageous, ambitious, and technically sound. Let me introduce you to the Founding Mothers of Silicon Valley: In 1975, a young Judy Estrin worked for a research group at Stanford that was credited with developing the Internet. Its leader was kind of a big deal.
As a grad student, Estrin contributed to the networking protocols that form the basic architecture of the Internet.
The daughter of two computer scientists, she’d studied math and computer science at the University of California, Los Angeles, and then gotten a master’s in mechanical engineering at Stanford. She remembers coming home from a computer science class in tears, having stayed up all night to get a program to work. It didn’t. “My dad said to me,” Estrin recounted in a 2010 interview, “that the key to solving problems in programming, which I think applies to life also, is first to look at that big problem and break it into pieces.” Figure out how to solve the smaller problems and then figure out how they work together.
Above all else, Estrin understood networking systems. Her first entrepreneurial endeavor was Bridge Communications, a company she cofounded with her then-husband and a few other folks in 1981 to get incompatible networks to talk to each other.
The company went public in 1985 and sold to 3Com two years later for more than $200 million.
Meanwhile, Estrin had caught the entrepreneurship bug. She and her then-husband went on to join the founding team of Network Computing Devices, which supplied low-cost, graphics-intensive Unix workstations, and took it public in 1992. Three years later, the pair cofounded Precept Software, an internet video streaming company. Cisco paid roughly $84 million for it in 1998 and named Estrin its chief technology officer.
At the time, Cisco was the largest networking company in the world.
Two years later, Estrin left Cisco to return to start-up life. She went on to launch five more startups, and in 2008, she published a book on innovation.
Additionally, she has sat on the boards of three public companies—FedEx (20 years), Sun Microsystems (eight years), and The Walt Disney Company (15 years)—and myriad startups (including Medium, which owned Backchannel until recently).
In 1968, Lynn Conway was fired from IBM Research. It wasn’t because she was bad at her job. She’d arrived at IBM five years earlier after studying physics at MIT and then Columbia. During her time there, she helped pioneer supercomputing technologies. She lost her job as she underwent a gender transition.
Closeted and scared she would be found out, she cast about for a new job and landed at Memorex, which was a young computer company at the time.
Under a new identity, she once again received recognition for her work, and in 1973, Conway was recruited by Xerox’s famed Palo Alto Research Center (PARC).
PARC, for valley novices, is now credited with inventing the personal computer, the laser printer, and contemporary computer displays, among many, many other things. Along with another researcher, Conway launched a revolution in microchip design in the 1970s. Their very large-scale integration methods, or VLSI, allowed engineers to combine tens of thousands of transistor circuits on a single chip. Their textbook became the bible of chip design.
Conway then joined DARPA. As assistant director for strategic computing, she led planning of the major 1980s effort to expand the technology base for modern intelligent-weapons systems. In 1985, she joined the University of Michigan as Professor of EECS and Associate Dean of Engineering, and she is now retired. As she neared retirement, stories about her early work at IBM began to surface, so she came out as a transgender activist.
“If you want to change the future,” she wrote in an essay about her journey, “start living as if you’re already there.” The first software entrepreneur to become a multimillionaire was Sandy Kurtzig.
It wasn’t her original goal. In 1972, she quit her job selling computer timeshares for General Electric to devote more time to starting her family. To keep from getting bored, she launched a part-time software business, investing $2,000 of her own savings and running it out of a bedroom in her home. Kurtzig’s software helped manufacturing companies keep track of their inventory, sales, financial operations, and manufacturing tasks on a scale that had previously been possible only on large, expensive mainframes. She called it ASK Computer Systems.
To grow her business, she had to get creative. Venture capital was very limited in the early 1970s, so she invested her profits back into the business. To access the minicomputers her company needed, she convinced friends at a Hewlett-Packard plant near her home to let her use its minicomputers in the evenings. She and her employees showed up with sleeping bags at 6pm and stayed until 6am. (So much for a part-time “side project.”) By 1978, Kurtzig struck a deal with Hewlett-Packard to sell minicomputers pre-loaded with her programs. When she took the company public in 1981, it was the 11th fastest-growing company in the country.
Kurtzig stepped down in 1985 to spend some time realizing other passions—and raising her two sons. Over the next few years, the company’s growth flatlined. In 1989, at the request of the board, she returned, and within a couple of years she’d acquired a business that allowed her to expand the ASK product line to include database software, and sales soared once more. She retired a second time, and the company sold to Computer Associates two years later.
These days, Kurtzig is founder and chairman of the cloud-based enterprise software company Kenandy, a startup named for her two grown sons, Ken and—you guessed it—Andy.
Before the iPhone or even the Blackberry, there was the Palm Pilot; it was a handheld computer manufactured by Palm, and Donna Dubinsky was the company’s founding chief executive.
A Harvard Business School alum, Dubinsky got her start at Apple in 1981 as a customer-support liaison. A few years later, she got into a fight with her boss’s boss, the temperamental founder. He wanted to drop the company’s warehouses and move to a new direct-to-customer production system. “In my mind, Apple being successful depended on distribution being successful,” she said in a book that chronicles the events.
Challenging the founder had the potential to be career suicide. She was a middle manager, after all. Even so, she delivered an ultimatum that unless she got 30 days to make a counterproposal, she’d quit. Dubinsky won the argument, and was promoted to run an Apple subsidiary called Claris.
Dubinsky left Apple in 1991. After taking a year off, she was casting about for what to do next. She met an inventor who showed her a prototype of a handheld computer device. Her time at Apple gave her a hunch the device would be a success, so she signed on as CEO of Palm, where she was tasked with raising money, building out a company, and developing a sales strategy. The company sold to US Robotics in 1995 for $44 million, shortly before the iconic PalmPilot went on sale. A few years later, US Robotics sold to 3COM, and Dubinsky and her cofounder didn’t agree with the acquirer’s strategic direction for their device. They left to form Handspring, which sold a competing device called the Treo.
(3COM eventually spun off Palm and in 2003, the companies merged; Dubinsky remained on the board until 2009.) In 2005, Dubinsky and her technical partner got to work on Numenta, an ambitious endeavor to reverse-engineer the brain’s neocortex. After nearly a decade toiling away in stealth mode, Dubinsky and her team began talking about the company’s technology in the last couple years. “Imagine you could build a brain that’s a million times faster than a human, never gets tired, and it’s tuned to be a mathematician,” Dubinsky told Kara Swisher on a recent episode of her podcast.
“We could advance mathematical theories extremely rapidly.” In 1984, Sandy Lerner managed the computers for Stanford’s Graduate School of Business while her boyfriend (whom she later married) worked in a different building on Stanford’s campus. They were part of a group of people who saw a need for a device that would help computers talk to each other. They called their invention a “ router.
” For three years, Lerner was CEO while the pair self-funded Cisco, racking up credit card debt. Then, in an epic move, they sold 30 percent of their company to an outside investor for $2.6 million. The investor appointed a new CEO. The trio disagreed about how to sell Cisco’s products. Shortly after the company went public in 1990, the board fired Lerner; her husband quit in solidarity. (They later divorced.) In the decades that followed, Lerner founded a cosmetics company that she sold to Moet-Hennessy Louis Vuitton, wrote a historically accurate sequel to Pride and Prejudice , and moved to an organic farm in Virginia. She also started the Women in Mathematical Sciences Initiative at Shenandoah University. Occasionally, she speaks to entrepreneurs , and her advice is always the same: don’t give control of your company to investors.
Diane Greene’s first job involved designing offshore oil rigs.
She’d studied mechanical engineering at the University of Vermont and then naval architecture at MIT. Problem was, because of her gender, she wasn’t allowed to visit the rigs on which she worked. So, she quit and tried a few other things. In 1988, she got a second master’s, this time studying computer science at the University of California at Berkeley. Greene worked at Sybase, Tandem Computers, and Silicon Graphics before building her first startup, a streaming video company called Vxtreme that sold to Microsoft in 1997 for $75 million.
In 1998, Greene and her husband, a noted computer science professor, were part of a team of people who founded VMWare, the company that marked the advent of the virtualization industry. Virtualization, as Greene explained in a 2013 talk to YCombinator participants, is a layer of software that sits between the hardware and the operating system. “It kind of fakes out the operating system to think it’s running on the hardware,” she said, making it possible to run multiple operating systems. In 2004, EMC Corporation paid $635 million to buy VMWare.
Greene remained CEO, though she had a rocky relationship with EMC’s CEO. In 2007, EMC spun out 10 percent of its shares and VMWare went public. A year later, EMC’s CEO fired Greene. The company’s stock dropped 24 percent in a day, and three prominent executives (including her husband) left the company as a result. That year, Google appointed Greene to its board. After a break, Greene got to work on yet another venture—an enterprise software startup called Bebop.
Last November, Google paid $380 million for the company, and gave her one of the most influential jobs in tech: as senior vice president of Google’s enterprise business, she is in charge of figuring out how to grow the company’s cloud-computing business to a size that, the company posits, could eclipse the ad business within five years.
This is the spot for the women whose names we’ve forgotten, or forgotten to elevate. Maybe it should belong to Ann Bowers, who was Apple’s human resource director and den mother in the early 1980s. Or maybe it should go to Ann Winblad, who cofounded the venture firm Hummer Winblad in 1989. Or PARC computer scientist Adele Goldberg.
Or video game designer Roberta Williams.
More likely, it belongs to someone I have not yet come across. You tell me who deserves this recognition. Post your nominations here.
And know, above all else, that this list is not complete. It’s not any more definitive than Newsweek ’s original flawed cover. Beyond women, there are so many people of different backgrounds who go unrepresented in Silicon Valley’s popular historical narratives. As any historian will tell you, there’s not one valley history; there are many, and they shift according to perspective. If we envision a future that invites a broader array of people to contribute to tech’s evolution, we must not succumb to the ease of narrowing the stories of its past.
Creative Art Direction by Redindhi
"
|
910 | 2,018 |
"Why California's Privacy Law Won't Hurt Facebook or Google | WIRED"
|
"https://www.wired.com/story/why-californias-privacy-law-wont-hurt-facebook-or-google"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Antonio García Martínez Ideas Why California's Privacy Law Won't Hurt Facebook or Google George Rose/Getty Images Save this story Save Save this story Save California, that innovative economic juggernaut that so often takes the regulatory lead on matters such as automobile emissions, is once again establishing the ground rules to a vital industry. The California Consumer Privacy Act (CCPA), signed into law by Governor Jerry Brown in June, is the improbable result of a wealthy real estate investor, with the colorful name of Alastair Mactaggart, and a gang of volunteers taking an interest in consumer privacy. Mactaggart used California’s zany ballot initiative system (and his personal fortune) to get a version of a proposed privacy law onto the November ballot. Faced with the horrifying prospect of a well-funded privacy evangelist jamming regulation down the throats of the state’s golden-goose tech companies, legislators quickly devised their own alternative. This rollicking policy adventure is recounted at length in a cover story by Nicholas Confessore for The New York Times Magazine.
Look through the rah-rah triumphalism of the piece, however, and you’ll see that far from succumbing to some irresistible activist push, incumbents Google and Facebook craftily shaped the legislation to suit themselves. When in the history of American democracy have state legislators voted to severely and onerously regulate trillion-dollar companies in their home districts, motivated only by an overweening concern for consumer rights (and not donor pressure)? Never is the answer—which is why the implications of CCPA could use some further scrutiny. (Spoiler alert: Facebook doesn’t hate the law.)
Antonio García Martínez ( @antoniogm ) is an Ideas contributor for WIRED. Previously he worked on Facebook’s early monetization team, where he headed its targeting efforts. His 2016 memoir, Chaos Monkeys , was a New York Times best seller and NPR Best Book of the Year.
First, what the law does.
CCPA resembles a weaker form of Europe’s General Data Protection Regulation , or GDPR, which took effect in May. The California law requires companies to provide an opt-out to data sharing (GDPR required an opt-in), clear statements of what data is being collected or shared with third parties (as does the GDPR), and the right to delete data about yourself. The unique element, and the only one that the tech giants really pushed back on, was a provision granting individuals the right to sue companies for violating their privacy. The clause was effectively neutered when a political compromise limited the right to cases of egregious data loss or theft.
This resemblance to GDPR, if you’re a privacy activist, is more bug than feature: Companies like Facebook and Google already comply with GDPR (or comply as much as anyone) and have extended those GDPR protections to US users. When the CCPA takes effect on January 1, 2020, the average Facebook user will likely not notice.
To understand why the CCPA won’t impact Facebook in any meaningful way requires understanding (at a high level, not to worry) how Facebook’s ads ecosystem treats data and outside partners. Unlike much of the ad-tech world, Facebook lives in a walled garden where no data leaves and very little enters. When an advertiser wants to retarget you, it exchanges your contact information with Facebook, both sides agreeing to a pseudonym for you, before placing you in one or more targeting buckets (“shoe shoppers,” for example). For Facebook’s most powerful and invasive micro-targeting, almost no data is shared between advertiser and publisher, and data middlemen are largely absent. Which is why, if you download your data from Facebook, the juiciest information is in the least remarkable section: “Advertisers Who Uploaded a Contact List With Your Information.” Users and journalists fixate on the supposed creepiness of Facebook having a call log for you, for example, but the real targeters are buried in that list of companies sharing contact information. The CCPA won’t change this.
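The mechanics of that pseudonym handshake are easy to sketch. Here is a minimal illustration in Python, assuming (as is widely reported for contact-list matching) that both sides normalize and SHA-256-hash each field before comparing; the field names, sample addresses, and bucket label below are invented for illustration, not Facebook's actual specification:

```python
import hashlib

def pseudonym(email: str) -> str:
    """Derive a shared pseudonym from a contact field.

    Both sides apply the same normalization (trim, lowercase) before
    hashing, so identical emails yield identical tokens without either
    party handing over its raw list.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The advertiser hashes its customer list locally...
advertiser_tokens = {pseudonym(e) for e in ["shopper@example.com", "lead@example.net"]}

# ...the platform hashes its own user records, then intersects the two.
platform_users = {pseudonym("shopper@example.com"): "user_123"}

matched = [uid for token, uid in platform_users.items() if token in advertiser_tokens]
print(matched)  # ['user_123'] -- ready to drop into a bucket like "shoe shoppers"
```

The point of the design is that the raw lists never need to change hands: each side hashes locally, and only the opaque tokens are compared.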
So who is impacted by the CCPA?
What does LiveRamp do? Ever notice how you seem to get served ads online for products you bought in physical stores? That’s not because Facebook is eavesdropping on your phone. It’s done via what’s known as “data onboarding,” where personal data like your name, address, or phone number (which retailers know through loyalty-card programs and the like) are converted into ways to target you online. Middlemen like LiveRamp join online with offline by buying your personal data and then working with publishers—email newsletters, dating sites—to identify your browser cookies. Don’t sweat the details; the net of all this hackery is a table with your personal data plus a browser cookie or mobile device ID, which allows, say, a pharmacy chain that knows your phone number (which you entered at checkout to save 5 percent) to link all your purchases to your online presence.
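Strip away the jargon and onboarding amounts to a hash join between an offline customer file and an online identity table. A toy sketch of that join, again in Python, with invented loyalty-card and cookie records standing in for the far messier data brokers actually handle:

```python
import hashlib

def match_key(phone: str) -> str:
    # Normalize to digits only so "415-555-0100" and "(415) 555 0100" collide.
    digits = "".join(c for c in phone if c.isdigit())
    return hashlib.sha256(digits.encode("utf-8")).hexdigest()

# Offline world: what a pharmacy loyalty program knows about a customer.
offline_purchases = [
    {"phone": "415-555-0100", "purchase": "allergy meds"},
    {"phone": "415-555-0199", "purchase": "vitamins"},
]

# Online world: phone numbers a publisher collected at login, tied to cookies.
cookie_table = {match_key("(415) 555 0100"): "cookie_abc123"}

# The onboarder joins the two sides, linking store purchases to a cookie ID.
for record in offline_purchases:
    cookie = cookie_table.get(match_key(record["phone"]))
    if cookie:
        print(record["purchase"], "->", cookie)  # allergy meds -> cookie_abc123
```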
Together, these relatively small players provide an alternative targeting ecosystem that competes with Facebook’s one-stop-shop. If you’re Walgreens, you can use LiveRamp (or its competitors) to target people via real-time ad exchanges. Or you can upload your customers’ contact details to Facebook. The advertiser is agnostic, so long as the pixels reach the right audience.
Here’s why Facebook is better positioned for CCPA, or GDPR: It has a direct relationship with you. How does it know every device you use? Because the first thing you do when you buy a new device is log into Facebook, Instagram, or WhatsApp. How does it know your name, phone number, and address? Because you told it those things, or opted into sharing your location via the Facebook app.
The California and European privacy rules favor these first-party relationships. Data coming from elsewhere—known as third-party data—is viewed with more suspicion, so this privileged state of affairs is unlikely to change soon. So long as Facebook’s apps remain as addictive as they are, Facebook will know who you are, where you are, and every digital pseudonym for you, whether a browser cookie or a mailing address.
You might now be wondering if this approach to advertising was a piece of far-sighted strategy by Facebook, to avoid the inevitable privacy storm. I can state, with some authority since I was at Facebook at the time, that the answer is no. This closed system of identity-matching with minimal data sharing was conjured mostly to assuage the mutual suspicions of Facebook and its advertisers: Advertisers didn’t trust Facebook not to recycle their precious consumer data, and Facebook didn’t trust advertisers not to repurpose its user data. A minimalist data join, with all Facebook data remaining safely within its walls and Facebook not touching often dubious outside data, was the result. It’s just a happy accident (for Facebook) that this is the optimal architecture for weathering privacy regulation like the CCPA and GDPR.
Ultimately, the CCPA is a fatal blow not to Facebook but to the competing middlemen. Shortly before GDPR took effect, Drawbridge announced it was leaving the European market. Then it announced it was leaving advertising altogether.
LiveRamp is reported to be up for sale. Facebook itself shut down its Partner Categories program that used targeting segments from data brokers like Acxiom, cutting off its last connection to that world. Under CCPA and GDPR, if you want to target consumers across devices, or use your trove of offline consumer data online, you’ll have to use Facebook instead of the few competitors that once eked out a business outside its walled garden.
It’s as if the privacy activists labored to manufacture a fearsome cannon with which to subdue giants like Facebook and Google, loaded it with a scattershot set of legal restrictions, aimed it at the entire ads ecosystem, and fired it with much commotion. When the smoke cleared, the astonished activists found they’d hit only their small opponents, leaving the giants unharmed. Meanwhile, a grinning Facebook stared back at the activists and their mighty cannon, the weapon that they had slyly helped to design.
The good news is that while the activists missed their big, showy target, they hit the often sketchy data arbitragers who do the real dirty work of the advertising machine. Facebook and Google ultimately are not constrained as much by regulation as by users. The first-party relationship with users that allows these companies relative freedom under privacy laws comes with the burden of keeping those users engaged and returning to the app, despite privacy concerns. Acxiom doesn’t have to care about the perception of consumers—they’re not even aware the company exists. For that reason, these third-party data brokers most need the discipline of regulation. The activists may not have gotten the legal weapon they wanted, but they did get the legal weapon that users deserve.
"
|
911 | 2,018 |
"iPhone: The Complete History—and What's Next | WIRED"
|
"https://www.wired.com/story/guide-iphone"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter David Pierce Lauren Goode Gear The WIRED Guide to the iPhone Play/Pause Button Pause Illustrations by Radio Save this story Save Save this story Save It's not just the best-selling gadget ever created: It's probably the most influential one too. Since Steve Jobs announced the iPhone in 2007, Apple has sold close to 1.5 billion of them, creating giant businesses for app developers and accessory makers, and reimagining the way we live. Millions of people use an iPhone as their only computer. And their only camera, GPS device, music player, communicator, trip planner, sex finder, and payment tool. It put the world in our pockets.
Before the iPhone, smartphones mostly copied the BlackBerry. After the iPhone, they all copied Apple: Most phones now have big screens, beautiful designs, and ever-improving cameras. They even have "notches," or cut-outs at the top of their edge-to-edge displays, where the phone’s front-facing camera lives.
And the iPhone Effect goes far beyond smartphones. In order to make so many phones, Apple and its competitors set up huge, whirling supply chains all over the world. Those same manufacturers now make the same parts to power drones, smart-home gadgets, wearables, and self-driving cars. They don't look like your phone, but they might not be here without it.
FUN FACT: The iPhone was nowhere near finished when Jobs announced it. The phone Jobs used in the iPhone introduction was basically one of a kind, and the prototypes Apple was making at the time were so fragile, they couldn't even be shipped from Asia.
Thanks to the iPhone and the apps developed for it, the world has reorganized itself around the smartphone, and a few people have started to wonder what the iPhone hath wrought. They worry that we spend too much time buried in our phones, heads down, ignoring the people and the world around us.
Social media, in particular, is being questioned. We always knew that there was an exchange, that if we were using free apps, we were giving up something in return; but now there are concerns about where exactly all that data ends up.
We’re becoming accustomed to a sense of undefinable stress, the feeling that there’s always too much going on and that you can never get away even if you want to. The smartphone is one of the portals into this sometimes-dystopian data vortex.
But at the same time, there’s no denying that the iPhone has utterly transformed our lives—and that anything truly transformational will both solve existing problems and introduce new ones.
Jobs announced the iPhone on January 9, 2007, on stage at the Macworld conference. He spent nearly an hour explaining the device, extolling the virtues of everything from a touch interface to a huge, desktop-sized version of The New York Times' website that you could pan around. He even made a phone call (how quaint!) and placed what has to be the largest Starbucks order in history to an apparently real barista at an apparently real Starbucks. The whole event stands as a remarkable piece of tech-industry history, and you can still watch it all (on your phone) on YouTube.
The phone didn't come out until six months after that initial reveal, during which time Apple frantically scrambled to turn Jobs' demo into a mass-market gadget.
When it finally hit stores in June, people lined up outside stores to buy one. Apple sold 270,000 iPhones the first weekend it was available, hit 1 million by Labor Day, and instantly captured the imagination of phone owners everywhere.
FUN FACT: There were two factions within Apple fighting over what the iPhone should be. One side favored the touch-friendly device we know now. Another, led by Tony Fadell, thought the iPhone should be just an iPod that made phone calls. Clickwheel and all.
The iPhone 3G, which came out a year later, may have been an even bigger deal. Apple's 2008 iPhone included support for 3G networks, which offered much faster access to email and web pages, and it came at a much lower price. Most important, it added the App Store, which gave developers a way to build and sell software to millions of smartphone owners. The App Store will almost certainly stand as Apple's most important contribution to both the tech industry and society in general, even more than the phone itself. Developers immediately began building apps and games that changed the way we communicate, work, eat, and play. The App Store made way for Instagram, Uber, and Tinder, and it turned the iPhone into the pocket computer it was always meant to be.
FUN FACT: It's been 10 years since the first iPhone, but there have been 16 iPhones, if you include the Plus models, the SE, and the 5C. Which, we'd understand if you didn't count the 5C.
From there, the iPhone's story is one of evolution, not revolution.
Each year, Apple made the phone bigger and faster, refining the product without changing the basic form factor or its most beloved features. It became more popular every time. From the beginning, Apple seemed to know the camera could be a smartphone's best feature: The iPhone 4, with its selfie camera and HD video recording, was the biggest thing in cameras since Kodak. Ever since, Apple's cameras have been among the best in their class.
Jobs always said Apple had a five-year lead with the first iPhone. That turned out to be conservative—it took six or seven years for Samsung and others to make truly competitive phones like the Galaxy S and the HTC One. Then, after successfully copying the iPhone, they found their own niches. Samsung bet on pen input and big screens; Google fine-tuned Android and started shipping its own hardware running optimized versions of the software; and other companies made great phones for a fraction of the iPhone's price. The iPhone was the best choice for so long, but others finally caught up.
FUN FACT: Everyone likes to argue about which iPhone was the best, but we know the answer: The iPhone 4 was the first time the iPhone felt like a precious object, a piece of jewelry. It had a killer screen and a camera so good it changed the photo-taking game forever.
In 2017, the 10th anniversary of that Macworld speech, Apple determined it was time to shake things up a bit with the iPhone. It released the iPhone 8 and 8 Plus, solid but unsurprising updates on the same established theme. But it also tried something different, with the launch of the iPhone X.
Apple ditched the home button in order to make the phone nearly all screen and bet on facial recognition as the key to both your phone and a whole new set of apps and features. (Again: cameras are everything.) It also tried to bring augmented reality into mainstream existence while making your phone and data more secure than ever. And, as a bonus, the iPhone X had the craziest emoji features anyone had ever seen up to that point. The company’s approach was radical, but also, extremely Apple: It was attempting to usher its customers into a new technological world, but it would do it while emphasizing privacy, security, and features that keep you completely locked into Apple.
Despite its high price tag and speculation that the iPhone X’s later ship date would impact sales, it sold respectably well. In the spring of 2018, Apple CEO Tim Cook said the X was the company's most popular device sold every week since its launch in November 2017. But the iPhone X was short-lived as far as smartphones go, because Apple pushed it to the background as soon as the iPhone XS was announced in September 2018.
The current crop of iPhones is, for the most part, iterative. The iPhone XS is the natural successor to the iPhone X. The iPhone XS Max has almost the same footprint as the iPhone 8 Plus, but is equipped with an edge-to-edge display (and, like the other newer iPhones, has no home button or headphone jack).
The new iPhone XR, which shipped a bit later this fall, is Apple’s attempt to appease customers who aren’t happy that the iPhone’s price keeps creeping upwards. The XR’s display technology isn’t as great, and its camera isn’t quite as fancy, but it also costs a couple hundred dollars less than the starting price of the iPhone XS.
Perhaps most notably, all three new iPhone models this year shipped with a new Apple-made mobile processor that’s pushing the boundaries of what mobile processors can be (and do). The A12 Bionic was the first chip available for the mass market with an ultra-efficient 7-nanometer design, and it’s the kind of technology that turns real-time machine learning processes and insanely sophisticated computer vision applications from a concept into reality, right on your pocket computer.
It’s part of a larger attempt by Apple—and the others who have been working on these kinds of mobile chips—to make smartphones smarter. The glass slabs, they’re all starting to look alike. It’s what’s inside them that will set them apart over the next decade.
Apple's in a funny spot right now. Thanks to the huge, insane, impossible success of the iPhone—which accounts for more than half of the company’s revenue—Apple is quite often considered the most valuable company in the world (although Amazon and Microsoft have been vying for this position as well). Of course, it's not like Apple's in any danger as long as it's sitting on hundreds of billions in cash reserves.
But there are plenty of questions about the long-term value of the iPhone, especially since Apple’s annual unit sales of the phone were effectively the same this year as they were last year. Apple has even said that it no longer plans to break out hardware sales by product category, since it’s not representative of the strength of the business. That may be true, but some have interpreted this as Apple trying to cloak what eventually may be a real softness in sales.
All of this just means that if Apple is going to stay on top, it needs to extract more value out of existing iPhone lovers—a strategy it has been aggressively pursuing. It has always billed the Apple Watch as something of a spiritual successor to the iPhone: It's even more accessible, even more personalized, and could take over some of your smartphone's basic functionality. Plus, it’s now a legitimate health tracker.
Same goes for AirPods, which are clearly destined to be more than just a pair of wireless ear dongles that come in a dental-floss case. A follow-up is rumored to be in the works for 2019.
Meanwhile, Apple is hell-bent on replacing your laptop with an iPad, sticking an Apple TV under your flatscreen, and making sure you're all-in on Siri and iCloud. It finally released an update to the MacBook Air this year, and while no single component in the new laptop was groundbreaking, it’s something that was almost certainly designed to keep Apple laptop lovers happy. And Apple is reportedly working on software for self-driving cars and has said repeatedly that augmented reality is the next big thing; perhaps some type of heads-up display is in the works.
As it develops new products, Apple is also looking at ways to help users reset their relationship with their gadgets. It made the iPhone ridiculously enticing; now, it’s actually rolling out software tools so that people can better manage the time spent on their phones and maybe not wind up so addicted. The Screen Time dashboard in iOS isn’t a panacea, but it’s a start.
Whatever the next thing might be, Apple appears uniquely qualified to take advantage. Over the past decade, to keep the iPhone ahead of the curve, Apple has invested billions in building its own chips. Its mastery of its supply chain is unrivaled—it's simply able to build more and better things than anyone else.
FUN FACT: In 2016, TIME magazine called the iPhone "the most influential gadget of all time." Apple actually had three of the top 10, with the Macintosh and the iPod.
Apple's smashing success proved to other big tech companies that the best products come when you make both the hardware and the software. Microsoft, Google, Facebook, and Amazon have all done the same in recent years, building huge gadget businesses on top of their software. The hardware space was once a teeming mass of startups, people raising money on Kickstarter or going to China to build their dreams into a product. Now the business runs mostly through five companies, all of whom learned how to make hardware by watching Apple.
The iPhone didn't just make Apple a metric crap-ton of money: It reoriented the entire tech landscape, helping change the way we work and play. It helped create a new class of mega-corporation and started the world thinking about how everything else might change when it, too, was connected to the internet. Next, Apple has to figure out how the iPhone can improve a user's life instead of consuming it, all while it works on the next crazy design that'll change everything all over again.
Reviewing the First iPhone in a Hype Typhoon
WIRED's own Steven Levy was one of just four journalists to review the original iPhone ahead of its launch. For the device's 10th anniversary, he looked back at how important the device was, considered how outrageously excited people were to get one—and remembered all the phone calls from Steve Jobs, wondering how the review was going.
Inside Apple's 6-Month Race to Make the First iPhone a Reality
When Steve Jobs announced the iPhone in January of 2007, he wasn't exactly honest about the state of the thing. The phone Jobs demoed on stage barely worked, and there weren't many others to speak of. For the next 24 weeks, three days, and three hours, Jobs and his team worked desperately to turn the iPhone into a real product for real people. This is the story of that crazy time.
1 Million Workers. 90 Million iPhones. 17 Suicides. Who’s to Blame?
The iPhone didn't just change the lives of its users. It helped reshape the entire world's manufacturing process, and not always in good ways. In 2011, we sent a reporter to China to meet the people who make your iPhones, and find out how Apple's phone changed their lives, too.
The Hot New Hip-Hop Producer Who Does Everything on His iPhone
Steve Lacy made a track on Kendrick Lamar's album, "DAMN," and he did it all on his iPhone. We hung out with Lacy in a weed-clouded studio in Los Angeles, and watched him work in the same way as an entire generation of smartphone-owners: not with knobs and buttons, but with a touchscreen.
Review: Apple iPhone XS and XS Max
Our review of the latest model, the iPhone XS and XS Max. They're not the most exciting iPhones ever made, but they're definitely the best ones. If you look carefully, you can even see glimpses of the future.
Review: the iPhone XR
When it was announced in September 2018, the iPhone XR was the device that drew the most attention. Sure, the XS was the new shiny hotness, but at $999, maybe it was a little too precious. The XR on the other hand is a device that looks and works like a modern iPhone, but costs $250 less than the top model. So of course people were intrigued. It lacks some of the marquee features of the XS, but it's still a damn great phone for the price.
The Shape of Things to Come
From The New Yorker, a profile of Jony Ive (sorry, Sir Jony Ive), Apple's head of design and one of the people most responsible for how the iPhone looks and works.
Plus! The iPhone X and more WIRED iPhone news.
This guide was last updated on March 13, 2018.
Enjoyed this deep dive? Check out more WIRED Guides.
"
|
912 | 2,017 |
"Elon Musk's Boring Company Wants to Build a Network of Tunnels Under Los Angeles | WIRED"
|
"https://www.wired.com/2017/04/elon-musk-layers-crazy-plan-traffic-killing-tunnels"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Alex Davies Transportation Elon Musk Layers on the Crazy With His Plan for Traffic-Killing Tunnels Bret Hartman/TED Save this story Save Save this story Save In Elon Musk's fantastic vision of the future, cars will drive themselves —when they aren't being whisked through the vast networks of tunnels he envisions beneath the world's cities.
Musk, who started digging a big hole on the SpaceX campus near Los Angeles earlier this year, proposes using these underground roadways as a new kind of highway system. He pitched this idea Friday morning at TED, where extreme wealth meets extreme optimism in the power of technology to solve all the world's ills.
As the CEO of SpaceX and Tesla tells it, the cars of tomorrow will reach these tunnels from street-level elevators dotted around the city. Rather than navigate these tunnels under their own power, they would ride on electric trolleys not unlike slot cars, at speeds reaching 124 mph. Musk says such a system would whisk you from LA's Westwood neighborhood to Los Angeles International Airport in just 5 minutes—a 10-mile drive that typically requires a 30 to 60-minute slog down the 405 freeway.
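A quick back-of-envelope check of that claim (assuming a constant 124 mph and ignoring time spent in the elevators): 10 miles / 124 mph ≈ 0.081 hours, or roughly 4.8 minutes. In other words, the five-minute figure only works if the trolley runs at top speed nearly door to door.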
"We're trying to dig a hole under LA, and this is to create the beginning of what will hopefully be a 3-D network of tunnels to alleviate congestion," Musk said from the stage at TED. Now before you argue that a few tunnels won't help traffic in LA, or San Francisco, or Mumbai, or anywhere else, please know that Musk is not talking about a few tunnels. "There's no real limit to how many levels of tunnel you can have," he says.
Now you're surely thinking, "What about induced demand?," the idea that if you build it, commuters will fill it. Musk's reply? Build as many as 40 layers of underground streets and you can clear up any amount of congestion. And he promises that using more efficient boring machines to create narrower tunnels will make tunneling orders of magnitude cheaper.
It's an appealing vision, held back only by its lunacy.
"I would put what Mr. Musk is saying today in the bullshit category," says Thom Neff, a civil engineer who runs consulting company OckhamKonsult.
Any number of recent projects show just how crazy this idea is. The Big Dig in Boston. The fourth bore of the Caldecott Tunnel near San Francisco.
SR-99 in Seattle.
The list of epic projects that have consumed years of time and piles of money is long indeed. Why? Because digging underground is a slow, complex business. The crews doing such work must work through soil and rock that changes from one spot to the next, keep immensely complex machines running, navigate existing underground infrastructure like utility lines, and keep everyone safe in the process. "The idea of Musk thinking he can have this magic machine and go in there full bore, it's not gonna happen," Neff says.
Frankly, people who actually understand civil engineering find the idea of building a network of highways below cities hard to fathom. It would require "at least decades and possibly on the order of a century, plus a lot of uncertainty, a lot of risk," says Henry Petroski, a Duke University civil engineer. Plus, it distracts from other, more practical proposals, like carpooling, bus rapid transit, better subway systems, and telecommuting.
Of course, Musk has a track record of quieting naysayers: He's made electric cars cool, sent rockets to space, and convinced at least a few people that hyperloop is a good idea. But this latest vision goes too far. After discussing tunneling, the Tesla Model 3, autonomous driving, voyages to Mars, and solar energy, Musk concluded his TED interview by saying, "You'll tell me if it ever starts getting genuinely insane, right?" Ummm...
"
|