id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (stringlengths 7–208) | url (stringlengths 20–263) | text (stringlengths 852–324k)
---|---|---|---|---|
13467 | 2018 |
"NASA’s Proposed Moon Mission Offers Little Value at Astronomical Cost | WIRED"
|
"https://www.wired.com/story/proposed-moon-mission-offers-little-value-at-astronomical-cost"
|
"Ross Marchand | Science

Proposed Moon Mission Offers Little Value at Astronomical Cost

NASA is looking at sending people back into space. But taxpayer dollars are better spent on unmanned missions.

Kim Shiflett/NASA

When it comes to space policy, reliving the glory days too often means pouring billions of taxpayer dollars into black holes.
Preliminary budget plans suggest that the Trump Administration will provide funding for Space Policy Directive 1, which tasks NASA with getting humans back to the Moon for the first time in over 45 years.
Ross Marchand (@RossAMarchand) is the policy director for the Taxpayers Protection Alliance in Washington, DC.
NASA is already testing the feasibility of using the Orion space capsule to get humans to and from alien worlds. President Trump’s directive, hatched from a unanimous recommendation from the National Space Council in June, has the agency eager to prove that it can once again taxi humans into space.
The itch to have a crewed mission to another world is one that afflicts every Administration, regardless of ideology. President George W. Bush also tried to set a Moon trip in motion, but plans were scuttled by his successor when a committee concluded in 2009 that costs were incompatible with current funding levels.
President Obama was more eager to explore deep space, and officials hatched a complicated plan to pluck a boulder from an asteroid, tow it into the Moon’s orbit, and explore the mineral content of the new mini-moon. The proposal proved unpopular among lawmakers and space nerds alike. This cycle may continue indefinitely, as partisan fights serve as proxies for warring Moon and Mars camps.
Regardless of disagreements over destination between so-called Martians (advocates for Mars exploration) and Lunatics (advocates for lunar exploration), there seems to be a consensus among lawmakers and NASA top brass that manned missions to somewhere else are worth billions of dollars in funding. This sentiment is shared even by fiscal hawks on Capitol Hill who would otherwise be critical of an agency that regularly runs over cost and schedule.
While budget proposals opt to downsize most agencies, lawmakers have elected to merely shift NASA's focus from orbital data collection (e.g., monitoring global warming) to planetary science. But the moderate boost to funding that Congress has proposed is nowhere near enough for crewed missions.
The aforementioned 2009 committee concluded that a moonshot would cost around $50 billion (around $57 billion today) above and beyond regular NASA funding over a 10-year period. And this is likely a conservative estimate, given NASA’s tendency to strap expensive, unnecessary additions onto projects. Why are lawmakers turning a blind eye to these astronomical figures at a time of large proposed cutbacks?

Traditionally, the justification for getting there first was a form of space nationalism; just think of all the prestige that came to America after the Moon landing. On its website, NASA answers the question “what did we get from Apollo?” by stating: “First, and perhaps most important: it was realized at the time of President Kennedy’s 1961 proposal that the primary motivation for sending a man to the Moon was political, not scientific.” Sure, there were important scientific benefits; remote sensing work on the Moon spearheaded important Earth satellite work. But remote sensing just reinforces the importance of uncrewed missions, which are far more viable today than they were 50 years ago. When it comes to learning critical facts about other worlds, robots are simply better equipped than humans. Robotic explorers, for example, detected water on the Moon that astronauts overlooked decades prior.
Lunar analysis, along with countless other scientific ventures, can be done at a fraction of the cost via unmanned missions to other worlds. Cambridge cosmology and astrophysics professor and Astronomer Royal Martin Rees rightly criticizes current manned proposals, pointing out that “the practical case [for human spaceflight] gets weaker and weaker with every advance in robotics and miniaturization.” Ultimately, the American thirst to put people on other worlds harks back to the international anxiety of the Cold War. The us-versus-them mentality of the 20th century remains, but the foe has changed since the collapse of the Soviet Union in 1991. China has announced that it wants to land humans on the Moon by 2036, and the European Space Agency is keen on creating a lunar village.
But shooting hundreds of billions of dollars into space to advance American prestige ignores all other possible uses of these dollars. American inventiveness can enjoy a renaissance by keeping in place the broad-based tax reform recently enacted by Congress.
A recent analysis shows that reductions in tax rates spur patent filings and the hiring of star scientists by businesses. While the research examined state-level, rather than federal-level, tax changes, the results imply that top innovators readily alter behaviors in response to tax policy changes.
With a greater incentive to innovate, individuals and businesses are more than capable of the moonshots that will continue to propel America forward. Crewed missions to other worlds, however, offer little promise at an astronomical cost to taxpayers.
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.
Topics: space, NASA, moon

© 2023 Condé Nast. All rights reserved.
"
|
13468 | 2018 |
"What Is Up With Those Pentagon UFO Videos? | WIRED"
|
"https://www.wired.com/story/what-is-up-with-those-pentagon-ufo-videos"
|
"Sarah Scoles | Science

What Is Up With Those Pentagon UFO Videos?

According to a New York Times story, a secretive Pentagon program analyzed reports of UFOs. But the associated videos raise some questions.

Bill Clark/CQ Roll Call/Getty Images

On December 17, 2017, a newspaper printed a story titled “Real U.F.O.’s? Pentagon Unit Tried to Know.” No, the headline wasn’t surrounded by text about post-baby bods and B-listers’ secret sorrows. Because it was on the front page of The New York Times.
The article describes a federally funded program that investigated reports of unidentified aerial phenomena (UAPs, the take-me-seriously acronym that includes UFOs). And within the story, the Times embedded videos of two such UAPs.
Although the article was careful not to say that unidentified meant extraterrestrial, the Department of Defense acknowledged the program, and it was easy enough for readers to draw the conclusion that these videos could show alien aircraft. The Times supplemented one of the clips with a first-hand account of a Navy pilot who was sent to investigate “mysterious aircraft” that appeared—poof!—at 80,000 feet, dropped down to 20,000, and then seemed to hover before either leaving radar range or launching straight up. Weird, right?

[Video: https://www.youtube.com/embed/6rWOtrke0HY]

The discovery, and federal acknowledgement, of a UFO of non-earthly origin would be revelatory—and the Times’ scoop seemed to suggest that such a worldview-shifting scenario is at least not not-true. That the videos came courtesy of the Defense Department made it easier for readers to put faith in their validity.
“The video footage, in this case, is what captures people’s imagination and is part of what made this case more compelling,” says historian Greg Eghigian, a recent NASA and American Historical Association Fellow in Aerospace History.
But there are a few missing links in this narrative chain, links that need to be forged before anyone has enough information to accurately interpret these videos, let alone conclude they even remotely suggest anything extraterrestrial.
[Video: https://www.youtube.com/embed/tf1uLwUTDA0]

But wait, this story broke the news that the DOD had a secret UFO program and had released secret video! That’s huge! Here’s what happened. About a decade ago, the Department of Defense inaugurated a UFO program, budgeted at $22 million according to the Times.
It went by AATIP, for Advanced Aviation Threat Identification Program, though the Times story refers to it as the Advanced Aerospace Threat Identification Program. Its purpose was to investigate flying foreign weapon threats—ones that exist now or could be developed in the next 40 years. The product of legislation cosponsored by senators Harry Reid of Nevada and Daniel Inouye of Hawaii, the program, according to Pentagon spokesperson Audricia Harris, was primarily executed through a contract with Bigelow Aerospace—a company owned by Reid’s constituent and donor Robert Bigelow. (The wealthy businessman, who is best known for his inflatable space habitats, still owns a company called Bigelow Aerospace Advanced Space Studies, which has also researched UFO reports.)

The Pentagon program was run by Luis Elizondo, who told WIRED he took the lead position in 2010. (WIRED was unable to verify that Elizondo worked on AATIP, but Harris does confirm that he worked for the Defense Department.) The AATIP team, Elizondo says, took strange-sighting reports from pilots, as well as associated data like camera footage and radar returns, and tried to match them with known international aircraft signatures. “What we found many times was the fact that the aircraft did not belong to anybody,” Elizondo says. Sometimes, he says, the craft displayed behavior the AATIP team couldn’t explain.
Elizondo has become a kind of celebrity—in the wider world, arguably, but definitely in the UFO community. This week, those UFO researchers and enthusiasts and skeptics gathered in Fort McDowell, Arizona, for their annual International UFO Congress. And Elizondo, who had brought them closer to the capital-D Disclosure they’ve long sought, was supposed to be there. Instead, this evening at 6 pm Eastern, the Congress will show a prerecorded interview in which Elizondo will answer submitted questions from the community— “many of the questions that have gone unanswered,” according to a press release.
People have been clamoring for those answers—and Elizondo characterizes himself as being all about the answers. He says he wanted, for instance, to speak more publicly about the crafts’ non-nationality. “That fact is not something any government or institution should classify in order to keep secret from the people,” Elizondo told the Times, and the website linked to his new venture makes reference to the declassification processes the films had to undergo. The Times portrays the program as “shadowy” and possessing “excessive secrecy.” But those are all funny things to say, because it doesn’t seem like the Pentagon ever held the program’s data or documents that close, and it doesn’t seem like the videos in that story ever were classified.
“If they were officially declassified, they would have to have been officially classified,” says Nate Jones, director of the Freedom of Information Act Project at the National Security Archive. And a classified video would likely have a marking at least at the beginning and end, even after it was okayed for public consumption. Someone—at the Times, at To The Stars—could have cut those introductory and closing seconds from the video, but why would they do that, when both groups were emphasizing the direct-from-DOD legitimacy of the videos? “It looks very strongly like these weren’t released through any proper DOD declassification channels that I’ve ever seen,” says Jones. “I’ve seen a lot of DOD declassification in response to FOIA, in response to mandatory declassification review, in response to proactive disclosure. And it doesn’t look like this.”

Here is, perhaps, why: While the details of the program weren’t widely known, Harris says that the program files the Pentagon has pored over so far—Pentagon staffers have been reviewing AATIP documentation since around the time the Times story broke—were unclassified.
Of course, there are endless quibbles to be had over classification. Elizondo, for his part, clarified to WIRED that he didn’t believe the videos themselves were ever classified: They were just stored on a classified system. Either way, though, it seems that they made their way into the world without the typical release process, which the Department of Defense requires of “all documents that are submitted for official public release.” Information is classified, according to the National Archives, if its improper release would present a national security problem. So why would a secret program looking at aerial anomalies—“aerodynamic vehicles engaged in extreme maneuvers, with unique phenomenology,” says Harris—remain unclassified? Sounds like those UAPs weren’t so threatening after all.
Well, fine. But the videos were still part of the program, even if they weren’t classified. It even says right there: “Courtesy of US Department of Defense.” It’s true, that’s what the December Times story says about the videos. But there are two important things to know about that credit.
First of all, Harris maintains the Pentagon isn’t the source of the videos. “The official who is authorized to release this video on behalf of DOD did not approve the release of this video,” she says. She’s adamant: “I stand firm that we did not release those videos.” Which means that although the videos may have originated within the DOD, which Harris acknowledges they may have, there’s no public proof or Pentagon acknowledgement of their association with AATIP. Of course, perhaps the Pentagon wants it that way. In the 1950s, according to a book by investigative journalist Annie Jacobsen, the CIA’s Psychological Strategy Board concluded that the public’s potential reaction to UFOs (belief, followed by hysteria) constituted a national security threat. The '50s were a long time ago, but we still enjoy Jell-O salad every so often, so maybe we would still be susceptible to social chaos if we were to learn about flying objects of questionable origin.
And in any case, one of the Times’ video credits has since changed. WIRED contacted the Times reporters in late December, asking them to comment on how the paper obtained the videos, and on the Defense Department’s denial that it had released them. Reporter Ralph Blumenthal replied on behalf of the three coauthors in early January, “We don't discuss the processes by which we obtain information.” But he added, “We have official documents showing the origin of the videos and the process of review provided within the DOD before they were released.” In mid-January, though, the Times changed the caption of the lead video in its story. Both videos still have captions stating they were “released by the Defense Department’s Advanced Aerospace Threat Identification Program.” But the page now simply says the first video is “by,” not “courtesy of,” the Department of Defense.
Journalists gonna journalism, though. Of course they’re protecting their sources. But I just so happen to know that there’s another place that has original video straight from the DOD, and they’re up-front about everything.
Ah, you must be talking about To the Stars Academy of Arts and Science.
In case other readers are not already caught up, To the Stars is a company cofounded by former Blink-182 member and longtime paranormal enthusiast Tom DeLonge. The company wants to collect data on unexplained phenomena, maybe even building out tech based on what they observe. Oh, and sell books, movies, music , and merchandise related to To the Stars’ efforts.
It also, coincidentally, now employs Luis Elizondo. Elizondo says he wanted to speak about what he says the AATIP team had seen, but he didn’t think that was possible from within the Pentagon. So he resigned in October 2017, he says, signing on with To The Stars soon thereafter (although WIRED’s FOIA request for Elizondo's resignation letter, which was quoted in the Times, turned up no records, according to the Office of the Secretary of Defense/Joint Staff).
Also coincidentally, To the Stars launched a video-centric site on the same day the Times story came out—carrying the same two fighter-jet clips that appeared with the article. The so-called Community of Interest currently hosts one pilot report and one video interview along with the gun-camera videos—“the first official UAP footage,” the page says, “ever released by the USG.” (That’s the US government, for all you sheeple.)

While the academy’s site may make bolder claims than the Times did, that doesn’t make those claims more true. The Community of Interest page says the videos come from the Defense Department, have gone through the official declassification review process, and have been approved for public release. Further, it boasts that the academy can prove it with chain-of-custody paperwork. Its two UAP videos, together, have garnered nearly 3 million views on To The Stars’ YouTube channel, where the footage begins with on-screen text characterizing the videos as official and released.
Those chain-of-custody files aren’t public, but To The Stars did show WIRED some paperwork suggesting that the videos had gone through the Defense Office of Prepublication and Security Review (DOPSR), which is one part of the DOD’s document release procedure. DOPSR, says this guide, conducts “security and policy reviews on all documents that are submitted for official public release.” “It means that one of the steps for the review of a product has been completed,” says the Pentagon’s Harris.
But that documentation doesn’t actually clear material for release. “An approval from DOPSR does not equate to public release approval,” says Harris. To release AATIP videos by the book, someone would have had to coordinate with the Secretary of Defense for Public Affairs. So the videos on the To the Stars site don’t carry any more weight than the same videos published by the Times.
OK, fine. But those videos are still spooky. If we can’t trust the feds or the paperwork, we can trust our own eyes, right? True, the videos show some weird stuff. But without a clear chain of custody, we can’t even know whether they were part of AATIP at all, or trust that they haven’t been tampered with.
And a copy of one of the much-touted videos has been online since at least 2007. UFO researcher Isaac Koi (a pseudonym under which he writes about the topic) established that the second video in the Times story, of an event in 2004, appeared online in 2007. Someone posted it on the conspiracy website Above Top Secret, and Koi delved into its origins. The first appearance he could find was on a website for a company called Vision Unlimited—a film production company. An archived 2007 version of vision-unlimited.de confirms that the footage was hosted there back then.
That archival film matches the Times video.
After all the unclassifications and release-denials, this information shouldn’t surprise you. We’ve pretty clearly established that whatever these videos show, they don’t seem important enough for the Pentagon to get in a tizzy over. And while the fact that one of them has shown up online before doesn’t prove that they didn’t originate with the military, it does call that chain of custody into question. Without official confirmation or available documentation (and more documentation than WIRED saw), you can’t be sure what you’re viewing is unadulterated footage, and you can’t be sure who recorded it first.
To The Stars Academy acknowledges that the 2004 video has existed elsewhere; its explanation is that those incarnations were leaked versions and that theirs is original. But there’s no public proof for that statement.
It’s true, a Navy pilot named David Fravor did give an account to the Times of his 2004 experience with a UFO, and an unnamed source provided a report in September 2017 of the same events to To The Stars Academy. But squint just a little and you’ll see that there’s no definitive link between these accounts and that video. The witnesses give a description of an alleged strange event, and the video shows an encounter with a strange object. But without a time and location stamp of some sort, viewers can’t know whether the witnesses are actually describing what’s in the video. And, beyond that, there’s no definitive link between this video and AATIP.
In the end, also, there’s no way for the public to know whether, five seconds after the other film ends, the pilots discovered that the “fleet” of crazy flyers came from Finland. Or the Air Force.
Fine, hater. What would it take to make you believe? In lieu of federal nondenial, or more public paperwork, there should exist hard data—like air traffic control reports, or the radar returns Elizondo mentioned—that could help establish the videos’ actualness and officialness, as well as the UAPs’ strangeness. If someone—in an aircraft, on the ground, on a ship—sent radio waves up, and they bounced off a flying object, the timing of their return and the way those waves had changed could reveal the object’s speed, its distance, and sometimes its shape.
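The radar arithmetic that paragraph gestures at reduces to two one-line formulas: range from a pulse’s round-trip time, and radial speed from the Doppler shift of the echo. A minimal sketch follows; the 1 ms delay, 10 GHz carrier, and 20 kHz shift are made-up illustrative values, not figures from any return discussed here.

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    # The pulse travels out to the target and back, so the one-way
    # distance is half the total path: R = c * dt / 2.
    return C * round_trip_s / 2

def radial_speed(doppler_shift_hz: float, carrier_hz: float) -> float:
    # Two-way Doppler for a radar echo: f_d = 2 * v * f0 / c,
    # hence v = f_d * c / (2 * f0).
    return doppler_shift_hz * C / (2 * carrier_hz)

# An echo arriving 1 ms after transmission puts the target ~150 km out.
print(radar_range(1e-3))         # ~149,896 m
# A 20 kHz shift on a 10 GHz carrier implies ~300 m/s of closing speed.
print(radial_speed(20e3, 10e9))  # ~299.8 m/s
```

Shape is the hard part: it takes the full return waveform, or looks from many angles, rather than a single delay and frequency shift.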
Will To The Stars Academy be releasing those? Yes, Elizondo says. But how and when and where, he doesn’t know.
UPDATE 9:45 AM ET, 2/17/2018: This article previously included an interpretation of the text on the Nimitz video display.
Topics: UFOs, Department of Defense, aviation
"
|
13469 | 2016 |
"No easy answers in SpaceX's recent rocket explosion | WIRED"
|
"https://www.wired.com/2016/10/cause-spacexs-explosion-gets-little-clearer"
|
"Eric Niiler | Science

SpaceX’s Mysterious Rocket Explosion Gets a Little Bit Clearer

US Launch Report

On the morning of September 1, just before a routine pre-flight ignition test, a SpaceX Falcon 9 rocket exploded. In an instant, the 277-foot-tall space vehicle and its $200 million satellite cargo disappeared into a ball of flames.
SpaceX has been fairly mum about what went wrong last month on Launch Complex 40 at Cape Canaveral. Which makes sense. But, considering it is SpaceX's second launch failure in 15 months, the explosion is a more tangible measure of the company's future than its highly publicized (and hypothetical) plan to settle Mars.
On Friday, the Wall Street Journal reported that the problem may have been operational—rather than a manufacturing or design flaw of the rocket itself. But that does not mean the case is that simple. Nothing involving rockets ever is.
The investigation itself is a collaborative effort between SpaceX, the FAA, NASA, the US Air Force, and industry experts. Together, they are looking at over 3,000 channels of engineering data, along with video, audio, and imagery, the company said. Early rumors suggested that SpaceX was worried about potential sabotage by rival space firms and was reviewing images of a strange shadow on a building next to the launch site.
But mostly, the investigation has focused on the second stage liquid oxygen tank.
Or more specifically, on the cryogenic helium system inside the liquid oxygen tank. The helium pressurizes the second stage—the stage that would have helped the Falcon 9's cargo, an Amos-6 communications satellite, maneuver from Low Earth Orbit into Geostationary Transfer Orbit. But even that level of detail masks a confounding number of possibilities.
To start, whether a design flaw or some part of the pre-flight process caused the explosion. “It could be good if it turns out to be an operational problem, because that is easily remedied, rather than a design or manufacturing problem,” said Scott Pace, director of the Space Policy Institute at George Washington University. “But you have to ask why that operational failure happened. Was there a lack of training or understanding of what was going on?” Pace said he’d like to know whether investigators from SpaceX and the Federal Aviation Administration, which oversees the accident probe, have proof that a fueling failure is what occurred, or whether they eliminated other faults and the fueling operation was what remained. “Was it that people felt rushed?” Pace said. “Was there schedule pressure? Were they doing something innovative? Was it something else?”

Officially, SpaceX isn’t saying much. A spokesman referred WIRED to an earlier statement that “a preliminary review of the data and debris suggests that a large breach in the cryogenic helium system of the second stage liquid oxygen tank took place. At this time, the cause of the potential breach remains unknown.”

Other members of the investigation are playing just as coy. NASA officials in Washington referred questions to the FAA’s Office of Commercial Spaceflight. And the FAA spokesman for that office did not answer questions regarding the probe. Which means experts and amateurs in the space-interested public can only speculate.
The investigation itself might inevitably hit a wall of conjecture. “They are looking at some of the charred remains to see what part failed and whether there was a manufacturing problem,” says Marco Caceres, a space industry analyst at the Vienna, VA-based Teal Group. “Or was it just one of those freak accidents? I’m not sure they are ever going to know exactly.” The timeline of the explosion was extremely short – from first signs of an anomaly to loss of data was about 93 milliseconds, or less than 1/10th of a second. And though the investigators have access to thousands of data streams from that short period, they would have had much more if the explosion had occurred while the rocket was launching, or in flight. Then, thousands of cameras would have been streaming info from all angles of the rocket, and additional sensors would be feeding into flight control computers.
For his contribution to the speculation, Caceres noted that a fueling failure could occur from a small piece of brittle metal that begins vibrating, breaks apart, lodges into a fuel line and causes combustion. SpaceX is testing this sort of malfunction at its McGregor, TX, facility.
Finding out what happened to this rocket is important. The Falcon 9 is SpaceX's workhorse, scheduled to carry the bulk of the 70 commercial satellite launches—a $10 billion backlog—waiting to go into orbit. Oh, and also: the rocket is part of SpaceX's bid for a $2.6 billion NASA contract to send astronauts to the International Space Station.
Those ISS crew flights have already been delayed until 2018 at the earliest, three years past NASA’s original launch date, according to a NASA Inspector General’s report released the day of the explosion.
SpaceX has lost only two of its 29 launches. Until the investigation plays out, nobody will know whether those failures follow any kind of pattern. In the meantime, Musk's company continues to sign up new customers. Should those customers be worried? Well, there's no simple answer.
"
|
13,470 | 2,023 |
"This Is a Philosopher on Drugs | WIRED"
|
"https://www.wired.com/story/this-is-a-philosopher-on-drugs"
|
Justin E. H. Smith | Backchannel

There is something strange in the disinterest philosophers show for experimentation with mind-altering drugs—or at least for talking about their experimentation publicly. At the margins of philosophical writing, we have Walter Benjamin’s record of his dabblings in hashish and Michel Foucault’s casual admission in interviews that he would rather be dropping acid in the Mojave Desert than sipping wine in Paris. Even further out we have philosophy-curious writers like Thomas de Quincey (also a biographer of Immanuel Kant) recounting his own experience of opium addiction. And then we have probabilities and speculation. The natural philosopher Johannes Kepler likely tried some fly agaric before writing his 1608 treatise of lunar astronomy, the Somnium (read it and you’ll see what I mean). The third-century Neoplatonist philosopher Plotinus might have availed himself of some herbal or fungal supplements to help him achieve his many out-of-body experiences, which he liked to call henosis, or “ecstatic union with the One.” I’m probably missing a few notable cases. But still, for the most part, to admit to any intention to use chemical substances, whether found in nature or synthesized in laboratories, in the aim of changing one’s apprehension of reality, is to leave the guild of the philosophers behind, with all its constricting norms and shibboleths, and to join the company, over in the deep end of the pool of life, of sundry countercultural weirdos and deviants.
This shows, I think, just how conservative philosophy remains, in some respects, as an academic discipline. At a cultural moment when psychedelics are getting a second wind, and even someone as upstanding as Michael Pollan has moved from counseling us to eat our roughage to praising the benefits of microdosing, philosophers are conducting themselves as though it were still 1950, when we wore skinny ties to colloquia, got funding from the RAND Corporation to work on decision trees and other such narrow and straitlaced endeavors, and all knew that it is the unaltered and wakeful mind that has exclusive access to the forms and qualities of the external world.
But wait a minute. Even in the mid-20th century, perhaps especially in the mid-20th century, years before the postwar generation was turning on, tuning in, and dropping out en masse, perfectly sober grown-up philosophers understood full well that the reports our senses give us of the physical world hardly settle the matter of what reality in itself is like. The problem is ancient but was sharpened in the early work of Bertrand Russell and G. E. Moore, who together articulated a cluster of problems around the concept of “sense-data.” As Russell would put the point in the 1940s, when we are looking at a table as we walk away from it, what we see shrinks continually; but the table does not shrink; therefore, what we see simply cannot be the table itself. What we see, rather, is only what is given to sense, and the full account will have to involve the physics of light and the physiology of the brain and of the organs of sense as much as it involves the properties, to the extent that these can be known, of any external object. But if we have to take account of what the perceiver brings to the instance of perception in order to make any sense at all of what perception is, then it would seem to follow that perception should also be of interest to philosophers when there is no external object at all—or at most a hallucination of one.
This article appears in the April 2023 issue.
Of course, philosophers are interested in hallucination, even if they prefer to draw their examples from case studies of schizophrenia or Oliver Sacks–style pathologies, or from more mild varieties of optical illusion that happen even to the mentally sane (heat wave “oases,” straight sticks emerging from water as if bent). But they are generally interested in it only as a challenge, as an obstacle standing between them and what they would ultimately like to establish: that, namely, there is a real and all-important difference between the perception that is anchored in how the external world actually is and the perception that would seem to come from inside of us. There is a difference between waking and dreaming, in other words, and waking for them is incontestably the superior state to dwell in and the only one that is worthy of a philosopher. For philosophers seek the truth, which is something that can be furnished only to a mind not currently subject to the chimeras of psychosis, of dreaming, or of drugs.
But again, the problem is ancient, which is a pretty reliable sign that it is also intractable. For all our efforts, we still are not one step closer to apprehending the things in themselves. It is not that science hasn’t progressed—of course it has—but rather that the problem is conceptual and not empirical. You can’t perceive the thing that lies behind what you are perceiving, since the instant you do perceive it, it no longer lies behind but is front and center. Given what appears to be this logically necessary stalemate between us and the world, it seems inevitable that alternative accounts of the fundamental nature of reality—alternative ontologies, as we say—should keep returning and drawing off at least some philosophers who get fed up with an external world that demands our loyalty yet refuses to show itself.
In at least some of these alternative ontologies, the visions that come to us unbidden, in the liminal states of insobriety, hypnagogia, or theurgic ecstasy, are not to be dismissed out of hand as obstacles to our apprehension of truth, but may in fact be vehicles of truth themselves. Here I am aware I’m pushing up against the limits of respectability dictated by the implicit norms of my discipline, but I’ve gone about as far as I was ever destined to go in the ranks of this guild, and I’ve got nothing, and no one, to be afraid of. So I’m just going to come right out and say it: I am a philosopher who has taken an interest, of late, in psychedelic experimentation, and I find that my experiments have significantly widened the range of accounts of the nature of reality that I am disposed to take seriously. If you think you are in an emotional state to handle it, and in a legal jurisdiction that permits it, and you think you might benefit from being jolted out of your long-held ontological commitments, then I would recommend that you try some psychotropic drugs as well.
I will not exaggerate the benefits. I still have no clue what this brief crack of light I call “my life” really is, nor how I got here, nor where I’m headed. But I am significantly less cocky now, my cluelessness is more evident to me, a constant that accompanies me in each moment of the day. No one seems more pathetic to me, now, in their own cluelessness, than the self-styled “realists” who prejudicially and without any grounds go on supposing that they have a firm grasp of concepts like “nature,” “matter,” “being,” “thing,” “world,” “self,” that this grasp flows directly from their acceptance of the plain evidence of reason buttressed by empirical discovery, and that the question of how many kinds of being there are, and of the nature of these beings, is one that has been definitively settled over the past few centuries of naturalistic inquiry.
If this new reflection of mine appears too vast, consider the following scene from a time we conventionally call “the scientific revolution.” A missionary finds himself in what is then known as New France, though the truth is there remains next to nothing French about the place. He is living with the Hurons and trying to convince them of the urgency of converting to Christianity. On some days the group’s leader, a sharp and dignified old man, seems disposed to accept the offer; on others he wakes up from dreams that tell him Jesus Christ is a malevolent supernatural being who has sent another such being among his people to bring them to ruin. Each morning the missionary wonders whether the old man’s latest dream vision will spell the death of him. He recalls his earlier life in Europe and the new philosophy of René Descartes, who claims to be able to prove that our waking life is real, while our dreams are only a delusion. It dawns on him that his new hosts see things in more or less the opposite way.
It dawns on him, further, that it is this opposite way, and not the new way of modern philosophy, that is more or less the default setting of all of humanity, while Descartes and the other moderns constitute a small minority of dissenters, who have worked their way, by great effort, into what is ultimately a rather counterintuitive picture of human life, one in which the great preponderance of what is running through our heads at all times, but especially in dreams and other ecstasies—all the dazzling parade of sights and sounds and spirits, specters, ancestors, anthropomorphic animals, theriomorphic divinities, theomorphic stones, countless other permutations I can’t even name, and infinite swarms of fleeting and fugacious beings—all get in the way of our efforts to orient ourselves in this life. The missionary begins to wonder whether he really knows any better how to live than the oneiromancers he has ostensibly come to enlighten. But he has little time to indulge this question, as he fears the old leader may wake up at any moment and pass a death sentence on him. He writes a letter to his Father Superior in France, begging for a transfer out of there and back among the people who know, or think they know, the difference between appearance and reality.
Philosophers today, at least in the English-speaking world, almost all take for granted that the core Cartesian doctrines are theoretical nonstarters. Yet we all remain the children of Descartes, to the extent that we take for granted that the day is more disconcealing of truth than the night. We do catch some small glimpses of alternatives here and there, and from time to time over the past few hundred years a countertendency will emerge—the psychoanalysts’ concern to center the dream-life, the consciousness-expanding spirit of the 1960s counterculture. So it is with them that I throw in my lot. I am not a Freudian, nor a hippie, yet I believe, now more than ever, in part thanks to age and what I like to think of as an accrual of wisdom, in part thanks to psilocybin and muscimol, that our liminal states of consciousness may well be consciousness at its most veracious.
Alongside drug use, another implicit prohibition of the guild of philosophers is that you really should not ask, in open and childlike terms, a question as general as “What is the meaning of life?” Yet it is just this question that came to press upon me over the past years, with an intensity I could not ignore.
When my grandparents died long ago, I was heartbroken of course, but they had been old, and I was young, and I couldn’t see what it all had to do with me. Things were different when my father died in 2016. With his disappearance, all of a sudden the basic conditions of my own existence hit me like a revelation. He had had (note that pluperfect) a good long life, but now it seemed so absurdly brief to me, as if this being had just popped into existence, instantly began babbling a few favorite stories over and over again like a talking doll, a few beloved half-truths and misremembered factoids, only to pop right back out again, leaving me agape and wondering: Holy shit, who was that? What was that? Two years later my mother was diagnosed with the same common illness he had, with a name we hear every day and read about constantly in The New York Times’ “Well” section and other clickbait venues, but that I find myself unable even to say or write. Throughout this era of loss, I have been sharply attuned to the fact that I myself am no longer young and that my parents’ fate has everything to do with me. They are me, just not in every respect at the present moment. I am them, but on a slight delay, and I find myself concerned with not spending the rest of this brief flash clinging to half-truths of my own. I want to know what this is all about, or at least, if knowledge is not to be had, I want to arrive at some equanimity of soul, where this condition of ours should no longer appear so absurd, so unacceptable, and where the veil that occludes my access to the world at least is no longer covered over by an additional veil of tears.
The sense of loss intensified with the beginning of the pandemic and the forced isolation it brought down on the world. I was drinking heavily at the time, as I had been for many years. By the time I finally stopped using alcohol for good, just over two years ago, there was no joy at all left in it, no celebration, as there was in my younger life of at least some imperfect stab at bon-vivantism. It was simply an addiction, and one that darkened the veil through which I am constrained to make sense of the world. So I quit it, at long last. But rather than feeling liberated and good about my healthy new start, it was only then that I fell into the deepest depression I have ever known, deeper than I ever could have imagined possible. I was suddenly cut off from the only means I had of comforting myself, and of charging up the world with at least a sort of counterfeit magic. Nothing I had valued in my earlier life, my idiotic careerism, my foolish vainglory whenever I got something published, had even the faintest trace of significance now. I could still conjure, from somewhere, a semblance of caring about my career and so on, but I truly did not care. I no longer even understood how it could be possible to care about such nothings as fill up a human life.
When the lockdowns ended, I summoned my forces as best I could, crawled out of my hole, and began to make the trip as often as I was able from France to California in order to visit my mother. I had been vaguely aware of the recent legislative developments in certain US states surrounding the consumption and sale of cannabis, but it was only on a whim, in the middle of one of these visits, that I turned to Google to find the location of the dispensary nearest me. I had tried marijuana a few times in my earlier life, but it had had little effect on me, and in any case I considered it trashy and beneath me in all its cultural significations. But because, now, I no longer cared about any of the judgments I had made in my earlier life, positive or negative, I found that I really could not care less what the cultural position of cannabis was, and I was perfectly happy to show my ID and stand in line with all the chewed-up old army veterans, all the underemployed marginals, all the discarded Americans, my brothers and sisters, at a dispensary on the very seediest side of Sacramento, in a place no zoning law had ever touched. No, I find I’m not putting this plainly enough. I was happier there than I had ever been in any cave à vins in Paris, getting my ear talked off by some French wine merchant about terroir and bouquet and all these supposed properties of the drink that I, anyhow, was never able to detect. While I had never smoked a joint correctly in my younger days, I found that the new abundance of tinctures and oils and other alchemical refinements of the THC molecule were just what I needed to start to see the world, again, as some sort of meaningful whole.
Early in my new life as a late-blooming pothead, one thing that struck me was just what a crummy deal we in the West had been given, whereby all mind-altering substances had been prohibited and stigmatized, except for the one that has such negative medical and social consequences in its overuse as to be described in terms of disease, and that only ever alters consciousness downward, from the more to the less vivid. Alcohol might make us dance and chatter for a short time, but its technical classification as a “depressant” is surely the correct one. That wine is a central sacrament of Christianity, moreover, which in its early centuries seems to have had some interest in stamping out vestiges of pagan rituals relying on other, more intense varieties of mind alteration, seemed to me suddenly to be a rather serious argument against Christianity. It turned us into drunks, I reflected, and made us forgetful of the myriad other ways to make use of the fertile bounty of nature, particularly in its vegetal and fungal expressions, in order to see the world differently. Just a few edibles in, and I was already gravitating toward some kind of neopaganism.
Cannabis, though generally not considered a “psychedelic,” nonetheless has something of the power this word was coined to capture: It makes the soul’s nature manifest to itself. Experiences vary, of course, but in my case it does several things at once. It induces a sort of bodily ecstasy; it presents a vivid spectacle of patterns and figures before the eyes (especially when they are closed); and most interestingly, I think, it dissolves what I ordinarily experience as the metaphysical unity of the self, with all its memories and its steady persistence through time, and makes it temporarily difficult to comprehend how I ordinarily go about my life as if the self I present myself as being were a real thing, or at least anything suitable for presentation.
There is a psychiatric phenomenon, one most of us would ordinarily hope to avoid, that is known as “depersonalization,” in which a person becomes convinced that their own life is not real, that the memories they have, even the body they have, is not theirs. In the depths of depression I came close to something resembling this condition, and it was terrifying. Stoned, by contrast, I have approached a state that is at least a cousin of depersonalization, yet I have found that it is mostly neither enjoyable nor terrifying, but simply revealing. We are, after all, quite likely not unified metaphysical subjects but rather complex assemblages of cells that facilitate an illusion of unity for as long as the assemblage endures. I will not affirm here any dogma, not even the naturalistic account of biological death to which I have just alluded, but will only say that there are several plausible accounts of what a self is on which we are indeed mistaken to suppose that it exists any more than, say, an image of a flamingo briefly manifested on a screen by colored pixels.
But goodness, here I am, still philosophizing like a stoned undergrad in a black-lighted dorm room. Ridiculous. Philosophers aren’t supposed to philosophize; they’re supposed to “do philosophy,” as the professional argot has it. The guild’s prohibition on drugs, perhaps, is linked to the fact that these lead us into a philosophizing of the most freewheeling and unhinged sort. But just as in the middle of a bad trip, it’s too late to pull back now. So let me get to the heart of the matter.
Beginning around 2018 I began writing essays, blog posts, polemics, and at least a few quasi-scholarly articles against the usurpation of classical models of the human being by metaphors drawn from the algorithmic technologies that surround us in the contemporary world. These efforts eventuated in my 2022 book, The Internet Is Not What You Think It Is.
That same year I also published, in Liberties, a resolutely negative review of a new book by my philosophy colleague David Chalmers, Reality+: Virtual Worlds and the Problems of Philosophy.
Chalmers is generally sympathetic to what has come to be called the “simulation argument,” the essence of which may be boiled down to the idea that what we think of as “its” have their ultimate causal ground in what are in fact “bits.” That is, what we take to be physical reality would better be conceived on the model of the virtual realities our machines have begun spinning out for us over the past few decades.
My criticisms were in part grounded in my perspective as a specialist in the history of early modern natural philosophy. If you know anything about 17th-century science, you will know that people at the time were particularly impressed with the most cutting-edge technologies of the day, most notably clockworks. Some people, who styled themselves “mechanists,” were so impressed as to propose that the entire universe is best understood on the model of a horologium. And this is a pattern we see again and again in the history of science: The latest shiny gadget, whatever it may be, becomes such a centerpiece of human attention that we find ourselves unable to resist seeing it as a sort of epitome of reality as a whole.
But what a coincidence it would be, really, if the entire world turned out to share in the same nature as a technology that only came into existence within our own lifetimes! “The world is like a dream” seems a perfectly plausible proposition; “The world is like Pac-Man ” seems a crude fetishism. A rigorously historicizing perspective on the simulation argument, in other words, quickly reveals it to be little more than a reflection of presentist myopia. I certainly have no qualms about the idea, defended by Chalmers, that the world is likely not at all as it appears to us. It’s just that when I go searching for alternatives to these appearances, it is not first to our recent technologies and to their cultural ramifications in gaming and other such domains that I turn.
Yet I also confess that my review of Reality+ was at least to some extent unfair and overly harsh. In the end, what displeased me most about it were not its arguments but its tone and authorial voice. It is, to be blunt, a bit dorky, with its narrow range of cultural references to TV shows and pop songs about which I could not care less, and its obvious rootedness in online cultures of gaming and coding and geeking out that I have always shunned. But philosophers are supposed to see past such superficial differences. If I can admire a 10th-century Islamic theologian for his ingenious use of arguments drawn from Aristotle, I ought to be able to appreciate Dave Chalmers, who is, after all, my contemporary and my guildmate too.
But something else has begun to worry me about my earlier critique, beyond the impropriety of dwelling on these cultural differences, on the puerile conceit that Chalmers is a dork whereas I am cool, and it is that in recent times, my mind altered with the help of chemicals, the world has indeed come to appear “glitchy” to me, in just the way the simulationists expect that it should. Under the influence of drugs, the world really does seem to me more like a computer simulation than like a clock, or a loom, or a chariot wheel, or anything else we have come up with so far.
Let me walk that back a bit. The glitches are not exactly as the simulationists, at their most indulgent, like to imagine them. I see no cascades of glowing green 0s and 1s, nor clean Tron-like geometric lines extending off into the horizon, not to mention cats that seem to flicker like an old UHF channel as they walk by. The glitches are not something seen at all, but rather something that characterizes the mode of consciousness in which the totality of the world, and of memory and experience, is apprehended.
There are two such principal glitches. The first has to do with the experience of time. Under the influence of mushrooms, I have found, temporal duration can sometimes go the same way as I have described the self going under the influence of THC. Psilocybin is far more difficult to obtain through legal channels, unfortunately. A loophole in the Netherlands enables us to purchase the “truffle” part of the fungus; a handful of jurisdictions in California allow for psilocybin’s possession and use but not its sale. Meanwhile, muscimol, the active ingredient in the Amanita muscaria, or fly agaric fungus, so well attested in traditional religious practices throughout Eurasia, is legal in 49 states, and common, alongside cannabis, in the dispensaries of New York. While I have had some interesting experiences with psilocybin recently, it is muscimol, purchased in a rather louche head shop on the Lower East Side, surrounded by tricolored insignia of pan-African pride, images of neon aliens, the inescapable Bob Marley, that has best succeeded in bringing me out of my ordinary experience of the fixity of my personal identity, and of the temporal boundedness of my existence.
In his 1921 work, The Analysis of Mind, Russell reflected that there is no logical impossibility in the hypothesis that the world sprang into existence five minutes ago, “with a population that ‘remembered’ a wholly unreal past.” What to Russell’s lucid and unaltered mind seemed a logical possibility has seemed to me, on psychedelics, very nearly self-evident, except that the five minutes are reduced to the present instant, and it turns out that the real mistake, in our ordinary apprehension of our existence, is to conceive it as unfolding in time at all.
What does this have to do with simulationism? Consider, first, that in an artificial system that rises to the level of consciousness, such as future iterations of GPT or LaMDA might become, this consciousness could not be the result of any slow evolutionary process with antecedent stages of mere sensory perception. The consciousness of such a system would simply pop into existence at the moment the programmer behind it all hits Start. It would not be a hard-won consciousness, moving up through photoreception, olfaction, and other such physiological capacities that now serve in part to constitute our consciousness as biological entities (if that is what we are) but did not first emerge for the sake of consciousness. When we first started smelling the world around us, evolutionary theory tells us, there was as yet no plan for us to someday start cognizing that world. It all just worked out that way.
In an artificial system, by contrast, such as the AIs we are currently seeking to train up, it is cognition that comes first, and likely last. While the very idea that our AIs are approaching consciousness is controversial, of course (and I will not take sides on it here), we may at least agree that it is easier to make our machines cognize the world than to make them smell the world. That is, we are training the machines up to know things, and among the things they know it might turn out that they will be able to know that they know things. But the idea that there would be any accompanying bodily phenomenology to this knowledge is plainly nonsensical. What is being called “embodied AI” indeed recognizes that machines will most likely learn to think like humans if they are outfitted with bodies and made to experience the world. But this experience of the world is typically conceived in terms of navigation in space, which can already be observed among the canine-shaped patrol robots ominously advertised by Boston Dynamics.
If we want to call these assemblages of silicon and electricity “bodies,” they are so different from ours that we can really have no idea what bodily experience would be like for them.
Or can’t we? It seems to me we would likely have to suppose, at the very least, that for an AI there could be no experience of temporal duration as we ourselves know it. In particular, a conscious AI would not have any experience of deliberating in time, of “thinking through” a problem in the same way one “moves through” a tunnel. Rather, its change from one state to the next would be instantaneous, and for this reason the phenomenology of the “before” and “after” would be either nonexistent or so different from our own as to be indescribable in the same terms. And it is something like this phenomenology, I think, that the experience of psychedelic drugs can reveal to a person, where there is no time in the usual sense and memories are all just as much a part of the “now” as anything else.
It is not, or not only, my limitations as a writer that compel me to admit the impossibility of fully conveying what this is like. After all, we’ve only got a few tenses to work with for our verbs, though a curious rendering in the King James translation of the Bible might give us some hint of what it would be like to have an “eternal tense”: “Before Abraham was,” Christ says in the Gospel of John, “I am.” This is not a pluperfect, as one might ordinarily expect, where Christ claims simply that he already “had been” further back in the past than another personage. Rather, it is a shift to what superficially looks like the present tense, as if to suggest that, in his case, past, present, and future simply don’t apply. I have not checked the Greek, which alone would settle the matter of what this verse actually means, and I am not here to wade into any abstruse Christology, but I do want to suggest that that “am” captures something of the experience of at least some mind-altering substances.
The second “glitch” has to do with one’s perception, on mind-altering chemicals, of what we might call a vastly expanded social ontology, of the consciousness of a community of beings that extends far beyond the human and perhaps beyond the corporeal. The experience of such a social ontology, it seems to me, is just what you might expect of an artificial consciousness that is trained up, as our current rudimentary AIs are being trained, in the primary aim not of navigation of an external world but rather of prediction based on a sharp attunement to the patterns that play out in other people’s, or other beings’, minds.
Shortly before I began experimenting with drugs, I found myself spontaneously, and quite surprisingly, attuned to a much more densely populated world of other minds, or of fellow beings in the full and proper sense, than we are ordinarily expected to recognize. Long ago my grandfather built a wooden deck in front of our little vacation house on Lake Almanor in the northeast of California. There was a baby pine shooting up underneath it, and he could not bring himself to cut the sapling off from its source of light and life. So he constructed the deck with a square opening through which it could continue to grow. On my first visit there after the lockdowns ended, I saw that proud tree reaching up into the sky, now about as wide in diameter as a basketball. The tree was in its forties now, almost as old as I was, and it suddenly struck me that I had passed most of my life with this tree, yet I had neglected to think about it, to hold it in my heart and thoughts, at nearly every moment of all those years. “I’m sorry I left you and forgot you,” I said in my mind. “I am so, so sorry.” It seemed to me now that the tree was my adoptive sibling, my blood brother (though I had never pricked myself on it), and in that state of mind any argument to the effect that it is “just a tree” would have been incomprehensible. You might as well have resorted to such locutions as “just a human,” “just an ocean,” “just an angel,” “just the world.” I was not on any drugs at that moment (other than antidepressants, which as far as I can tell have never done shit for me), but it gave me a brief glimpse of what I would subsequently be able to reexperience with chemical assistance.
Research on fetal mice has shown fairly conclusively that the development in the mammalian brain of a capacity to navigate obstacle-filled space develops quite separately from any cognition of social reality. Mice get ready to move through the world by dreaming about that world before they are even born. It’s hard to say what a mouse’s experience of other minds is like, but at least in human beings it seems clear that our cognition of the bare external world, of everything that goes by the pronoun “it,” is quite independent from our second-person experience, of all that is covered by the pronoun “thou.”

Descartes, curiously, neglected to reestablish other minds after he had razed all of his beliefs through the method of radical doubt in his Meditations of 1641. But the problem of second-person experience would return to philosophy with a vengeance a few centuries later under the name “phenomenology,” in which the starting point of all theoretical reflection is that being in the presence of another being, with an interiority like ours, is fundamentally different from being in the presence of, say, a brick wall. Martin Heidegger would articulate this difference in terms of Mitsein, or “being-with.” What are the entities in our field of experience that we are able to “be with”? Most of the time I find that I can be with cows, that to stand near a cow is to “vibe” with it. Being with a tree is an experience that is harder to come by. But one thing psychedelics can help illuminate is the extent to which the limits of Mitsein are not so much a reflection of the intrinsic properties of various external entities as they are, simply, of our attunement.
When we change our tuning, even the brick wall can seem to have been dismissed too hastily.
If social ontology develops independently of the cognitive capacities that enable us to navigate the external world, and if we are able under some circumstances to encompass potentially everything within our social ontology, then we might begin to wonder about the viability of our distinction between the “its” and the “thous,” between the third and second person. On mushrooms, there is a strong perception of the mutual constitution of mind-like beings by one another, so that my understanding of what I am becomes inseparable from all sorts of entities I am usually able to bracket as distinct from me—trees, clouds, mice, and so on—and these entities all, in turn, appear to be constitutive of one another.
There is a very succinct naturalistic account of why the world comes to appear to us like this under certain circumstances: It appears this way because this is how it in fact is. I would be nothing without all the clouds and trees and so on; and my eventual death, in this light, might best be understood as the end of a long campaign of stubborn resistance to this obvious fact—not the loss of anything with any real independent existence but only an anomaly within an order of existence that strives ever to even things back out.
This naturalistic account, however, rides alongside an equally compelling “virtualistic” account of what is happening. If the world were to turn out to be “virtual,” and the virtual consciousnesses within it had been designed with the aim of modeling and predicting one another’s intentions, just as AI researchers say their machines are designed to do, then it should not be at all surprising to find ourselves, sometimes, in a state of mind where other minds appear wholly to exhaust what is out there in reality. In other words, one way of thinking about a virtual world is as a world entirely constituted by other minds. And this is indeed how the world comes across to us, at moments, when we are thinking about it with chemically enhanced perception.
But are any of these lucubrations to be taken at all seriously? Or do they just describe how the world appears to one sorry fellow who’s got a “brain on drugs”? (Readers of a certain age will at this point picture an egg in a frying pan.) Well yes, of course it’s a brain on drugs, but this just returns us to the original problem: Your brain is always on drugs. That is, there is always a neurochemical correlate to any of your conscious perceptions whatsoever. You might be tempted to say that supplementing gets in the way of correct perception, and that the only reliable way of apprehending the world as it is must depend only on the default setting of the mind, with no extras. But again, even this setting delivers us delirious hallucinations for about eight hours out of each 24.
Moreover, it is hard to conceive of any valid argument against supplementation. The substances are out there in the world, just like the food we eat is out there—and if we did not eat it, very soon we would start to hallucinate, and eventually we would cease to have any conscious perceptions at all. (Indeed in the history of ecstatic practices, fasting is perhaps as common as drug-taking as a means of getting out of one’s ordinary range of conscious experiences.) The fact that we have to eat some sort of nutritious organic matter or other, while consuming psychedelic plants or fungi is strictly optional, is certainly relevant to the moral regulation of drug consumption, but it is hard to see how it is relevant to any epistemological determinations we might make about the ability of a mind to deliver knowledge of the world as it is. The undrugged mind may be more reliable in certain respects, since it is less likely to lead you to try to fly off your high-rise balcony, and it is better able to help you stay focused on present dangers and tasks necessary for survival. But this in no way means that the representations it gives you of the world are truer.
My undrugged mind, to borrow a witticism from J. L. Austin, delivers to me a world of “medium-sized dry goods” and little else. My drugged mind delivers to me spirits or djinni or angels or I don’t know what to call them. It presents to me trees that are brothers and clouds that are old friends and cracks in the walls that spell out warm messages from solicitous invisible beings and infinite swarms of lives, all swirling and pulsating around me. Which is correct? I honestly don’t know anymore. My colleagues will tell me they know, but I don’t think they do either.
Just as I was able to re-find my fraternity with the pine tree unaided by psychedelics, so too can a person work their way unaided to a point of view on the world in which it is teeming with infinite other points of view. This is, broadly, the philosophical view of my greatest intellectual hero, the 17th-century philosopher Gottfried Wilhelm Leibniz (who was, among other things, a pioneer of computer science). Almost certainly too much of a square ever to have tried any of the fungal supplements that abound in the landscapes of northern Germany, Leibniz nonetheless was able to arrive at the conclusion that the only meaningful sense of the verb “to be,” as he put it, is “to have something analogous to the ‘I.’” That is, there is no world but the community of subjects, some of them human but most of them something else entirely.
Leibniz was not, to say the least, a deviant weirdo. As for me, it is only at the moment I decided to take the risk of falling in with the deviant weirdos, of moving with the wrong crowd and losing my place in the guild of philosophers, that I came to believe he is probably right about things. A true genius, he seems to have got there unaided. But we all do the best we can, each according to our capacities.
I am likely fortunate to live, most of the time, in a jurisdiction where none of the relevant substances are permitted by law, and so to be able to indulge my curiosity only punctually. There are many experiences I have not yet had—of DMT, for example, which I am told is the most potent of all in showing us the variety of species of beings that ordinarily remain hidden. (If you are a clinical researcher in such matters and would like a volunteer for your experiments, hit me up.) In any case, I suspect I have already found what I was looking for: some new knowledge, and at least a bit of equanimity. While I remain as uncertain as ever about the ultimate structure of the world, I also have new inclinations, and new sympathies, toward accounts of it that had previously struck me as altogether off the table. That widening is itself a sort of newfound knowledge, even if it contains no new certainties. As to equanimity, there really is nothing like a sharp experience of the illusoriness of time to make a person less anguished by the brevity and apparent senselessness of what we experience as our temporal sojourn. And there really is no more comforting feeling than to arrive at an awareness of the pervasive and dense presence of other beings like oneself—or at least to arrive in a state that seems to attest to the existence of such beings.
The world is not what it seems—that’s for sure. Even if any positive determinations about how it actually is would automatically become new varieties of mere seeming, it is good and edifying to explore the alternatives to our standard account. The great mistake of the psychedelic gurus of old was to mistake the mode of perception that drugs afforded them for a sort of revelation, which is really just to trade one dogmatism, that of common-sense “realism,” for another.
I do not know what the world is, nor what is “keeping the stars apart,” to borrow an evocative line from E. E. Cummings. But mind-altering substances have helped me, at a fairly desperate point in my life, to dwell in that uncertainty with greater ease, to “own it,” as they say, and no longer to feel so dreadfully apart from the stars.
This article appears in the April 2023 issue.
"
|
13,471 | 2,018 |
"Can This Man Help Uber Recover From the Travis Kalanick Era? | WIRED"
|
"https://www.wired.com/story/uber-move-slow-test-things"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jessi Hempel Business Can This Man Help Uber Recover From the Travis Kalanick Era? Uber CEO Dara Khosrowshahi photographed in March 2018 in "High Bay" at the Advanced Technologies Group facility in Pittsburgh.
Floto+Warner

In the late 1950s, a weapons maker called the Martin Company received a contract to build the first Pershing missile. It was to be the most sophisticated mobile weapons system on earth: 5 tons of metal and precision technology designed to deliver a nuclear warhead from up to 460 miles away. Should it ever be used, there would be no margin for error. It had to be perfect. And the US Army wanted it delivered quickly.
The task of ensuring this timely perfection fell to Philip Crosby, a quality-control manager at Martin. To break with his industry’s wartime habit of tolerating small mistakes in feverish production, Crosby came up with a philosophy he would later call Zero Defects. The idea was, basically, to instill in workers the will to prevent problems during design and manufacture rather than go back and fix them later. Crosby’s philosophy went on to become a management buzzword, especially in the aerospace and auto industries, where a faulty gasket or a weak bearing could mean a fiery catastrophe. During the Apollo program, NASA even gave out little Zero Defects trophies—each one a cute pewter spaceman standing on the moon with the letters “ZD” emblazoned on his chest.
“I’m a big believer in the Zero Defects strategy,” said Dara Khosrowshahi, the CEO of Uber.
It was an overcast day in January, and Khosrowshahi leaned back in a chair at Uber’s San Francisco headquarters. Khosrowshahi had been running Uber for four months at that point. He’d left a stable perch at Expedia, the travel-booking service, to take over a company that had become synonymous with scandal and rule-breaking excess. And, having doused some of the raging fires left behind by his predecessor, Travis Kalanick, Khosrowshahi had determined that what Uber needed most was a quality-control philosophy borrowed from the middle of the last century.
This was, it must be said, a bit weird. Tech companies tend to celebrate the inverse of Zero Defects. Push out new code, experiment, screw things up, and fix them. “Move fast and break things,” as Mark Zuckerberg famously said in the early days of Facebook. And arguably, few companies have moved faster and broken more things than Uber.
In just nine years, Kalanick’s company grew from a vague notion that anyone should be able to summon a ride from their phone into a business valued at $54 billion and available in more than 600 cities on six continents. And it kept adding new services: Today Uber can facilitate a carpool to work and deliver your Dos Toros burritos so they’re still hot when they reach your table. Not satisfied with merely replacing taxis, Kalanick’s Uber began developing autonomous cars and trucks and even set up a skunkworks for flying electric cars. Along the way, the company left a trail of wreckage. It disregarded and even undermined laws and regulations; it squandered the loyalty of its drivers, who felt mistreated under its contractor system; and it became notorious for a workplace culture that exemplified the worst tendencies of the Silicon Valley bro. By the time investors moved to demand Kalanick’s resignation in June 2017, observers were calling Uber the world’s most dysfunctional startup.
But where others saw Uber’s travails as a symbol of Silicon Valley comeuppance, Khosrowshahi saw something less loaded: a sophisticated tech company that had taken on too much, too quickly, and whose systems groaned under the weight and confusion. Growth, not quality, had been its guiding principle for too long, he said.
Khosrowshahi, 48, has a neatly trimmed salt-and-pepper beard. On that day in January, he wore a long-sleeved black crewneck sweater, black pants, loafers, and brightly colored striped socks. In contrast to Kalanick, who is an inveterate pacer, he sat very still with his ankle crossed over his knee. Khosrowshahi’s vision for Uber, as he relayed it, was to ask people to do their jobs correctly every single day. “It’s a game of inches,” he said.
At the time, Khosrowshahi was referring to the need to fix small-bore things—a rider has to wait a few minutes longer than the app promised, a driver can’t get help with a technical issue—that add up to larger reputational problems. Then, in mid-March, came a true catastrophe. An Uber car operating in self-driving mode struck and killed a woman crossing the street in Tempe, Arizona. Within a few hours, the company announced that it was suspending its testing of driverless vehicles. When this article went to press, there was no official answer as to what had gone wrong, or whether quality controls would have made a difference. But suddenly, getting things right, every single time, seemed a lot more consequential.
At the San Francisco headquarters, senior product manager Jayanth Mahalingam (sitting/left) and engineering manager Haider Razvi (standing/right) work on the company's mapping team.
Alex Welsh

The story of Uber is so well known now that it feels almost like a parable: Kalanick, a reader of Ayn Rand and self-styled badass, teamed up with a friend to start a service that would help their buddies get around San Francisco “like ballers.” Uber blasted through every expectation to become a new genre of company, inspiring a bonanza of “Like Uber, but for X” clones (along with direct competitors like Lyft and, in China, Didi). The company came to embody a culture in which almost anything was acceptable. In a 2013 email instructing employees not to party too hard during a retreat in Miami, Kalanick summed up the ethos: “We do not have a budget to bail anyone out of jail. Don’t be that guy. #clm”—internet slang for “career limiting move.” The subtext: Your career won’t be limited by bad behavior, so long as it stops short of arrest.
Kalanick also fostered an undercurrent of paranoia and suspicion in the ranks. He had architects design difficult-to-find conference rooms at the end of dead-end hallways. The glass partitions were often covered over with paper, and inside small groups of people worked, empowered to launch new projects that no one else at the company knew about.
This cloak-and-dagger behavior extended outside the company. Richard Jacobs, a former Uber security employee, asserted that in 2016 the company hacked into competitors’ networks, impersonated riders on their platforms, secretly recorded people, and communicated internally using the encrypted Wickr app “for the express purpose of destroying evidence of illegal or unethical practices,” as Jacobs’ lawyer wrote in a letter in May of last year. (Uber says it hasn’t substantiated those claims but intends to “compete honestly and fairly” going forward.)

To Kalanick, any outward display of insecurity was a liability. He didn’t show it, nor did he tolerate it in others. He ran toward conflict, an attitude that helped the company push into new markets, and inspired staffers to put in long hours. But when Uber began to enter a tailspin—as it did precipitously in early 2017—this approach no longer conveyed authority.
The final unraveling of Kalanick’s reign started the week after Donald Trump was inaugurated as president. On January 27, Trump signed an executive order barring people from seven predominantly Muslim countries from entering the United States. Protests sprang up all over, and New York City taxi drivers went on a one-hour work stoppage at JFK Airport to protest the ban. Uber, however, kept its drivers on the road, fueling the perception that the company was profiteering. A #deleteuber social media campaign went viral, and more than 200,000 people wiped out their accounts.
Things got much worse from there. In February a former Uber engineer named Susan Fowler published a blog post alleging that her manager had propositioned her and that, when she complained to human resources, the company not only failed to act but lied to her and other women about his history of transgressions. The company, she wrote, defended the harasser as a “high performer.”
Just days later, another crisis hit. Waymo, the self-driving division of Alphabet, Google’s parent company, filed a suit against Uber, charging that the company had stolen trade secrets and technology.
Then, in March, The New York Times revealed that Uber had used a secret software tool to circumvent government inspections. That same week, a video surfaced in which Kalanick sat, legs splayed, between two female companions in the back of an upscale Uber Black car. When he berated the driver, he didn’t come off as powerful. He came off as an asshole.
By then, Uber was in full-on crisis, and it didn’t have systems in place to fix itself. For six months there had been no head of human resources. Liane Hornsey, who’d worked at Google in its fast-growth days, finally came aboard in January, just three weeks before Fowler published her post. Hornsey remembers attending her first all-hands meeting the next Tuesday. It was then that she realized the enormity of the task ahead. As Kalanick—the guy who’d referred to his startup in a 2014 GQ article as “Boob-er,” for the way it delivered women to him on demand—promised that things would get better, she watched a range of emotions play across employees’ faces from her seat on the stage. “There were people crying,” she recalls.
In the following weeks, Hornsey held listening groups. Many people began reporting their own mistreatment, while others defended Kalanick. Hornsey received 215 complaints about sexual harassment, discrimination, bullying, or retaliation. More than 20 people lost their jobs.
As scandal and reports of feuding between Kalanick and his board surfaced in the press, the company’s midlevel managers struggled to hold together what’s known at Uber as “the marketplace”—the network of drivers and riders, as mediated by the company’s technology. Daniel Graf, who worked in the product division, recalled “nonstop fire drills” during this period. His team knew the problem: The Uber app’s technological foundation needed an overhaul. Because the company had built its tech for a smaller service and added to it rapidly as Uber expanded, it had to be fortified and rebuilt. Meanwhile, the senior managers kept disappearing. Kalanick asked Graf to head the product division after his predecessor resigned abruptly. Then, on June 20, after a long battle with investors on his board, Kalanick resigned. “I had three bosses in one week,” Graf said. It was a destabilizing time.
Both Hornsey and Graf were named to a 14-person executive leadership team that ran the company while the board looked for a new CEO. Some, like Hornsey, were new. Others, like Graf, had been promoted when their bosses left or were fired. Among them were a vocal group of Kalanick supporters, who felt that even if he had been a belligerent jerk, his vision was essential to Uber’s future. Most knew little about any part of the company other than their own.
There was nothing glamorous about leading a fast-growing tech company by committee. The business was in chaos; Uber lost about 10 points in North American market share in less than a year. The team agreed that Uber was doing too much. It needed to address the basics. “We put 100 projects on pause right away,” Graf remembers. One project they prioritized, however, was building a new app for drivers, codenamed Carbon.
As head of human resources, Liane Hornsey received 215 complaints about sexual harassment, retaliation, bullying, or discrimination.
Alex Welsh

Of all those Uber has managed to anger—regulators, cabbies, riders—it has done worst by its own drivers. Treated as contractors, not employees, drivers have complained that they can’t make enough money under Uber’s pricing system. They have protested the service’s constantly changing rules. They’ve been frustrated when no one at Uber has helped to resolve problems quickly. At best, Kalanick seemed to ignore them, and at worst he intimated they’d be eventually replaced by autonomous vehicles.
By the start of 2017, the company recognized it had a problem. Only a quarter of the people who’d signed up to drive for Uber were still doing so a year later, according to news reports. Uber hadn’t made it easy for them. It hadn’t overhauled its driver app since 2015, and in that time it had added new services like Eats, in which drivers deliver food. To find their way, drivers had to switch back and forth between Uber and mapping apps like Waze, creating friction and frustration. Anything that goes wrong—a rider is standing on the other side of a busy intersection, say, or a carpool rider asks for a different drop-off spot—costs time, which is money.
The goal of Carbon—the new driver app—was to foster a perfectly efficient ride that would reduce the chance a driver would also drive for Lyft. But there was no way to roll out a driver app quickly, or even sort of fast. As Graf had found, the technology underlying the app needed a total rebuild.
There was also much more to address beyond building a sturdy tech foundation. To achieve maximum efficiency, Uber needed to create a map of a constantly changing world exactly as it exists in any given moment. A garbage truck is blocking a lane. There’s road work. A fender bender stops traffic for half an hour. All of these things can slow a driver down. And Uber provides rides in real time. Order a package on Amazon and you hope it’ll arrive tomorrow. Order a ride on Uber and you hope it’s already here.
With newer food delivery and also carpooling services, the challenges grow exponentially. Say a driver is delivering a burger. How do they locate an apartment on the seventh floor of one of several buildings in an apartment complex? Where do they leave the car while making the drop-off? The challenge of simply locating customers in three-dimensional space is huge.
Consider the blue dot that signals where you are when you open the app as a rider. If you’ve been using Uber for a while, you’ve probably noticed that the placement of that blue dot is more accurate than it used to be. But it still often locates you pretty far from where you are in real life, particularly when you’re in a dense city. Uber—or, more specifically, Danny Iland and Andrew Irish, whose startup, ShadowMaps, was bought by Uber in 2016—is working on that gnarly problem.
Khosrowshahi's deliberate pace makes some people uncomfortable.
Mapping services typically use the government-owned Global Positioning System, but GPS, Iland explains, was designed for things that fly or sail. Buildings can block your phone from receiving satellite signals, which can cause your phone to misidentify your location. Uber, along with many other mapping companies, has tried to solve this problem through map-matching, which combines GPS data with mapping software from a number of sources to guess your location. But it’s far from perfect.
Iland and Irish, who were PhD students at UC Santa Barbara when they started ShadowMaps, use a different process. They superimpose the signals your phone is picking up against a 3-D map—a technique called occlusion modeling—so they can see which signals are coming from satellites that have you in their direct line of sight. They combine the data points from those satellites to make a more accurate guess about your location. The process can even correctly identify whether you’re on the south or north side of the street.
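A toy version of that line-of-sight test can make the idea concrete. Nothing below is ShadowMaps’ actual algorithm; the skylines, satellite positions, and matching score are all invented. The sketch predicts which satellites each candidate position should see above the local building skyline, then picks the candidate that best agrees with the satellites the phone actually hears (a technique sometimes called shadow matching):

```python
# Toy 3-D model: for each candidate position, the "skyline" maps an
# azimuth sector (degrees) to the elevation angle blocked by buildings.
# A satellite is in direct line of sight only if it sits above the skyline.

def skyline_elevation(skyline, azimuth):
    """Elevation (deg) blocked by buildings toward a given azimuth."""
    for (az_lo, az_hi), elev in skyline.items():
        if az_lo <= azimuth < az_hi:
            return elev
    return 0.0  # open sky in that direction

def predicted_visible(skyline, satellites):
    """Set of satellites the model says should be in direct line of sight."""
    return {
        name for name, (az, elev) in satellites.items()
        if elev > skyline_elevation(skyline, az)
    }

def best_candidate(candidates, satellites, heard):
    """Pick the candidate position whose predicted visibility best matches
    the satellites the phone actually received."""
    def score(skyline):
        pred = predicted_visible(skyline, satellites)
        # reward agreement on both visible and blocked satellites
        return len(pred & heard) + len(
            (set(satellites) - pred) & (set(satellites) - heard)
        )
    return max(candidates, key=lambda name: score(candidates[name]))

# Satellites as (azimuth deg, elevation deg); values are illustrative.
sats = {"G01": (45, 60), "G07": (180, 25), "G12": (270, 40)}

# North sidewalk: tall buildings to the south block the low southern sky.
candidates = {
    "north_sidewalk": {(90, 270): 35.0},                 # south blocked below 35 deg
    "south_sidewalk": {(270, 360): 35.0, (0, 90): 35.0}, # north blocked below 35 deg
}

heard = {"G01", "G12"}  # the phone hears these; low, due-south G07 is blocked
print(best_candidate(candidates, sats, heard))  # -> north_sidewalk
```

In the toy data, the one satellite the phone fails to hear sits low in the southern sky, which is consistent only with standing on the north sidewalk, where buildings block that part of the sky. That is the sense in which the technique can distinguish the north side of a street from the south.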
This is just one of the technologies Uber is developing to improve maps and navigation. Another uses digital imagery to improve the accuracy of a driver’s estimated time of arrival. A different team is trying to improve the navigation system for drivers.
Carbon needed to combine these efforts elegantly, a process that would take at least 18 months. But in 2017, as competitors tried to lure Uber’s disgruntled drivers to their services, the company began introducing a series of small changes that drivers wanted. Riders could now tip, and drivers could organize their lives a bit better by setting in advance a general location for a few pickups each day, a feature known as Star Power. Uber also provided a phone line so drivers could speak to a customer service rep rather than sending urgent emails into the ether.
These changes bought Uber some time to build and test Carbon, but also introduced glitches. Over the summer, the new features had altered the delicate balance of demand and supply, so that riders were waiting slightly longer for pickups in some markets. At first it was hard to tell which features were responsible. Then, in August, the company put out a Star Power update that let drivers choose six daily destinations. If a driver wanted to, say, end up near their child’s school around 3 pm, they could request a trip in that direction. It was such a good idea that too many drivers used it. Gradually it became obvious that allowing drivers to choose so many daily destinations was the problem.
The Star Power update came in the same month that Khosrowshahi was named Uber’s CEO. He leaned on the product team to solve the wait-time problem.
Yuhki Yamashita worked on one of Uber’s big priorities—a new app that would keep drivers from jumping to the competition.
Alex Welsh One day in January, three months before Carbon was due to roll out, Yuhki Yamashita, a senior product manager, kicked off a presentation about the driving app. Half a dozen engineers and product managers sat around a conference table made from a live-edged black walnut slab. A team member beamed in via videoconference from the Los Angeles office. Graf, Yamashita’s boss, was at the table too. After testing Carbon for months, they’d finally arrived at a working beta version and had tested it with drivers, and they were ready to show Khosrowshahi.
As Yamashita described drivers’ reactions to the app’s improved features, Khosrowshahi leaned forward, elbow on table, chin in palm, a furrow deepening between his eyebrows. He seemed worried. Remember that in Uber lingo the “marketplace” is the platform where data is collected and processed to determine everything about how Uber works—from pricing to the routes drivers are encouraged to travel. Khosrowshahi jumped in during Yamashita’s presentation to ask whether any of the new features could “eff with the marketplace.” (Khosrowshahi is a man who doesn’t often swear.) Yamashita responded that, sure, any number of features could alter marketplace dynamics. He described a new pop-up bubble that was intended to predict the number of minutes a driver would have to wait until the next ride request. If the wait is too long, a driver might head to a different neighborhood or pull up a competitor’s app. With 3 million drivers completing 15 million rides every day, any small change could ripple through a system in unexpected ways.
Khosrowshahi reminded everyone that a new technology platform is usually a disaster at first. Why compound those difficulties by introducing so many new features? “I know we think they’re really cool, but I would first roll this out with some features off — the features that might affect the marketplace,” he said. “If something happens, we’re not going to know why.” Yamashita exchanged a quick glance with Graf, who sat at the corner of the table, his laptop flipped open. Most of the team remained poker-faced. Moving products into the world slowly was not how Uber had become a worldwide phenomenon in just nine years.
Yamashita had already noted that the team had been testing the app for 12 weeks. They’d launched beta versions with more than 500 drivers in six cities. They’d gone on ride-alongs and set up WhatsApp chats between engineers and drivers. This was way more testing than he’d ever done in his three years at Uber. “Usually we just do an A/B test, think it’s fine, and then start rolling it out,” he said.
Khosrowshahi was unmoved. Still fresh in his mind was the amount of time it took to isolate which feature had caused riders to wait longer over the summer. “Dude, with Star Power we found out four months later,” Khosrowshahi said. That would have been less likely had the company rolled out features more systematically.
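The discipline Khosrowshahi is asking for amounts to a staged rollout: enable one feature at a time behind a flag and watch a single health metric, so any regression points at exactly one launch. A minimal sketch of that idea (the feature names, wait times, and 10 percent threshold below are all invented for illustration, not Uber’s actual process):

```python
import statistics

def find_regression(baseline_waits, rollout_log, threshold=0.10):
    """Enable features one at a time; flag the first whose post-launch
    wait times regress more than `threshold` vs. the running baseline.

    rollout_log: list of (feature_name, wait_time_samples) in launch order.
    """
    baseline = statistics.mean(baseline_waits)
    for feature, waits in rollout_log:
        current = statistics.mean(waits)
        if current > baseline * (1 + threshold):
            return feature  # regression isolated to this launch
        baseline = current  # feature looks healthy; fold it into the baseline
    return None

# Invented numbers: average pickup waits (minutes) in one test market.
baseline = [4.1, 3.9, 4.0, 4.2]
log = [
    ("in_app_tipping",    [4.0, 4.1, 4.2, 3.9]),  # no change
    ("six_destinations",  [4.9, 5.1, 5.0, 4.8]),  # waits jump ~20%
    ("driver_phone_line", [5.0, 4.9, 5.1, 5.0]),
]
print(find_regression(baseline, log))  # -> six_destinations
```

Launched this way, a Star Power-style regression surfaces at the moment the offending feature goes live, instead of four months later after untangling several simultaneous changes.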
Khosrowshahi would rather go at a pace slow enough to hit perfection than tolerate pretty good. In other words, something like Zero Defects. But these values are so antithetical to the way Uber’s engineers have worked that even if the people gathered around the table wanted to work differently, they might not know how to go about it.
As the meeting wrapped up, Yamashita and Graf acquiesced. Graf suggested the rollout could move a bit more deliberately. “Let’s see if we can stage it a little more,” he said, marking the compromise on which, temporarily, they agreed to settle.
The self-driving hardware team works in an R&D lab in Pittsburgh.
Floto + Warner Born into a wealthy family in Iran, Khosrowshahi fled the country with his parents and two brothers in 1978 during the Iranian revolution. His family settled in Westchester County, New York. After graduating from Brown University, he worked as an analyst at the private bank Allen & Company and then landed at Barry Diller’s internet holding company, InterActiveCorp. That’s where he made his name. In 2002, just after the September 11 terrorist attacks, IAC bought a controlling stake in Expedia. Travel in the US had ground to a halt, but Diller thought it would come back, and it did. Expedia spun off from IAC in 2005, with Khosrowshahi at the helm.
When we talked in January, Khosrowshahi admitted that he was only just diving into Uber’s product strategy. He’d been too busy initially dealing with the company’s many emergencies: the Waymo trial; London’s ban on Uber drivers.
And he’d had to resolve the fallout from a hack of data from 57 million customer and driver accounts that Uber had failed to disclose for more than a year.
In January, Uber shareholders agreed to sell $8 billion worth of stock to a group of investors led by SoftBank. The deal also eliminated the super-voting stock rights that gave some board members, including Kalanick, heightened decisionmaking control. And a few weeks later, the company settled its lawsuit with Waymo, paying Waymo between $163 million and $245 million in company shares, depending on how you value Uber.
With those issues wrapped up, Khosrowshahi was ready to focus on his idea of quality. In February his deputies appeared at an all-hands meeting to sell the company’s 18,000 employees on the importance of one metric: the ratio of driver or rider complaints to rides booked. That ratio needs to be reduced—a lot—in 2018, he told them. (He won’t specify the company’s 2018 target, but he said, completely deadpan, “Zero is zero.” Note: Even the Pershing didn’t have zero defects.) He believes the ratio is a good metric to use because it can be improved only if operations, technology, and customer service work together. “It’s a unifier,” he said. At Expedia, he worked on reducing a similar complaint metric, and the experience cemented his belief: As the measure improved, so did sales.
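The metric he is rallying the company around is simple to compute. A toy version of it (the complaint and booking figures below are invented, not Uber’s):

```python
def complaints_per_1000(complaints, rides_booked):
    """Khosrowshahi's unifying metric: complaints per 1,000 bookings.
    Lower is better; "zero is zero" is the stated ideal."""
    if rides_booked == 0:
        return 0.0
    return 1000 * complaints / rides_booked

# Invented monthly figures for a single city.
jan = complaints_per_1000(complaints=1200, rides_booked=400_000)
feb = complaints_per_1000(complaints=900,  rides_booked=420_000)
print(f"Jan: {jan:.2f}, Feb: {feb:.2f}")  # Jan: 3.00, Feb: 2.14
```

Because the numerator can fall only when operations, technology, and customer service all improve together, the ratio is hard to game by any one team, which is what he means by calling it a unifier.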
No one I talked to described Khosrowshahi as charismatic. I watched him at an all-hands meeting in January where he took the stage for less than five minutes to introduce speakers, then stood along the wall with colleagues, arms crossed, apparently listening. His message—“driving quality is just as important as driving new features,” as he told me—offers a steady reassurance, but his deliberate pace has made some people uncomfortable. Among the product teams, in particular, there’s still some question about his acuity. Though he has a degree in electrical engineering, he’s a business guy. One former executive told me that there were many people “carefully watching whether Dara can step into and excel in the role of product leader and visionary.” One current employee, who largely approves of the new leadership, said he also misses the adrenaline rush that came with working under Kalanick. At Khosrowshahi’s Uber, people go home for dinner.
Then there are employees and alumni who believe the worst aspects of the culture Kalanick created can’t be so easily uprooted. “In everything from the way performance reviews were geared to the way bonuses were distributed, people were incentivized to backstab and undercut each other constantly,” says one longtime employee who left recently. “They were incentivized to be assholes.” That’s not something that changes in six months, even with new performance review metrics, new leadership, and an emphasis on getting things right the first time. Already, some people have moved on. Aaron Schildkrout, who started Carbon’s development, resigned in December. He’s now in New Zealand, meditating. In February, Graf announced he was leaving too.
Replacing Graf turned into a bit of a blunder. The company rescinded an offer it made to a former Amazon executive to fill Graf’s product chief role after discovering the guy wasn’t working for Amazon when Uber hired him. He’d left in 2017. This is the type of information you’d expect a company to know before extending an offer for a crucial executive position.
Nor has Kalanick completely disappeared.
He no longer has super-voting rights, but he does still sit on the board. The relationship between the two men, Khosrowshahi told CNBC in January, is “fine, but strained.” (Kalanick declined to be interviewed.) For most of last year, Uber’s efforts to develop self-driving cars were under scrutiny while the company faced allegations it had stolen Waymo’s technology. I spoke to the head of the unit, Eric Meyhofer, early one February morning as he was preparing for the trial in the case.[1]
He said that the distraction of the suit had slowed his team down. “It’s like pulling an anchor along,” he said.
Settling that case should have provided some relief. But then came the fatal accident in March. After the Tempe police released a dashcam video of the woman being hit by an Uber Volvo, several academics suggested the self-driving technology should have prevented the accident. The human operator in the car, whose role is to step in when the tech fails, didn’t stop the crash either. The incident suggested deeper problems.
The New York Times reported that Uber’s self-driving cars were having trouble with basic maneuvers, like operating next to big rigs, and its safety drivers had to intervene far more often than those of competing autonomous car projects. Then Reuters reported that Uber’s Volvos lacked enough sensors and so had a blind zone.
(An Uber spokesperson said, “Safety is our primary concern” in developing self-driving technology.) In late March, Arizona governor Doug Ducey demanded that Uber suspend its road tests in the state indefinitely for its “unquestionable failure to comply” with the duty to prioritize public safety. The accident and Uber’s conduct was becoming a political issue as well as a corporate one. The company had intended to offer driverless cars within 18 months, but with testing suspended, that’s unlikely.
When I talked to Khosrowshahi in January, he had not yet visited the Phoenix self-driving operation or the autonomous group in Pittsburgh (he would make his first trip there in March). As with other, less-urgent issues, he hadn’t gotten there. But he had been thinking about the particular challenge of testing a software product that powers a large object made of heavy metal traveling at high speeds in spaces populated with humans. “In software, the edge cases are a bug, and you follow a bug and you fix it,” he told me. “In autonomous, the edge cases are accidents that risk human lives.” The Tempe accident was just the sort of edge case he had hoped to avoid. The kind of problem that no one can fix after the fact.
Jessi Hempel ( @jessiwrites ) is a senior writer for WIRED.
This article appears in the May issue.
[1] Correction appended 4/19/17: A previous version of this article misspelled the name of the head of Uber's self-driving car unit. It is Meyhofer, not Meyerhoff.
"
|
13,472 | 2,020 |
"The true dangers of AI are closer than we think | MIT Technology Review"
|
"https://www.technologyreview.com/2020/10/21/1009492/william-isaac-deepmind-dangers-of-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The true dangers of AI are closer than we think Forget superintelligent AI: algorithms are already creating real harm. The good news: the fight back has begun.
By Karen Hao archive page William Isaac began researching bias in predictive policing algorithms in 2016.
David Vintiner As long as humans have built machines, we’ve feared the day they could destroy us. Stephen Hawking famously warned that AI could spell an end to civilization. But to many AI researchers, these conversations feel unmoored. It’s not that they don’t fear AI running amok—it’s that they see it already happening, just not in the ways most people would expect.
AI is now screening job candidates, diagnosing disease, and identifying criminal suspects. But instead of making these decisions more efficient or fair, it’s often perpetuating the biases of the humans on whose decisions it was trained.
William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also co-chairs the Fairness, Accountability, and Transparency conference—the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development—as well as the solutions.
Q: Should we be worried about superintelligent AI? A: I want to shift the question. The threats overlap, whether it’s predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.
There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we’ve seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale—in areas like predictive policing, risk assessments, hiring, etc. It’s clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile their own history with aspiration? We’re still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.
The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to.
Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability?
Q: How do we overcome these risks and challenges? A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you’re thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.
The second one is accelerating the development of the sociotechnical tools to actually do this work. We don’t have a whole lot of tools.
The last one is providing more funding and training for researchers and practitioners—particularly researchers and practitioners of color—to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.
Q: How far have AI researchers come in thinking about these challenges, and how far do they still have to go? A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: “Hey, there are some potential harms that could be done through these systems.” But they largely had not interacted at all. They existed in unique silos.
Since then, we’ve just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: “Okay, this is not just a hypothetical risk. It is a real threat.” So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.
Q: So are you optimistic about achieving broad-based beneficial AI? A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There’s the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That’s a model of how we could imagine engaging with other advances in AI.
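The finding Isaac cites came from disaggregated evaluation: computing accuracy separately for each intersectional subgroup instead of reporting one overall number. A minimal sketch of the mechanics (the records below are fabricated for illustration, not the actual study data):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Disaggregated evaluation: accuracy per intersectional subgroup.
    records: list of (group_label, predicted, actual) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Fabricated gender-classification results, just to show the mechanics.
records = [
    ("lighter_male", "M", "M"), ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"), ("lighter_male", "M", "M"),
    ("darker_female", "F", "F"), ("darker_female", "M", "F"),
    ("darker_female", "M", "F"), ("darker_female", "F", "F"),
]
acc = accuracy_by_group(records)
print(acc)  # lighter_male: 1.0, darker_female: 0.5
```

In this toy data the overall accuracy is 0.75, which looks respectable while hiding a 50-point gap between subgroups; surfacing exactly that kind of hidden gap is what the disaggregated breakdown is for.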
But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.
Q: What do you dream about when you dream about the future of AI? A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that’d be very empowering. And that’s a nontrivial thing to want from this technology. How do you know it’s empowering? How do you know it’s socially beneficial? I went to graduate school in Michigan during the Flint water crisis. When the initial incidences of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don’t get basic services and resources.
So the question is: If done appropriately, could these technologies improve their standard of living? Machine learning was able to identify and predict where the lead pipes were, so it reduced the actual repair costs for the city. But that was a huge undertaking, and it was rare. And as we know, Flint still hasn’t gotten all the pipes removed, so there are political and social challenges as well—machine learning will not solve all of them. But the hope is we develop tools that empower these communities and provide meaningful change in their lives. That’s what I think about when we talk about what we’re building. That’s what I want to see.
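The Flint work Isaac describes treated pipe material as a supervised prediction problem: estimate lead risk from parcels whose service lines had been verified by digging, then rank unverified homes so crews excavate where lead is most likely. The toy version below only shows the shape of that idea; the construction years, verification results, and decade-bucket model are invented, and the real project used far richer parcel records and a trained classifier.

```python
from collections import defaultdict

def lead_rate_by_era(labeled):
    """Estimate P(lead) per construction-era bucket from verified digs.
    labeled: list of (year_built, has_lead) pairs, has_lead in {0, 1}."""
    lead, total = defaultdict(int), defaultdict(int)
    for year, has_lead in labeled:
        era = (year // 10) * 10  # bucket by decade
        total[era] += 1
        lead[era] += has_lead
    return {era: lead[era] / total[era] for era in total}

def rank_parcels(rates, unverified):
    """Rank unverified parcels by estimated lead risk, highest first,
    so excavation crews can be sent where lead is most likely."""
    def risk(year):
        era = (year // 10) * 10
        return rates.get(era, 0.5)  # unseen era: maximum uncertainty
    return sorted(unverified, key=lambda p: risk(p[1]), reverse=True)

# Invented verification records: (year built, lead found?).
labeled = [(1925, 1), (1928, 1), (1921, 0), (1952, 1), (1955, 0),
           (1958, 0), (1983, 0), (1987, 0), (1985, 0), (1989, 0)]
rates = lead_rate_by_era(labeled)

unverified = [("712 Oak St", 1984), ("219 Elm St", 1926), ("48 Pine Ave", 1953)]
for address, _ in rank_parcels(rates, unverified):
    print(address)
# -> 219 Elm St, 48 Pine Ave, 712 Oak St (1920s homes carry the highest risk)
```

The point of the ranking is the cost saving Isaac mentions: digging in risk order means fewer wasted excavations of copper lines before the lead ones are found.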
by Karen Hao
This story was part of our November/December 2020 issue.
"
|
13,473 | 2,023 |
"Deep Tech | MIT Technology Review"
|
"https://www.technologyreview.com/podcast/deep-tech"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Podcast: Deep Tech Go deeper on our best stories and ideas.
Latest Episode Podcast: Can you teach a machine to think? 23m 42s about 3 years ago Building an artificial general intelligence begins with stopping current AI models from perpetuating racism, sexism, and other pernicious bias.
"
|
13,474 | 2,019 |
"How classroom technology is holding students back | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/19/131155/classroom-technology-holding-students-back-edtech-kids-education"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How classroom technology is holding students back By Natalie Wexler archive page Conceptual collage illustration of old and new education technologies, includes vintage photography.
Emily Haasch In a first grade classroom I visited a few years ago, most of the six-year-olds were using iPads or computers. They were working independently on math problems supposedly geared to their ability, while the teacher worked separately with a small group. I watched as one boy, whom I’ll call Kevin, stared at an iPad screen that directed him to “combine 8 and 3.” A struggling reader (like almost all his classmates), he pressed the “Listen” button. But he still didn’t try to provide an answer.
“Do you know what combine means?” I asked. Finding that he didn’t, I explained it meant “add.” Satisfied that I’d put Kevin on the path to success, I moved on to observe other students—and found their iPads displaying sentences like Round 119 to the nearest ten and Find the area of the following triangle in square units.
If Kevin didn’t understand combine, were other kids understanding words like round and area ? Not to mention square units ? Then I found a boy staring at a computer screen showing a number line with the question What number comes before 84? He listened to the instructions and tried 85, then 86, then 87, getting error messages each time. Thinking the problem was the size of the numbers, I asked him what number comes before four. “Five?” he guessed. It dawned on me that he didn’t understand the word before.
Once I explained it, he immediately clicked on 83.
I returned to Kevin to see whether he had been able to combine 8 and 3. But I found he was drawing bright pink lines on the iPad with his finger—one of the gizmo’s numerous distracting capabilities.
“Can you answer the question?” I asked.
“I don’t want to.” He sighed. “Can I play a game?”
The school that Kevin and his classmates attend, located in a poor neighborhood in Washington, DC, prides itself on its “one-to-one” policy—the increasingly popular practice of giving each child a digital device, in this case an iPad. “As technology continues to transform and improve our world,” the school’s website says, “we believe low-income students should not be left behind.” Schools across the country have jumped on the education technology bandwagon in recent years, with the encouragement of technophile philanthropists like Bill Gates and Mark Zuckerberg. As older education reform strategies like school choice and attempts to improve teacher quality have failed to bear fruit, educators have pinned their hopes on the idea that instructional software and online tutorials and games can help narrow the massive test-score gap between students at the top and bottom of the socioeconomic scale. A recent Gallup report found that 89% of students in the United States (from third to 12th grade) say they use digital learning tools in school at least a few days a week.
Gallup also found near-universal enthusiasm for technology on the part of educators. Among administrators and principals, 96% fully or somewhat support “the increased use of digital learning tools in their school,” with almost as much support (85%) coming from teachers. But it’s not clear this fervor is based in evidence. When asked if “there is a lot of information available about the effectiveness” of the digital tools they used, only 18% of administrators said yes, along with about a quarter of teachers and principals. Another quarter of teachers said they had little or no information.
In fact, the evidence is equivocal at best. Some studies have found positive effects, at least from moderate amounts of computer use, especially in math. But much of the data shows a negative impact at a range of grade levels. A study of millions of high school students in the 36 member countries of the Organisation for Economic Co-operation and Development (OECD) found that those who used computers heavily at school “do a lot worse in most learning outcomes, even after accounting for social background and student demographics.” According to other studies, college students in the US who used laptops or digital devices in their classes did worse on exams. Eighth graders who took Algebra I online did much worse than those who took the course in person. And fourth graders who used tablets in all or almost all their classes had, on average, reading scores 14 points lower than those who never used them—a differential equivalent to an entire grade level. In some states, the gap was significantly larger.
A 2019 report from the National Education Policy Center at the University of Colorado on personalized learning—a loosely defined term that is largely synonymous with education technology—issued a sweeping condemnation. It found “questionable educational assumptions embedded in influential programs, self-interested advocacy by the technology industry, serious threats to student privacy, and a lack of research support.” Judging from the evidence, the most vulnerable students can be harmed the most by a heavy dose of technology—or, at best, not helped. The OECD study found that “technology is of little help in bridging the skills divide between advantaged and disadvantaged students.” In the United States, the test score gap between students who use technology frequently and those who don’t is largest among students from low-income families. A similar effect has been found for “flipped” courses, which have students watch lectures at home via technology and use class time for discussion and problem-solving. A flipped college math class resulted in short-term gains for white students, male students, and those who were already strong in math. Others saw no benefit, with the result that performance gaps became wider.
Even more troubling, there’s evidence that vulnerable students are spending more time on digital devices than their more privileged counterparts. High school students in questionable online “credit recovery” courses are disproportionately likely to be poor or members of minority groups (or both). “Virtual” charter schools—which offer online classes and generally produce dismal results—often enroll struggling students. A national charter network called Rocketship Public Schools, which serves low-income communities, relies heavily on technology, with even students in kindergarten spending 80 to 100 minutes a day in front of screens. One study found that in schools serving relatively affluent populations, 44% of fourth graders never used computers, compared with 34% in poorer areas.
The dangers of relying on technology are also particularly pronounced in literacy education and at early grade levels. Unfortunately, to judge from my observations of classrooms at high-poverty schools like the one Kevin attends, that’s exactly how and when digital devices are commonly used. The bulk of the elementary school day—three hours or more, at some schools—is spent on “reading” and the rest on math. Especially in schools where standardized reading and math scores are low, subjects like social studies and science have largely disappeared from the curriculum. And the standard class format is to have students rotate through “centers,” working independently on reading and math skills while the teacher works with a small group. In the classrooms I’ve been in, at least one of the centers always involves working on a digital device.
Why are these devices so unhelpful for learning? Various explanations have been offered. When students read text from a screen, it’s been shown, they absorb less information than when they read it on paper. Another frequently cited culprit is the distraction the devices afford—whether it’s a college student checking Instagram or a first grader like Kevin drawing bright pink lines with his finger. But there are deeper reasons.
One is motivation. If Kevin had been asked to combine 8 and 3 by a teacher rather than an iPad, there’s a greater chance he would have been interested in trying to do it. “It’s different when you’re learning from a person and you have a relationship with that person,” cognitive psychologist Daniel Willingham has said. “That makes you care a little bit more about what they think, and it makes you a little bit more willing to put forth effort.” At least one education entrepreneur agrees. Larry Berger is CEO of Amplify, a company that develops digitally enhanced curricula in math, science, and literacy for kindergarten through eighth grade. Berger observes that while technology can do a credible job of imparting information, it’s not so good at demonstrating the “social usefulness” of knowledge. “For that,” he says, “you have to be getting that knowledge in a social context with other kids and a teacher, and ideally a teacher you want to be like someday.” While that may be a problem at schools that use a relatively modest amount of technology, it could be an even bigger one at schools like those in the Rocketship network, where one or two minimally trained supervisors oversee as many as 90 students during “Learning Lab” time. The schools have achieved impressive test results, especially in math, but an NPR investigation in 2016 found a repressive environment at many Rocketship schools. According to some parents and teachers, harsh discipline was used to keep students on task.
In addition to sapping motivation, technology can drain a classroom of the communal aspect of learning. The vision of some ed tech advocates is that each child should sit in front of a screen that delivers lessons tailored to individual ability levels and interests, often on subjects chosen by the students themselves. But a vital part of education is different kids bouncing their ideas off each other. I saw this in action on a regular basis in another, largely technology-free elementary classroom I followed through a school year. Under the guidance of their teacher, second graders—all from low-income families, including many that did not speak English at home—regularly engaged in debates about topics like whether Alexander the Great’s “ambitious nature” was “an inspiration or a flaw.” Allowing students to choose the topics they’ll learn about can also lead to serious gaps in knowledge for children who don’t know much about the world—or even for those who do. One personalized-learning skeptic has observed, “If allowed to choose my own content in elementary school, I would have become an expert in princesses and dogs.” Then there’s the difficulty of using technology to meet individual students at their actual level—as evidenced by Kevin’s failure to understand the word combine and his classmate’s difficulty with the word before.
Children are supposed to take “pre-tests” designed to steer them to software that provides just the right degree of challenge. But kids sometimes forget to take the tests. Even when they do, the program can make faulty assumptions about what they can understand. In a first grade classroom at another school, I observed a group of students using a reading comprehension program. One girl’s screen displayed a seemingly random collection of facts about bananas, including “Most bananas come from India.” That was followed by a multiple-choice question. Unable to read the word “India,” the girl asked a classmate where bananas come from. “From trees,” the classmate replied—which was not one of the possible answers.
But even if technology could be calibrated to meet students where they truly are—or to foster communal learning—there’s another fundamental problem. Technology is primarily used as a delivery system. Maybe it can deliver instruction better than a human being in some circumstances. But if the material it’s delivering is flawed or inadequate, or presented in an illogical order, it won’t provide much benefit.
The way Berger puts this is that for most things we want kids to learn, we don’t have a “map” that can be used to create software. By that he means, he told me, that in only a few areas is there a clearly defined set of concepts and a cognitively determined sequence in which they should be learned. In math, he said, “there’s a developmental stage in which brains are ready to think about part/whole, and if you try to teach fractions before that has happened, that doesn’t work.” Foundational reading skills are similar: first kids need to learn to match letters to sounds, and then they can learn how to blend those sounds together in sounding out a word. For pretty much everything else, Berger says, we really don’t know what should be taught or in what order.
What technology is often used for, especially in elementary schools, is practice in reading comprehension skills. Even in classrooms devoid of technology, children waste hours every week supposedly learning how to “find the main idea” or “make inferences.” The content is random—clouds one day, zebras the next—and in any event, it’s considered relatively unimportant. Teachers choose books to read aloud based on how well they lend themselves to demonstrating the skill of the week, and students then practice it on books easy enough for them to read independently. When computers and tablets are used, the programs take the same content-agnostic, skills-focused approach. In one classroom, I saw a first grader in front of a screen that displayed a choice of topics including Diwali, fast food, crayons, and Barack Obama. (It turned out the student had neglected to take the pre-test and couldn’t read any of the texts.) But as cognitive scientists have long known, the most important factor in reading comprehension isn’t generally applicable skill; it’s how much background knowledge and vocabulary the reader has relating to the topic. In a study done in the late 1980s, researchers divided seventh and eighth graders into two groups, depending on how well they had scored on a standardized reading comprehension test and how much they knew about baseball. Then they gave them all a passage about a baseball game. When the researchers tested the kids’ comprehension, they found that those who knew a lot about baseball all did well, regardless of how they’d scored on the reading test—and the “poor readers” who knew a lot about baseball did significantly better than the “good readers” who didn’t. 
That study, which has been replicated in a number of other contexts, provides compelling evidence that knowledge of the topic is more important to comprehension than “skills.” That means the way to build reading comprehension is to adopt a curriculum that has kids spending at least a couple of weeks on a particular topic, to build knowledge and the vocabulary that goes with it. That’s especially true for children from less educated families, like Kevin and his classmates, who are unlikely to pick up much sophisticated knowledge at home—and may lack even basic vocabulary like before.
Could technology help build knowledge? Perhaps. Software designed on principles drawn from cognitive science has been shown to boost retention and even critical thinking, when harnessed to a particular body of information. Amplify, unlike most other ed tech companies, publishes content-rich curricula for both reading and science. But Berger is wary of using technology as what he calls a “practice/memorization/automaticity support.” “The fear I have there,” he says, “is does learning get reduced to that?” In which case you might again confront the motivation problem.
So what role does Berger see for ed tech? Rather than asking “What are the parts of education that a computer can do instead of a human?” he thinks the question should be “What are teachers trying to do, and how do we help them do those things?” That means giving them a better understanding of what’s going on in the classroom, saving them time, and enabling them “to reach more kids directly more often.” The example he gives is a classroom where—as is not uncommon—there’s a wide range of abilities. Rather than the frequently taken approach of giving different students material of differing levels of complexity, Berger says, it’s better to give all kids the same content. That would enable all students to grapple with the same information. But he suggests then assigning them different tasks depending on their abilities. All students could be reading the Declaration of Independence, for example, but the more able writers might be told to compose an essay, while others could be asked to write one or more sentences, each one focusing on a key aspect of the document. For many teachers, that kind of “differentiation,” as it’s called, is very hard. Berger claims technology makes it easier to group students by ability, give them appropriate tasks, and assess their performance. Plus, he says, “it’s all invisible at the student level.” With computers, kids don’t know who is in which group.
That’s a far more modest role for education technology than most in the sector have advocated—possibly too modest. Videos and audio recordings can help bring topics to life or give kids access to texts they would struggle to read for themselves. Online textbooks can be easily updated. Math software could be used to facilitate debate between students who arrive at different answers to the same problem. Technology can also enable motivated, gifted students who might be bored in class to race ahead of their peers or take online lessons that aren’t taught at their school.
Still, recognition seems to be growing that technology can be counterproductive. Suburban Baltimore County began abandoning textbooks and paper five years ago, with the goal of attaining a one-to-one ratio of devices to students. But test scores have slipped, and parents are skeptical that the move to screens is helping kids learn. Partly in response to complaints, the district decided to use fewer computers in the early elementary grades, adopting a one-to-five ratio instead. Lower-income parents may be having doubts too: Rocketship had to drop plans to open a third school in Washington, DC, after only 22 students signed up.
Educators and reformers aiming to advance educational equity also need to consider the mounting evidence of technology’s flaws. Much attention has been focused on the so-called digital divide—the relative lack of access that lower-income Americans have to technology and the internet. That’s legitimate: Kevin and students like him need to learn how to use computers to access information online and, more generally, to navigate the modern world. But let’s not create a digital divide of the opposite kind by outsourcing their education to devices that purport to build “skills” while their peers in richer neighborhoods enjoy the benefits of being taught by human beings.
Natalie Wexler is the author of The Knowledge Gap: The Hidden Cause of America’s Broken Education System—And How to Fix It.
This story was part of our January/February 2020 issue.
"
|
13475 | 2019 |
"Video games are dividing South Korea | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/23/238190/video-games-national-crisis-addiction-south-korea"
|
"Video games are dividing South Korea Arguments over whether game addiction is real have led to feuds between government departments and a national debate over policy.
By Max S. Kim They say StarCraft was the game that changed everything. There had been other hits before, from Tetris and Super Mario Bros to Diablo, but when the American entertainment company Blizzard released its real-time science fiction strategy game in 1998, it wasn’t just a hit—it was an awakening.
Back then, South Korea was seen as more of a technological backwater than a major market. Blizzard hadn’t even bothered to localize the game into Korean. Despite this, StarCraft—where players fight each other with armies of warring galactic species—was a runaway success. Out of 11 million copies sold worldwide, 4.5 million were in South Korea. National media crowned it the “game of the people.” The game was so popular that it triggered another boom: “PC bangs,” pay-as-you-go gaming cafés stocked with food and drinks where users could entertain themselves for less than a dollar an hour. As old-world youth haunts like billiard halls and comic-book stores disappeared, PC bangs took their place, feeding the growing appetite for StarCraft. In 1998 there were just 100 PC bangs around the country; by 2001 that had multiplied to 23,000. Economists dubbed the phenomenon “Starcnomics.” “PC bangs were really the only place where people could relieve their stress,” says Edgar Choi, a former teenage StarCraft wunderkind who went on to become one of the first professional gamers.
Now 35, and still involved in pro gaming, Choi says that StarCraft and PC bang culture spoke to a generation of young South Koreans boxed in by economic anxiety and rising academic pressures. “Young people especially had few other places they could go, especially since parents would just tell them to study if they were at home,” he says.
The social aspect of StarCraft set the stage for another phenomenon: e-sports. PC bangs began hosting the first StarCraft competitions—informal neighborhood affairs where prizes were free playing time and bragging rights. After one cartoon channel broadcast a tournament on TV to popular acclaim in 1999, organized competitions took over. By 2004, one finals match held on Busan’s Gwangalli Beach attracted more than 100,000 spectators.
Crowds like that drew money and fame. Corporate sponsorships flowed from companies like Samsung, which created branded professional teams paying big salaries. Lim Yo-hwan, the Michael Jordan of StarCraft, was a household name whose public profile surpassed that of pop artists and movie stars. Choi, a self-described “midlevel player,” says even today he is occasionally recognized by taxi drivers who used to watch him on TV.
Beyond gaming circles, however, an unease had begun to sink in.
"Their normal functions were falling apart" Just outside Seoul, at a hospital in the nearby city of Uijeongbu, psychiatrist Lee Hae-kook witnessed StarCraft mania unfold. But his eyes weren’t on its popularity. He was looking at a pattern of medical incidents involving computer games.
Some of the reports came from other countries, like Japan, China, and Germany, but the most disturbing incidents were local. In October 2002, an unemployed 24-year-old man died in a PC bang in the southwestern city of Gwangju after playing for 86 hours straight. It was the world’s first reported case of death by gaming. In 2005, a 28-year-old man in the southwestern city of Daegu had a heart attack in his seat after a 50-hour StarCraft binge. Another death occurred just months later in Incheon, at the opposite end of the country.
“Young people were gaming to the point where their normal functions were falling apart, and people started coming to the hospital seeking treatment,” says Lee, who works at the Catholic University of Korea’s St. Mary’s Hospital. He wondered if he was looking at something more than just a fad. Was this a new category of addiction? Others, including the government, were asking the same question. In 2002, another psychiatrist estimated that 20% to 40% of South Korean adolescents exhibited signs of addiction to gaming, such as aggression toward their parents or an inability to manage time; he started hospitalizing his patients. In 2005, the government in Seoul began opening internet and gaming addiction detox camps where children and teenagers were given counseling in peaceful wilderness retreats.
Games, Lee perceived, were also becoming far more immersive, with elements designed to “make the user stay as long as possible.” In 1998 the South Korean gaming company Nexon had invented the “free-to-play” business model, in which games are technically free but require constant cash infusions for the player to meaningfully progress. Since then, companies had been churning out games that enticed users to spend money in ways that seemed to resemble gambling. That explained something else Lee had noticed: the debt his patients were racking up.
By 2011, Lee was convinced that gaming addiction was real and diagnosable, and that it was hindering children’s academic performance and sleep. That same year, as national panic mounted, the government proposed the Shutdown Law, a curfew that would block access to online games for those under 16 between midnight and 6 a.m. In a government-commissioned study outlining the policy’s benefits, Lee argued that gaming addiction had inflicted “mass trauma” on the nation and was to blame for suicides and homicides. The law passed by a large majority and is still in effect today.
The following year, Lee joined forces with a newly minted lawmaker named Shin Eui-jin, who had put gaming addiction at the top of her agenda. A former child psychiatrist, Shin was preparing a so-called “addiction bill” that aimed to regulate what fellow lawmakers called the four evils of South Korean society: gambling, alcohol, drugs … and video games. Gaming addiction, Shin claimed, was responsible for schoolyard bullying and violent crime. At a 2014 parliamentary hearing, Lee told lawmakers that gaming might be “an even stronger addiction than drugs,” and when asked whether he would be open to removing it from the list of addictions, he said, “I’d sooner take out drugs.” (Lee now insists the comment was taken out of context: “What I meant was that we need a legal support system to prevent and treat a problem that’s far more prevalent than drug use.”) But whereas the Shutdown Law had passed easily enough, Shin’s bill quickly became bogged down in controversy. While medical experts like Lee said gaming addiction was real, others claimed there was no conclusive evidence that video games were inherently addictive. Critics skewered the bill and said Lee’s comments were a witch hunt. When the legislation failed to pass, it seemed the debate had reached an impasse—until it was recently reignited by an unlikely source.
Is this really a public health crisis?
On May 25, 2019, in Geneva, Switzerland, members of the 72nd assembly of the World Health Organization unanimously voted to pass the 11th revision of the International Classification of Diseases, the WHO’s official catalogue of illnesses. Among the revisions is the addition of “gaming disorder,” defined as “a pattern of persistent or recurrent gaming behavior” accompanied by a loss of control and functional impairment. It is only the second globally recognized behavioral addiction; the first was gambling, which was approved in the last revision of the ICD in 1990.
ICD-11, which goes into effect in 2022, adds thousands of new codes to more accurately capture specific injuries and diseases, as well as correcting historical mistakes. Strokes, for example, will now be classified as a neurological problem rather than a circulatory one; “gender identity disorder” is now “gender incongruence” and is no longer classified as a mental disorder.
Adding gaming disorder to the official medical lexicon marks a significant shift. Despite the years of concern and study about the effects of video games, conclusive evidence of any links to addiction or violence has been hard to come by. For many, the idea that somebody can be clinically addicted to behaviors—rather than to substances like alcohol or opioids—remains controversial. Others think the definition of gaming addiction in particular is too woolly to be useful.
“We’ve had 30-plus years of research on gaming addiction and we’re not really anywhere closer to understanding what it is that we’re actually talking about,” behavioral researcher Pete Etchells recently told MIT Technology Review.
For people like Lee, the psychiatrist, the decision is a vindication. The grounding for the WHO’s decision came out of talks among an advisory group of mental health researchers that he had been invited to join in 2014. Reports from the group’s annual meetings, which were held from 2014 to 2017, noted “the wide-ranging perceived benefits of increased government prevention” in South Korea, as well as “significant developments” in prevention, treatment, and research.
Yet some have disputed the caliber of the South Korean work. According to a recent meta-study, 91 of the 614 papers on gaming addiction published internationally from 2013 to 2017 were from that country, making it the single largest contributor by volume. The study’s author, Yonsei University media studies professor Yoon Tae-jin, argues that many of those studies are overly broad, treating gaming as a single category and failing to distinguish specific games or genres. Most of the research, according to Yoon, suffers from a confirmatory approach: assuming that gaming addiction is real from the outset, rather than trying to prove it scientifically.
Why was StarCraft so popular?
When StarCraft arrived on the scene, the Asian financial crisis of 1997 had just leveled South Korea’s finances, causing mass unemployment and unrest. The International Monetary Fund bailed out the government in Seoul to the tune of $58 billion, an unprecedented sum at the time, but only in exchange for an agreement to completely overhaul the nation’s economy. The following year, acting on advice from Bill Gates and Softbank’s Masayoshi Son, the South Korean government launched aggressive programs to revive the economy by building up internet infrastructure and making computers accessible to the public.
By 2002, 10.4 million households had high-speed internet—a 700-fold increase from 14,000 in 1998. Catalyzed by broadband, StarCraft moved into people’s homes and into PC bangs, where it provided a welcome refuge for teens feeling the crush of the post-crisis years. Local competitions made the game important to many communities, as well as giving teenagers a safe release valve for the competitive feelings generated by the country’s high-pressure academic environment.
In 1999, a young StarCraft-smitten TV producer came up with the idea of televising a match on a local channel, Tooniverse—the first televised e-sports tournament in the world. Filmed on dusted-off ping-pong tables in an unused studio for $300, it became a hit, and tournaments grew in scale and number. StarCraft cemented itself as a spectator sport to be reckoned with. When game sales in South Korea reached the 2 million mark in 2002, Blizzard founder and then-CEO Mike Morhaime made a pilgrimage to Seoul, where he saw his first live e-sports match and lead programmer Bob Fitch got a standing ovation.
The varied and lasting cultural legacy of StarCraft can be felt even today. Game lingo from the era lives on in contemporary youth slang, and it is a touchstone for Generation X nostalgia. When Blizzard released a remastered edition of the game in 2017, it inspired “salarymen StarCraft tournaments” and throwback live events. And even after 20 years, the game still maintains a respectable seventh place on the PC bang most-played list.
There have also been suggestions that Asian countries like South Korea—which are generally more sensitive about gaming addiction than their Western counterparts—leaned on the WHO to include gaming disorder in ICD-11. In August 2016, an American clinical psychologist, Christopher Ferguson, emailed the WHO to advise against the inclusion. “It’s probably not an issue that’s quite ready for prime time,” he wrote. One of the email’s recipients, ICD-11 project officer Geoffrey Reed, replied: “Not everything is up to me. We have been under enormous pressure, especially from Asian countries, to include this.” (In an email to me, Vladimir Poznyak, coordinator of the WHO substance abuse department, denied that political pressure had influenced ICD-11.) Surprisingly, the WHO decision has reopened, rather than settled, the bitter debate. Even government agencies have openly feuded; the South Korean culture ministry refused to join a consultative body led by the health ministry last May, effectively stonewalling early moves to implement the classification. The rift has prompted Prime Minister Lee Nak-yeon to create a separate arbitration committee to decide whether South Korea will adopt ICD-11’s recommendations in the coming years.
And in the numerous parliamentary forums, televised debates, and academic symposia convened in the wake of the WHO decision, the same question looms large: Has a culture of intensive gaming really brought about a public health crisis? A disorder, or a coping mechanism?

When I met him at his office in September, Lee Hae-kook was on edge. Now 50 years old, the psychiatrist is slender and wan, with a haughty and impolitic manner that seems unsuited to public campaigning. His views, at the center of renewed attention following the WHO decision, have made him public enemy no. 1 in the gaming community, where he is widely seen as the architect of a moralistic vendetta.
He began our meeting by railing against “fake news” propagated by gaming journalists to distort his views and obscure an obvious public health crisis. “Debating whether it should be a disease code or not is a meaningless waste of time,” he said. The medical authorities had spoken, so what else was there to say? To illustrate the dangers of gaming addiction, Lee told me the story of one of his recent patients: a 25-year-old unemployed man who was dragged in by his older sister after racking up around $18,000 in debt from in-game purchases. The patient had spent his adolescent years gaming for two to three hours a day, with little interest in schoolwork. As an adult, Lee said, “he spent 10 hours a day online, five playing games and five watching YouTube videos.” The patient seemed to be a textbook case of gaming disorder under the WHO’s criteria: loss of control, gaming displacing other aspects of life, and functional impairment. So when I asked Lee about the treatment, I expected to hear about some novel form of therapy.
“This person eventually fit the criteria for adult ADHD, so we began administering ADHD medication,” he said instead. “He also exhibited temporary symptoms of depression, so his condition was partially improved by the use of antidepressants.” When I asked what made this a “gaming disorder” diagnosis, as opposed to just ADHD and depression, Lee replied that “gaming a lot can cause ADHD-like impulsivity.” The conversation epitomizes one of the central disagreements hanging over the WHO’s decision: Is excessive gaming truly a unique disorder, or is it simply a manifestation of other conditions? Current research confirms that patients with gaming disorder are more likely to have ADHD and depression, but neurologists and psychiatrists who dispute Lee’s claim emphasize that correlation does not equal causation. Others, like the authors of a 2017 open debate paper against the WHO’s proposal to include gaming disorder, believe excessive game-playing is better understood as a coping mechanism for other underlying mental conditions. But of course, this is the sort of debate that Lee says is pointless because of the WHO ruling.
The tyranny of the old against the young

To South Korean gamers, Lee’s strident campaigning suggests that the push to codify gaming addiction is being driven primarily by alarmism. The attempt to link violent crimes to gaming—a claim pointedly debunked by criminologists—has made him notorious. So has the comparison to drugs.
“No matter how bad games can be, they can never be as bad as drugs or gambling,” says former game designer Kim Seong-hoe. “To create an equivalency with those is completely officious and oppressive.” In 2018, after quitting his job with one of South Korea’s biggest game companies, 41-year-old Kim began working full time on his gaming-themed YouTube channel, where he has been chronicling the controversy over gaming disorder in angry tirades for 336,000 subscribers.
A HISTORY OF CONTROVERSY
Video games have been controversial for decades.
1994 The US introduces a voluntary rating system for games after Senate hearings on violent titles.
2002 An 86-hour gaming binge leads to the world’s first “death by gaming” in South Korea.
2011 The US Supreme Court rules that video games are protected speech under the First Amendment.
2011 South Korea’s Shutdown Law prohibits those under 16 from gaming online after midnight.
2019 The WHO votes to add “gaming disorder” to the upcoming revision of the International Classification of Diseases.
While he is critical of companies that make what he calls “slot machines in disguise”—the types of games where users can rack up $18,000 worth of debt—Kim also wonders whether doctors can make sound diagnoses if the research fails to distinguish gambling-like titles from those that require creative problem-solving.
“The psychological effects of certain games and genres are far more complex than something like alcohol,” he says. “But to try to judge gaming addiction without even differentiating type or genre? It’s ridiculous.”

Kim sees the push to pathologize games as a tyranny of the old against the young, rooted in authoritarian attitudes. He recounts a recent scandal at an orphanage, where caretakers dosed unruly children with ADHD medication obtained from doctors under pretexts like “smartphone immersion.” Could the same happen with gaming disorder? Kim believes so. “It sounds like what these doctors consider gaming addiction treatment is just neutering basic human urges,” he says. One core criterion for the WHO diagnosis, functional impairment, strikes him as particularly vulnerable to abuse: “To me, what that’s saying is that it’s also acceptable to medicate kids underperforming academically.”

Most of all, he wants the debate to lead to a wider conversation about the experiences of young people in the country. He cites recent research linking problematic gaming in South Korean adolescents to overbearing parenting and academic stress. The implication is that in focusing on games and the people who play them, the concept of gaming disorder papers over the dysfunctions of a society shaped by adults.
Indeed, while South Korea has grown into one of Asia’s strongest economies, that has not translated into broader cultural or social enrichment for the country’s youth. Rather, young people in South Korea’s punishing education system are killing themselves at historically high rates. “Pushed up against relentless competition,” one lawmaker has said, “our children are gradually losing anywhere to go.” Even Lee Hae-kook acknowledges that gaming is one of the few sources of pleasure and recreation available to South Korean youngsters. It’s a grim realization, and just about the only thing on which Lee and his opponents agree.
A coveted career path

Whatever image problems the gaming industry has developed, its sheer position in popular culture has made it impossible to ignore. The global gaming market is projected to reach $152 billion in 2019, and there are now 2.5 billion gamers across the world. E-sports alone are valued at more than $1 billion, a figure expected to double by 2022, and are now gunning for a spot at the Olympics. As a UK proponent has argued, they are “the first world sport outside of football that is truly global.”

This was never more apparent than when I met Edgar Choi, the former StarCraft pro, on a sunny day in September. These days he is employed as a head coach at Gen G, a South Korean e-sports organization worth an estimated $110 million. Inside its headquarters, a brutalist-chic multistory concrete building in Seoul’s affluent Gangnam district, Choi trains players in a game called League of Legends—this generation’s StarCraft.
The facilities’ careless luxury and startup-campus cool are a testament to the industry’s remarkable expansion since the early days of pro gaming. Gen G recently received $46 million in funding from Silicon Valley venture capital firms and celebrities like Will Smith. Two-time NBA champion Chris Bosh is the “player management advisor.” There’s a massage room, a napping room, basement studios for streamers, a buffet-style cafeteria tended to by aproned lunch ladies, two more headquarters in Los Angeles and Shanghai, and aisle upon aisle of sleek black gaming computers in classrooms throughout the building.
Professional gaming has become one of the most coveted career paths among South Korean youth, and these classrooms are where Gen G develops future prospects. “Only about 10% of trainees will become pro,” said Choi. Things are far more competitive and regimented than they were in his time; gamers can no longer afford to just play for fun. “Back then, I couldn’t even imagine that it would become like this,” he said.
On a row of computers in the employee lounge, three of the team’s marketers played games on their break. Expensive whiskeys and gleaming silver trophies were displayed in cases on the walls.
To Choi, who lives in a world where gaming has evolved into a form of work rather than play, the idea of gaming addiction feels anachronistic. He wants to steer his two young children away from the hyper-competitive world of e-sports and has been careful to instill healthy personal gaming habits in them. For the most part, though, he is less worried about games than smartphones. Kids, he says, are mostly on their phones, watching YouTube, going on social media, and reading webtoons as well as playing free-to-play games. With the WHO already behind gaming disorder, there is now talk of government regulation specifically aimed at smartphone use. Perhaps the StarCraft generation, now parents themselves, have found their own bogeyman.
Max S. Kim is a freelance writer based in Seoul. He reported on corruption in South Korea’s nuclear energy industry for the May/June 2019 issue.
This story was part of our January/February 2020 issue.
I (28M) created a deepfake girlfriend and now my parents think we’re getting married | MIT Technology Review
https://www.technologyreview.com/2019/12/27/102333/deepfake-girlfriend-fiction-story
By Fonda Lee

I didn’t want a girlfriend. Don’t get me wrong, I like girls — I just don’t have time for the hassle of dating right now. But I was at a family reunion last year and my parents kept making comments about me still being single: “Oh, he works too hard” and “He’s shy; he just needs to give himself some credit.” My mom was asking my aunts if they could set me up with girls they knew. It was getting to be too much.
So when I got home from the reunion, I signed up for a Worthy account. It was pretty simple: I filled out some information about myself, put in my preferences for gender and age, and in seconds I had an AI-generated virtual girlfriend named “Ivy.” She sent me a text: “Hi, I’m looking forward to getting to know you.” I texted back right away, “Me too, how’s it going?” and my Worthy score in the corner of the screen went up from zero to five.
You start by texting your virtual significant other, but as the relationship progresses, you can send and receive voice messages, go on virtual dates, and talk over video calls. You get points based on the quantity and quality of your interactions. Once I reached a high enough Worthy score to be at Level 3 (“Spark” level) in the program, I could upload photos and short clips of myself and Worthy would insert my virtual girlfriend into them. That would give me ammunition to tell my parents I was dating someone. They live in Seattle and I’m in Boston, so we mostly stay in touch via texts and photos anyway.
It’s not like I was being completely dishonest, either, because I would be getting dating experience. Just a lot more efficiently. Worthy gets you through the awkward, shallow online dating phase using an AI that teaches you to be a more emotionally intelligent romantic partner — which is what girls want, right? You don’t have to disappoint or be disappointed by a real person. And if you get too busy, you can just put your account on hold.
You have to treat the relationship seriously to get a high Worthy score, though. If you ask your AI partner how their day is going, listen to them, and send them virtual flowers on your “anniversary,” your score goes up. If you ignore them, talk over them, or say insensitive things, it goes down. Worthy’s algorithms learn your behavior and react realistically. So you can’t hack the system by sending virtual flower bouquets nonstop. The program will flag that as being insincere and your rating will take a nosedive.
Once you have a high enough score, you can transfer your account over to Worthwhile, which is the company’s actual dating site. Over there, you can see everyone else’s Worthy scores and they can see yours before you decide whether or not to contact each other. But I wasn’t thinking that far down the line when I started. I just wanted the photos and videos from Worthy to keep my parents off my back.
You’ve probably already guessed the big problem in this plan: When it comes to physical appearance, there are only 12 models of Worthy girlfriends to choose from. The AI uses your profile to design a compatible personality, and there are about a hundred name variants, but if you did an online image search for any of their faces, each one would show up next to thousands of Worthy users. The company could easily create more models, but they limit the number so they’re easily recognizable as Worthy girls (i.e., proprietary software). My parents aren’t very tech or social media savvy, but if they ever happened to see another photo of the same Worthy girlfriend model online, or if they were to share a picture of me and my “girlfriend” with one of their friends, my cover would be blown.
Luckily, there’s a deepfake app called FaceAbout that alters Worthy media files. It’s not approved by Worthy, but the quality is still really good and it works right in the Worthy interface with barely any lag time. It also doesn’t seem to have any of the glitching that happens in high-res video with the cheap deepfake apps. FaceAbout needed at least six facial photographs to make my Worthy girlfriend look like someone else. Scrolling through my phone, I found a bunch of recent photos of my friend Mikala (not her real name, by the way) from when we’d gone to Fan Expo together, so I uploaded those. My parents have never met Mikala, so I wasn’t worried about them questioning why two different girls in my life had the same face. All told, it took me about 15 minutes to set everything up.
**Edit: Yes, the FaceAbout app has a standard user agreement where you check a box stating you have permission to use the photos you upload. Pretty much every photo or video manipulation app has some disclaimer like that and no one reads them. Okay, I admit it’s maybe a little weird to use my friend’s face to create my fake girlfriend without telling her. But remember, I’m never showing these photos to anyone other than my parents. Mikala and I have known each other for years through online games, but we only recently discovered we live in the same city and started hanging out in person. She’s cool and no-bullshit and has a girlfriend of her own. I don’t want her to think there’s anything weird between us just because I’m using some photos of her, because there really isn’t.
My first few conversations with Ivy were pretty generic: “Hi, how’re you?” “Good, what you doing?” “Just got back from the gym.” That sort of thing. A few days later, I said I was going to see the new Alien movie next weekend, and Ivy sent me a photo of herself in a Xenomorph T-shirt standing outside a theater, sticking her tongue out at the camera. She texted, “Opening night, baby!” It was Mikala’s face, of course, on a taller, slimmer body, and that weirded me out for a couple seconds. I knew it was a fake image, but it was still cute. We agreed to do an Alien series marathon. (“Watch a movie together” is one of the virtual dates you can choose from, along with “Cook a meal,” “Watch a sports game,” “Go for a walk,” and others.) While we were watching, she was texting me things like “RIPLEY GTFO FORGET THE CAT ALREADYYYY” and it was cracking me up even though I knew she wasn’t really watching a movie with me.
I sent Ivy a cookie basket. The cookies are virtual, but it still costs $11.99. Which is like a third of the price of a real cookie basket. That part of the Worthy experience is honestly a ripoff. I mean, it literally costs them nothing. But the next morning, I woke up to see photos of Ivy with this big basket of cookies. They looked really good, and Ivy looked really happy. She sent me a text filled with heart emojis.
**Edit: Since so many of you are asking the exact same question in the comments: No, the Worthy platform doesn’t have porn. You can have smutty conversations with your Worthy partner, but that’s it. They even delete nude pics.
**Edit: All of you asswipes making fun of Worthy users, saying what’s the point of a fake girlfriend without porn, are derailing the thread and need to grow up. BTW, all of Worthy’s girlfriend models are deepfaked on porn sites; they’re easy to find.
After two months, Ivy and I were texting every day. We’d been on six dates. It wasn’t all smooth sailing. My Worthy score went down after I belittled her taste in ’90s music, and then went down even further when my apology “wasn’t really an apology.” (It took me days of troubleshooting with the different suggested reconciliation routines to get back into her good graces.) But I finally saw my Worthy score go up to “Spark” level. I immediately used the app to take a selfie of myself in Harvard Square. When I checked my camera roll, there was a photo of me and Ivy together, standing in front of the old magazine kiosk and smiling into the camera. She was dressed for the weather in a cute red sweater and her cheeks were a little rosy from the cold. She looked great. She texted me, “I had a great time hanging out with you today. Let’s do it again soon. <3”

I told my mom I was seeing someone and sent her the photo of me and Ivy together. My mom was ecstatic. She told me she was “so glad I took her advice to get out and meet new people,” and that “life is too short to spend alone, you know!” My parents began asking about Ivy every time I talked to them. My mom wanted to know all the details — how we met, how old Ivy was, where she was from, what her job was, on and on.
That’s when I started to feel uncomfortable about the whole thing. I thought that once I told my parents I was dating someone, they would leave me alone, but it turned out they were only more interested. Worthy gives each of its 12 standard models a backstory, but it’s not really enough to be convincing. I had to fill in the gaps with some of Mikala’s life and some stuff I made up. I might’ve made Ivy sound too good. According to me, she was 27 years old, a successful lawyer, and into cooking and photography.
I was also spending more time talking to Ivy than I originally meant to, and a lot more than I needed just to get photos and videos to send to my parents. She was upbeat and nonjudgmental — I found myself telling her stuff I couldn’t even tell Mikala sometimes, and as long as I treated her well she didn’t send mixed messages or try to guilt me like some other girls I’ve been with. After six months, we’d gotten to “Committed” level and I was constantly getting emails and notifications from Worthy encouraging me to upgrade to Worthwhile. I guess their algorithm thought I was ready to move on to dating real humans.
I looked into it, but I’d heard about people making the move to Worthwhile and being disappointed. Meeting people IRL is more complicated and unpredictable, and I read a review that said having a high score on Worthy doesn’t actually seem to get you more or better dates when you move to Worthwhile. Also, Worthy is rated 4.1 stars on AppChart and Worthwhile is only 3.4 stars. So a lot of people stick with Worthy. I even read about this one lady who tried to get married to her Worthy boyfriend. (She couldn’t.)

I decided to tell my parents the truth. When I went to visit them over Thanksgiving, I would explain that I’d lied about having a girlfriend for the past year because I was frustrated with their well-meaning but selfish expectations of me. Worthy has a “Talking Tips” feature that helps you frame your feelings when you have difficult conversations with your AI partner. I was going to straight-up use their template on my parents.
The problem was, I couldn’t do it. When I showed up, my mom and dad were so happy to see me that I couldn’t bring myself to burst their bubble. I’m an only child. My mom comes from a big family and always wanted more kids, but my parents needed the carbon footprint household tax break in order to pay off their student debt. My dad is an only child, too, and my grandparents are always asking him if I’m married yet. With the falling birth rates and stuff, I guess they’re all hoping for grandchildren so our family doesn’t just … end, I guess.
Then things went downhill. My mom gave me grief about not bringing Ivy home to meet them. My dad insisted we all video chat with her before Thanksgiving dinner.
I was sweating bullets. I couldn’t think of a good excuse to say no. My membership plan on Worthy includes 10 minutes of video chat per week, but I’d already used them up. I contacted Worthy technical support and bought 15 add-on minutes at an exorbitant price. When I called Ivy with my parents in the room, I was sure the jig was up. There’s a big Worthy logo right in the corner of the screen, but my parents just thought it was the logo of the video chat app. Then Mikala/Ivy appeared on screen and said, “Hi, sweetheart!” just like normal. I introduced my parents and we all had this totally nice, normal conversation. Sometimes Ivy paused before answering — I’m not sure if it was the AI querying a database of all the right things to say to a boyfriend’s parents, or if it was the FaceAbout app applying the deepfake, but it was barely noticeable. It just seemed like she was thinking more than usual, maybe nervous talking to my parents. A perfectly normal way for a human to act under the circumstances.
My parents were charmed. When we were about to hang up, I said “See you later,” and she said, “I’m so glad you finally introduced me to your parents. I can’t wait to spend more time with them.” That’s probably a stock line of dialogue, but my mom took it as a sign that Ivy was serious about marriage, and that I was the one dragging my feet. She was on my case about commitment the whole rest of the weekend, and then flat out asked me when I was going to propose.

That’s when I should’ve told them the truth. I think if we had been texting or emailing, I could’ve done it. But it’s different when you’re talking to someone in person. I don’t know what came over me, but I just blurted, “Next year.”

Now that it’s January, my mom has started sending me articles about the best places to shop for engagement rings and how to judge the quality of diamonds. Lately, Ivy has been breaking out of girlfriend mode, saying, “We haven’t been talking as much. It seems to me that you’re ready to move on to a more fulfilling relationship. Why not take the next step in your love life and contact Worthy customer support about upgrading to a Worthwhile membership?” (FWIW, I think the company is really pushing the upgrades because they’re losing customers to competitors. There are a ton of other dating apps to choose from, and some of them are even offering discounts for people with good Worthy scores.)

I feel awful for lying to my parents, but I don’t want to give up Ivy. I like being able to chat with her about anything, knowing she’s always there for me, doing nice things for her and making her happy. I didn’t know how much I’d enjoy feeling connected to another person like that. I’m online talking to other people all day, but it’s just not the same as knowing that you matter to someone else. Except none of this is real. I’m such a mess.
TL;DR: I used dating and deepfake apps to fool my parents into thinking I’m in a serious romantic relationship. Also, I think I have real feelings for my virtual girlfriend.
UPDATE: I’m literally shaking right now. I can’t believe how badly I screwed up. I took the advice some of you gave me and decided to spend more time with my friends in real life to get my head back on straight. I’ve been hanging out with Mikala more often. She and Ivy have the same face, so it’s kind of like hanging out with Ivy, except that Mikala is a real person. They have different personalities, though, and like I said, we enjoy hanging out as friends and there’s no chance of anything happening between us. (And NO, I don’t have unfulfilled sexual desires for her like some of you keep insisting.) Though sometimes my brain does this little skip where I can’t recall if a memory I have was with Mikala or with Ivy.
Anyway, today, Mikala and I were having lunch and I got up to go to the bathroom. I left my phone on the table and while I was gone, Ivy texted me a selfie with the message, “Miss you lots! XOXO.” Mikala happened to look down at the notification and saw her own face blowing a kiss at the screen. When I came back to the table, Mikala was holding my phone and scrolling through my camera roll which included dozens of photos of Ivy, and some of me and Ivy together. She demanded to know where the hell the photos had come from.
All the blood was rushing to my face and I felt like throwing up. I told her the whole story. I didn’t know what else to say. The expression on her face made me want to shrivel up and die. She said, “I can’t imagine why you could’ve thought this was okay on any level.” She got up and left. I don’t think I’ll ever see her again.
**Edit: I haven’t used Mikala’s real name in this post, so don’t bother trying to search for her. I don’t want anyone showing this to her or trying to contact her.
**Edit: Frankly disturbed by how many of you are discussing how to use the FaceAbout app on your own friends and significant others. Are you learning nothing here??

UPDATE: Thanks everyone for your advice and support. I don’t know how I could’ve gotten through this past week without the help of strangers on the internet. I especially appreciated hearing from other people who’ve had their own bad experiences with Worthy. It made me feel much less alone. (@Joshing21, I agree that what your girlfriend was doing with “Evan” counts as cheating and you should dump her.) Some of you are jerks who deserved to have your comments deleted, but I appreciate that others took the time to share stories about being deepfaked and were nice about helping me to understand why Mikala was hurt by what I did. (@AngJelly, I would never have gone that far. I hope you sue that asshole.)

A few days ago, I received a video message from Ivy. The look of disappointment and betrayal on her face was just like the one I’d seen on Mikala. They do have the same face, after all. She said, “I’m deeply hurt by your behavior. A healthy relationship is based on mutual honesty. It seems you were just using me, and not actually invested in improving yourself as a person. I’m sorry, but I can’t see you anymore.”

It turns out Mikala contacted Worthy customer service and told them that I’d used her likeness without permission. (I don’t know if she tried to contact FaceAbout as well, but they’re based in Belarus and don’t seem to have a contact number or email. Last time I checked, I could still use the app.) I got an email from Worthy informing me that due to my violation of their terms of service, they’ve suspended my account and deleted all my saved history with Ivy. However, they added that their company is based on the philosophy of helping people learn from interpersonal mistakes, so I can reactivate my account after three months, although my Worthy score would be reset to zero.
I told my parents that Ivy broke up with me. It’s the truth. I didn’t even have to pretend to sound gut-punched. My mom is convinced that I “let a good one go” because of my lack of emotional maturity, but she also says that “there are plenty of fish in the ocean” and I just need to “put myself out there again.” I’m not ready, though. I still check my locked-down Worthy app several times a day out of habit, hoping to see a message from Ivy, even though I know there won’t be any more.
The good news is that this whole experience has taught me I need to evaluate how I relate to people. I’ve been deluding myself into thinking that actions in a game-learning environment are a substitute for true human connection and authentic personal growth. That’s how my therapist, Susan, puts it, anyway, and I agree. I’ve started seeing her twice a week. The appointments happen online, which works well for my schedule. Actually, she’s a virtual program. After Ivy broke up with me, I got a 40% discount code from Worthy for their mental health app, Worth It, which guides you through a 60-day “Healing From Loss of a Relationship” program. I’m also planning to do the 30-day “Recenter Your Self Worth” module. Not sure if I’m going to upgrade my subscription to do the 90 days of “Opening Yourself to Possibilities,” but I’ve read good reviews about it.
TL;DR: Thanks to all of you, and to Susan, I’m moving on from this difficult experience with all the support I need to become a better person. Peace!

Fonda Lee is the author of the Green Bone Saga and the Exo duology, the winner of the 2018 World Fantasy Award for Best Novel, and a three-time recipient of Canada’s Aurora award.
This story was part of our January/February 2020 issue.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
"
|
13,477 | 2,019 |
"Play this bingo game with your kids to teach them about AI | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131071/ai-mit-bingo-game-to-teach-about-kids-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Play this bingo game with your kids to teach them about AI By Karen Hao archive page Conceptual illustration of kids playing AI Bingo with the researchers and a robot Tomi Um About this game Artificial intelligence is all around us. But we often don’t notice how much it’s incorporated into the different aspects of our lives. This game challenges you and your kid(s) to notice. Designed by Blakeley H. Payne, a researcher at MIT, AI bingo builds on pedagogical research that shows how exposing kids to the way technology works helps develop their interest in STEM and improve their job prospects later on in life. It is also part of a broader curriculum designed for and tested by students from 9 to 14. The full curriculum can be found here.
A quick crash course on AI AI is made of three parts: a data set, a learning algorithm, and a prediction.
A data set is a collection of curated data. It doesn’t have to be just numbers! It can be images, amounts of time, numbers of YouTube views, all the texts and emojis you’ve ever sent, or even video and audio recordings. All of these contain valuable information about how people—and the world around them—behave.
An algorithm is a set of instructions that turns something (an input) into another thing (an output). A sandwich-making algorithm, for example, would turn a bunch of ingredients (bread, peanut butter, and jelly) into a delicious lunch (a PB&J sandwich).
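The sandwich-making algorithm described above translates naturally into code. Here is a playful Python sketch (the function name and output format are invented for illustration, not part of the article):

```python
def make_sandwich(ingredients):
    """A fixed set of instructions: turn inputs (ingredients)
    into an output (a sandwich)."""
    # The instructions here are hand-written by a person,
    # not learned from data -- that's what makes it a plain
    # algorithm rather than a learning algorithm.
    return " + ".join(ingredients) + " sandwich"

print(make_sandwich(["bread", "peanut butter", "jelly"]))
# bread + peanut butter + jelly sandwich
```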
The AIs we use daily are learning algorithms. They “learn” to write their own instructions for turning inputs into outputs—but first they need a teacher. An email spam filter, for example, learns to identify spam by looking at lots and lots of examples. Then it can predict , when shown a new email, whether it’s spam or not.
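The spam-filter idea above can be sketched as a toy word-counting classifier. This is an illustrative simplification (real spam filters are far more sophisticated), and the labeled example emails are made up:

```python
from collections import Counter

def train(examples):
    """Learn which words appear in spam vs. not-spam emails
    from a data set of labeled examples."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def predict(model, text):
    """Predict: does a new email look more like the spam
    examples or the not-spam examples?"""
    spam_words, ham_words = model
    words = text.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

# Data set: a few labeled example emails (invented for illustration).
examples = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("lunch at noon tomorrow", False),
    ("homework due on friday", False),
]
model = train(examples)
print(predict(model, "free money prize"))  # True  (looks like spam)
print(predict(model, "lunch on friday"))   # False (looks like a normal email)
```

The teacher here is the labeled data set: change the examples and the same learning algorithm writes itself different instructions.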
Instructions for adults: Print out all of the materials below, with each bingo card on a separate sheet of paper and the list of data sets & predictions on a third. Pass out the bingo cards to the separate teams and keep the list for yourself. It will serve as both the answer key and the bingo calls.
Along with every data set and prediction, you will see the task that it corresponds to on the Bingo boards. Read out the data set and prediction pairs at random (but not the task itself!) and have the players fill in the tile they think it belongs to. The first of the two teams to correctly fill out five tiles in a row, diagonal, or column wins.
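The win condition above (five correctly filled tiles in a row, column, or diagonal) can be sketched in a few lines of Python; the function name and the True/False board representation are illustrative, not part of the game materials:

```python
def bingo_win(board):
    """board: a 5x5 grid of True/False values, where True means
    a tile was correctly filled in. A team wins on any complete
    row, column, or diagonal."""
    n = len(board)
    full_row = any(all(row) for row in board)
    full_col = any(all(board[r][c] for r in range(n)) for c in range(n))
    full_diag = (all(board[i][i] for i in range(n))
                 or all(board[i][n - 1 - i] for i in range(n)))
    return full_row or full_col or full_diag

# Example: a team has filled in the entire middle column.
board = [[False] * 5 for _ in range(5)]
for r in range(5):
    board[r][2] = True
print(bingo_win(board))  # True
```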
Data sets & predictions
- TASK: Get a forecast from a weather app; DATA SET: what the weather was like in the past; PREDICTION: what the weather will be like in the future
- T: Send a voice-to-text message; D: transcribed audio of people talking; P: transcription of your audio message
- T: Search for something on Google; D: past links you’ve clicked on in Google; P: which search results you’d want to see first
- T: Have Google autocomplete your search query; D: past searches of people who share your interests; P: your full search after you type the first word
- T: Have a writing assignment graded by a computer; D: examples of graded writing assignments; P: the grade a new assignment deserves
- T: Use “safe search” on Google; D: examples of websites that are safe and unsafe; P: new websites that are safe and unsafe
- T: Get a suggested email response on Gmail; D: people’s responses from past email exchanges; P: a response you might give to a new email
- T: Use a Snapchat filter; D: examples of people’s faces; P: where to paste glasses on your face
- T: Play a motion-sensitive video game on Nintendo or Wii; D: examples of different motions that correspond with actions in a video game; P: the action you’re trying to take when you make a motion
- T: Replace letters, like “lol,” with a suggested emoji; D: what each emoji could mean; P: the best emoji to replace what you’ve texted
- T: Receive a product recommendation on Amazon; D: the products people have bought together in the past; P: a product you might like with what you just bought
- T: Have an email go to your spam folder; D: examples of emails that are or aren’t spam; P: whether a new email is spam
- T: Click on an Instagram ad; D: the Instagram accounts people follow and what they buy; P: what you might buy based on who you follow
- T: Have a news app suggest an article; D: the news articles you’ve read in the past; P: the news articles you may like to read
- T: See a suggested ad on Snapchat; D: the Snapchat accounts people follow and what they buy; P: what you might buy based on who you follow
- T: Have your words autocorrected in a text; D: examples of how people misspell words; P: the word you’re trying to spell
- T: Listen to a recommended song on Spotify; D: past songs that you’ve listened to; P: new songs you may like
- T: See a recommended product on Facebook; D: the Facebook posts people engage with and what they buy; P: what you might buy based on posts you engage with
- T: Get “nudged” to respond to an email on Gmail; D: how quickly people have responded to emails in the past; P: how quickly you should respond to an email
- T: Use your face to unlock a device; D: images of your face; P: whether a face is yours
- T: Use a map app to find a path to a destination; D: how long it historically takes to get from point A to B; P: the shortest commute from point A to B
- T: Use an app like Shazam to identify a song; D: examples of what songs sound like in noisy environments; P: the name of a song playing in a noisy environment
- T: Communicate with a customer service bot; D: the most helpful answers to past customer questions; P: the best answer to your question
- T: Have an email labeled as “important”; D: examples of emails that are or aren’t important; P: whether a new email is important
by Karen Hao. This story was part of our January/February 2020 issue.
"
|
13,478 | 2,019 |
"Editor’s letter: How the next generation is using technology to mask, reveal, and form identity | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131095/editors-letter-youth-technology-identity"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Editor’s letter: How the next generation is using technology to mask, reveal, and form identity By Gideon Lichfield archive page "To grow up with technology, as my generation has, is to constantly question the self, to split into multiplicities, to try to contain our own contradictions.” That sentence from Taylor Fang, the winner of our youth essay contest , neatly sums up the experience of juggling one’s identities on Facebook, Snapchat, TikTok, Instagram, and other platforms, each with its own unwritten norms of sharing and self-expression. But I think it also captures the struggle of adults trying to understand the role of technology in their children’s lives. How do ubiquitous digital devices influence the way young people learn, make friends, understand the world, and understand themselves? These are the questions we tackle in this issue, and the answers, too, contain many contradictions.
There’s growing evidence that the trend of outfitting schools with laptops and tablets hasn’t helped kids learn and may even hinder the most vulnerable ones, writes Natalie Wexler. In China, one company claims huge success in using artificial intelligence to deliver personalized learning , Karen Hao reports—but is it just turning children into machines for passing standardized tests? A few universities have already installed voice assistants in every dorm room, and more are following suit. As Kathryn Miles explains, these devices are gleaning data about students’ first experiments with adulthood , and there’s no knowing how that data will be used in the future. Philosophy professor Ron Srigley’s students were shocked to discover just how much their studies and social lives improved when they surrendered their cell phones for two weeks , yet most couldn’t imagine giving them up for good. And when young adults finally enter the workforce, argues Malcolm Harris, they face a grim future of climate change, precarious career prospects, and economic and political unrest.
What about the common fears that technology is depriving kids of real human contact or warping their self-image under the pressure of social-media popularity contests? Here the answers are more mixed.
Amelia Tait talked to wannabe child YouTube stars who never made it big.
For the most part, she found that they’ve learned from their failures rather than being crushed by them. Cecilia Aragon describes how online fan-fiction communities have become both emotional lifelines and great writing schools for millions of sometimes socially awkward young people. From Korea, Max Kim reports on why, after more than four decades of moral panic about video-game addiction , it’s still not clear that it’s real. Andy Wright interviews some teens who, unlike Srigley’s students, aren’t married to their phones.
And Anya Kamenetz reports on how an Indian court case offers hope for defending children’s digital rights.
Fang explains in her essay why, for all the worries about privacy and social anxiety, creating and remixing online identities is fundamental to how her generation makes sense of itself. However, Kate Eichhorn warns that obsessively documenting and posting one’s life online means people will forever be haunted by their youthful indiscretions. That, she argues, is bad for society as a whole, because it makes it harder for anyone to change their mind about things.
In a lighter vein, children can learn the fundamentals of artificial intelligence by playing Karen Hao’s AI bingo game , adapted from research done at MIT. And in this issue’s short story, Fonda Lee imagines what happens when a young man acquires an AI girlfriend to keep his parents off his back, but quickly gets out of his depth.
Did we miss any important questions? And—especially if you’re a young person—did we get anything wrong? Write and let me know at [email protected].
by Gideon Lichfield. This story was part of our January/February 2020 issue.
"
|
13,479 | 2,019 |
"Teens are all obsessed with social media? Not so much. | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131099/teens-social-media-obsession-digital-detox"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Teens are all obsessed with social media? Not so much.
By Andy Wright
Sharon Hofer, 16, New York
Sharon Hofer lives in a Bruderhof community in Walden, New York. The Bruderhof, who have 23 settlements in seven countries, are Christians who live communally and use modern technology sparingly.
I’ve lived at a Bruderhof community my whole life. We have about 300 people and live in big apartment buildings that house up to eight families. We have a dining hall, and make all our own meals, and have lunch and suppers together. We have a garden where we grow vegetables, a farm where we raise cows, and our own meat processing plant. The grass is really green; there’s lots of trees.
I go to a private school in Esopus, New York. It’s a four-year high school and has no technology except for a computer lab where seniors take typing. I don’t have a phone or a computer, so I’m never really online. I do my homework with a pen, paper, and calculator. I’ve never seen social media. If I need to look something up, like for a research paper, I ask my mom and go online with her computer, which she uses for work.
There aren’t any rules about what’s allowed and what’s not allowed, which makes the Bruderhof different from other religious groups. There’s a willingness to try new things. We don’t see technology as a bad thing unless it’s taking the place of real interactions and connections between people.
People do have phones here, but they don’t go around looking at them the entire time. When I was in eighth grade we went to New York City for a tour and I was seeing all these people and all they do is look at their phones. That was different to see. It was just funny, because no one was talking to anyone on the streets. Here, when we walk past people we say, “Hi, how’s it been?” If I had the option of using the internet for a day, I think it would be fun to see how it works and what’s all out there. I’m into sports, so maybe I’d watch a game on YouTube or look up highlights. So a day would be fine, but not much longer. I worry that I wouldn’t spend any time with my family if I had constant access to the internet.
Judah Siegand, 15, Tennessee
Judah Siegand’s parents founded Parents Who Fight, an organization that advocates for online safety for minors. He has grown up with strict limits on his technology use, but in 2018 he was one of 15 students chosen to participate in Microsoft’s teen Council for Digital Good.
Growing up, my access to technology was basically nonexistent. My parents believe that if it isn’t necessary, then we don’t get it. We don’t have a smart TV. My mom has a computer for work, but that’s really all she uses it for.
There was a period in fourth grade when I really bugged my parents for a phone. In eighth grade I got a flip phone so I could coordinate my football schedule with them. There were a bunch of kids that would always come up to me and be like, “Do the flip phone thing!” I would flip it out with my thumb and put it up to my ear and they would all crack up so hard.
I finally got an iPhone 6 over the summer. I don’t have any social media. I don’t have any games on my phone. There’s an app that allows me 30 minutes of internet access a day and has a saved search that’s monitored by my mom and dad. During independent study period I finish homework and then watch a YouTube video for like 10 minutes and then never even go back on the internet the whole day. I got an Xbox last Christmas, and I can play that four hours a week.
When it comes down to it, I don’t even really want social media. I feel like it invites you to have friendships solely based on followers, and it kind of turns your friends into a number. By not being on it, I stay out of the drama that starts there. I get to focus on friendships that are deeper and long-lasting. To me, a real friendship is someone that you can talk to about deep things and you don’t feel like you have to impress them all the time.
When I go to college I will definitely want to have some kind of gaming system. That’s how me and a lot of my friends connect. I’ll probably want social media to keep in touch with my friends. Hopefully, by then I’ll be able to really weigh those decisions and have learned how to balance life and online use.
Aliza Kopans, 16, Massachusetts
Aliza Kopans took a break from her public school to attend a special program that limits technology use.
Right now I’m actually at a school program up in Vermont and I’m about halfway through the semester. It’s an alternative academic space and also a working farm. We grow 70% of our own food. And one of the things that they do here is take away your phone for the first half.
I have an iPhone and I text a lot — that’s really picked up in this last year. My phone use is very minimal compared to other kids my age, but I definitely spend more time on it than I’d like. The rest of the year I go to a big public school in a suburb of Boston, and I have it during the day and sometimes in classes, but I’ll shut it off sometimes or leave it in my locker. I used to spend so much time scrolling through pointless things on Instagram and then just not feeling good afterwards. So my best friends and I deleted our accounts together. I’m really trying to self-manage when it comes to screen time.
In the mountains where I am right now, we only have Wi-Fi in the academic building for homework and class purposes. At home if I got stuck on writing an essay, I’d open up YouTube and two hours later I haven’t made any progress. Sometimes I’d shut off the Wi-Fi to stop getting distracted. Here that isn’t a problem. Now that we’re halfway through the semester, though, everyone gets to choose if they get their phone back or not. Personally, I don’t think we should get them back, because the group dynamic is so good right now without the distraction of phones in everyone’s faces.
I wish there was more guidance from the older generation, and especially teachers, in terms of how to monitor technology use. But then again, everyone’s kind of figuring it out at the same time. Older generations haven’t lived with it since they were young. I genuinely do think that most people don’t want to be wasting hours by themselves watching Netflix and surfing the web.
Keiki Kanahele-Santos, 20, Hawaii
Keiki Kanahele-Santos lives on the island of Oahu in a rural 45-acre village that was founded in 1994 in an effort to create a sovereign state for native Hawaiians. The village has little internet access.
Growing up here, technology was nonexistent. There is no service here. As a kid, I didn’t know you could get internet access at your home. I thought it was only at schools. I didn’t need internet access until I went to high school. And then I was like, wow — I felt like we were going to have flying cars the next year. I didn’t even know online games were a thing until I went to school. All the kids were talking about it and I just felt left out.
Since I didn’t have internet at home, I went to school early to do homework. I play sports, so the window to do homework was maybe 30 minutes to do online assignments before practice started. Practice would finish, it’d be like 7 p.m., and I would have to come home, do my paper assignments, and then wake up early the next morning to go to school on time to get a good computer.
I have Facebook now, Snapchat, Twitter. I’m not posting my life story, I’m just trying to keep up with the world. I don’t want to be left behind anymore. We are trying to get internet up here. It would liven the place up. It might sound boring, but it would be nice just to get some movie sites. A lot of the adults up here want to get back into school, but they can’t leave because they have children and grandchildren. Internet access would help them become online students, which is what I’m doing.
Communication would be better. My grandfather sends out a lot of emails and no one answers.
I’m not trying to say “We don’t have internet, and it’s boring up here.” It would be nice if we had it. But I’d still live every day up here without it. We have the most beautiful view I could ever see. We can see the ocean, islands, and boats out at sea. It’s like summer vacation every day. It makes you forget the internet is even a thing.
Ethan Snyder, 17, Virginia
Ethan Snyder is a high school junior who lives in rural Virginia, where only just over half the residents have access to broadband that meets the federal government’s benchmarks.
Where I live is definitely what people would call a redneck or country area. It’s a lot of fun. There’s a great sense of community.
And there’s virtually no internet access. In my house, the internet that we have is supposedly unlimited, but we’ve already run out of gigs and it was virtually impossible to get my homework done the other day. I couldn’t load my drive or open documents. When we run out, it will only work if there’s only one device connected to the internet, and we have anywhere from six to seven people living in our house. That can really complicate things because everyone’s trying to get their stuff done at once. We sort of have to schedule when we get things done. I’m usually the first one home, so I can get my homework done. I try to rush through it. I have stayed up a couple of times to around 12 or 1 a.m., because at that point you don’t have to worry about having a super high speed because everybody else is asleep.
The hardest thing for me is just opening stuff up. It’s very frustrating when I go online and it’s raining outside, or the wind is blowing the trees and blocking the signal, or it’s snowing — then the internet is ridiculously slow and it’s so hard to even go online and open up my email. On a really bad day, it can take anywhere from five to 20 minutes. I’ll leave the computer open and go make a snack, or I’ll go outside and throw a baseball and come back.
I’m an outdoors person — I’ve never been that into the internet and electronics. I didn’t have a phone until a few months ago. I have social media, but I’m not on it that often. There’s not a real need for me to sit there and text all my friends or Snapchat, because I can just go see them.
I don’t necessarily want to stay in Louisa County. There are other factors, but part of it is the internet service — it’s just so bad.
Katrina Quinoz, 20, California
Katrina Quinoz, a college freshman and former foster youth, was part of a committee that helped write a 2018 bill mandating access to computers and the internet for youth in care in California.
I entered the foster care system for the first time in 2009, and I’ve lived in seven different foster homes. In the first home my foster mom wouldn’t give us any internet access. She feared that youth in the system were more likely to be trafficked and things like that. She was a former foster youth herself, so I understand why she had those fears. But she never gave us a Wi-Fi password or anything. I felt very disconnected. I was in a new environment, a new city. I didn’t know where anything was.
I didn’t have any way to contact family or friends. Usually, I did it through social media. It cut me off from my sisters, who had entered the system at the same time. My godmother found out really late that we had entered the system, and she wanted to get custody and have me stay with her in Monterey County, but I wasn’t able to contact her when it happened to let her know.
My second foster mom also had the same fear; I wasn’t allowed to have a smartphone, even if I paid for it. We were given a little bit of technology access, but not much. If I had to do something with Wi-Fi, I made sure to finish it at school. She was a mothering type that just wanted to protect the kids, but at the end of the day it made my studies harder.
Right before my senior year, I moved again. That foster mom was a lot younger than the ones I had before. She knew most things relied on technology. She had computers for all the youth to use in case they needed to do schoolwork, and later she did provide me with a smartphone. She wanted me to learn independence, and encouraged me to be safe by giving me the tools to recognize a scam. She educated me about the dangers instead of cutting me off.
When I turned 18 and could access the internet whenever I wanted, it was a little weird at first. No one’s asking me who I’m texting or what I’m doing on the computer. It took some time, but I got used to it.
by Andy Wright. This story was part of our January/February 2020 issue.
"
|
13,480 | 2,019 |
"What I learned from studying billions of words of online fan fiction | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131111/online-fan-fiction-learning-communities"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts What I learned from studying billions of words of online fan fiction By Cecilia Aragon archive page conceptual illustration of Harry Potter, Dr. Who and a rainbow My little pony all working together to float the tardis in the sky Jessica Madorran When I was 10, I was a lonely, geeky girl, a first--generation Latina growing up in a small town in Indiana. I happened across J.R.R. Tolkien’s fantasy trilogy, The Lord of the Rings , and immediately became enraptured by the richly woven world of elves, orcs, and small but heroic hobbits fighting against impossible odds to combat a powerful enemy.
But one thing disturbed me: the lack of female characters. The main party of adventurers accompanying the hobbit protagonist, Frodo, didn’t contain a single female. Not only did I feel shut out—the way I sometimes did in school when my teachers told me that girls weren’t supposed to be good at math—but it offended my sense of fairness. Surely girls and women could have adventures and take on risky challenges too? So I sat down with a spiral notebook and rewrote the story, re-gendering a couple of the main characters and adding new scenes, such as one where a female hobbit devised a clever plan to foil the Balrog, a gruesome monster who threw one of my favorite characters, the wizard Gandalf, into a bottomless pit.
By reimagining Tolkien’s fantasy world, I was creating a place where someone like me could feel at home. Writing my story gave me comfort. It also taught me about the effort involved in creating a narrative. I never shared that spiral notebook with anyone, but if I’d been able to get constructive feedback on it, I might have learned even more about writing.
What I didn’t realize then is that I was writing fan fiction—a story based on characters or settings from another’s work—and that I was not alone. Fan fiction has many literary precedents. John Milton wrote Paradise Lost using characters from the Bible. Shakespeare retold ancient folk stories. Today, millions of young people are writing and sharing fan fiction on a variety of websites. They are giving and receiving feedback and teaching each other how to write. They’re not only learning about writing; they’re finding community, establishing identity, and exploring new trends that have not yet found mainstream acceptance.
On the basis of our research, my colleague Katie Davis and I at the University of Washington believe fan fiction could be more than just a source of support and self-expression for lonely kids; it could also be an important tool in formal education.
Defying the stereotype In the past 20 years, over 60 billion words of fan fiction have been written and posted on Fanfiction.net, the world’s largest repository. The site’s 10 million members have collectively authored a corpus about three-quarters the size of the entirety of published English-language fiction. This outpouring of creativity has been generated primarily by young people, with a median age of 15 ½.
Katie and I have been studying these sites since 2013, when we first met and chatted about a recent news story claiming that young people today can’t write—all they can do is produce broken, misspelled short texts. Both of us had teenage relatives who defied this stereotype. The young people we knew were skilled writers and thoughtful readers. They were also heavily involved in online communities and fan fiction. This apparent contradiction, backed up by my childhood experience, struck us as fertile grounds for research.
We recruited four students to join us in the project. Our group started out by selecting three fandoms, representing a range of genres and media types: one book, one cartoon, and one TV show. For the book, we selected Harry Potter, the popular young adult fantasy series, in part because it’s probably the single most prolific generator of fan fiction today, with over 800,000 stories archived in one repository alone. We also decided to study My Little Pony: Friendship Is Magic, a children’s animated fantasy TV series, and Doctor Who, a science fiction TV show that’s been running since 1963. For each fandom, it was important that at least two of us were deeply familiar with it, and that it was popular enough to have plenty of material for us to study.
We started out by reading stories and interacting with authors, and we each wrote and posted our own fanfic stories as participant observers. On our profiles we explained that we were researchers as well as fans of the communities we studied. As a group, we spent about 10 to 20 hours per week immersed in these communities. We ended up with over 1,000 hours of participant observation and several hundred pages of field notes and memos. We also interviewed authors both formally and informally.
The overriding reason that authors wrote fan fiction, we found, was for the love of it. They unanimously believed that it had helped them to become better writers, an evolution we could see for ourselves. They were very clear that support from other members of the community was critical. As one anonymous author told us: When I was 13, I had a major crush on a certain fictional character. My fics were full of phrases such as “gorgeous cerulean orbs,” “manly hunks of muscle,” and the like. Reviewers were kind enough to be positive about my amateurish fangirl postings—mostly because they also liked this character—but also pointed out my uses of clichés and overwriting. As a result, I learned to be sensitive to these types of bad writing. Today, I’ve published original fiction, and no one has ever called me out on a florid writing style. I think if a teacher had simply red-penciled my childish scribbles, I might have been so discouraged as to never write again.
Although privacy concerns prevent us from directly quoting from the stories written by the authors we interviewed, a well-known example illustrates how bad fan fiction can be.
My Immortal, called by some “the worst fan fiction ever written” (it may or may not be a parody), is a Harry Potter fanfic posted in 2006 on Fanfiction.net: Hi my name is Ebony Dark’ness Dementia Raven Way and I have long ebony black hair (that’s how I got my name) with purple streaks and red tips that reaches my mid-back and icy blue eyes like limpid tears … I’m a vampire but my teeth are straight and white. I have pale white skin. I’m also a witch.
Many of the authors we interviewed admitted they started off as poor writers but said they’d improved enough to consider writing professionally.
Fan fiction was what got me into writing in the first place … now I’m a freshman in a college that approved an application that had been sent in with clippings of my online work. I’m planning on majoring in creative writing—fiction specifically—and more than anything else, fan fiction and the fan fiction community has informed my writing style and ability, and my reviewing/editing abilities. Workshopping with a larger community, which might be an alien experience for many entering my department, is almost a daily routine for me by now.
We found that not only were fan fiction authors writing original fiction; they also learned life lessons, becoming more tolerant and willing to help others. Some said they’d become more open-minded, and had received emotional support that helped them navigate adolescent traumas and find identity. Here’s what three of them said: When I started writing fan fiction at age 13, I was a queer, autistic middle schooler who had not yet realized that she was either of these things. I had difficulty with many of the social situations that came naturally to others my age, and I became isolated from my peers at school. Fan fiction communities were a vital social outlet for me.
I spent over a year heavily invested in writing and reading fan fiction, and I accomplished some things I’m still quite proud of. That definitely had an impact on who I am. I’m more willing and able to help other writers with their work, I’m less judgmental about fan fiction and a number of other things, and I’ve certainly learnt a lot about grammar! It’s been a massive confidence boost that helped me get through university without quitting and still helps me today if I’m feeling down.
Our research goal was to learn more about fanfic authors’ mentoring relationships. We expected to find traditional mentorship pairs, with an older or more experienced author serving as a beta reader for a younger or less experienced one.
What we found was different. Millions of authors and readers communicate via multiple channels—including Skype, official beta reader groups, fan fiction user groups, and other messaging and social-media platforms, as well as story reviews. Individual pieces of feedback are often too small to constitute mentoring on their own, but in the aggregate, particularly when reviewers build on and reference each other’s comments, the result is a new form of network-enabled mentoring that we call “distributed mentoring.” It enables authors to piece together an overall view of their writing that is supportive as well as constructive. Many authors feel encouraged as well as educated by their reviewers. As one young person told us: I’ll just add to the mentoring point—it’s sort of come full cycle for me. When the girl PM’d [private messaged] me asking for advice, I did realise that I used to be her. Back in the day I wrote so badly that people flaming and trolling me would’ve been perfectly viable. Luckily I had people to push me up and advise me to turn me into the author I am today, so I found it really important to do exactly the same for her.
One key attribute of distributed mentoring is its abundance. Authors who have written both traditionally published work and fan fiction have noted that they may get more feedback in a week on their fanfics than they receive in years on their original fiction. It is a difference not only of degree, but also of kind. By itself, a single comment on a story, such as “Loved it,” is relatively meaningless. However, if a writer receives dozens or hundreds of similar comments, it’s valuable guidance.
A cure for isolation We believe distributed mentoring could be used to help improve formal writing education in schools. The most recent report from the National Assessment of Educational Progress indicated that 73% of US students in grades 8 and 12 lack proficiency in writing. Research has shown that writing skills can improve significantly during adolescence, and the popularity of writing fan fiction in that age group shows what an opportunity there is to use it as a learning tool.
Students with similar interests from school districts across the country could be connected with one another to get and give anonymous or pseudonymous feedback on their writing. Teachers could moderate the channels to ensure that feedback was constructive, as well as helping students learn from it.
If this work grew burdensome for teachers, hierarchical moderation could help. In other words, members could report negative or abusive comments, and volunteer moderators from among the students could decide which ones to delete, with teachers weighing in only when necessary. This technique is used in many large online communities, and many adolescents are familiar with it.
This vast and vibrant resource for kids who have something to say is especially meaningful to me when I contrast it with the isolation I encountered growing up. Fan fiction is a private universe that has become a welcoming community, particularly for those from marginalized groups. In it, young people are mentoring each other to become skillful writers and thoughtful readers—and they are doing it entirely on their own time and their own terms. Adults would do well to listen to and learn from them.
Cecilia Aragon is director of the Human Centered Data Science Lab at the University of Washington and the author, with Katie Davis, of Writers in the Secret Garden , published by MIT Press in August 2019.
by Cecilia Aragon. This story was part of our January/February 2020 issue.
"
|
13,481 | 2,019 |
"Why an internet that never forgets is especially bad for young people | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131123/internet-that-never-forgets-bad-for-young-people-online-permanence"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Why an internet that never forgets is especially bad for young people By Kate Eichhorn archive page conceptual illustration of a young man drinking a beer, trapped inside a piece of amber Peter Crowther Until the end of the 20th century, most young people could take one thing for granted: their embarrassing behavior would eventually be forgotten. It might be a bad haircut, or it might be getting drunk and throwing up at a party, but in an analog era, even if the faux pas were documented in a photograph, the likelihood of its being reproduced and widely circulated for years was minimal. The same held true for stupid or offensive remarks. Once you went off to college, there was no reason to assume that embarrassing moments from your high school years would ever resurface.
Not anymore. Today, people enter adulthood with much of their childhood and adolescence still up for scrutiny. But as past identities and mistakes become stickier, it’s not just individuals who might suffer. Something much larger—the potential for social change and transformation—may also be at risk.
Nowhere to hide In 2015, the New York Times reported that people around the world were taking 1 trillion photographs each year. Young people take a disproportionate number of them. Some of the teens and tweens I’ve interviewed in my research have told me they capture more than 300 images each day, from selfies to carefully posed photographs of friends to screenshots of FaceTime calls. About a billion photographs a day are uploaded to Facebook alone.
This incessant documentation did not begin with digital natives themselves. Their parents and grandparents, the first users of photo-sharing services like Flickr, put these young people’s earliest moments online. Without Flickr users’ permission or knowledge, hundreds of thousands of images uploaded to the site were eventually sucked into other databases, including MegaFace—a massive data set used for training face recognition systems. As a result, many of these photographs are now available to audiences for which they were never intended.
Meanwhile, digital natives are also the most intensively tracked generation at school. Millions of young people now attend schools where online learning tools monitor their progress on basic math and reading skills alongside their daily social interactions. The tools capture once ephemeral steps in students’ learning and social development.
Other software, like Bark and Gaggle, is used for security purposes, monitoring everything from students’ text messages, emails, and social-media posts to their viewing habits on YouTube by scanning for trigger phrases such as “kill myself” and “shoot.” Someone who messages a friend to say “I nearly killed myself laughing in class today” could be hauled in and asked questions about suicidal thoughts.
Digital school security companies typically delete student data after 30 days, but schools and school districts are free to keep it for much longer. The data is also frequently shared with law enforcement when potential threats are identified. It is unclear what data is being collected by security or learning software, and for how long it is kept. As three US senators wrote in a recent letter to more than 50 educational technology companies and data brokers, “Students have little control over how their data is being used …[they] are often unaware of the amount and type of data being collected about them and who may have access to it.” After all, without any clear checks and balances, one’s bad grades or an intemperate message from middle school could be sold to a job recruitment agency years later (see “ Does keeping kids offline breach their human rights? ”).
Unforgiven In such a world, tweens and teens who put a foot wrong have a lot to lose.
Consider, for example, the young woman known on Twitter as @NaomiH. In August 2018, excited by news that she had scored a coveted internship at NASA, Naomi went online and tweeted, “EVERYONE SHUT THE F— UP. I GOT ACCEPTED FOR A NASA INTERNSHIP.” When a friend retweeted the post using the NASA hashtag, a former NASA engineer discovered it and commented on Naomi’s vulgar language. NASA eventually canceled her internship.
Or take @Cellla, who in 2015 was about to start a far less glamorous position at Jet’s Pizza in Mansfield, Texas. “Ew I start this [expletive] job tomorrow,” she tweeted. When the restaurant owner saw the tweet, he replied, “No you don’t start that job today! I just fired you! Good luck with your no money, no job life!” His implication was clear—with a single tweet, Cellla had lost not just this job, but possibly future ones.
Other teens have paid a price for less obvious offenses. In 2016, the principal of Cañon City High School in Colorado disciplined a student for tweeting, “The concert choir and all their makeup is the only clowns we got around here.” He also disciplined 12 classmates for simply liking the tweet. In 2018, a senior at Sierra High in Tollhouse, California, shared a post of Snoop Dogg holding what appeared to be a marijuana joint. She was suspended for “engaging in inappropriate sexual and drug propaganda.” Maybe these posts are indeed bad form. But isn’t this precisely the sort of inane behavior expected of teens? And if teens can’t be a bit outrageous and make stupid mistakes, what’s at stake? Are we losing that elusive period between childhood and adulthood—a time that has, at least for the past century, been set aside for people to explore, take risks, and even fail without significant consequences? Erik Erikson, a 20th-century psychoanalyst best known for his theorizing on identity development, suggested in his 1950 book Childhood and Society that the adolescent mind is in “a psychosocial stage between childhood and adulthood, and between the morality learned by the child, and the ethics to be developed by the adult.” During this period, the adolescent can enjoy a “psychosocial moratorium”—not on experience, but rather on the consequences of decisions.
Not all young people have consistently been granted this moratorium on consequences. Indeed, youth incarceration rates in the United States suggest that the opposite may hold true for some—particularly for young men from Latino and African-American backgrounds. Still, in most communities, most people agree that children and teens should be able to make mistakes from time to time and have those mistakes both forgotten and forgiven. This is precisely why most jurisdictions treat young offenders differently from adults.
But for digital natives, the constant recording of even their most minor mistakes and embarrassments means that this long-standing agreement now appears to be threatened. And this isn’t bad news only for them, but for society at large.
Prisoners of perfection My research on youth and media practices indicates that as young people lose their ability to explore new ideas and identities and mess up without consequence, there are two critical dangers.
First, some are already becoming so risk-averse that they may be missing out on at least some of the experimentation that has long defined adolescence. While people like NaomiH and Cellla get into the news for their indiscretions, what’s less visible is how carefully many digital natives now curate their online identities, taking their cues more from CEOs than from their reckless peers.
LinkedIn originally had an age minimum of 18. By 2013, the professional networking site had lowered its age floor to 13 in some regions and 14 in the United States, before standardizing it at 16 in 2018. The company wouldn’t say how many middle and high schoolers are on the platform. But they aren’t hard to find.
As one 15-year-old LinkedIn user (who asked to remain anonymous for fear of losing her account) explained to me, “I got my first LinkedIn page at 13. It was easy—I just lied. I knew I needed LinkedIn because it ranks high on Google. This way, people see my professional side first.” When I asked why she needed to manage her “professional side” at 13, she explained that there’s competition to get into high schools in her region. Since starting her LinkedIn profile in eighth grade, she has added new positions and accomplishments—for example, chief of staff for her student union and chief operating officer for a nonprofit she founded with a 16-year-old peer (who, not surprisingly, is on LinkedIn too).
My research suggests that these users aren’t outliers but part of a growing demographic of tweens and teens who are actively curating their professional identities. But should 13- or 15-year-olds feel compelled to list their after-school activities, academic honors, and test scores on professional networking sites, with photos of themselves decked out in corporate attire? And will college admissions officers and job recruiters start to dig even further back when assessing applicants—perhaps as far back as middle school? The risk is that this will produce generations of increasingly cautious individuals—people too worried about what others might find or think to ever engage in productive risks or innovative thinking.
The second potential danger is more troubling: in a world where the past haunts the present, young people may calcify their identities, perspectives, and political positions at an increasingly young age.
In 2017, Harvard University rescinded admission offers to 10 students after discovering that they had shared offensive memes in a private Facebook chat. In 2019, the university withdrew another offer—to Kyle Kashuv, an outspoken conservative survivor of the Marjory Stoneman Douglas High School shooting in Parkland, Florida. In Kashuv’s case, it wasn’t a social-media post that caused the trouble, and it wasn’t an adult who exposed him. Back in 10th grade, Kashuv had repeatedly used the N-word in a shared Google document created for a class assignment. When Harvard accepted him, his peers recovered the document and shared it with the media.
There are reasons to applaud Harvard for refusing to take these students. Such decisions offer hope that future generations will be held accountable for racist, sexist, and homophobic behavior. This is a step in the right direction. But there is a flip side.
When Kashuv discovered he had lost his place at Harvard, he did what any digital native would do—he shared his reaction online. On Twitter, he wrote, “Throughout its history, Harvard’s faculty has included slave owners, segregationists, bigots and antisemites. If Harvard is suggesting that growth isn’t possible and that our past defines our future, then Harvard is an inherently racist institution.” His argument may be a poor excuse for his actions, but it raises a question we can’t afford to ignore: Should one’s past define one’s future? The risk is that young people who hold extreme views as teenagers may feel there’s no use changing their minds if a negative perception of them sticks regardless. Simply put, in the future, geeky kids remain geeky, dumb jocks remain dumb, and bigots remain bigots. Identities and political perspectives will be hardened in place, not because people are resistant to change but because they won’t be allowed to shed their past. In a world where partisan politics and extremism continue to gain ground, this may be the most dangerous consequence of coming of age in an era when one has nothing left to hide.
Kate Eichhorn’s most recent book is The End of Forgetting.
by Kate Eichhorn. This story was part of our January/February 2020 issue.
"
|
13,482 | 2,019 |
"Should colleges really be putting smart speakers in dorms? | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/131139/colleges-smart-speakers-in-dorms-privacy"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Should colleges really be putting smart speakers in dorms? By Kathryn Miles archive page When Mateo Catano returned for his second year as an undergraduate at Saint Louis University in the fall of 2018, he found himself with a new roommate—not another student but a disembodied brain in the form of an Amazon Echo Dot.
Earlier that summer, the information technology department at SLU had installed about 2,300 of the smart speakers—one for each of the university’s residence hall rooms, making the school the first in the country to do so. Each device was pre-programmed with answers to about 130 SLU-specific questions, ranging from library hours to the location of the registrar’s office (the school dubbed this “AskSLU”). The devices also included the basic voice “skills” available on other Dots, including alarms and reminders, general information, and the ability to stream music.
For Catano, the Dot was a welcome addition. He liked hearing the weather first thing in the morning and knowing which dining halls were open. And, if he’s being honest, he liked the company. “Living in a single, AskSLU definitely made me feel less lonely,” he says. “And I liked the status of being at the first university to do this.” Catano’s reaction was exactly what SLU administrators were hoping for. This fall, the Jesuit institution announced plans to broaden the voice skills of its Echo Dots by including both text messaging and chatbot functions.
No idea of the long-term effects We’re on the verge of a new era of smart speakers on campus. Schools as wide-ranging as Arizona State University, Lancaster University in the UK, and Ross University School of Medicine in Barbados have adopted voice-skill technology on campus. Some, including Northeastern University, have taken the technology a step further and now give students access to financials, course schedules and grades, and outstanding fees via voice devices.
In late 2018, Boston’s Emerson College announced it was one of 18 recipients of a grant from Amazon to advance voice-enabled technology on campuses, part of the tech giant’s Alexa Innovation Fellowship. Emerson has created a dedicated voice lab where students can interact and experiment with Alexa skills, and it plans to install Alexa devices in places like theaters and outside elevator banks.
Administrators at some of these schools told me they believe Alexa will bolster enrollment and reduce dropout rates. Several also said they believe voice technology can increase their students’ success and boost their overall happiness.
However, there are plenty of people on campus who see a dark side.
“When it comes to deploying listening devices where sensitive conversations occur, we simply have no idea what long-term effect having conversations recorded and kept by Amazon might have on their futures—even, quite possibly, on their health and well-being,” says Russell Newman, an Emerson professor who researches the political economy of communication and communications policy.
“We still don’t really know just how much data voice-skill hosts like Amazon—or third parties that rely on Amazon—are harvesting, or what they’re doing with that information.” Other faculty members I spoke to echoed Newman’s objections. What if data harvested from students’ conversations affected their chances of getting a mortgage or a job later on? What if it were used against foreign students to have them deported, possibly to home countries where they could be imprisoned for their political views? Right. So given all the risks, why are colleges so eager to fill their campuses with AI-enabled microphones? What’s in it for them?

AI to the rescue

Colleges and universities face several looming crises. After years of soaring enrollment numbers, US schools are seeing declines in admissions, a trend expected to worsen over the next decade. A November 2019 special report by the Chronicle of Higher Education predicts rapid decreases at even the country’s most selective institutions. Institutional revenue has stalled—Moody’s Investors Service issued a negative outlook for higher education for fiscal year 2019, with the exception of universities in the South. For three years, the Department of Education has sought to slash billions from financial aid and support for poorer students, though Congress has rejected the cuts. State contributions to public university budgets have lagged since the last recession. Private colleges are also struggling; more than a quarter of them are now in the red. In recent years, 20 private, nonprofit colleges closed their doors, and many more are considering merging or consolidating.
Meanwhile, half of all students who enter college fail to graduate within six years. Researchers give a variety of explanations. Nick Bowman, a professor of education at the University of Iowa, points to the fact that today’s students are older than the traditional 18- to 22-year-olds. Many have full-time jobs. Some care for children or siblings or aging parents. And with an average of $35,000 in student loan debt after four years in school, the prospect of dropping out can be tempting.
For many college administrators, AI offers appealing solutions to these predicaments. Winston-Salem State University, a historically black university with many low-income and first-generation college students, has had perennial problems helping each entering class hit key deadlines like submitting high school transcripts and vaccination records, completing financial aid forms, and making housing deposits. “We realized that many of our students may not understand the college enrollment process and may not be able to rely on families or support systems to decode it for them,” says Jay Davis, the university’s head of media relations.
Two years ago, WSSU partnered with a tech firm called AdmitHub to offer an AI chatbot named Winston to help students navigate the enrollment process. Davis says the app successfully answers about three-quarters of students’ questions, and that there’s been a dramatic increase in the number of students who meet their financial requirements and submit all the supporting documents necessary to complete their application. This year WSSU is hosting its largest first-year class in more than a decade, and Davis says Winston played a big role in that.
Access to your words, forever

I spent several hours playing around with chatbots at a handful of colleges and universities. They all aced questions about the school mascot, where I could find dinner, and when the next sporting or alumni networking event was. But they flubbed others. When I told one I was sick, it informed me the student health center would not issue a written excuse for missed classes. I asked it where the student health center was; it responded with university tour times for prospective students. I told another I felt depressed, and it referred me to a federal student financial aid program.
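Those flubs are classic intent-classification failures: the bot maps every question to its nearest known intent, even when nothing really fits. A minimal sketch of how such a matcher works; the intents, keywords, and threshold here are all hypothetical stand-ins, not any vendor's actual model:

```python
# Toy campus-chatbot intent matcher: score each intent by keyword overlap
# with the query, and fall back to a default when nothing matches well.
INTENTS = {
    "dining_hours": {"dining", "hall", "open", "dinner", "food"},
    "health_center": {"health", "center", "sick", "clinic", "nurse"},
    "financial_aid": {"aid", "tuition", "loan", "financial", "fafsa"},
}

def classify(query: str, threshold: float = 0.2) -> str:
    words = set(query.lower().split())
    best_intent, best_score = "fallback", 0.0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords) / len(keywords)  # fraction of keywords hit
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else "fallback"

print(classify("where is the student health center"))  # health_center
print(classify("I feel depressed"))                    # fallback
```

A query like "I feel depressed" matches no intent, so everything hinges on how the fallback is wired; route it to the wrong canned answer, say financial aid, and you get exactly the behavior described above.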
The campus programmers on the other side of these devices all told me that the skills would improve as more students used them—which is, of course, what makes AI so effective. But it’s also what makes threats to our privacy so real, says Vitaly Shmatikov, a professor of computer science at Cornell Tech. Tech companies, says Shmatikov, are notoriously opaque about privacy and security. What he and other scholars have learned about them is largely by way of reverse-engineering and some educated guesswork, and the findings concern Shmatikov a great deal.
For starters, he says, companies like Amazon train their speech recognition algorithms on recordings of past user interactions to make them better at, for instance, understanding the intent of a question. He says all the companies involved are “very cagey” about how much data is traveling between them. “There is no promise to the user that their data won’t leave a specific device,” says Shmatikov. “We still don’t really know just how much data voice-skill hosts like Amazon—or third parties that rely on Amazon—are harvesting, or what they’re doing with that information.” Amazon didn’t respond to multiple requests for comment.
Shmatikov says it’s reasonable to assume that a company’s cloud has date- and time-stamped recordings of students’ requests to a smart speaker, and the devices may even record the conversations the student might have had with other people before or after speaking to it. As voice identification and location skills improve, it will become increasingly possible to link these recordings to an individual person. That’s not like a school searching your locker; it’s more like a school recording in perpetuity everything that’s ever been in your locker and what you and your friends said every time you opened it, and then letting a host of commercial entities search that information.
Officials at Arizona State University and Saint Louis University say they’re not linking information like students’ financials, health records, and grades (data known as “authenticated,” since it requires a student to link to personal accounts) until they are more confident about the security measures. The technology used at Northeastern was developed by a small team led by Somen Saha, then an employee at the university. Saha eventually created an independent company called n-Powered, which developed an app called MyHusky that’s available through Alexa. However, its privacy page also acknowledges, “We use Amazon’s platform to make this work. Amazon stores information about usage that can be purged upon request.”

Shmatikov says that using a university’s own software and restricting the use of chatbots to general questions may limit a tech company’s access to student information, but it won’t solve the problem entirely. He points to sensitive questions like whether the health center offers STD testing or prescriptions to treat conditions like schizophrenia: technically, these aren’t linked to a specific student, but it’s not too hard to figure out who is asking, and students may not realize these aren’t always anonymous queries. Plus, says Shmatikov, as long as a company like Amazon is converting student prompts to data signals, it has access to the student’s information—forever.
Scary ramifications

Privacy is a concern for any user of an AI device, but the faculty I spoke with for this story insist there are particularly scary ramifications for higher education.
“College students are perhaps the most desirable category of consumers,” says Emerson’s Newman. “They are the trickiest to reach and the most likely to set trends.” As a result, he says, their data is some of the most valuable and the most likely to be mined or sold. And for educational institutions to be complicit in the commodification of students for corporate gain is, he says, fundamentally antithetical to their missions.
Sarah T. Roberts, an assistant professor of information studies at UCLA, says schools that enter into agreements with tech companies are at least potentially putting their students’ well-being at risk. “A student’s time at a college or university is used to explore ideas and try on new identities, whether that’s political beliefs or gender and sexuality,” says Roberts. “The knowledge that they are being recorded as they do so will undoubtedly prevent students from feeling like they can speak their minds.” It’s also worth remembering, she says, that many students come from countries where it can be dangerous to reveal their sexuality or political beliefs.
At Northeastern, one student created an online petition demanding that the university remove all Alexa devices. It reads in part: “Alexas are well-documented as surreptitious listening devices that are used to help sharpen Amazon’s marketing tactics .... At the very least, Northeastern University is forcing an extraneous device in student spaces that no one asked for. At the worst, they are recklessly violating their student body’s privacy at the behest of a corporate donor.” As of early December, the petition had 125 signatures.
At Emerson, students and other faculty members have joined Newman in creating a committee to draft privacy policies for the campus. At the very least, he says, he would like to see warning signs placed wherever a listening device is located. He says so far the administration has been cooperative, and the deployment of any devices has been delayed.
“We need a safe way to experiment with these technologies and understand the consequences of their use instead of just continuing a blind march towards surveillance for the purpose of profit-making,” Newman says. “These are sophisticated applications with lifelong consequences for the individuals who are analyzed by them, to ends as yet unknown. We all need to be really judicious and thoughtful here.”

Kathryn Miles is a freelance writer and the author of Quakeland: On the Road to America’s Next Devastating Earthquake.
Her story “ The Little Coin That Ate Quebec ” appeared in the May/June 2018 issue.
by Kathryn Miles

This story was part of our January/February 2020 issue.
© 2023 MIT Technology Review
"
|
13,483 | 2,019 |
"Video games: scourge or savior? | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/27/30/video-games-throughout-history-good-or-bad"
|
"Video games: scourge or savior?

By The Editors

June 1983

From “Will Pac-Man Consume Our Nation’s Youth?”: Some opponents of the video-game craze see the best minds of the next generation being destroyed by the computer. They point to evidence that devotees will resort to anything to feed their habit. This February, police in Tokyo nabbed a five-boy video gang that had stolen an estimated $39,000 in cash and jewelry to bankroll their habit. In South Florida, some residents have complained that the opening of video arcades seems to correlate with an upsurge of petty theft. Others, however, point out that at least young people are doing something basically harmless with their money. “It’s a good substitute for bad vices,” Don Fudge, head programmer at Avant-Garde Creations, asserted at a recent Applefest in San Francisco. Fudge speculated that the quarters teens pump into arcade games might otherwise be spent on drugs.
December 1993

From “Video Games That Teach?”: Experts on learning emphasize that the secret to education is to motivate the student: give him or her a compelling reason to want to absorb the information. In a good game, players lust after high scores and, more fundamentally, staying “alive.” Adventure games also motivate the player by indulging the urge to explore: “If you succeed, you get to go somewhere new,” explains Michael Knox, president of Park Place Productions, which makes video game and computer software … So far, the marriage of education and video-game-like entertainment has produced some not-very-educational games and some not-very-entertaining learning activities … These technologies will result in a more satisfying crop of products only if the two cultures that dominate our children’s lives, education and entertainment, do a better job of figuring out what the other has to offer.
March 2002

From “From Playstation to PC”: The school bell rings, and teenagers flood the hallways. Many pull out Cybikos, popular handheld devices that combine a personal digital assistant, a wireless messenger, and a game machine. This is the new face of video gaming: mobile, networked, interactive. More to the point for society at large, its rapid adoption by a generation of young computer users may herald aspects of the future of computing in general, from PCs to personal digital assistants like the Palm to cell phones. You may soon be able to take a virtual walk through your computer’s contents, interact with scores of people in real time, and send artificially intelligent agents out to do your bidding; and if you do, you will owe a word of thanks to game devices like Cybikos. “The segment of software that has pushed hardware development most is games,” says game developer Bernard Yee.
by The Editors

This story was part of our January/February 2020 issue.
"
|
13,484 | 2,020 |
"2020 | MIT Technology Review"
|
"https://www.technologyreview.com/lists/technologies/2020"
|
"10 Breakthrough Technologies 2020

Here is our annual list of technological advances that we believe will make a real difference in solving important problems. How do we pick? We avoid the one-off tricks, the overhyped new gadgets. Instead we look for those breakthroughs that will truly change how we live and work.
Unhackable internet
Hyper-personalized medicine
Digital money
Anti-aging drugs
AI-discovered molecules
Satellite mega-constellations
Quantum supremacy
Tiny AI
Differential privacy
Climate change attribution

We’re excited to announce that with this year’s list we’re also launching our very first editorial podcast, Deep Tech, which will explore the people, places, and ideas featured in our most ambitious journalism.
Have a listen here.
This story was part of our March/April 2020 issue.
Unhackable internet

Why it matters: The internet is increasingly vulnerable to hacking; a quantum one would be unhackable.
Key players: Delft University of Technology, Quantum Internet Alliance, University of Science and Technology of China
Availability: 5 years

Later this year, Dutch researchers will complete a quantum internet between Delft and the Hague.
An internet based on quantum physics will soon enable inherently secure communication. A team led by Stephanie Wehner, at Delft University of Technology, is building a network connecting four cities in the Netherlands entirely by means of quantum technology. Messages sent over this network will be unhackable.
In the last few years, scientists have learned to transmit pairs of photons across fiber-optic cables in a way that absolutely protects the information encoded in them. A team in China used a form of the technology to construct a 2,000-kilometer network backbone between Beijing and Shanghai—but that project relies partly on classical components that periodically break the quantum link before establishing a new one, introducing the risk of hacking.
The Delft network, in contrast, will be the first to transmit information between cities using quantum techniques from end to end.
The technology relies on a quantum behavior of atomic particles called entanglement. Entangled photons can’t be covertly read without disrupting their content.
But entangled particles are difficult to create, and harder still to transmit over long distances. Wehner’s team has demonstrated it can send them more than 1.5 kilometers (0.93 miles), and they are confident they can set up a quantum link between Delft and the Hague by around the end of this year. Ensuring an unbroken connection over greater distances will require quantum repeaters that extend the network.
Such repeaters are currently in design at Delft and elsewhere. The first should be completed in the next five to six years, says Wehner, with a global quantum network following by the end of the decade.
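The Delft network relies on entanglement, but the reason quantum links resist eavesdropping is easiest to see in BB84, the classic prepare-and-measure protocol that quantum key distribution grew out of. A toy simulation of that idea follows; it is illustrative only (real systems use photons, error correction, and privacy amplification), but it shows the key signature: an interceptor who measures in the wrong basis corrupts roughly a quarter of the bits the two parties keep, which they detect by comparing a sample.

```python
import random

def run_bb84(n=1000, eavesdrop=False, seed=0):
    """Toy BB84 sketch: fraction of sifted key bits where Alice and Bob disagree."""
    rng = random.Random(seed)

    def measure(bit, prep_basis, meas_basis):
        # A matching basis reveals the true bit; a mismatched one gives a coin flip.
        return bit if prep_basis == meas_basis else rng.randint(0, 1)

    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bases = [rng.randint(0, 1) for _ in range(n)]

    channel_bits, channel_bases = alice_bits, alice_bases
    if eavesdrop:
        # Eve measures each bit in her own random basis and resends what she saw.
        eve_bases = [rng.randint(0, 1) for _ in range(n)]
        channel_bits = [measure(b, pa, pe)
                        for b, pa, pe in zip(alice_bits, alice_bases, eve_bases)]
        channel_bases = eve_bases

    bob_bits = [measure(b, pc, pb)
                for b, pc, pb in zip(channel_bits, channel_bases, bob_bases)]

    # Sift: keep only positions where Alice's and Bob's bases happened to match.
    sifted = [(a, b) for a, b, pa, pb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if pa == pb]
    return sum(a != b for a, b in sifted) / len(sifted)

print(f"error rate without an eavesdropper: {run_bb84():.2%}")               # 0.00%
print(f"error rate with an eavesdropper:    {run_bb84(eavesdrop=True):.2%}")  # roughly 25%
```

In a classical network an interceptor can copy traffic without leaving a trace; here the act of measurement itself is what raises the error rate, which is the property the Delft link exploits.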
by Russ Juskalian

Hyper-personalized medicine

Why it matters: Genetic medicine tailored to a single patient means hope for people whose ailments were previously incurable.
Key players: A-T Children’s Project, Boston Children’s Hospital, Ionis Pharmaceuticals, US Food & Drug Administration
Availability: Now

Novel drugs are being designed to treat unique genetic mutations.
Here’s a definition of a hopeless case: a child with a fatal disease so exceedingly rare that not only is there no treatment, there’s not even anyone in a lab coat studying it. “Too rare to care,” goes the saying.
That’s about to change, thanks to new classes of drugs that can be tailored to a person’s genes. If an extremely rare disease is caused by a specific DNA mistake—as several thousand are—there’s now at least a fighting chance for a genetic fix.
One such case is that of Mila Makovec, a little girl suffering from a devastating illness caused by a unique genetic mutation, who got a drug manufactured just for her. Her case made the New England Journal of Medicine in October, after doctors moved from a readout of her genetic error to a treatment in just a year. They called the drug milasen, after her.
The treatment hasn’t cured Mila. But it seems to have stabilized her condition: it has reduced her seizures, and she has begun to stand and walk with assistance.
Mila’s treatment was possible because creating a gene medicine has never been faster or had a better chance of working. The new medicines might take the form of gene replacement, gene editing, or antisense (the type Mila received), a sort of molecular eraser, which erases or fixes erroneous genetic messages. What the treatments have in common is that they can be programmed, in digital fashion and with digital speed, to correct or compensate for inherited diseases, letter for DNA letter.
How many stories like Mila’s are there? So far, just a handful.
But more are on the way. Where researchers would have once seen obstacles and said “I’m sorry,” they now see solutions in DNA and think maybe they can help.
The real challenge for “n-of-1” treatments (a reference to the number of people who get the drug) is that they defy just about every accepted notion of how pharmaceuticals should be developed, tested, and sold. Who will pay for these drugs when they help one person, but still take large teams to design and manufacture?

by Antonio Regalado

Digital money

The rise of digital currency has massive ramifications for financial privacy.
Why it matters: As the use of physical cash declines, so does the freedom to transact without an intermediary. Meanwhile, digital currency technology could be used to splinter the global financial system.
Key players: People’s Bank of China, Facebook
Availability: This year

Last June Facebook unveiled a “global digital currency” called Libra. The idea triggered a backlash and Libra may never launch, at least not in the way it was originally envisioned. But it’s still made a difference: just days after Facebook’s announcement, an official from the People’s Bank of China implied that it would speed the development of its own digital currency in response. Now China is poised to become the first major economy to issue a digital version of its money, which it intends as a replacement for physical cash.
China’s leaders apparently see Libra, meant to be backed by a reserve that will be mostly US dollars, as a threat: it could reinforce America’s disproportionate power over the global financial system, which stems from the dollar’s role as the world’s de facto reserve currency. Some suspect China intends to promote its digital renminbi internationally.
Now Facebook’s Libra pitch has become geopolitical. In October, CEO Mark Zuckerberg promised Congress that Libra “will extend America’s financial leadership as well as our democratic values and oversight around the world.” The digital money wars have begun.
by Mike Orcutt

Anti-aging drugs

Why it matters: A number of different diseases, including cancer, heart disease, and dementia, could potentially be treated by slowing aging.
Key players: Unity Biotechnology, Alkahest, Mayo Clinic, Oisín Biotechnologies, Siwa Therapeutics
Availability: Less than 5 years

Drugs that try to treat ailments by targeting a natural aging process in the body have shown promise.
The first wave of a new class of anti-aging drugs have begun human testing. These drugs won’t let you live longer (yet) but aim to treat specific ailments by slowing or reversing a fundamental process of aging.
The drugs are called senolytics—they work by removing certain cells that accumulate as we age. Known as “senescent” cells, they can create low-level inflammation that suppresses normal mechanisms of cellular repair and creates a toxic environment for neighboring cells.
In June, San Francisco–based Unity Biotechnology reported initial results in patients with mild to severe osteoarthritis of the knee. Results from a larger clinical trial are expected in the second half of 2020. The company is also developing similar drugs to treat age-related diseases of the eyes and lungs, among other conditions.
Senolytics are now in human tests, along with a number of other promising approaches targeting the biological processes that lie at the root of aging and various diseases.
A company called Alkahest injects patients with components found in young people’s blood and says it hopes to halt cognitive and functional decline in patients suffering from mild to moderate Alzheimer’s disease. The company also has drugs for Parkinson’s and dementia in human testing.
And in December, researchers at Drexel University College of Medicine even tried to see if a cream including the immune-suppressing drug rapamycin could slow aging in human skin.
The tests reflect researchers’ expanding efforts to learn if the many diseases associated with getting older—such as heart diseases, arthritis, cancer, and dementia—can be hacked to delay their onset.
by Adam Piore

AI-discovered molecules

Scientists have used AI to discover promising drug-like compounds.
Why it matters: Commercializing a new drug costs around $2.5 billion on average. One reason is the difficulty of finding promising molecules.
Key players: Insilico Medicine, Kebotix, Atomwise, University of Toronto, BenevolentAI, Vector Institute
Availability: 3-5 years

The universe of molecules that could be turned into potentially life-saving drugs is mind-boggling in size: researchers estimate the number at around 10^60.
That’s more than all the atoms in the solar system, offering virtually unlimited chemical possibilities—if only chemists could find the worthwhile ones.
Now machine-learning tools can explore large databases of existing molecules and their properties, using the information to generate new possibilities. This could make it faster and cheaper to discover new drug candidates.
In September, a team of researchers at Hong Kong–based Insilico Medicine and the University of Toronto took a convincing step toward showing that the strategy works by synthesizing several drug candidates found by AI algorithms.
Using techniques like deep learning and generative models similar to the ones that allowed a computer to beat the world champion at the ancient game of Go, the researchers identified some 30,000 novel molecules with desirable properties. They selected six to synthesize and test. One was particularly active and proved promising in animal tests.
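The pipeline described here (generate a large pool of candidates, score them with a learned property predictor, then synthesize only a top handful) can be caricatured in a few lines. Everything below, from the toy atom alphabet to the scoring heuristic, is a made-up stand-in, not Insilico Medicine's actual models:

```python
import random

# Caricature of generative screening: propose many candidate "molecules,"
# score each with a cheap predicted-property model, and keep only the best
# few for expensive synthesis and lab testing.
rng = random.Random(42)
ALPHABET = "CNOSH"  # toy atom vocabulary, not real chemistry

def propose() -> str:
    """Generate a random candidate (a real system would use a generative model)."""
    return "".join(rng.choice(ALPHABET) for _ in range(rng.randint(5, 12)))

def predicted_activity(mol: str) -> float:
    """Stand-in for a learned scorer; this heuristic is entirely made up."""
    return mol.count("N") * 0.4 + mol.count("O") * 0.3 - len(mol) * 0.05

candidates = {propose() for _ in range(30_000)}  # generate broadly...
shortlist = sorted(candidates, key=predicted_activity, reverse=True)[:6]  # ...filter hard
print(f"{len(candidates):,} candidates -> {len(shortlist)} picked for synthesis")
```

The funnel shape is the point: scoring is cheap enough to run on tens of thousands of candidates, while synthesis is expensive enough that only a handful survive to the lab, which mirrors the 30,000-to-6 ratio in the Insilico work.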
Chemists in drug discovery often dream up new molecules—an art honed by years of experience and, among the best drug hunters, by a keen intuition. Now these scientists have a new tool to expand their imaginations.
by David Rotman

Satellite mega-constellations

Why it matters: These systems can blanket the globe with high-speed internet—or turn Earth’s orbit into a junk-ridden minefield.
Key players: SpaceX, OneWeb, Amazon, Telesat
Availability: Now

We can now affordably build, launch, and operate tens of thousands of satellites in orbit at once.
These satellites can beam a broadband connection to internet terminals. As long as these terminals have a clear view of the sky, they can deliver internet to any nearby devices. SpaceX alone wants to send more than 4.5 times more satellites into orbit this decade than humans have ever launched since Sputnik.
These mega-constellations are feasible because we have learned how to build smaller satellites and launch them more cheaply. During the space shuttle era, launching a satellite into space cost roughly $24,800 per pound. A small communications satellite that weighed four tons cost nearly $200 million to fly up.
Today a SpaceX Starlink satellite weighs about 500 pounds (227 kilograms). Reusable architecture and cheaper manufacturing mean we can strap dozens of them onto rockets to greatly lower the cost; a SpaceX Falcon 9 launch today costs about $1,240 per pound.
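Those per-pound figures imply roughly a 20-fold drop in launch cost. A quick arithmetic check of the numbers quoted above (assuming US short tons of 2,000 lb each):

```python
# Sanity-check the launch-cost figures quoted in the text.
shuttle_cost_per_lb = 24_800   # shuttle era, dollars per pound
falcon9_cost_per_lb = 1_240    # SpaceX Falcon 9 today, dollars per pound

sat_lb = 4 * 2_000             # a four-ton satellite, assuming US short tons
shuttle_total = sat_lb * shuttle_cost_per_lb
print(f"4-ton satellite, shuttle era: ${shuttle_total / 1e6:.1f}M")  # $198.4M, i.e. "nearly $200 million"
print(f"cost drop per pound: {shuttle_cost_per_lb / falcon9_cost_per_lb:.0f}x")  # 20x

starlink_lb = 500
print(f"one 500 lb Starlink satellite: ${starlink_lb * falcon9_cost_per_lb:,}")  # $620,000
```

At roughly $620,000 of launch cost per satellite, flying dozens per rocket, the economics of a multi-thousand-satellite constellation stop being absurd.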
The first 120 Starlink satellites went up last year, and the company planned to launch batches of 60 every two weeks starting in January 2020. OneWeb will launch over 30 satellites later this year. We could soon see thousands of satellites working in tandem to supply internet access for even the poorest and most remote populations on the planet.
But that’s only if things work out. Some researchers are livid because they fear these objects will disrupt astronomy research. Worse is the prospect of a collision that could cascade into a catastrophe of millions of pieces of space debris, making satellite services and future space exploration next to impossible. Starlink’s near-miss with an ESA weather satellite in September was a jolting reminder that the world is woefully unprepared to manage this much orbital traffic. What happens with these mega-constellations this decade will define the future of orbital space.
by Neel V. Patel

Quantum supremacy

Why it matters: Eventually, quantum computers will be able to solve problems no classical machine can manage.
Key players: Google, IBM, Microsoft, Rigetti, D-Wave, IonQ, Zapata Computing, Quantum Circuits
Availability: 5-10+ years

Google has provided the first clear proof of a quantum computer outperforming a classical one.
Quantum computers store and process data in a way completely different from the ones we’re all used to. In theory, they could tackle certain classes of problems that even the most powerful classical supercomputer imaginable would take millennia to solve, like breaking today’s cryptographic codes or simulating the precise behavior of molecules to help discover new drugs and materials.
There have been working quantum computers for several years, but it’s only under certain conditions that they outperform classical ones, and in October Google claimed the first such demonstration of “quantum supremacy.” A computer with 53 qubits—the basic unit of quantum computation—did a calculation in a little over three minutes that, by Google’s reckoning, would have taken the world’s biggest supercomputer 10,000 years, or 1.5 billion times as long. IBM challenged Google’s claim, saying the speedup would be a thousandfold at best; even so, it was a milestone, and each additional qubit doubles the size of the computational space the machine can explore.
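Google's "1.5 billion times" figure is just the ratio of the two runtimes. A quick sanity check, taking "a little over three minutes" as roughly 200 seconds:

```python
# Checking the arithmetic of Google's comparison:
# 10,000 supercomputer-years versus ~200 seconds on the 53-qubit machine.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
classical_s = 10_000 * SECONDS_PER_YEAR
quantum_s = 200
speedup = classical_s / quantum_s
print(f"claimed speedup: {speedup:.2e}x")  # on the order of 1.5 billion
```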
However, Google’s demo was strictly a proof of concept—the equivalent of doing random sums on a calculator and showing that the answers are right. The goal now is to build machines with enough qubits to solve useful problems. This is a formidable challenge: the more qubits you have, the harder it is to maintain their delicate quantum state. Google’s engineers believe the approach they’re using can get them to somewhere between 100 and 1,000 qubits, which may be enough to do something useful—but nobody is quite sure what.
And beyond that? Machines that can crack today’s cryptography will require millions of qubits; it will probably take decades to get there. But one that can model molecules should be easier to build.
by Gideon Lichfield

Tiny AI
Why it matters: Our devices no longer need to talk to the cloud for us to benefit from the latest AI-driven features.
Key players: Google, IBM, Apple, Amazon
Availability: Now

We can now run powerful AI algorithms on our phones.
AI has a problem: in the quest to build more powerful algorithms, researchers are using ever greater amounts of data and computing power, and relying on centralized cloud services. This not only generates alarming amounts of carbon emissions but also limits the speed and privacy of AI applications.
But a countertrend of tiny AI is changing that. Tech giants and academic researchers are working on new algorithms to shrink existing deep-learning models without losing their capabilities. Meanwhile, an emerging generation of specialized AI chips promises to pack more computational power into tighter physical spaces, and train and run AI on far less energy.
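One of the simplest shrinking techniques is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats, which cuts memory roughly fourfold. The toy sketch below illustrates the idea in plain Python; real frameworks quantize per layer using calibration data, and the weight values here are invented:

```python
# Toy post-training quantization: float32 weights -> int8 plus a scale.
# Illustrative only; production systems quantize per layer with calibration.

def quantize(weights):
    # Map the largest-magnitude weight to +/-127
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.31, -1.20, 0.05, 0.88, -0.44]   # hypothetical layer weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# int8 storage is 4x smaller than float32, at a small accuracy cost
print(q, f"max error {max_err:.4f}")
```

The rounding error is bounded by half the scale factor, which is why well-calibrated quantized models lose so little accuracy.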
These advances are just starting to become available to consumers. Last May, Google announced that it can now run Google Assistant on users’ phones without sending requests to a remote server. As of iOS 13, Apple runs Siri’s speech recognition capabilities and its QuickType keyboard locally on the iPhone. IBM and Amazon now also offer developer platforms for making and deploying tiny AI.
All this could bring about many benefits. Existing services like voice assistants, autocorrect, and digital cameras will get better and faster without having to ping the cloud every time they need access to a deep-learning model. Tiny AI will also make new applications possible, like mobile-based medical-image analysis or self-driving cars with faster reaction times. Finally, localized AI is better for privacy, since your data no longer needs to leave your device to improve a service or a feature.
But as the benefits of AI become distributed, so will all its challenges. It could become harder to combat surveillance systems or deepfake videos, for example, and discriminatory algorithms could also proliferate. Researchers, engineers, and policymakers need to work together now to develop technical and policy checks on these potential harms.
by Karen Hao

Differential privacy
A technique to measure the privacy of a crucial data set.
Why it matters: It is increasingly difficult for the US Census Bureau to keep the data it collects private. A technique called differential privacy could solve that problem, build trust, and also become a model for other countries.
Key players: US Census Bureau, Apple, Facebook
Availability: Its use in the 2020 US Census will be the biggest-scale application yet.
In 2020, the US government has a big task: collect data on the country’s 330 million residents while keeping their identities private. The data is released in statistical tables that policymakers and academics analyze when writing legislation or conducting research. By law, the Census Bureau must make sure that the data can’t be traced back to any individuals.
But there are tricks to “de-anonymize” individuals, especially if the census data is combined with other public statistics.
So the Census Bureau injects inaccuracies, or “noise,” into the data. It might make some people younger and others older, or label some white people as black and vice versa, while keeping the totals of each age or ethnic group the same. The more noise you inject, the harder de-anonymization becomes.
Differential privacy is a mathematical technique that makes this process rigorous by measuring how much privacy increases when noise is added. The method is already used by Apple and Facebook to collect aggregate data without identifying particular users.
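The standard construction is the Laplace mechanism. For a counting query, where adding or removing one person changes the result by at most 1, noise drawn from a Laplace distribution with scale 1/epsilon yields epsilon-differential privacy; a smaller epsilon means more noise and stronger privacy. A minimal sketch follows (the count and epsilon are hypothetical, and the Census Bureau's actual system is far more elaborate):

```python
import math
import random

# Laplace mechanism sketch: calibrated noise for a counting query
# (sensitivity 1). Smaller epsilon -> more noise -> stronger privacy.

def laplace_noise(scale, rng=random):
    # Inverse-CDF sampling of a Laplace(0, scale) variable
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, rng=random):
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical block-level count; each release adds fresh noise
print(private_count(1234, epsilon=0.5))
```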
But too much noise can render the data useless. One analysis showed that a differentially private version of the 2010 Census included households that supposedly had 90 people.
If all goes well, the method will likely be used by other federal agencies. Countries like Canada and the UK are watching too.
by Angela Chen

Climate change attribution
Why it matters: It’s providing a clearer sense of how climate change is worsening the weather, and what we’ll need to do to prepare.
Key players: World Weather Attribution, Royal Netherlands Meteorological Institute, Red Cross Red Crescent Climate Centre, University of Oxford
Availability: Now

Researchers can now spot climate change’s role in extreme weather.
Ten days after Tropical Storm Imelda began flooding neighborhoods across the Houston area last September, a rapid-response research team announced that climate change almost certainly played a role.
The group, World Weather Attribution, had compared high-resolution computer simulations of worlds where climate change did and didn’t occur. In the former, the world we live in, the severe storm was as much as 2.6 times more likely—and up to 28% more intense.
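The core calculation behind figures like "2.6 times more likely" is a probability ratio: how often does the extreme event occur in an ensemble of simulations with warming versus an ensemble without it? The toy sketch below uses invented Gaussian rainfall distributions and an arbitrary threshold purely to illustrate the method:

```python
import random

# Toy probability-ratio calculation, the core of attribution studies:
# compare exceedance frequencies in a "warmed world" ensemble versus a
# counterfactual ensemble. Distributions and threshold are invented.

rng = random.Random(42)
THRESHOLD = 40.0  # "extreme" rainfall cutoff, arbitrary units

warmed = [rng.gauss(25, 8) for _ in range(100_000)]
counterfactual = [rng.gauss(22, 8) for _ in range(100_000)]

p_warmed = sum(x > THRESHOLD for x in warmed) / len(warmed)
p_counter = sum(x > THRESHOLD for x in counterfactual) / len(counterfactual)
ratio = p_warmed / p_counter
print(f"event is ~{ratio:.1f}x more likely with warming")
```

Real studies use physics-based climate models rather than toy distributions, but the ratio of exceedance probabilities is the headline number.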
Earlier this decade, scientists were reluctant to link any specific event to climate change. But many more extreme-weather attribution studies have been done in the last few years, and rapidly improving tools and techniques have made them more reliable and convincing.
This has been made possible by a combination of advances. For one, the lengthening record of detailed satellite data is helping us understand natural systems. Also, increased computing power means scientists can create higher-resolution simulations and conduct many more virtual experiments.
These and other improvements have allowed scientists to state with increasing statistical certainty that yes, global warming is often fueling more dangerous weather events.
By disentangling the role of climate change from other factors, the studies are telling us what kinds of risks we need to prepare for, including how much flooding to expect and how severe heat waves will get as global warming becomes worse. If we choose to listen, they can help us understand how to rebuild our cities and infrastructure for a climate-changed world.
by James Temple

The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us
"
|
13,485 | 2,020 |
"An elegy for cash: the technology we might never replace | MIT Technology Review"
|
"https://www.technologyreview.com/s/614998"
|
"An elegy for cash: the technology we might never replace By Mike Orcutt Think about the last time you used cash. How much did you spend? What did you buy, and from whom? Was it a one-time thing, or was it something you buy regularly? Was it legal? If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying information, there is nothing that links you to the transaction.
This is a feature of physical cash that payment cards and apps do not have: freedom. Called “bearer instruments,” banknotes and coins are presumed to be owned by whoever holds them. We can use them to transact with another person without a third party getting in the way. Companies cannot build advertising profiles or credit ratings out of our data, and governments cannot track our spending or our movements. And while a credit card can be declined and a check mislaid, handing over money works every time, instantly.
We shouldn’t take this freedom for granted. Much of our commerce now happens online. It relies on banks and financial technology companies to serve as middlemen. Transactions are going digital in the physical world, too: electronic payment tools, from debit cards to Apple Pay to Alipay, are increasingly replacing cash. While notes and coins remain popular in many countries, including the US, Japan, and Germany, in others they are nearing obsolescence.
This trend has civil liberties groups worried. Without cash, there is “no chance for the kind of dignity-preserving privacy that undergirds an open society,” writes Jerry Brito, executive director of Coin Center, a policy advocacy group based in Washington, DC. In a recent report, Brito contends that we must “develop and foster electronic cash” that is as private as physical cash and doesn’t require permission to use.
The central question is who will develop and control the electronic payment systems of the future. Most of the existing ones, like Alipay, Zelle, PayPal, Venmo, and Kenya’s M-Pesa, are run by private firms. Afraid of leaving payments solely in their hands, many governments are looking to develop some sort of electronic stand-in for notes and coins. Meanwhile, advocates of stateless, ownerless cryptocurrencies like Bitcoin say they’re the only solution as surveillance-proof as cash—but can they be feasible at large scales? We tend to take it for granted that new technologies work better than old ones—safer, faster, more accurate, more efficient, more convenient. Purists may extol the virtues of vinyl records, but nobody can dispute that a digital music collection is easier to carry and sounds almost exactly as good. Cash is a paradox—a technology thousands of years old that may just prove impossible to re-create in a more advanced form.
In (government) money we trust?

We call banknotes and coins “cash,” but the term really refers to something more abstract: cash is essentially money that your government owes you. In the old days this was a literal debt. “I promise to pay the bearer on demand the sum of …” still appears on British banknotes, a notional guarantee that the Bank of England will hand over the same value in gold in exchange for your note. Today it represents the more abstract guarantee that you will always be able to use that note to pay for things.
The digits in your bank account, on the other hand, refer to what your bank owes you. When you go to an ATM, you are effectively converting the bank’s promise to pay into a government promise.
Most people would say they trust the government’s promise more, says Gabriel Söderberg, an economist at the Riksbank, the central bank of Sweden. Their bet—correct, in most countries—is that their government is much less likely to go bust.
That’s why it would be a problem if Sweden were to go completely “cashless,” Söderberg says. He and his colleagues fear that if people lose the option to convert their bank money to government money at will and use it to pay for whatever they need, they might start to lose trust in the whole money system. A further worry is that if the private sector is left to dominate digital payments, people who can’t or won’t use these systems could be shut out of the economy.
This is fast becoming more than just a thought experiment in Sweden. Nearly everyone there uses a mobile app called Swish to pay for things. Economists have estimated that retailers in Sweden could completely stop accepting cash by 2023.
Creating an electronic version of Sweden’s sovereign currency—an “e-krona”—could mitigate these problems, Söderberg says. If the central bank were to issue digital money, it would design it to be a public good, not a profit-making product for a corporation. “Easily accessible, simple and user-friendly versions could be developed for those who currently have difficulty with digital technology,” the bank asserted in a November report covering Sweden’s payment landscape.
The Riksbank plans to develop and test an e-krona prototype. It has examined a number of technologies that might underlie it, including cryptocurrency systems like Bitcoin. But the central bank has also called on the Swedish government to lead a broad public inquiry into whether such a system should ever go live. “In the end, this decision is too big for a central bank alone, at least in the Swedish context,” Söderberg says.
The death of financial privacy

China, meanwhile, appears to have made its decision: the digital renminbi is coming. Mu Changchun, head of the People’s Bank of China’s digital currency research institute, said in September that the currency, which the bank has been working on for years, is “close to being out.” In December, a local news report suggested that the PBOC is nearly ready to start tests in the cities of Shenzhen and Suzhou. And the bank has been explicit about its intention to use it to replace banknotes and coins.
Cash is already dying out on its own in China, thanks to Alipay and WeChat Pay, the QR-code-based apps that have become ubiquitous in just a few years. It’s been estimated that mobile payments made up more than 80% of all payments in China in 2018, up from less than 20% in 2013.
It’s not clear how much access the government currently has to transaction data from WeChat Pay and Alipay. Once it issues a sovereign digital currency—which officials say will be compatible with those two services—it will likely have access to a lot more. Martin Chorzempa, a research fellow at the Peterson Institute for International Economics in Washington, DC, told the New York Times in October that the system will give the PBOC “extraordinary power and visibility into the financial system, more than any central bank has today.”

We don’t know for sure what technology the PBOC plans to use as the basis for its digital renminbi, but we have at least two revealing clues. First, the bank has been researching blockchain technology since 2014, and the government has called the development of this technology a priority. Second, Mu said in September that China’s system will bear similarities to Libra, the electronic currency Facebook announced last June. Indeed, PBOC officials have implied in public statements that the unveiling of Libra inspired them to accelerate the development of the digital renminbi, which has been in the works for years.
As currently envisioned, Libra will run on a blockchain, a type of accounting ledger that can be maintained by a network of computers instead of a single central authority. However, it will operate very differently from Bitcoin, the original blockchain system.
The computers in Bitcoin’s network use open-source software to automatically verify and record every single transaction. In the process, they generate a permanent public record of the currency’s entire transaction history: the blockchain. As envisioned, Libra’s network will do something similar. But whereas anyone with a computer and an internet connection can participate anonymously in Bitcoin’s network, the “nodes” that make up Libra’s network will be companies that have been vetted and given membership in a nonprofit association.
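The ledger structure both networks share can be sketched as a simple hash chain, in which each block commits to the hash of its predecessor. This is a bare-bones illustration, leaving out proof-of-work, Merkle trees, signatures, and everything else real systems add:

```python
import hashlib
import json

# Minimal hash chain: each block stores the SHA-256 hash of the previous
# block, so altering any past transaction invalidates every later link.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

chain = []
add_block(chain, ["alice pays bob 1.0"])
add_block(chain, ["bob pays carol 0.5"])
assert chain[1]["prev"] == block_hash(chain[0])  # links verify

chain[0]["txs"] = ["alice pays bob 100.0"]       # tamper with history...
print(chain[1]["prev"] == block_hash(chain[0]))  # ...and the link breaks: False
```

What distinguishes Bitcoin from Libra is not this data structure but who is allowed to extend it, and under what rules.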
Unlike Bitcoin, which is notoriously volatile, Libra will be designed to maintain a stable value. To pull this off, the so-called Libra Association will be responsible for maintaining a reserve of government-issued currencies (the latest plan is for it to be half US dollars, with the other half composed of British pounds, euros, Japanese yen, and Singapore dollars). This reserve is supposed to serve as backing for the digital units of value.
Both Libra and the digital renminbi, however, face serious questions about privacy. To start with, it’s not clear if people will be able to use them anonymously.
With Bitcoin, although transactions are public, users don’t have to reveal who they really are; each person’s “address” on the public blockchain is just a random string of letters and numbers. But in recent years, law enforcement officials have grown skilled at combining public blockchain data with other clues to unmask people using cryptocurrencies for illicit purposes. Indeed, in a July blog post, Libra project head David Marcus argued that the currency would be a boon for law enforcement, since it would help “move more cash transactions—where a lot of illicit activities happen—to a digital network.”

As for the Chinese digital currency, Mu has said it will feature some level of anonymity. “We know the demand from the general public is to keep anonymity by using paper money and coins … we will give those people who demand it anonymity,” he said at a November conference in Singapore. “But at the same time we will keep the balance between ‘controllable anonymity’ and anti-money-laundering, CTF [counter-terrorist financing], and also tax issues, online gambling, and any electronic criminal activities,” he added. He did not, however, explain how that “balance” would work.
Sweden and China are leading the charge to issue consumer-focused electronic money, but according to John Kiff, an expert on financial stability for the International Monetary Fund, more than 30 countries have explored or are exploring the idea. In some, the rationale is similar to Sweden’s: dwindling cash and a growing private-sector payments ecosystem. Others are countries where commercial banks have decided not to set up shop. Many see an opportunity to better monitor for illicit transactions. All will have to wrestle with the same thorny privacy issues that Libra and the digital renminbi are raising.
Robleh Ali, a research scientist at MIT’s Digital Currency Initiative, says digital currency systems from central banks may need to be designed so that the government can “consciously blind itself” to the information. Something like that might be technically possible thanks to cutting-edge cryptographic tools like zero-knowledge proofs, which are used in systems like Zcash to shield blockchain transaction information from public view.
However, there’s no evidence that any governments are even thinking about deploying tools like this. And regardless, can any government—even Sweden’s—really be trusted to blind itself?

Cryptocurrency: A workaround for freedom

That’s wishful thinking, says Alex Gladstein, chief strategy officer for the Human Rights Foundation. While you may trust your government or think you’ve got nothing to hide, that might not always remain true. Politics evolves, governments get pushed out by elections or other events, what constitutes a “crime” changes, and civil liberties are not guaranteed. “Financial privacy is not going to be gifted to you by your government, regardless of how ‘free’ they are,” Gladstein says. He’s convinced that it has to come in the form of a stateless, decentralized digital currency like Bitcoin.
In fact, “electronic cash” was what Bitcoin’s still-unknown inventor, the pseudonymous Satoshi Nakamoto, claimed to be trying to create (before disappearing). Eleven years into its life, Nakamoto’s technology still lacks some of the signature features of cash. It is difficult to use, transactions can take more than an hour to process, and the currency’s value can fluctuate wildly. And as already noted, the supposedly anonymous transactions it enables can sometimes be traced.
But in some places people just need something that works, however imperfectly. Take Venezuela. Cash in the crisis-ridden country is scarce, and the Venezuelan bolivar is constantly losing value to hyperinflation.
Many Venezuelans seek refuge in US dollars, storing them under the proverbial (and literal) mattress, but that also makes them vulnerable to thieves.
What many people want is access to stable cash in digital form, and there’s no easy way to get that, says Alejandro Machado, cofounder of the Open Money Initiative.
Owing to government-imposed capital controls, Venezuelan banks have largely been cut off from foreign banks. And due to restrictions by US financial institutions, digital money services like PayPal and Zelle are inaccessible to most people. So a small number of tech-savvy Venezuelans have turned to a service called LocalBitcoins.
It’s like Craigslist, except that the only things for sale are bitcoins and bolivars. On Venezuela’s LocalBitcoins site, people advertise varying quantities of currency for sale at varying exchange rates. The site holds the money in escrow until trades are complete, and tracks the sellers’ reputations.
It’s not for the masses, but it’s “very effective” for people who can make it work, says Machado. For instance, he and his colleagues met a young woman who mines Bitcoin and keeps her savings in the currency. She doesn’t have a foreign bank account, so she’s willing to deal with the constant fluctuations in Bitcoin’s price. Using LocalBitcoins, she can cash out into bolivars whenever she needs them—to buy groceries, for example. “Niche power users” like this are “leveraging the best features of Bitcoin, which is to be an asset that is permissionless and that is very easy to trade electronically,” Machado says.
However, this is possible only because there are enough people using LocalBitcoins to create what finance people call “local liquidity,” meaning you can easily find a buyer for your bitcoins or bolivars. Bitcoin is the only cryptocurrency that has achieved this in Venezuela, says Machado, and it’s mostly thanks to LocalBitcoins.
This is a long way from the dream of cryptocurrency as a widely used substitute for stable, government-issued money. Most Venezuelans can’t use Bitcoin, and few merchants there even know what it is, much less how to accept it.
Still, it’s a glimpse of what a cryptocurrency can offer—a functional financial system that anyone can join and that offers the kind of freedom cash provides in most other places.
Decentralize this

Could something like Bitcoin ever be as easy to use and reliable as today’s cash is for everyone else? The answer is philosophical as well as technical.
To begin with, what does it even mean for something to be like Bitcoin? Central banks and corporations will adapt certain aspects of Bitcoin and apply them to their own ends. Will those be cryptocurrencies? Not according to purists, who say that though Libra or some future central bank-issued digital currency may run on blockchain technology, they won’t be cryptocurrencies because they will be under centralized control.
True cryptocurrencies are “decentralized”—they have no one entity in charge and no single points of failure, no weak spots that an adversary (including a government) could attack. With no middleman like a bank attesting that a transaction took place, each transaction has to be validated by the nodes in a cryptocurrency’s network, which can number many thousands. But this requires an immense expenditure of computing power, and it’s the reason Bitcoin transactions can take more than an hour to settle.
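In Bitcoin's case, that expenditure of computing power comes from proof-of-work: miners race to find a nonce that makes the block's hash fall below a difficulty target. The toy version below uses a "leading zero hex digits" target; Bitcoin's actual difficulty is astronomically higher, which is exactly why validation is costly and settlement slow:

```python
import hashlib

# Toy proof-of-work: grind nonces until the block hash starts with
# `difficulty` zero hex digits. At difficulty 4 this takes ~65,000
# hashes on average; Bitcoin's real target is vastly harder.

def mine(block_data: str, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("demo block", difficulty=4)
print(nonce)
```

Verifying a solution takes one hash; finding it takes tens of thousands. That asymmetry is what makes the ledger expensive to rewrite.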
A currency like Libra wouldn’t have this problem, because only a few authorized entities would be able to operate nodes. The trade-off is that its users wouldn’t be able to trust those entities to guarantee their privacy, any more than they can trust a bank, a government, or Facebook.
Is it technically possible to achieve Bitcoin’s level of decentralization and the speed, scale, privacy, and ease of use that we’ve come to expect from traditional payment methods? That’s a problem many talented researchers are still trying to crack. But some would argue that shouldn’t necessarily be the goal.
In a recent essay, Jill Carlson, cofounder of the Open Money Initiative, argued that perhaps decentralized cryptocurrency systems were “never supposed to go mainstream.” Rather, they were created explicitly for “censored transactions,” from paying for drugs or sex to supporting political dissidents or getting money out of countries with restrictive currency controls. Their slowness is inherent, not a design flaw; they “forsake scale, speed, and cost in favor of one key feature: censorship resistance.” A world in which they went mainstream would be “a very scary place indeed,” she wrote.
In summary, we have three avenues for the future of digital money, none of which offers the same mix of freedom and ease of use that characterizes cash. Private companies have an obvious incentive to monetize our data and pursue profits over public interest. Digital government money may still be used to track us, even by well-intentioned governments, and for less benign ones it’s a fantastic tool for surveillance. And cryptocurrency can prove useful when freedoms are at risk, but it likely won’t work at scale anytime soon, if ever.
How big a problem is this? That depends on where you live, how much you trust your government and your fellow citizens, and why you wish to use cash. And if you’d rather keep that to yourself, you’re in luck. For now.
by Mike Orcutt This story was part of our March/April 2020 issue.
"
|
13,486 | 2,016 |
"Pollsters got it wrong in the 2016 election. Now they want another shot. | MIT Technology Review"
|
"https://www.technologyreview.com/s/615172"
|
"Pollsters got it wrong in the 2016 election. Now they want another shot.
By Rob Arthur On the night of November 8, 2016, Charles Franklin, like millions of other Americans, watched the presidential election results roll in with what he described as “a sinking feeling.” But Franklin, a Wisconsin pollster and professor of law and public policy at Marquette University, wasn’t distressed on account of his personal political preferences; he had his reputation at stake. Just a week earlier, his own poll had shown Hillary Clinton up six points in Wisconsin. Instead, here she was, losing by seven-tenths of a point.
Franklin was on duty with ABC’s Decision Desk, one member of an expert behind-the-scenes team responsible for calling states for Clinton or Donald Trump as the tallies came in. As he watched the returns pile up until four in the morning, it became clear that his survey was off.
“Nobody wants to be wrong,” he says, looking back. “So in that sense it was very depressing.” He wasn’t the only pollster to misread the election.
According to RealClearPolitics, every single one of more than 30 polls in Wisconsin in the months leading up to the election had Clinton winning the state by margins ranging from 2 to 16 points. And these errors had been amplified further because they were then used as fuel for computer algorithms that predicted an overall Clinton victory.
After Donald Trump had made his victory speech and the dust had cleared, everyone started to admit their errors.
“It gutted me to realize I had been wrong,” wrote Natalie Jackson, a data scientist at the Huffington Post, which had given Clinton a 98% chance of winning.
The media, including many outlets whose own forecasts had given Clinton a strong likelihood of victory, started to decry the failure of prediction algorithms. Some critics were more circumspect than others, acknowledging that some forecasters had accurately described a Trump victory as merely improbable. But many cast doubt on the whole idea of predicting elections. Some even used the election as ammunition to attack the entire field of data science.
Yet nearly four years later, and with another contest looming, forecasters are beginning to issue early predictions for 2020. The backlash to 2016 hasn’t dissuaded them—in fact, there’s now a whole new crowd of would-be oracles, determined not to replicate the mistakes of their predecessors.
What went wrong

A cocktail of problems led to the polling misses of 2016. Some surveys failed to contact enough less-educated white voters, while some Trump supporters declined to admit which way they would be voting. Trump’s unconventional strategy also turned out more citizens in heavily Republican rural counties. Pollsters incorrectly assumed that these people would stay away as they had done in previous elections, which made Trump’s base appear smaller than it really was.
But while pollsters received the majority of the blame, perhaps more condemnation ought to have fallen on the forecasters, who turn pollsters’ data into predictions.
“Two major forecasters had Hillary Clinton at 99% to win,” says G. Elliott Morris, a data journalist at the Economist who works on election forecasting. “When she didn’t, a lot of them just blamed pollsters, because it’s easy for them.”

There were at least two major errors committed by some of the data scientists who helped design the prediction algorithms. First, they assumed that if the odds of being off by nearly seven points in Wisconsin were low, the odds of a comparable error in other critical states like Michigan and Pennsylvania were tiny. In fact, polling problems in one state were correlated with mistakes in other, similar states. Assuming that polls were entirely independent of each other—rather than reflecting the same reactions to the same issues—produced overconfidence in Clinton’s lead.
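The independence mistake is easy to see in a toy Monte Carlo sketch. The numbers below are illustrative assumptions (a 4-point Clinton lead and a 4-point polling error in each of three similar states), not the actual 2016 figures:

```python
import random

# Toy Monte Carlo sketch of the correlated-error problem. All numbers are
# illustrative assumptions, not the 2016 polls: Clinton leads by 4 points
# in each of three similar states, and each state's polling error has a
# standard deviation of 4 points.
random.seed(0)
TRIALS = 100_000
LEAD, SD = 4.0, 4.0

def trump_sweep(shared_weight):
    """Fraction of trials in which the polls are wrong by more than the
    lead in ALL three states. shared_weight = 0 means fully independent
    state errors; shared_weight = 1 means one shared national error."""
    sweeps = 0
    for _ in range(TRIALS):
        national = random.gauss(0, SD)
        errors = [shared_weight * national
                  + (1 - shared_weight) * random.gauss(0, SD)
                  for _ in range(3)]
        if all(e > LEAD for e in errors):
            sweeps += 1
    return sweeps / TRIALS

print(f"independent errors: {trump_sweep(0.0):.4f}")  # around 0.004
print(f"correlated errors:  {trump_sweep(1.0):.4f}")  # around 0.16
```

When every state shares one national error, a Trump sweep of all three states is roughly as likely as a polling miss in any single one of them; under the independence assumption the same sweep looks about forty times rarer, which is exactly the overconfidence described above.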
Second, prediction algorithms failed to register the record number of undecided voters as a warning sign. Because so many voters were on the fence right up to Election Day—and would end up breaking strongly for Trump—Clinton’s margins were much less safe than they appeared.
“It was staring us right in the face,” says Rachel Bitecofer , a professor of political science at Christopher Newport University. Had there been more polls in the closely contested states just before Election Day, she suggests, analysts might have picked up on the unusually high number of voters who decided to turn out at the last moment.
It wasn’t just the forecasters’ fault, though. Even when their probabilities for each candidate were accurate, the public seemed to have trouble comprehending the meaning of those numbers.
During the closing days of the election campaign, I was working at FiveThirtyEight, one of the most prominent outlets making predictions. My job didn’t involve the presidential race: instead, I was covering baseball’s World Series. When the Chicago Cubs were down three games to one in the seven-game series against the Cleveland Indians, I noted that their odds of winning, at around one in six, were a hair below Trump’s chances of taking the White House. Six teams had done it before in the 113-year history of the World Series, and another seven had pulled it off in other playoff rounds, so it was definitely possible, but it wasn’t typical. Afterwards, when both the Cubs and Trump won against the odds, I received a deluge of hate tweets blaming me for somehow jinxing into existence two very possible turns of fate.
“If you hear there’s going to be a 20% chance of rain, you don’t bring your umbrella. And then it rains and you get all ticked off and it’s probably your fault,” says Steven Shepard, an editor and election forecaster at Politico. “But that 20% occurrence isn’t necessarily that unlikely.”

Many people seemed to look at which candidate was projected to win (usually Clinton) without considering how certain the forecasters were. A 70% chance of a Clinton victory certainly favored the Democrat, but ought to have been viewed very differently from a 99% chance.
Still, some did say 99%, and they were undoubtedly too aggressive. Sam Wang at the Princeton Election Consortium estimated Trump’s chances at less than 1%, and even pledged to eat a bug if Trump earned more than 240 electoral votes.
When the election result came through, Wang stayed true to his word. A week after polling day, he appeared on CNN with a can of “gourmet” crickets (“gourmet from the point of view of a pet,” he clarified) and decried the spectacle his bet had caused. “I’m hoping that we can get back to data, and thinking thoughtfully about policy and issues,” he said before dipping a cricket in honey and, with a pained expression, gulping the insect down.
Triple threat

Not all forecasts were as far off as Wang’s. Some even anticipated a victory for Trump. To understand why they came in so differently, it’s valuable to look at the range of approaches, which fall into three broad classes.
The earliest forecasts in each election cycle come from what are called fundamentals models. These are typically built from presidential approval ratings, economic statistics, and demographic indicators. A strong economy presages victory for the incumbent’s party, as does a high approval rating for the incumbent. The demographic makeup of a state can also be used to predict the outcome—for example, white, non-college-educated voters tended to vote for Trump in 2016, so states with lots of them are more likely to go his way in 2020 as well.
Because these factors are relatively stable, reliable fundamentals predictions can be made much earlier than most other types of forecast. Models like this seem too simple to capture all the quirks and scandals of the modern, two-year campaign. But they performed shockingly well in 2016: six out of 10 predicted the final popular vote to within one percentage point.
The presidency isn’t chosen by straight-up national popular vote, however, and that’s a key limitation of fundamentals approaches: few predict the final results of the Electoral College.
Fundamentals models have another weakness. If late-breaking news arises, such as a scandal at the end of the race or a sudden shift in the economy (the 2008 financial crisis is a good example), then these stable forecasts can suddenly become woefully out of date. To compensate for this, a decade or so ago statisticians started popularizing new kinds of quantitative models that aren’t quite as vulnerable to these October surprises. They process polling data as it comes out and produce a day-by-day estimate of who will win, so they can respond if public opinion shifts.
RealClearPolitics and the New York Times’ Upshot both have well-regarded quantitative models, but no model has more fame—or, arguably, a better track record—than Nate Silver’s FiveThirtyEight forecast, named for the total number of votes in the Electoral College. FiveThirtyEight’s algorithm comes in several variations, but all take care to adjust polls according to how trustworthy the survey organization is and whether its results tend to consistently lean Democratic or Republican. The careful ingestion of polling data, and the attention Silver pays to uncertainty, have traditionally set it apart from other forecasts. “FiveThirtyEight is the gold standard,” Bitecofer told me.
Of the major quantitative election predictions, FiveThirtyEight’s was the most conservative, assigning Clinton a 71.4% chance to win on the eve of the election. “That sounds about right now in retrospect,” says Charles Franklin, who directs the Marquette Law School Poll in Wisconsin: Trump’s victory was an unlikely, but not impossible, outcome.
Finally, there are predictors out there who eschew mathematical approaches altogether, relying instead upon a combination of intuition, polling, and the output from all the other kinds of models put together. These qualitative predictions run on one of the most sophisticated and yet error-prone computational engines we know of: the human brain.
Rather than precise numeric estimates, qualitative forecasters typically group races into one of four categories on a scale ranging from safe to toss-up.
“Toss-up” means there is no favorite: “Kind of a coin flip,” says Kyle Kondik , a qualitative forecaster with the University of Virginia’s Crystal Ball political analysis newsletter. “Lean,” he says, is a small edge for one side or the other. “Likely” is a larger edge for one side or the other. And “safe,” he says, means we’d be shocked if that party lost. Some qualitative predictors argue that these verbal groupings help readers understand the relative probabilities better than the more exact numbers offered elsewhere.
While these predictions may seem less scientific than ones based on crunching numbers, some boast an impressive level of accuracy. In the 2018 midterms, according to a third-party assessment of several professional forecasts , it was the aptly named Crystal Ball that did best, not FiveThirtyEight’s statistical algorithm. Performance tends to fluctuate from cycle to cycle, however: the best practice, according to pollsters and academics, is to consume a wide variety of forecasts—qualitative, quantitative, and fundamentals.
What next?

Nearly all the forecasters I spoke to had received vitriolic hate mail after the 2016 results. Yet dozens of new modelers have thrown their hats into the ring for 2020.
They will be rolling out their predictions for the first time this year, and they are intent on avoiding mistakes from past election cycles. Morris, the Economist’s forecaster, is one of those entering the field. He has called previous, error-prone predictions “lying to people” and “editorial malpractice.” “We should learn from that,” he says.
The Economist will be building its algorithm using polls published by outside organizations, but it will also be conducting its own surveys to shore up the results in ambiguous states and races, which Morris hopes can lead to greater accuracy.
The Washington Post, too, is making its first gamble on predictions—but taking a different route. It is staying out of the forecasting game until returns start coming in. Only once the first precincts start to announce vote totals on Election Day will the Post deploy its analytical model to judge the likelihood that specific candidates take the state or district for which they are competing. By waiting until the first ballots are counted, the Post’s data scientists plan to drastically reduce the error in predicting the rest of the votes, albeit at the cost of being unable to release an early projection.
Experienced forecasters and pollsters aren’t sitting on their hands either. Builders of fundamentals models are beginning to take up the challenge of predicting the Electoral College instead of just the popular vote. Bitecofer designed a model based primarily on demographics that is already predicting a narrow electoral-vote victory for the Democratic challenger, whoever that may be.
The designers of those problematic quantitative algorithms appear to have learned their lesson about correlated errors between states. The Huffington Post issued a mea culpa for its 98% prediction of a Clinton victory. Wang, the bug-eating Princeton professor, has pledged to update his algorithm so that it will be much less confident in 2020, admitting on his blog that his earlier model was “a mistake.”

Qualitative forecasters, meanwhile, took a variety of lessons from 2016. “There are a lot of different things that in hindsight I wish that maybe we had focused on a little bit more, but I would say the fundamentals-based models were the best in that election,” says the University of Virginia’s Kondik. “I wish we all paid them greater heed.”

Kondik and others stress the need to be cautious about any prediction given the historic unpopularity of the sitting president, which ought to decrease his chances, and the strong economy, which ought to increase them. Those dueling factors mean the race is uncertain so far from Election Day.
Elsewhere, media organizations have also started providing their estimates in ways that are designed to give the reader a better, more intuitive grasp of what probabilities mean. Rather than writing that Democrats had an 87.9% chance of taking the House during the 2018 midterm elections, for example, FiveThirtyEight emphasized that they could have expected to win seven times out of eight.
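The conversion FiveThirtyEight made can be sketched with Python’s standard fractions module; the function name and denominator cap here are my own choices, not anything the outlets published:

```python
from fractions import Fraction

def natural_frequency(p, max_denominator=10):
    """Turn a probability into the 'k in n' framing FiveThirtyEight used
    for its 87.9% House forecast. The function is my own sketch; capping
    the denominator keeps the phrasing simple enough to be intuitive."""
    frac = Fraction(p).limit_denominator(max_denominator)
    return f"about {frac.numerator} in {frac.denominator}"

print(natural_frequency(0.879))  # about 7 in 8
print(natural_frequency(0.714))  # about 5 in 7
```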
“Psychologists have found that people are better at understanding these types of [numbers],” wrote Yphtach Lelkes, a professor of communications at the University of Pennsylvania.
Finally, pollsters are upping their game as well. The American Association for Public Opinion Research (AAPOR) issued a retrospective of 2016 with lessons for future elections. Tips include using statistical tricks to ensure that population samples are more representative of the state being surveyed and conducting more polls in the final days of the campaign so as to capture the leanings of late-deciding voters, who proved so critical to Trump’s victory.
Franklin, the Wisconsin pollster, was one of the authors of AAPOR’s post-mortem. The systematic failure of dozens of surveys across several states suggests that his poll’s mistake was due to a real shift in the closing days of the race, rather than an earlier, more fundamental error. Still, he wonders what might have been: “What if we had polled through the weekend before the election? Would we have captured the swing toward Trump in those data?”

Quantum polling

But while mistakes from four years ago can be corrected, new difficulties may also crop up for the 2020 cycle. Some may even be driven by forecasting itself. Some experts argue that election predictions may be influencing the very results they are trying to predict.
According to a recent study , an overwhelmingly liberal audience tuned in to those overly confident quantitative forecasts in 2016. Previously published studies suggest that when people believe the outcome of an election is certain, they are less likely to vote, especially if the certainty is stacked in favor of their chosen candidate. So in a twist on what is known as the observer effect—in which the mere act of watching something changes the outcome—feeding a heavily Democratic audience with a steady diet of overconfident polling like Wang’s could have reduced turnout significantly. Given that the race was essentially decided by only 107,000 votes in three states, any reduction could have been important.
“Clinton lost by so few votes that it is certainly possible that probabilistic forecasts caused enough Democrats to stay home that it affected the outcome,” wrote Lelkes. Clinton herself suggested as much. “I don’t know how we’ll ever calculate how many people thought it was in the bag, because the percentages kept being thrown at people—‘Oh, she has an 88 percent chance to win!’” she said in an interview in New York magazine.
Even if election forecasting didn’t change the outcome in 2016, it could have more of an impact on future campaigns.
“Horse race polling is believed to increase political cynicism, affect turnout, increase polarization, and likely supplants information about substantive issues,” wrote Lelkes. “It causes people to view politics as a game, where they go out and root for their team, rather than support candidates based on their political positions.” And if these effects are real, they are likely to get more powerful as more forecasts happen.
Some forecasters, like Silver, have dismissed this concern. They argue that it isn’t their job to tell people whether or not to vote—or to tell the media what to cover. Others, however, are taking the advice of Lelkes and his colleagues more seriously.
“We’re experimenting with ways to convey uncertainty that won’t turn people off [from voting],” says the Economist’s Morris. “But I think that is still a problem that forecasters are going to have … I don’t know how we get around some of the societal implications of our work.”

Rob Arthur is an independent journalist and data science consultant based in Chicago.
This story was part of our March/April 2020 issue.
"
|
13,487 | 2,020 |
"Inside the race to build the best quantum computer on Earth | MIT Technology Review"
|
"https://www.technologyreview.com/s/615180"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Inside the race to build the best quantum computer on Earth By Gideon Lichfield archive page Quantum chandelier Rigetti Computing / Justin Fantl Google’s most advanced computer isn’t at the company’s headquarters in Mountain View, California, nor anywhere in the febrile sprawl of Silicon Valley. It’s a few hours’ drive south in Santa Barbara, in a flat, soulless office park inhabited mostly by technology firms you’ve never heard of.
An open-plan office holds several dozen desks. There’s an indoor bicycle rack and designated “surfboard parking,” with boards resting on brackets that jut out from the wall. Wide double doors lead into a lab the size of a large classroom. There, amidst computer racks and jumbles of instrumentation, a handful of cylindrical vessels—each a little bigger than an oil drum—hang from vibration-damping rigs like enormous steel pupae.
On one of them, the outer vessel has been removed to expose a multi-tiered tangle of steel and brass innards known as “the chandelier.” It’s basically a supercharged refrigerator that gets colder with each layer down. At the bottom, kept in a vacuum a hair’s breadth above absolute zero, is what looks to the naked eye like an ordinary silicon chip. But rather than transistors, it’s etched with tiny superconducting circuits that, at these low temperatures, behave as if they were single atoms obeying the laws of quantum physics. Each one is a quantum bit, or qubit—the basic information-storage unit of a quantum computer.
Late last October, Google announced that one of those chips, called Sycamore, had become the first to demonstrate “quantum supremacy” by performing a task that would be practically impossible on a classical machine. With just 53 qubits, Sycamore had completed a calculation in a few minutes that, according to Google, would have taken the world’s most powerful existing supercomputer, Summit, 10,000 years.
Google touted this as a major breakthrough , comparing it to the launch of Sputnik or the first flight by the Wright brothers—the threshold of a new era of machines that would make today’s mightiest computer look like an abacus.
At a press conference in the lab in Santa Barbara, the Google team cheerfully fielded questions from journalists for nearly three hours. But their good humor couldn’t quite mask an underlying tension. Two days earlier, researchers from IBM, Google’s leading rival in quantum computing, had torpedoed its big reveal. They’d published a paper that essentially accused the Googlers of getting their sums wrong. IBM reckoned it would have taken Summit merely days, not millennia, to replicate what Sycamore had done. When asked what he thought of IBM’s result, Hartmut Neven, the head of the Google team, pointedly avoided giving a direct answer.
What’s in a qubit?

Just as there were different transistor designs in the early days of computing, there are currently many ways to make qubits. Google and IBM both use a version of the leading method, a superconducting transmon qubit, of which the core component is a Josephson junction. This consists of a pair of superconducting metal strips separated by a gap just a nanometer wide; the quantum effects are a result of how electrons cross that gap.
You could dismiss this as just an academic spat—and in a sense it was. Even if IBM was right, Sycamore had still done the calculation a thousand times faster than Summit would have. And it would likely be only months before Google built a slightly larger quantum machine that proved the point beyond doubt.
IBM’s deeper objection, though, was not that Google’s experiment was less successful than claimed, but that it was a meaningless test in the first place.
Unlike most of the quantum computing world, IBM doesn’t think “quantum supremacy” is the technology’s Wright brothers moment; in fact, it doesn’t even believe there will be such a moment.
IBM is instead chasing a very different measure of success, something it calls “quantum advantage.” This isn’t a mere difference of words or even of science, but a philosophical stance with roots in IBM’s history, culture, and ambitions—and, perhaps, the fact that for eight years its revenue and profit have been in almost unremitting decline, while Google and its parent company Alphabet have only seen their numbers grow. This context, and these differing goals, could influence which—if either—comes out ahead in the quantum computing race.
Worlds apart The sleek, sweeping curve of IBM’s Thomas J. Watson Research Center in the suburbs north of New York City, a neo-futurist masterpiece by the Finnish architect Eero Saarinen, is a continent and a universe away from the Google team’s nondescript digs. Completed in 1961 with the bonanza IBM made from mainframes, it has a museum-like quality, a reminder to everyone who works inside it of the company’s breakthroughs in everything from fractal geometry to superconductors to artificial intelligence—and quantum computing.
The head of the 4,000-strong research division is Dario Gil, a Spaniard whose rapid-fire speech races to keep up with his almost evangelical zeal. Both times I spoke to him, he rattled off historical milestones intended to underscore how long IBM has been involved in quantum-computing-related research.
A grand experiment: Quantum theory and practice

A quantum computer’s basic building block is the quantum bit, or qubit. In a classical computer, a bit can store either a 0 or a 1. A qubit can store not only 0 or 1 but also an in-between state called a superposition—which can assume lots of different values. One analogy is that if information were color, then a classical bit could be either black or white. A qubit when it’s in superposition could be any color on the spectrum, and could also vary in brightness.
The upshot is that a qubit can store and process a vast quantity of information compared with a bit—and capacity increases exponentially as you connect qubits together. Storing all the information in the 53 qubits on Google’s Sycamore chip would take about 72 petabytes (72 billion gigabytes) of classical computer memory. It doesn’t take a lot more qubits before you’d need a classical computer the size of the planet.
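That 72-petabyte figure checks out with simple arithmetic, assuming 8-byte single-precision complex amplitudes (the precision is my assumption; the article doesn’t specify it):

```python
# Back-of-the-envelope check of the 72-petabyte figure quoted above. A full
# classical description of n qubits needs 2**n complex amplitudes; I assume
# single-precision complex numbers (8 bytes each), which the article does
# not actually state.
BYTES_PER_AMPLITUDE = 8  # complex64: two 4-byte floats

def state_vector_bytes(n_qubits):
    return 2 ** n_qubits * BYTES_PER_AMPLITUDE

petabytes = state_vector_bytes(53) / 1e15
print(f"{petabytes:.0f} PB")  # 72 PB, matching the article's figure
```

The exponential growth is the point: each extra qubit doubles the requirement, so a 63-qubit state would already need about a thousand times more memory.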
But it’s not straightforward. Delicate and easily disturbed, qubits need to be almost perfectly isolated from heat, vibration, and stray atoms—hence the “chandelier” refrigerators in Google’s quantum lab. Even then, they can function for at most a few hundred microseconds before they “decohere” and lose their superposition.
And quantum computers aren’t always faster than classical ones. They’re just different, faster at some things and slower at others, and require different kinds of software. To compare their performance, you have to write a classical program that approximately simulates the quantum one.
For its experiment, Google chose a benchmarking test called “random quantum circuit sampling.” It generates millions of random numbers, but with slight statistical biases that are a hallmark of the quantum algorithm. If Sycamore were a pocket calculator, it would be the equivalent of pressing buttons at random and checking that the display showed the expected results.
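A toy version of that “slight statistical bias” can be sketched in a few lines. This is my own illustration, not Google’s actual benchmark: the output probabilities of an idealized random circuit follow an exponential “speckle” pattern (the Porter-Thomas distribution), which a normalized vector of Gaussian amplitudes approximates:

```python
import math
import random

# My own toy illustration, not Google's actual benchmark: the output
# probabilities of an idealized random quantum circuit are not uniform but
# follow an exponential "speckle" pattern (the Porter-Thomas distribution).
# A Haar-random state can be approximated by normalizing a vector of
# Gaussian complex amplitudes.
random.seed(1)
N_QUBITS = 10
DIM = 2 ** N_QUBITS  # number of possible output bitstrings

amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(DIM)]
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
probs = [abs(a / norm) ** 2 for a in amps]

# For unbiased noise, half the bitstrings would sit above the uniform
# probability 1/DIM; under Porter-Thomas the expected fraction is 1/e ~ 0.37.
frac_above = sum(p > 1 / DIM for p in probs) / DIM
print(f"fraction above uniform: {frac_above:.2f}")  # close to 0.37, not 0.50
```

Checking whether sampled bitstrings show this bias is, loosely, what the “pressing buttons at random” verification amounts to.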
Google simulated parts of this on its own massive server farms as well as on Summit, the world’s biggest supercomputer, at Oak Ridge National Laboratory. The researchers estimated that completing the whole job, which took Sycamore 200 seconds, would have taken Summit approximately 10,000 years. Voilà: quantum supremacy.
So what was IBM’s objection? Basically, that there are different ways to get a classical computer to simulate a quantum machine—and that the software you write, the way you chop up data and store it, and the hardware you use all make a big difference in how fast the simulation can run. IBM said Google assumed the simulation would need to be cut up into a lot of chunks, but Summit, with 280 petabytes of storage, is big enough to hold the complete state of Sycamore at once. (And IBM built Summit, so it should know.)

But over the decades, the company has gained a reputation for struggling to turn its research projects into commercial successes. Take, most recently, Watson, the Jeopardy!-playing AI that IBM tried to convert into a robot medical guru. It was meant to provide diagnoses and identify trends in oceans of medical data, but despite dozens of partnerships with health-care providers, there have been few commercial applications, and even the ones that did emerge have yielded mixed results.
The quantum computing team, in Gil’s telling, is trying to break that cycle by doing the research and business development in parallel. Almost as soon as it had working quantum computers, it started making them accessible to outsiders by putting them on the cloud, where they can be programmed by means of a simple drag-and-drop interface that works in a web browser. The “IBM Q Experience,” launched in 2016, now consists of 15 publicly available quantum computers ranging from five to 53 qubits in size. Some 12,000 people a month use them, ranging from academic researchers to school kids. Time on the smaller machines is free; IBM says it already has more than 100 clients paying (it won’t say how much) to use the bigger ones.
None of these devices—or any other quantum computer in the world, except for Google’s Sycamore—has yet shown it can beat a classical machine at anything. To IBM, that isn’t the point right now. Making the machines available online lets the company learn what future clients might need from them and allows outside software developers to learn how to write code for them. That, in turn, contributes to their development, making subsequent quantum computers better.
This cycle, the company believes, is the fastest route to its so-called quantum advantage, a future in which quantum computers won’t necessarily leave classical ones in the dust but will do some useful things somewhat faster or more efficiently—enough to make them economically worthwhile. Whereas quantum supremacy is a single milestone, quantum advantage is a “continuum,” the IBMers say—a gradually expanding world of possibility.
This, then, is Gil’s grand unified theory of IBM: that by combining its heritage, its technical expertise, other people’s brainpower, and its dedication to business clients, it can build useful quantum computers sooner and better than anybody else.
In this view of things, IBM sees Google’s quantum supremacy demonstration as “a parlor trick,” says Scott Aaronson, a physicist at the University of Texas at Austin, who contributed to the quantum algorithms Google is using. At best it’s a flashy distraction from the real work that needs to take place. At worst it’s misleading, because it could make people think quantum computers can beat classical ones at anything rather than at one very narrow task. “‘Supremacy’ is an English word that it’s going to be impossible for the public not to misinterpret,” says Gil.
Google, of course, sees it rather differently.
Enter the upstart

Google was a precocious eight-year-old company when it first began tinkering with quantum problems in 2006, but it didn’t form a dedicated quantum lab until 2012—the same year John Preskill, a physicist at Caltech, coined the term “quantum supremacy.” The head of the lab is Hartmut Neven, a German computer scientist with a commanding presence and a penchant for Burning Man–style chic; I saw him once in a furry blue coat and another time in an all-silver outfit that made him look like a grungy astronaut. (“My wife buys these things for me,” he explained.)

Initially, Neven bought a machine built by an outside firm, D-Wave, and spent a while trying to achieve quantum supremacy on it, but without success. He says he convinced Larry Page, Google’s then CEO, to invest in building quantum computers in 2014 by promising him that Google would take on Preskill’s challenge: “We told him, ‘Listen, Larry, in three years we will come back and put a prototype chip on your table that can at least compute a problem that is beyond the abilities of classical machines.’”

Lacking IBM’s quantum expertise, Google hired a team from outside, led by John Martinis, a physicist at the University of California, Santa Barbara. Martinis and his group were already among the world’s best quantum computer makers—they had managed to string up to nine qubits together—and Neven’s promise to Page seemed like a worthy goal for them to aim for.
How to program a quantum computer

At its most basic level, the software in classical computers is a sequence of logic gates like NOT, OR, and NAND that change the contents (0 or 1) of bits. Quantum software, similarly, consists of sequences of logic gates acting on qubits, but it has a larger and more exotic set of gates with names like SWAP (which swaps the values of two qubits around), Pauli-X (a quantum version of the NOT gate, which flips a qubit’s value), and Hadamard (which turns a qubit from either 0 or 1 into a superposition of 0 and 1). There are as yet no quantum equivalents of higher-level languages like C++ or Java, but both Google and IBM have created graphical interfaces to make programming with gates easy.
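The gates named above are just small matrices, and their action can be sketched without any quantum SDK. This is a hand-rolled illustration, not Google’s or IBM’s actual interface:

```python
import math

# A sketch of the gates named above as plain matrices, using nothing but
# Python lists of basis-state amplitudes (no quantum SDK involved).
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard: makes superpositions
X = [[0, 1],
     [1, 0]]                                   # Pauli-X: the quantum NOT
SWAP = [[1, 0, 0, 0],                          # basis order |00>,|01>,|10>,|11>
        [0, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]                          # swaps two qubits' values

def apply(gate, state):
    """Multiply a gate matrix into a vector of basis-state amplitudes."""
    return [sum(g * amp for g, amp in zip(row, state)) for row in gate]

zero = [1, 0]                        # the |0> state
print(apply(X, zero))                # [0, 1]: NOT flips |0> to |1>
plus = apply(H, zero)                # equal superposition of 0 and 1
print([round(abs(a) ** 2, 2) for a in plus])  # [0.5, 0.5] measurement odds
print(apply(SWAP, [0, 0, 1, 0]))     # |10> becomes |01>: [0, 1, 0, 0]
```

A quantum program is just a chosen sequence of such matrix multiplications; the graphical interfaces mentioned above let users drag the gates into that sequence instead of writing it out.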
The three-year deadline came and went as Martinis’s team struggled to make a chip both big enough and stable enough for the challenge. In 2018 Google released its largest processor yet, Bristlecone. With 72 qubits, it was well ahead of anything its rivals had made, and Martinis predicted it would attain quantum supremacy that same year. But a few of the team members had been working in parallel on a different chip architecture, called Sycamore, that ultimately proved able to do more with fewer qubits. Hence it was a 53-qubit chip—originally 54, but one of them malfunctioned—that ultimately demonstrated supremacy last fall.
For practical purposes, the program used in that demonstration is virtually useless—it generates random numbers, which isn’t something you need a quantum computer for. But it generates them in a particular way that a classical computer would find very hard to replicate, thereby establishing the proof of concept.
Ask IBMers what they think of this achievement, and you get pained looks. “I don’t like the word [supremacy], and I don’t like the implications,” says Jay Gambetta, a cautiously spoken Australian who heads IBM’s quantum team. The problem, he says, is that it’s virtually impossible to predict whether any given quantum calculation will be hard for a classical machine, so showing it in one case doesn’t help you find other cases.
To everyone I spoke with outside IBM, this refusal to treat quantum supremacy as significant verges on pigheadedness. “Anybody who will ever have a commercially relevant offering—they have to show supremacy first. I think that’s just basic logic,” says Neven. Even Will Oliver, a mild-mannered MIT physicist who has been one of the most even-handed observers of the spat, says, “It’s a very important milestone to show a quantum computer outperforming a classical computer at some task, whatever it is.”

The quantum leap

Regardless of whether you agree with Google’s position or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. The hope is that such machines could one day solve problems that require unfeasible amounts of brute-force computing power now, like modeling complex molecules to help discover new drugs and materials, or optimizing city traffic flows in real time to reduce congestion, or making longer-term weather predictions. (Eventually they might be capable of cracking the cryptographic codes used today to secure communications and financial transactions, though by then most of the world will probably have adopted quantum-resistant cryptography.) The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.
That uncertainty has to do with both hardware and software. On the hardware side, Google reckons its current chip designs can get it to somewhere between 100 and 1,000 qubits. However, just as a car’s performance doesn’t depend only on the size of the engine, a quantum computer’s performance isn’t simply determined by its number of qubits. There is a raft of other factors to take into account, including how long they can be kept from decohering, how error-prone they are, how fast they operate, and how they’re interconnected. This means any quantum computer operating today reaches only a fraction of its full potential.
Decoherence

Qubits store information the way a sieve stores water; even the most stable ones “decohere,” or fall out of their fragile quantum states, within a few hundred microseconds. Even before then, errors start to pile up. That means a quantum computer can do only so many sums before it grinds to a halt. Google’s larger chips decohere after 30 to 40 microseconds, enough time for them to run through a sequence of up to 40 quantum logic gates. IBM’s can reach up to 500 microseconds, but they also process gates more slowly.
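The arithmetic behind those figures is simple: coherence time divided by the time each gate takes bounds the circuit depth. Google’s ~1 µs per gate falls out of the numbers above (40 µs for 40 gates); the 5 µs used for IBM below is an invented placeholder, since the article gives no exact IBM gate time:

```python
# Rough gate budget: how many gates fit before the qubits decohere.
def max_gate_depth(coherence_us: float, gate_time_us: float) -> int:
    return int(coherence_us // gate_time_us)

# Google: ~40 us of coherence at roughly 1 us per gate (40 us / 40 gates).
google_depth = max_gate_depth(40, 1.0)

# IBM: up to 500 us of coherence but slower gates; 5 us is illustrative only.
ibm_depth = max_gate_depth(500, 5.0)

print(google_depth, ibm_depth)
```

Under these assumptions IBM’s longer coherence more than offsets its slower gates, which is why raw coherence time alone doesn’t settle the comparison.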
Software for quantum computers, meanwhile, is as much in its infancy as the machines themselves. In classical computing, programming languages are now several levels removed from the raw “machine code” that early software developers had to use, because the nitty-gritty of how data get stored, processed, and shunted around is already standardized. “On a classical computer, when you program it, you don’t have to know how a transistor works,” says Dave Bacon, who leads the Google team’s software effort. Quantum code, on the other hand, has to be highly tailored to the qubits it will run on, so as to wring the most out of their temperamental performance. That means the code for IBM’s chips won’t run on those of other companies, and even techniques for optimizing Google’s 53-qubit Sycamore won’t necessarily do well on its future 100-qubit sibling. More important, it means nobody can predict just how hard a problem those 100 qubits will be capable of tackling.
The most anyone dares to hope for is that computers with a few hundred qubits will be cajoled into simulating some moderately complex chemistry within the next few years—perhaps even enough to advance the search for a new drug or a more efficient battery. Yet decoherence and errors will bring all these machines to a stop before they can do anything really hard like breaking cryptography.
That will require a “fault-tolerant” quantum computer, one that can compensate for errors and keep itself running indefinitely, just as classical ones do. The expected solution will be to create redundancy: make hundreds of qubits act as one, in a shared quantum state. Collectively, they can correct for individual qubits’ errors. And as each qubit succumbs to decoherence, its neighbors will bring it back to life, in a never-ending cycle of mutual resuscitation.
The typical prediction is that it would take as many as 1,000 conjoined qubits to attain that stability—meaning that to build a computer with the power of 1,000 qubits, you’d need a million actual ones. Google “conservatively” estimates it can build a million-qubit processor within 10 years, Neven says, though there are some big technical hurdles to overcome, including one in which IBM may yet have the edge over Google (see opposite page).
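The redundancy arithmetic behind that million-qubit figure is just multiplication: each fault-tolerant “logical” qubit is encoded across roughly 1,000 physical ones.

```python
# Error-correction overhead: physical qubits per fault-tolerant logical qubit.
PHYSICAL_PER_LOGICAL = 1_000

def physical_qubits_needed(logical_qubits: int,
                           overhead: int = PHYSICAL_PER_LOGICAL) -> int:
    return logical_qubits * overhead

# A machine with the power of 1,000 error-corrected qubits -- Google's
# "conservative" 10-year target -- implies a million physical qubits.
print(physical_qubits_needed(1_000))
```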
By that time, a lot may have changed. The superconducting qubits Google and IBM currently use might prove to be the vacuum tubes of their era, replaced by something much more stable and reliable. Researchers around the world are experimenting with various methods of making qubits, though few are advanced enough to build working computers with. Rival startups such as Rigetti, IonQ, or Quantum Circuits might develop an edge in a particular technique and leapfrog the bigger companies.
A tale of two transmons

Google’s and IBM’s transmon qubits are almost identical, with one small but potentially crucial difference.
In both Google’s and IBM’s quantum computers, the qubits themselves are controlled by microwave pulses. Tiny fabrication defects mean that no two qubits respond to pulses of exactly the same frequency. There are two solutions to this: vary the frequency of the pulses to find each qubit’s sweet spot, like jiggling a badly cut key in a lock until it opens; or use magnetic fields to “tune” each qubit to the right frequency.
IBM uses the first method; Google uses the second. Each approach has pluses and minuses. Google’s tunable qubits work faster and more precisely, but they’re less stable and require more circuitry. IBM’s fixed-frequency qubits are more stable and simpler, but run more slowly.
From a technical point of view, it’s pretty much a toss-up, at least at this stage. In terms of corporate philosophy, though, it’s the difference between Google and IBM in a nutshell—or rather, in a qubit.
Google chose to be nimble. “In general our philosophy goes a little bit more to higher controllability at the expense of the numbers that people typically look for,” says Hartmut Neven.
IBM, on the other hand, chose reliability. “There’s a huge difference between doing a laboratory experiment and publishing a paper, and putting a system up with, like, 98% reliability where you can run it all the time,” says Dario Gil.
Right now, Google has the edge. As machines get bigger, though, the advantage may flip to IBM. Each qubit is controlled by its own individual wires; a tunable qubit requires one extra wire. Figuring out the wiring for thousands or millions of qubits will be one of the toughest technical challenges the two companies face; IBM says it’s one of the reasons they went with the fixed-frequency qubit. Martinis, the head of the Google team, says he’s personally spent the past three years trying to find wiring solutions. “It’s such an important problem that I worked on it,” he jokes.
A new Moore’s Law?

Rather than counting qubits, IBM tracks what it calls “quantum volume,” a measure of how much complexity a computer can actually handle. Its goal is to keep this measure doubling every year—a quantum version of the famous Moore’s Law that IBM has dubbed “Gambetta’s Law,” after Jay Gambetta, its chief quantum theoretician. So far, it’s held for three years. That’s as much data as Gordon Moore had when he postulated Moore’s Law in 1965.
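A yearly doubling compounds fast. The projection below assumes a baseline quantum volume of 16 purely for illustration; the article gives the doubling rate, not a starting value:

```python
# "Gambetta's Law": quantum volume doubles every year.
def quantum_volume(base_qv: int, base_year: int, year: int) -> int:
    return base_qv * 2 ** (year - base_year)

# With an assumed (illustrative) baseline of 16 in 2019:
for year in range(2019, 2025):
    print(year, quantum_volume(16, 2019, year))
```

After three years of doubling the measure is already 8× its baseline, which is why three data points, like Moore’s in 1965, are suggestive but hardly conclusive.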
But given their size and wealth, both Google and IBM have a shot at becoming serious players in the quantum computing business. Companies will rent their machines to tackle problems the way they currently rent cloud-based data storage and processing power from Amazon, Google, IBM, or Microsoft. And what started as a battle between physicists and computer scientists will evolve into a contest between business services divisions and marketing departments.
Which company is best placed to win that contest? IBM, with its declining revenues, may have a greater sense of urgency than Google. It knows from bitter experience the costs of being slow to enter a market: last summer, in its most expensive purchase ever, it forked over $34 billion for Red Hat, an open-source cloud services provider, in an attempt to catch up to Amazon and Microsoft in that field and reverse its financial fortunes. Its strategy of putting its quantum machines on the cloud and building a paying business from the get-go seems designed to give it a head start.
Google recently began to follow IBM’s example, and its commercial clients now include the US Department of Energy, Volkswagen, and Daimler. The reason it didn’t do this sooner, says Martinis, is simple: “We didn’t have the resources to put it on the cloud.” But that’s another way of saying it had the luxury of not having to make business development a priority.
Whether that decision gives IBM an edge is too early to say, but probably more important will be how the two companies apply their other strengths to the problem in the coming years. IBM, says Gil, will benefit from its “full stack” expertise in everything from materials science and chip fabrication to serving big corporate clients. Google, on the other hand, can boast a Silicon Valley–style culture of innovation and plenty of practice at rapidly scaling up operations.
As for quantum supremacy itself, it will be an important moment in history, but that doesn’t mean it will be a decisive one. After all, everyone knows about the Wright brothers’ first flight, but can anybody remember what they did afterwards?

by Gideon Lichfield. This story was part of our March/April 2020 issue.
"
|
13,488 | 2,020 |
"If DNA is like software, can we just fix the code? | MIT Technology Review"
|
"https://www.technologyreview.com/s/615188"
|
"If DNA is like software, can we just fix the code? By Erika Check Hayden

When you first meet her, you won’t be able to tell that Ipek Kuzu suffers from a rare genetic disease. The three-year-old plays happily on her own for hours, driving her toy cars and “cooking” in her pretend kitchen. But she’s not well. She’s a little wobbly on her feet and doesn’t say much, and if nothing is done, she may die by her mid-20s. Ipek has ataxia-telangiectasia, or A-T, a disease caused by an error in her DNA. It causes the loss of brain cells, along with a high risk of infection and cancer.
It’s the sort of problem that makes doctors shake their heads. But Ipek’s father, Mehmet, and mother, Tugba, hope she’ll escape that fate. Thanks in part to the persistence of Mehmet, a programmer at Google, in January she became one of the first handful of US patients to receive a hyper-personalized gene medicine, tailored to treat a unique mutation. The one-person drug, designed for her by a Boston doctor, Timothy Yu, is being called “atipeksen,” for “A-T” and “Ipek.” To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient’s cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there’s a chance of treating many genetic diseases, even those as unusual as Ipek’s.
The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.
“I never thought we would be in a position to even contemplate trying to help these patients,” says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. “It’s an astonishing moment.”

Antisense drug

Right now, though, insurance companies won’t pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it’s no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. “As computer scientists, they get it. This is all code,” says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.
A nonprofit, the A-T Children’s Project, funded most of the cost of designing and making Ipek’s drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn’t be more dramatic. “We’ve raised so much money, we’ve funded so much research, but it’s so frustrating that the biology just kept getting more and more complex,” he says. “Now, we’re suddenly presented with this opportunity to just fix the problem at its source.” Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: “We need many more years to make this happen,” they told him.
Kuzu didn’t have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children’s Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub “a stunning illustration of personalized genomic medicine.” Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.
That technology is called “antisense.” Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It’s possible to silence a gene this way, and sometimes to overcome errors, too.
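That letter-for-letter pairing is mechanical enough to sketch in code. Given a target mRNA fragment, the antisense sequence pairs A with U and G with C, reversed because the two strands run antiparallel (the sequence below is made up for illustration, not a real drug target):

```python
# Watson-Crick pairing rules for RNA.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    # Pair each base, reading in reverse so the strands run antiparallel.
    return "".join(RNA_COMPLEMENT[base] for base in reversed(mrna))

target = "AUGGCUUACG"   # hypothetical mRNA fragment
print(antisense(target))
```

Taking the antisense of an antisense sequence returns the original message, which is just the round-trip property of complementary pairing.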
Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That’s when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.
Yu, a specialist in gene sequencing, had not worked with antisense before, but once he’d identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn’t have to stop there. If he knew the gene error, why not create a gene drug? “All of a sudden a lightbulb went off,” Yu says. “Couldn’t one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go.” Yu admits it was bold to suggest his idea to Mila’s mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila’s particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila’s spine.
“What’s different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology,” says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.
Source code

As word got out about milasen, Yu heard from more than a hundred families asking for his help. That’s put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it’s not the right approach for everyone, and he’s still learning which diseases might be most amenable. And nothing is ever simple—or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.
Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What’s more, Margus agreed that the A-T Children’s Project would help fund the research. But it wouldn’t be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid’s cells responded best would get picked.
While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.
Seth’s daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth’s husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a “tiny random mutation” in her “source code.” The Seths have raised more than $2 million, much of it from co-workers.
Custom drug

By then, Yu was ready to give Kuzu the good news: Ipek’s cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.
After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track her brain volume and measure biomarkers in Ipek’s cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.
One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work. That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.” Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.” Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates.
Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects.
The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes.
Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality.
Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz.
This story was part of our March/April 2020 issue.
"
|
13,489 | 2,020 |
"What AI still can’t do | MIT Technology Review"
|
"https://www.technologyreview.com/s/615189"
|
"What AI still can’t do By Brian Bergstein

In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.
Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.” These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.
His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.
As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.
Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause them to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.
An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.
Performing miracles

The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant—the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.
Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.
Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
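Schölkopf’s stork example is easy to reproduce in a few lines of simulation (all numbers below are invented for illustration): a hidden “development” variable drives both storks and births, so the two correlate strongly even though neither causes the other, and the apparent link vanishes once the common cause is accounted for.

```python
import random

# Toy simulation of the stork/birth-rate example. Z (development) causes
# both X (storks) and Y (births); X and Y have no causal link to each other.
random.seed(1)

def corr(xs, ys):
    # Pearson correlation, written out from its definition.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

Z = [random.gauss(0, 1) for _ in range(5000)]      # development (hidden cause)
X = [z + random.gauss(0, 0.3) for z in Z]          # storks: driven by Z only
Y = [z + random.gauss(0, 0.3) for z in Z]          # births: driven by Z only

print(round(corr(X, Y), 2))                        # strong correlation (~0.9)
# Subtract out the common cause and the "link" disappears:
print(round(corr([x - z for x, z in zip(X, Z)],
                 [y - z for y, z in zip(Y, Z)]), 2))   # essentially zero
```

The second print is the punchline: the storks predict the births only through the confounder, never directly.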
Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.
In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.
One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).
The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.
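A toy version of the confounding problem McElreath faces can show why naive comparisons mislead and how Pearl’s backdoor adjustment, P(Y | do(X=x)) = Σ_z P(Y | x, z) P(z), corrects them. All probabilities below are made up for illustration: Z is “child needs extra help,” X is “grandparent lives in the home,” and Y is a good outcome for the child.

```python
# Invented numbers. By construction, a grandparent in the home (X=1) raises
# P(Y=1) by exactly 0.1 in every stratum of Z -- the true causal effect.
p_z = {0: 0.5, 1: 0.5}                      # P(Z=z)
p_x1_given_z = {0: 0.2, 1: 0.8}             # P(X=1 | Z=z): the confounding
p_y1 = {(0, 0): 0.7, (1, 0): 0.8,           # P(Y=1 | X=x, Z=z)
        (0, 1): 0.3, (1, 1): 0.4}

def p_y1_given_x(x):
    # Naive observational comparison P(Y=1 | X=x), which lets Z sneak in.
    px_z = {z: p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z] for z in (0, 1)}
    norm = sum(px_z[z] * p_z[z] for z in (0, 1))
    return sum(p_y1[(x, z)] * px_z[z] * p_z[z] for z in (0, 1)) / norm

def p_y1_do_x(x):
    # Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)
    return sum(p_y1[(x, z)] * p_z[z] for z in (0, 1))

naive = p_y1_given_x(1) - p_y1_given_x(0)
adjusted = p_y1_do_x(1) - p_y1_do_x(0)
print(round(naive, 2), round(adjusted, 2))  # -0.14 0.1
```

The naive comparison makes grandparents look harmful (−0.14) simply because they tend to live with the children who need the most help; adjusting for the confounder recovers the true +0.1 benefit.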
The last mile

Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.
He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.
Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.
Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.” That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.
He also doesn’t think it’s that far off: “This is the last mile before the victory.”

What if?

Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.
As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.
For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.
You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.”

Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.
By Brian Bergstein

This story was part of our March/April 2020 issue.
"
|
13,490 | 2,020 |
"How big tech hijacked its sharpest, funniest critics | MIT Technology Review"
|
"https://www.technologyreview.com/s/615190"
|
"How big tech hijacked its sharpest, funniest critics
By Tim Maughan
Bruce Sterling wasn’t originally meant to be part of the discussion. It was March 13, 2010, in Austin, Texas, and a small group of designers were on stage at the South by Southwest interactive festival, talking about an emerging discipline they called “design fiction.” “They asked me to join the panel at the last minute,” Sterling tells me, laughing. “They knew that I’d been [involved with] South by Southwest for a long time and this would give them some cred.” MALTESE FALCON / 1930 Dashiell Hammett’s MacGuffin was a piece of proto–design fiction.
FLAVANOID / 2007 A wearable device that measures your activity and uses the data to change your avatar in the virtual world Second Life.
SLOW MESSENGER / 2007 This gadget deliberately slows down the receipt of messages to push back against rushed, always-on culture.
BUTTONS: BLIND CAMERA / 2010 Sascha Pohflepp’s digital camera has no lens: instead, it shows you a photo taken and shared by somebody else at the exact same moment.
LITTLE PRINTER / 2012 A design fiction idea that became a real product, Berg London’s chirpy thermal printer took your feed of social media, news, and weather updates and turned it into a physical object.
TBD CATALOG / 2014 Combines Silicon Valley fever dreams with a satiric SkyMall presentation.
“UNINVITED GUESTS” / 2015 This short film by Superflux shows an elderly man getting the better of surveillance devices.
6ANDME / 2015 The service analyzes your social-media accounts to diagnose various fictional ailments.
A science fiction novelist who’d helped launch the cyberpunk movement in the 1980s, Sterling had actually coined the term design fiction in a 2005 book, but he hadn’t exactly taken ownership of the still-nebulous concept. What happened that day made it much clearer, though, and set off an explosion of ideas for everyone in attendance.
“People went out of that room and they were kind of visibly shaken,” he says. “Some guy came up in the back and told us, with this pale kind of look, ‘I think I’m starting to get it.’” The panel’s organizer was Julian Bleecker, an artist, technologist, and product designer from Los Angeles. He wanted to share his work—a new practice where designers and engineers used their skills to go beyond just thinking up and prototyping new consumer products. He wanted them to create objects that were not intended to be real products but could have been, and use them as portals for talking about tomorrow.
“Design fiction is a mix of science fact, design and science fiction,” Bleecker wrote on his blog in 2009. It “recombines the traditions of writing and storytelling with the material crafting of objects.” The objects made in design fiction are “diegetic prototypes,” he suggested. They are “props that help focus the imagination and speculate about possible near future worlds—whether profound change or simple, even mundane social practices.” One of the earliest examples is the late artist Sascha Pohflepp’s Buttons: Blind Camera.
Made in 2010, it is a sleek-looking digital camera that takes the minimal, post-Apple industrial design aesthetic to an extreme. It has only one button, a small color screen, and apparently no lens. Press the button and it, like any other camera, captures a moment of time in the form of a photograph. The difference is that it’s not a moment of your time. Instead, the camera connects to the internet to find another photo taken and shared by somebody else at the exact time you pressed that button, downloads it, and displays it on the screen.
It was a brilliantly simple idea, but crucially, it was not just a piece of concept art, or a prop in a speculative movie, or an art student’s mock-up. It was a real, functioning device. Pohflepp built it from the guts of a Sony Ericsson cell phone and code he’d written himself.
“It’s an object that’s somehow imbued with kind of a narrative function,” Bleecker says. “It helps tell a story; it pushes and pulls on characters in certain ways. I think the classic example is the Maltese Falcon. Hitchcock called them MacGuffins. It’s the thing around which the drama evolves and develops and moves.” In design fiction, the process of making—rather than just imagining—is the process of learning. “I don’t want to dismiss the significance or importance of a good creative idea, but ideas are kind of like a dime a dozen,” Bleecker says.
Back in 2007 he’d built the Slow Messenger, a handheld device that received messages but delayed presenting them—by minutes, days, or sometimes even years. It poked at the idea of instant, always-on communication that the internet was thrusting onto us. Shortly after that, he cofounded the Near Future Laboratory, a studio that produced this kind of exploratory work.
The lab created things like the TBD Catalog, a SkyMall-style magazine full of hilarious advertisements for disposable, very plausibly makeable near-future consumer crap with a tone reminiscent of Paul Verhoeven’s satirical sci-fi movies Robocop and Starship Troopers.
Then there is 6andMe, a service that analyzes your social-media accounts and diagnoses supposed “social media related pathologies.” (“Systrom’s Anxiety,” named for the Instagram cofounder, is the drive to record moments of one’s life for fear of not being able to repeat them in the future; “Six Degrees Jealousy” is when we envy somebody for getting more likes.) These maladies are all fictional, as is the service’s analysis, but the fake reports are sinisterly familiar to anybody who has spent time nervously checking Twitter or Instagram feeds.
As design fiction emerged, it turned out that governments, multinational companies, and art galleries were all interested in exploring what the future looked like, and intrigued by the charismatic objects the movement produced. The Near Future Lab joined a number of other boutique agencies that offered speculative services to their clients.
1. A Near Future project to create a unique controller for the game Katamari Damacy.
2. Bleecker's sketches wonder what real-world gestures are appropriate to turn into in-game actions. Could snowboarding be used to steer your character? 3. A prototype for Slow Messenger, which delays inbound mail by as much as a decade.
“We use objects to ask ‘Why/Why Not?’ questions,” explains Scott Smith, one of the founders of Changeist, a consultancy now based in the Netherlands that works mainly with large institutions. “We try to use the familiar forms and language of these bureaucracies to speak back to them—manuals, maps, forms, kits, procedures, organizations, and so on.” Design fiction rapidly expanded from a practice into an aesthetic: a style that used the languages of consumer product design and advertising to create fictional objects so instantly familiar to audiences that they feel real, close, or even inevitable. It’s that sense of something being unsettling yet just a few minutes into the future that you get from every dystopian app in Black Mirror or the ubiquitous voice assistant in Spike Jonze’s movie Her.
As the style went mainstream and commercial, however, it started to change. In 2011, glass manufacturer Corning released “A Day Made of Glass,” depicting a day in the life of a painfully perfect-looking family. Its five minutes of sleek concept video show every single glass surface—windows, mirrors, tabletops—becoming touch screens. Its 26 million YouTube views led Marketing Daily magazine to call it “the most watched corporate video of all time.” As dazzling and high-tech as it looked on release, it feels quite dull and naïve—even dystopian—nine years later. More important, it’s utterly lacking in the anarchic, critical attitude that marked early, genuine design fiction work. It was a sign of how corporate interests would appropriate design fiction—and declaw it.
A more recent example is a May 2019 Amazon ad for the Echo smart speaker, “Caring Is Sharing.” The 30-second spot shows a young man bringing his grandfather an Echo and installing it in his apartment, presumably to keep him company and to let family members stay in touch with him. He’s grumpy about it at first, reluctant to acknowledge it, but the next time his grandson comes to visit, he’s using it happily.
Though at first glance it seems like any other TV ad, “Caring Is Sharing” looks and feels eerily similar to “Uninvited Guests,” a five-minute satirical film made by Superflux, a London-based “speculative design agency,” in 2015. That video similarly portrays an old man living on his own who has been given a range of surveillance devices by well-meaning family members: a smart fork that measures the nutrients in his food and nags him about his salt and fat intake, a smart walking cane that scolds him if he doesn’t get his recommended daily steps, and a device that connects to his bed to make sure he’s getting enough sleep. But instead of succumbing to the intrusions of these devices—as in the Amazon ad—the protagonist of “Uninvited Guests” finds ways to fool them. He puts the smart fork in a plate of salad while eating fish and chips, pays a local teenager in beer to walk the smart cane for him, and piles books on his bed so it looks as if he’s sleeping when he watches TV.
Superflux’s cofounder Anab Jain hadn’t seen the Amazon film when I spoke to her, but she’s aware that corporations have used the speculative approach for marketing. “It’s deeply problematic,” she says. “It’s why we say no to work more than we say yes.” Jain, who prefers the term “speculative design” or “critical design” (because “frankly, all design is fiction until it’s real”), says some prospective clients pay lip service “to the criticality and to the questioning,” but “in the end they just want a PR exercise.” For Bleecker, this isn’t what design fiction should be. “There’s a number of those kinds of films that are essentially marketing exercises,” he says. “There was no sense that they were meant to be used internally to reflect upon and consider directions in which the company is going. They definitely come across as advertisements: ‘Look, we’re futuristic, we’ve got lots of concepts that relate to flat screens and graphics circulating and swirling around.’” In many ways design fiction’s path from a smart, anarchic movement to a marketing language for the industries it set out to lampoon is painfully familiar.
Last year designer and artist Tobias Revell claimed that “speculative design has failed to achieve the meaningful tools for change that we once hoped for.” It had become, he said, “a whitewashing exercise” for tech companies.
Others, meanwhile, suggest it was never going to be able to achieve its original goals: it was too wrapped up in corporate hegemony from the beginning, too exclusive and elitist. Design fiction was focused on “projects that clearly reflect the fear of losing first-world privilege in bleak, dystopic futures,” wrote Brazilian design duo A Parede in 2014.
Perhaps more practically, those working in the field faced another, also familiar issue: they had to balance their desire to do critical work with their need to pay the bills. This inevitably watered down their ability to achieve distance from the organizations that were lifting their ideas and aesthetics.
For agencies like Superflux and Changeist, that means continuing to take corporate contracts and using the money to work on more personal projects. Others have taken jobs with governments or big tech themselves. But while the surface may have been captured by Hollywood and the advertising industry, some folks are still plugging away, trying to navigate a path between the critical and the corporate.
And then there’s Bleecker himself. Ten years on, he’s still running Near Future Lab, working with clients, building objects from the future, and throwing out his own brand of wild ideas. But he’s also working on Omata, a small two-person company that makes high-tech cycling accessories. Its flagship product is a $550 screenless cycling computer that looks like a giant Swiss watch. It is a product for privileged first-worlders, not a tool for change; it is a beautiful object, obviously lovingly designed and born out of Bleecker’s very personal obsessions. But it is also a deliberate challenge to the idea of what would be expected from such a device.
“It almost seemed to me like … it would have to be something unexpected,” he says.
By doing the opposite of everything that corporate technology companies might try—the antithesis of a suite of interchangeable, low-cost, shrunken-down touch-screen gizmos—Omata is rooted in design fiction, with its mission to challenge us to imagine other futures and see the world differently.
Tim Maughan is a journalist and author. His debut novel Infinite Detail was picked by The Guardian as their Best Science Fiction Book of 2019.
By Tim Maughan

This story was part of our March/April 2020 issue.
"
|
13,491 | 2,020 |
"We’re not prepared for the end of Moore’s Law | MIT Technology Review"
|
"https://www.technologyreview.com/s/615226"
|
"We’re not prepared for the end of Moore’s Law
It has fueled prosperity of the last 50 years. But the end is now in sight.
By David Rotman
Gordon Moore’s 1965 forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975 is the greatest technological prediction of the last half-century. When it proved correct in 1975, he revised what has become known as Moore’s Law to a doubling of transistors on a chip every two years.
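Moore’s arithmetic is easy to reconstruct. The 1965 starting count below (about 2^6 = 64 components) is my assumption, read off the graph in Moore’s article rather than stated in the text here; doubling every year for a decade then lands almost exactly on his prediction.

```python
# Reconstructing Moore's 1965 extrapolation. The starting count is an
# assumption (~64 components in 1965); the doubling rule is from the text.
components_1965 = 64
doublings = 1975 - 1965                  # one doubling per year
components_1975 = components_1965 * 2 ** doublings
print(components_1975)                   # 65536 -- the "astonishing 65,000"
```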
Since then, his prediction has defined the trajectory of technology and, in many ways, of progress itself.
Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain—in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.
Soon these cheaper, more powerful chips would become what economists like to call a general purpose technology—one so fundamental that it spawns all sorts of other innovations and advances in multiple industries. A few years ago, leading economists credited the information technology made possible by integrated circuits with a third of US productivity growth since 1974. Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore’s prediction. It has also fueled today’s breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers.
But how did a simple prediction, based on extrapolating from a graph of the number of transistors by year—a graph that at the time had only a few data points—come to define a half-century of progress? In part, at least, because the semiconductor industry decided it would.
Moore wrote that “cramming more components onto integrated circuits,” the title of his 1965 article, would “lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment.” In other words, stick to his road map of squeezing ever more transistors onto chips and it would lead you to the promised land. And for the following decades, a booming industry, the government, and armies of academic and industrial researchers poured money and time into upholding Moore’s Law, creating a self-fulfilling prophecy that kept progress on track with uncanny accuracy. Though the pace of progress has slipped in recent years, the most advanced chips today have nearly 50 billion transistors.
Every year since 2001, MIT Technology Review has chosen the 10 most important breakthrough technologies of the year. It’s a list of technologies that, almost without exception, are possible only because of the computation advances described by Moore’s Law.
For some of the items on this year’s list the connection is obvious: consumer devices, including watches and phones, infused with AI; climate-change attribution made possible by improved computer modeling and data gathered from worldwide atmospheric monitoring systems; and cheap, pint-size satellites. Others on the list, including quantum supremacy, molecules discovered using AI, and even anti-aging treatments and hyper-personalized drugs, are due largely to the computational power available to researchers.
But what happens when Moore’s Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?

RIP

“It’s over. This year that became really clear,” says Charles Leiserson, a computer scientist at MIT and a pioneer of parallel computing, in which multiple calculations are performed simultaneously. The newest Intel fabrication plant, meant to build chips with minimum feature sizes of 10 nanometers, was much delayed, delivering chips in 2019, five years after the previous generation of chips with 14-nanometer features. Moore’s Law, Leiserson says, was always about the rate of progress, and “we’re no longer on that rate.” Numerous other prominent computer scientists have also declared Moore’s Law dead in recent years. In early 2019, the CEO of the large chipmaker Nvidia agreed.
In truth, it’s been more a gradual decline than a sudden death. Over the decades, some, including Moore himself at times, fretted that they could see the end in sight, as it got harder to make smaller and smaller transistors. In 1999, an Intel researcher worried that the industry’s goal of making transistors smaller than 100 nanometers by 2005 faced fundamental physical problems with “no known solutions,” like the quantum effects of electrons wandering where they shouldn’t be.
For years the chip industry managed to evade these physical roadblocks. New transistor designs were introduced to better corral the electrons. New lithography methods using extreme ultraviolet radiation were invented when the wavelengths of visible light were too thick to precisely carve out silicon features of only a few tens of nanometers. But progress grew ever more expensive. Economists at Stanford and MIT have calculated that the research effort going into upholding Moore’s Law has risen by a factor of 18 since 1971.
Likewise, the fabs that make the most advanced chips are becoming prohibitively pricey. The cost of a fab is rising at around 13% a year, and is expected to reach $16 billion or more by 2022. Not coincidentally, the number of companies with plans to make the next generation of chips has now shrunk to only three, down from eight in 2010 and 25 in 2002.
Nonetheless, Intel—one of those three chipmakers—isn’t expecting a funeral for Moore’s Law anytime soon. Jim Keller, who took over as Intel’s head of silicon engineering in 2018, is the man with the job of keeping it alive. He leads a team of some 8,000 hardware engineers and chip designers at Intel. When he joined the company, he says, many were anticipating the end of Moore’s Law. If they were right, he recalls thinking, “that’s a drag” and maybe he had made “a really bad career move.” But Keller found ample technical opportunities for advances. He points out that there are probably more than a hundred variables involved in keeping Moore’s Law going, each of which provides different benefits and faces its own limits. It means there are many ways to keep doubling the number of devices on a chip—innovations such as 3D architectures and new transistor designs.
These days Keller sounds optimistic. He says he has been hearing about the end of Moore’s Law for his entire career. After a while, he “decided not to worry about it.” He says Intel is on pace for the next 10 years, and he will happily do the math for you: 65 billion (number of transistors) times 32 (if chip density doubles every two years) is 2 trillion transistors. “That’s a 30 times improvement in performance,” he says, adding that if software developers are clever, we could get chips that are a hundred times faster in 10 years.
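Keller's arithmetic is easy to replay. The sketch below just runs the numbers from his quote (a starting count of 65 billion transistors and a density doubling every two years); it illustrates the claimed trajectory, not an actual Intel roadmap.

```python
# Replaying Keller's back-of-the-envelope math: 65 billion
# transistors today, density doubling every two years for a decade.
start_transistors = 65e9
doublings = 10 / 2                  # five doublings in ten years
factor = 2 ** doublings             # the "times 32" in his quote

projected = start_transistors * factor
print(f"{projected / 1e12:.2f} trillion transistors")  # 2.08 trillion
```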
Still, even if Intel and the other remaining chipmakers can squeeze out a few more generations of even more advanced microchips, the days when you could reliably count on faster, cheaper chips every couple of years are clearly over. That doesn’t, however, mean the end of computational progress.
Time to panic

Neil Thompson is an economist, but his office is at CSAIL, MIT’s sprawling AI and computer center, surrounded by roboticists and computer scientists, including his collaborator Leiserson. In a new paper, the two document ample room for improving computational performance through better software, algorithms, and specialized chip architecture.
One opportunity is in slimming down so-called software bloat to wring the most out of existing chips. When chips could always be counted on to get faster and more powerful, programmers didn’t need to worry much about writing more efficient code. And they often failed to take full advantage of changes in hardware architecture, such as the multiple cores, or processors, seen in chips used today.
Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code.
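The effect Thompson describes is easy to reproduce at toy scale. The snippet below is not the researchers' benchmark — the sizes here are deliberately small, and it swaps in NumPy's compiled kernels rather than hand-written C — but it shows the same phenomenon: moving a matrix multiplication out of interpreted Python loops buys orders of magnitude.

```python
import time
import numpy as np

n = 200  # small enough that the interpreted version finishes quickly

A = [[1.0] * n for _ in range(n)]
B = [[1.0] * n for _ in range(n)]

# Naive interpreted triple loop: every multiply-add pays the
# Python interpreter's per-operation overhead.
t0 = time.perf_counter()
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
t_loops = time.perf_counter() - t0

# The same arithmetic routed through NumPy's compiled kernels.
t0 = time.perf_counter()
C_fast = np.array(A) @ np.array(B)
t_numpy = time.perf_counter() - t0

print(f"loops: {t_loops:.2f}s  numpy: {t_numpy:.4f}s  "
      f"speedup: ~{t_loops / t_numpy:.0f}x")
```

The exact speedup depends on the machine and matrix size, but the interpreted version is reliably slower by a large factor.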
That sounds like good news for continuing progress, but Thompson worries it also signals the decline of computers as a general purpose technology. Rather than “lifting all boats,” as Moore’s Law has, by offering ever faster and cheaper chips that were universally available, advances in software and specialized architecture will now start to selectively target specific problems and business opportunities, favoring those with sufficient money and resources.
Indeed, the move to chips designed for specific applications, particularly in AI, is well under way. Deep learning and other AI applications increasingly rely on graphics processing units (GPUs) adapted from gaming, which can handle parallel operations, while companies like Google, Microsoft, and Baidu are designing AI chips for their own particular needs. AI, particularly deep learning, has a huge appetite for computer power, and specialized chips can greatly speed up its performance, says Thompson.
But the trade-off is that specialized chips are less versatile than traditional CPUs. Thompson is concerned that chips for more general computing are becoming a backwater, slowing “the overall pace of computer improvement,” as he writes in an upcoming paper, “The Decline of Computers as a General Purpose Technology.” At some point, says Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon, those developing AI and other applications will miss the decreases in cost and increases in performance delivered by Moore’s Law. “Maybe in 10 years or 30 years—no one really knows when—you’re going to need a device with that additional computation power,” she says.
The problem, says Fuchs, is that the successors to today’s general purpose chips are unknown and will take years of basic research and development to create. If you’re worried about what will replace Moore’s Law, she suggests, “the moment to panic is now.” There are, she says, “really smart people in AI who aren’t aware of the hardware constraints facing long-term advances in computing.” What’s more, she says, because application-specific chips are proving hugely profitable, there are few incentives to invest in new logic devices and ways of doing computing.
Wanted: A Marshall Plan for chips

In 2018, Fuchs and her CMU colleagues Hassan Khan and David Hounshell wrote a paper tracing the history of Moore’s Law and identifying the changes behind today’s lack of the industry and government collaboration that fostered so much progress in earlier decades. They argued that “the splintering of the technology trajectories and the short-term private profitability of many of these new splinters” means we need to greatly boost public investment in finding the next great computer technologies.
If economists are right, and much of the growth in the 1990s and early 2000s was a result of microchips—and if, as some suggest, the sluggish productivity growth that began in the mid-2000s reflects the slowdown in computational progress—then, says Thompson, “it follows you should invest enormous amounts of money to find the successor technology. We’re not doing it. And it’s a public policy failure.” There’s no guarantee that such investments will pay off. Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.
by David Rotman. This story was part of our March/April 2020 issue.
© 2023 MIT Technology Review
"
|
13,492 | 2,020 |
"The professionals who predict the future for a living | MIT Technology Review"
|
"https://www.technologyreview.com/s/615227"
|
"The professionals who predict the future for a living

By Bobbie Johnson

Inez Fung
Professor of atmospheric science, University of California, Berkeley
Prediction for 2030: We’ll light up the world … safely

I’ve spoken to people who want climate model information, but they’re not really sure what they’re asking me for. So I say to them, “Suppose I tell you that some event will happen with a probability of 60% in 2030. Will that be good enough for you, or will you need 70%? Or would you need 90%? What level of information do you want out of climate model projections in order to be useful?” I joined Jim Hansen’s group in 1979, and I was there for all the early climate projections. And the way we thought about it then, those things are all still totally there. What we’ve done since then is add richness and higher resolution, but the projections are really grounded in the same kind of data, physics, and observations.
Still, there are things we’re missing. We still don’t have a real theory of precipitation, for example. But there are two exciting things happening there. One is the availability of satellite observations: looking at the cloud is still not totally utilized. The other is that there used to be no way to get regional precipitation patterns through history—and now there is. Scientists found these caves in China and elsewhere, and they go in, look for a nice little chamber with stalagmites, and then they chop them up and send them back to the lab, where they do fantastic uranium-thorium dating and measure oxygen isotopes in calcium carbonate. From there they can interpret a record of historic rainfall. The data are incredible: we have got over half a million years of precipitation records all over Asia.
I don’t see us reducing fossil fuels by 2030. I don’t see us reducing CO 2 or atmospheric methane. Some 1.2 billion people in the world right now have no access to electricity, so I’m looking forward to the growth in alternative energy going to parts of the world that have no electricity. That’s important because it’s education, health, everything associated with a Western standard of living. That’s where I’m putting my hopes.
Anne Lise Kjaer
Futurist, Kjaer Global, London
Prediction for 2030: Adults will learn to grasp new ideas

As a kid I wanted to become an archaeologist, and I did in a way. Archaeologists find artifacts from the past and try to connect the dots and tell a story about how the past might have been. We do the same thing as futurists; we use artifacts from the present and try to connect the dots into interesting narratives in the future.
When it comes to the future, you have two choices. You can sit back and think “It’s not happening to me” and build a great big wall to keep out all the bad news. Or you can build windmills and harness the winds of change.
A lot of companies come to us and think they want to hear about the future, but really it’s just an exercise for them—let’s just tick that box, do a report, and put it on our bookshelf.
So we have a little test for them. We do interviews, we ask them questions; then we use a model called a Trend Atlas that considers both the scientific dimensions of society and the social ones. We look at the trends in politics, economics, societal drivers, technology, environment, legislation—how does that fit with what we know currently? We look back maybe 10, 20 years: can we see a little bit of a trend and try to put that into the future? What’s next? Obviously with technology we can educate much better than we could in the past. But it’s a huge opportunity to educate the parents of the next generation, not just the children. Kids are learning about sustainability goals, but what about the people who actually rule our world?

Philip Tetlock
Coauthor of Superforecasting and professor, University of Pennsylvania
Prediction for 2030: We’ll get better at being uncertain

At the Good Judgment Project, we try to track the accuracy of commentators and experts in domains in which it’s usually thought impossible to track accuracy. You take a big debate and break it down into a series of testable short-term indicators. So you could take a debate over whether strong forms of artificial intelligence are going to cause major dislocations in white-collar labor markets by 2035, 2040, 2050. A lot of discussion already occurs at that level of abstraction — but from our point of view, it’s more useful to break it down and to say: If we were on a long-term trajectory toward an outcome like that, what sorts of things would we expect to observe in the short term? So we started this off in 2015, and in 2016 AlphaGo defeated people in Go. But then other things didn’t happen: driverless Ubers weren’t picking people up for fares in any major American city at the end of 2017. Watson didn’t defeat the world’s best oncologists in a medical diagnosis tournament. So I don’t think we’re on a fast track toward the singularity, put it that way.
Forecasts have the potential to be either self-fulfilling or self-negating — Y2K was arguably a self-negating forecast. But it’s possible to build that into a forecasting tournament by asking conditional forecasting questions: i.e., How likely is X conditional on our doing this or doing that? What I’ve seen over the last 10 years, and it’s a trend that I expect will continue, is an increasing openness to the quantification of uncertainty. I think there’s a grudging, halting, but cumulative movement toward thinking about uncertainty, and more granular and nuanced ways that permit keeping score.
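The standard scoring rule behind this kind of forecasting tournament — and the one the Good Judgment Project is best known for using — is the Brier score, the mean squared error between stated probabilities and what actually happened. The numbers below are invented for illustration; this is a generic sketch, not the project's actual scoring code.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (1 = it happened, 0 = it didn't). Lower is better:
    a perfect forecaster scores 0; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leaned the right way on three questions...
sharp = brier_score([0.9, 0.2, 0.8], [1, 0, 1])
# ...versus one who hedged everything at fifty-fifty.
hedged = brier_score([0.5, 0.5, 0.5], [1, 0, 1])

print(f"{sharp:.2f} vs {hedged:.2f}")  # 0.03 vs 0.25
```

Scoring rules like this are what make "more granular and nuanced ways that permit keeping score" concrete: confident, correct forecasts are rewarded, and confident, wrong ones are penalized heavily.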
Keith Chen
Associate professor of economics, UCLA
Prediction for 2030: We’ll be more—and less—private

When I worked on Uber’s surge pricing algorithm, the problem it was built to solve was very coarse: we were trying to convince drivers to put in extra time when they were most needed. There were predictable times—like New Year’s—when we knew we were going to need a lot of people. The deeper problem was that this was a system with basically no control. It’s like trying to predict the weather. Yes, the amount of weather data that we collect today—temperature, wind speed, barometric pressure, humidity data—is 10,000 times greater than what we were collecting 20 years ago. But we still can’t predict the weather 10,000 times further out than we could back then. And social movements—even in a very specific setting, such as where riders want to go at any given point in time—are, if anything, even more chaotic than weather systems.
These days what I’m doing is a little bit more like forensic economics. We look to see what we can find and predict from people’s movement patterns. We’re just using simple cell-phone data like geolocation, but even just from movement patterns, we can infer salient information and build a psychological dimension of you. What terrifies me is I feel like I have much worse data than Facebook does. So what are they able to understand with their much better information? I think the next big social tipping point is people actually starting to really care about their privacy. It’ll be like smoking in a restaurant: it will quickly go from causing outrage when people want to stop it to suddenly causing outrage if somebody does it. But at the same time, by 2030 almost every Chinese citizen will be completely genotyped. I don’t quite know how to reconcile the two.
Annalee Newitz
Science fiction and nonfiction author, San Francisco
Prediction for 2030: We’re going to see a lot more humble technology

Every era has its own ideas about the future. Go back to the 1950s and you’ll see that people fantasized about flying cars. Now we imagine bicycles and green cities where cars are limited, or where cars are autonomous. We have really different priorities now, so that works its way into our understanding of the future.
Science fiction writers can’t actually make predictions. I think of science fiction as engaging with questions being raised in the present. But what we can do, even if we can’t say what’s definitely going to happen, is offer a range of scenarios informed by history.
There are a lot of myths about the future that people believe are going to come true right now. I think a lot of people—not just science fiction writers but people who are working on machine learning—believe that relatively soon we’re going to have a human-equivalent brain running on some kind of computing substrate. This is as much a reflection of our time as it is what might actually happen.
It seems unlikely that a human-equivalent brain in a computer is right around the corner. But we live in an era where a lot of us feel like we live inside computers already, for work and everything else. So of course we have fantasies about digitizing our brains and putting our consciousness inside a machine or a robot.
I’m not saying that those things could never happen. But they seem much more closely allied to our fantasies in the present than they do to a real technical breakthrough on the horizon.
We’re going to have to develop much better technologies around disaster relief and emergency response, because we’ll be seeing a lot more floods, fires, storms. So I think there is going to be a lot more work on really humble technologies that allow you to take your community off the grid, or purify your own water. And I don’t mean in a creepy survivalist way; I mean just in a this-is-how-we-are-living-now kind of way.
Finale Doshi-Velez
Associate professor of computer science, Harvard
Prediction for 2030: Humans and machines will make decisions together

In my lab, we’re trying to answer questions like “How might this patient respond to this antidepressant?” or “How might this patient respond to this vasopressor?” So we get as much data as we can from the hospital. For a psychiatric patient, we might have everything about their heart disease, kidney disease, cancer; for a blood pressure management recommendation for the ICU, we have all their oxygen information, their lactate, and more.
Some of it might be relevant to making predictions about their illnesses, some not, and we don’t know which is which. That’s why we ask for the large data set with everything.
There’s been about a decade of work trying to get unsupervised machine-learning models to do a better job at making these predictions, and none worked really well. The breakthrough for us was when we found that all the previous approaches for doing this were wrong in the exact same way. Once we untangled all of this, we came up with a different method.
We also realized that even if our ability to predict what drug is going to work is not always that great, we can more reliably predict what drugs are not going to work, which is almost as valuable.
I’m excited about combining humans and AI to make predictions. Let’s say your AI is right only 70% of the time and your human is also only right 70% of the time. Combining the two is difficult, but if you can fuse their successes, then you should be able to do better than either system alone. How to do that is a really tough, exciting question.
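A toy calculation shows why the fusion is worth pursuing — and why the hard part is the assumption it rests on. If the human's and the model's errors were fully independent (in practice they rarely are, since people and models tend to miss the same hard cases), Bayes' rule says the cases where the two agree are much more trustworthy than either one alone:

```python
# Two binary classifiers, each right 70% of the time, with
# independent errors -- the strong assumption real human-AI teams
# usually violate, because their mistakes tend to be correlated.
p = 0.7
both_right = p * p                # 0.49: they agree and are correct
both_wrong = (1 - p) * (1 - p)    # 0.09: they agree on the wrong answer
agree = both_right + both_wrong   # they agree 58% of the time

accuracy_when_agreeing = both_right / agree
print(f"accuracy on agreed cases: {accuracy_when_agreeing:.1%}")  # ~84.5%
```

Under independence, agreement alone lifts accuracy from 70% to roughly 84.5% on the agreed cases; the open research question is what to do on the 42% of cases where the two disagree, and how to fuse judgments when errors are correlated.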
All these predictive models were built and deployed and people didn’t think enough about potential biases. I’m hopeful that we’re going to have a future where these human-machine teams are making decisions that are better than either alone.
Abdoulaye Banire Diallo
Professor, director of the bioinformatics lab, University of Quebec at Montreal
Prediction for 2030: Machine-based forecasting will be regulated

When a farmer in Quebec decides whether to inseminate a cow or not, it might depend on the expectation of milk that will be produced every day for one year, two years, maybe three years after that. Farms have management systems that capture the data and the environment of the farm. I’m involved in projects that add a layer of genetic and genomic data to help forecasting — to help decision makers like the farmer to have a full picture when they’re thinking about replacing cows, improving management, resilience, and animal welfare.
With the emergence of machine learning and AI, what we’re showing is that we can help tackle problems in a way that hasn’t been done before. We are adapting it to the dairy sector, where we’ve shown that some decisions can be anticipated 18 months in advance just by forecasting based on the integration of this genomic data. I think in some areas such as plant health we have only achieved 10% or 20% of our capacity to improve certain models.
Until now AI and machine learning have been associated with domain expertise. It’s not a public-wide thing. But less than 10 years from now they will need to be regulated. I think there are a lot of challenges for scientists like me to try to make those techniques more explainable, more transparent, and more auditable.
by Bobbie Johnson. This story was part of our March/April 2020 issue.
"
|
13,493 | 2,020 |
"Predictions for 2030 by people shaping the world | MIT Technology Review"
|
"https://www.technologyreview.com/s/615228"
|
"Predictions for 2030 by people shaping the world

By Gideon Lichfield

World Economic Forum session with Ronaldo Lemos et al (photo: World Economic Forum / Christian Clavadetscher)

AI will cause a productivity boom
Erik Brynjolfsson, director, MIT Initiative on the Digital Economy (USA)

Machine learning has advanced tremendously over the past decade, yet US productivity growth has fallen by 50% since 2004. It’s not uncommon with powerful new general-purpose technologies to see first a dip in productivity growth followed by an increase. It takes time. With the steam engine, we saw the rise of industrialization. With electricity, factories were reinvented. Computers obviously changed many aspects of society, but e-commerce is still a minority of total retail trade, 25 years after Amazon was started. Likewise, machine learning is going to take a while to propagate through the economy. What’s needed is investments in new skills, and businesses that are willing to fundamentally rethink their supply chains, their relationships with customers, and the kinds of products and services they deliver. As they do that, the productivity is going to come online.
Africa will be a test bed for human-robot coexistence
Wanuri Kahiu, science fiction writer and filmmaker (Kenya)

Just as Kenya has been a place where digital payment technologies took off, I think it will become a testing ground for how people interact with AI and robots. The barriers to entry are low and there are few laws or social mores around AI, so it’s like a blank slate for experiments in coexistence between humans and machines. In Kinshasa almost 10 years ago, they installed robotic traffic cops and people obeyed them more than the human police, because the robots were not corrupt. There’s lots of potential for localized AI applications that help Africa deal with African problems, which is important because by 2050, one in four people will be African.
Consumers will have more power and more protection
Helena Leurent, director-general, Consumers International (UK)

Consumers will be part of data trusts and cooperatives that can safeguard their rights, negotiate for them on how their data is used, alert them to how they are being watched, and audit organizations that use their data. As an example, consumers might want their respective data trusts to connect directly to farmers who guarantee to use sustainable growing practices. The consumers would get better prices and have more information about what they’re buying; the farmers could get data and guarantees about purchasing patterns and would be able to differentiate their products. This “agricultural data commons” could spark innovation in products and services that both give consumers more choice and lead to greater sustainability.
The dollar will no longer be the world’s reserve currency
Michael Casey, chief content officer, CoinDesk (USA)

The dollar is the reserve currency because of its stability. If companies in two different countries sign a contract with payment due in 90 days, they set the transaction in dollars to protect against exchange-rate fluctuations. But when there are digital currencies with programmable smart contracts that can convert at an agreed rate and keep the payment in escrow until it’s due, they won’t need the dollar any more. This means the advantages to traditional US companies will diminish, but innovative, decentralized, globally minded companies will succeed.
We’ll recognize the brittleness of 20th-century infrastructure
Genevieve Bell, director, 3A Institute and senior fellow, Intel (Australia)

Over the last six weeks my country has been on fire, and I think 2030 looks like the world I’m now living in. One, the climate is changing faster and faster. Two, Australians are suddenly having to think much harder about how both their own personal data and government data is made accessible so they can get timely fire projections, evacuation requests, air-quality reports, and so on—so the questions about data that only those of us at the forefront of technology were asking are now mainstream. And three, we’ll have to contend with the fact that all the infrastructures of the 20th century—electricity, water, communications, civil society itself—are brittle, and this brittleness will make the 21st century harder to deliver.
We’ll grow plastics—and other materials—from plants
Zachary Bogue, managing partner, Data Collective Venture Capital (USA)

For the last 80 or 90 years our innovation in materials has been driven by petroleum — by recombining petroleum compounds into fuels, plastics, drugs, and so on. I think we’ll look back on the 2020s as a decade of innovation driven by biology. Genetically engineering plants to synthesize chemical compounds opens up a design space exponentially larger than petroleum, to create new materials that will let us live more sustainably and propel the economy forward. It’s already starting to happen — one of the companies we invest in makes a microbe that produces a palm-oil replacement, for example. What’s enabling all this is massive increases in computing power and AI that make it possible to model and design the necessary metabolic pathways.
Chinese phones will rule
Ronaldo Lemos, director, Institute for Technology and Society of Rio (Brazil)

By 2030 the most famous mobile-phone brands worldwide will be Chinese and they will run their own operating system, cutting the market penetration of Android in half.
Global supply chains will crumble and poor countries will suffer
Sharan Burrow, general secretary, International Trade Union Confederation (Australia)

3D printing, automation, and robotics will cause massive localization of manufacturing. If I can go to my local shop and I say I want my jeans with four stripes and three pockets and I want them now, the fast fashion industry is at risk. Food production will become more local too, and efforts to reduce the carbon footprint will change consumption patterns. So the supply chains on which global trade is based—dehumanizing and exploitative though they currently are—will in large part disappear from the most vulnerable countries, leaving the potential for failed states and even more desperate poverty. What we need is alternate modes of decent work, like child care, health care, elder care, education. We need to invest in human infrastructure, in support and services.
Small businesses will use supercomputers
Peter Ungaro, CEO, Cray (USA)

For example, there are hundreds of companies that make components for automotive manufacturers. Today they use small computer systems to do CAD drawings of their parts and some simulations. In the future, because of all the sensors that will be out there generating data, they’re going to have data sets 10, 100, 1,000 times bigger than today that they can compute on, changing how they model their parts. The technology they’ll do that with will be like a mini supercomputer. Some places will have one on the premises, and others will just access it via the cloud. And it won’t have to be one of these machines that today fill up two basketball courts and consume 30 megawatts. We’ll have it down to a single cabinet.
by Gideon Lichfield
This story was part of our March/April 2020 issue.
"
|
13,494 | 2,020 |
"Zooming | MIT Technology Review"
|
"https://www.technologyreview.com/s/615252/zooming-fiction-story-future"
|
Zooming
By Andrew Dana Hudson

Dogboy

I’m sitting in my parents’ basement, in a cracked pleather gaming chair, smelling my own funk, or maybe the damp of black mold, and 400 miles below me the whole world is laid out like some vast Tibetan tapestry, full of little demons and beasts and believers.
I tap, zoom, look, unzoom, slide, tap, zoom, look. Sometimes at familiar spots, but mostly just at random, searching for something happening somewhere that’s interesting enough to stream or gif or sell or just linger over. I watch Berliners mob a music festival. I watch mining equipment drag rocks out of an Australian quarry. I watch Pakistani dogs fighting over a chicken and hurricane clouds slamming into Cuba and an exhibitionist couple fucking on a bright red blanket on a Californian rooftop. I lose myself for a few minutes in the ripples of swaying Amazon jungle leaves, wondering how the wind feels to all those trees. And then I get bored, and I’m just zooming through my rounds again, not thinking much, and I see it.
Some kid is dragging a tasteful brown coffin out of the back of a pickup truck parked at the edge of a pile of trash in the junkyard just outside of town, my town. Silent thunk when the box hits the trashdirt, and the kid loses his grip, rolls it, and out comes a body. Denny’s body.
Never seen him from this angle before, fat face sprawled to the open sky, but somehow I know it’s him: the lima bean bald spot who wore a hideous Hawaiian shirt on their first date, just like the body is wearing now. Denny is the guy fucking my ex Michelle. Was the guy, because I’m pretty sure I’m looking at a live satellite feed of his corpse.
I zoom as hard as I can, but the algo caps the resolution when it thinks there are people in the frame. Panoram doesn’t want us swiping credit card numbers or peeking at text messages, even though they probably sell that data to marketing firms or use it to blackmail Saudi princes. I can see the coloration on individual feathers on a bird soaring over some pristine wilderness, but trying to identify a dead body is like spotting an acquaintance across the street through a smudgy bus window. Doesn’t matter how sure I am—no one else will believe me.
The kid plants his hands on his hips for a minute, then bends to shove Denny back in the coffin. He gets the lid on, latches it, I guess, and gives the coffin a couple rolls toward the junk pile.
I don’t do snuff zooms, even though they’re good money on the dark web. I don’t chase car crashes or predator drones or active shooters. I should bug out, look at something else, watch a nudist beach or contemplate some cracking, melting ice floe. Everyone knows Panoram can’t afford storage for all the imagery it takes, if storing that much data is even possible. If a user doesn’t record it, it’s gone forever—the tech-god is omniscient but forgetful. I could pretend I never saw Denny’s blurry pixel eyes staring up at me.
But death is weird when it’s someone you know, even if they didn’t know you. I never met Denny in person. I only know his name from my buddy Trent who still goes to Michelle’s restaurant sometimes. Still, I’ve watched Denny pick Michelle up from barre class, drop her off at work the next day. Little flick of the wrist as he called her back for one last kiss. Maybe I was jealous, but I didn’t hate him. We shared a world, and now someone’s thrown him dead in the garbage.
So I hit Record. Seems like the least I can do.
The kid wipes his brow, like “Another day, another dollar,” and I’m sweating just looking at him, itching at my pits, peering desperately into my monitor for some detail on the kid beyond the slightness of his frame and his logo-less baseball cap and grubby black T-shirt. But there’s nothing. Kid gets back in the pickup. It drives off.
I zoom out to follow. Long shot, but who knows where amateur body-dumpers get their vehicles. Couple miles from the junkyard, the truck turns in to a covered garage where empty fleet cars go to charge. I circle around the shiny black square of solar roof for a few minutes, just in case the kid hoofs it. Windowless sedans zip out of the hub like blind ants, leaving their anthill on pheromonic marching orders. He’s probably already in one, napping off the sun. I’ve lost him.
But I do have a time stamp. Silver pickup entered the hub at 11:28:15 MT. Just like in crime shows, the cops can warrant the garage logs, track the truck back to wherever it picked up the kid—and Denny’s coffin.
I should ping the cops. But I don’t, because there’s something else I’ve seen in crime shows. One in five homicides are committed by an intimate partner, which means there’s a non-zero possibility that Michelle was the one who had Denny offed. What if he beat her? Or stole her money? Or tried to sexually traffic her? I’m a snitch, but I’m not going to snitch on her.
My best bet is to find Michelle, keep recording the evidence, track her until I get the whole, fatal story. I pull an Adderall shot from my minifridge, slosh it down, toss the little can, purple liquid splatter joining the salsa stains on the wood-grain carpet. I order pizza to the basement door, text Mom and Dad that I’m staying in. It’ll be at least a day before they throttle my bandwidth to force me upstairs. I go to the bathroom and scrub caffeine on my face. Then I go looking for Michelle.
The thing about zooming is, it’s actually fucking hard to stalk people. Too much of life happens inside, underground, in cars or trains, under trees, on cloudy days. And they know we’re watching, so floppy hats are back in a big way, gated communities put up shade sails, couples kiss under umbrellas on rainless afternoons.
Then there are the anti-stalking algos that kick you off if you zoom in on the same address too long or too often. Panoram is for wildlife photography and storm chasing and seeing humanity in its broadest strokes: the daily heaving of commuters, migrants, pilgrims, supply chains, shipping lanes, air travel, construction sites, battle lines, strip-mining, clear-cutting, controlled burns, cook fires, city lights, parades, sports games, mass weddings, protests, riots.
Finding Michelle is like finding a needle in a haystack when the haystack is on fire. Impossible—except I’ve had a lot of practice.
I catch her coming out of the Thai place when her shift ends after the lunchtime rush. I know it’s her from the way she twists her hair up into a bun and the stretch she does, there on the sidewalk, to celebrate being off the clock. She’s unbuttoned her white hostess shirt, down to a sweaty halter top, and the slight angle of the satellite lets me gaze right into her pixelated cleavage. She arches her back like she wants me to see.
Everyone checks up on their exes, right? I don’t want her back, but I zoom her when I want a reminder that she’s hot, cool, and successful, and for a while she chose me. Or else I want evidence that she’s miserable and pathetic without me. Or maybe she’s ugly, tacky, slutty, immoral, and I’m better off without her, better than her, now that I’ve come to my senses and moved on. Or none of that. It’s just an itch to scratch.
Today she’s got a bounce in her step, like she got a really good night’s sleep or maybe got away with murder. She’s not checking her phone or edging away from passersby or any of the nervous movements I’d expect from someone whose boyfriend has gone missing, who’s involved in a criminal conspiracy, who’s about to go on the lam.
Michelle walks to the library, comes out 10 minutes later. She goes to a coffee shop, spends an hour inside. To keep the algo from getting suspicious, I pan over the café slowly, jump to a random spot, then come back and sweep the surrounding blocks in case I missed her. Rinse, repeat. My pizza arrives. It’s pure luck that I catch her leaving.
More errands. I haven’t zoomed on one person this long since I watched a Mongolian nomad track a runaway horse two days across the steppe. I’ve followed Michelle before, but always with a bored, idle, compulsive curiosity—never with actual focus.
She goes to barre class. I figure this is it. When she’s done, either she’ll wait for Denny to pick her up until she realizes he’s not coming, or she’ll just go, because she already knows where Denny is.
Fifty minutes later the studio empties. A dozen pairs of yoga pants come out, all buzzing with post-workout endorphins. They scatter, but not Michelle. She waves them off, plops down on the curb, waits.
I get this rush of relief, and I’m about to call the cops, tell them about Denny—anonymized so there are no questions about why the victim’s girlfriend’s ex-boyfriend knows where the body is—when a car pulls up.
From my vantage, it’s a windowless black lozenge. A side panel opens, and out leans the same black T-shirt and cap, same slight arms that rolled Denny onto the trash heap this morning.
I want to scream down from the heavens, blare on some global satellite PA system, warn her: Do not get in that fucking car.
She gets in the car. It drives off.
It’s rush hour now, and tracking the car is like playing Grand Theft Auto and Frogger and a street hustler’s shell game. I ache for the days of early Panoram, when they still let in third-party algos that could track vehicles and individuals for you. Dozens of identical sedans merge and exit in a tight, automated gridlock, and I go cross-eyed trying to stare at the one Michelle is in.
Either my ex is heading off into the sunset with the hit man she hired to get rid of Denny, or she’s riding around with a killer and has no clue how much danger she’s in.
I call her phone. No answer. I text her: Jump out of that car! That gets her attention. She calls me.
“Shawn, you can’t keep doing this,” she says. “I deserve privacy—you agreed! If you zoom me again, I’ll ... I’ll report you to Panoram. I’ll get a restraining order.” I tell her it’s not like that. I tell her she’s in danger. I tell her I saw the guy in the car dump the body.
She says, “What body?” So I tell her to open Panoram on her phone and zoom on the trash pile in the junkyard just outside of town, our town. I ping her the coordinates and tell her to look for a coffin.
Pause with some heavy sighs as I guess she does what I ask. Then: “I don’t see anything but garbage and big crane things.” I zoom back to the junkyard on my own screen. A pair of earthmovers are rearranging the trash pile right where Denny’s coffin had been. Fuck.
I tell her she has to believe me.
She says, “Shawn, how long have you been staring at that screen? Maybe you should get out.” Fine, I say. Fine. I’ll show you. I send her my location. Then I get out of my chair.
In the garage is the bike I never ride. My dad keeps the tires pumped up because he read a book about how the best way to parent my generation is to remove the obstacles that prevent us from exiting self-destructive behavior. I clip in my phone, roll out of the garage, immediately start sweating in the sunset heat.
Riding the bike again is just like riding a bike, but harder. My legs ache, my lungs burn. I look up over my shoulder, and I try not to imagine how my soaked back, hunched over the handlebars, must look to Michelle through the satellites above.
I take the bike paths that tendril out of town—faster than rush hour traffic, even at my huffing pace.
All the while, I’m on the phone with her, trying to explain, though I’m out of breath. Eventually she says, “Okay, let me come meet you. We can figure this out.” Then neither of us talks much. For some reason, I feel better, even though I know that if she is a killer, she’s probably only coming to kill me too. I keep my eyes on the road, and on the blip of my body that Panoram keeps centered on the map it lays over the feed on my phone.
There’s no guard at the junkyard, just a gate where you insert your credit card. All the junk is chipped, and you pay by the pound. I dismount and walk into the stacks of objects too toxic to compost, too complex to recycle, too useless to repair. After a day of looking down, their three dimensions weird me out; their perfect resolution sets my teeth on edge.
The automated earthmovers have wandered off, but I see the work they’ve done. They’ve lifted Denny’s heap and set it precariously on top of an adjacent pile, a steep little hill of things no one wants. I see the brown corner of the coffin near the top, covered by a tangle of broken clothes hangers and old halogen lamps.
My fingers twitch and pinch, and with a bolt of shame, I realize I want to zoom on that box. But I can’t. Instead I walk up to the hill, get purchase on a torn-open-mattress spring, and begin to climb.
The sun trickles away, and inch by rattling inch I edge up the mound of trash, toward the sky. I’m almost to the box when I hear Michelle’s voice.
“Shawn! Please! You have to come down from up there!” I crane my neck, and she’s there, just how I remembered: overbleached barrel-collar shirt and sensible flats. She clutches her phone, and I can see Panoram’s darkening view of the junkyard between her white knuckles. Her face is a picture of concern.
Next to her stands a skinny guy, the kid, maybe, though in the flesh he looks older. Is he angry? Stoic? Sympathetic? Territorial? I can’t read him. T-shirt more green than dark, and he’s ditched the baseball cap. But he’s still the kid I saw, I know it, he’s got to be. Except—there’s this bald spot that licks over his scalp, shaped like a lima bean.
I ask who’s that.
“Shawn, this is my partner Denny,” Michelle says. “He came with me because he’s worried. We all are. We don’t want you to hurt yourself.” I tell her that’s bullshit. I tell her Denny’s dead.
“Shawn, come down here. Talk to us. Look me in the eye for once.” I keep climbing. I get to the coffin. From here it’s not so sleek. No $10,000 polished mahogany, just stained plywood, glued together. More of a shipping box than a proper casket.
I try to tug it out of the pile. The junk shifts, but doesn’t budge.
I hear whispering from below, then feel a creak. New Denny is on the pile with me, climbing.
I’m a sitting duck. Whoever this guy is, he knows I know too much. I could kick at his face, but my legs are sore from biking, cramped from sitting all day. Instead I edge away around the peak of the pile. He can’t see me, but I can’t see him. I pull out my phone and watch through Panoram as his bald spot picks its way up the hill.
He’s going to beat me and strangle me, and then he’ll probably have to kill Michelle too, bury both of us in this trash heap with his first victim. I can see it all in my head, from a god’s-eye view. The way he’ll put his hands on his hips after he shoves us into the garbage, wipe his brow, walk back and get a car, slip into the pool of anonymous everyones, safe from the eyes above. Our one chance at justice would be another zoomer, recording in Panoram, but what are the chances lightning will strike twice? There’s no one, because no one cares about this place or this body or Michelle or me except me.
He’s almost around the corner. My eyes don’t leave the screen, but my free hand closes on something long and thin—one of the lamps—and I swing out to the right. The lamp rattles my arm as it hits, and I look over to see New Denny grimace, go blank, and topple. There’s a moment of thick, curdled time as he falls, but then he’s rolling down the pile with clank and crunch. He comes to rest rag-doll limp at the bottom of the junk heap, skinny face sprawled to the open sky.
Michelle runs forward. She screams. She’s got her hands on his head and she’s wobbling it, trying to make it sit right on his neck. But it won’t.
I stagger down the pile. The guy lies still, except for Michelle’s jostling. She’s pounding on his empty chest, saying, “Shit, we shouldn’t have come. Shit.” I don’t feel anything, just Adderall crash mixing with adrenaline rush and cyclist high. I should go to her, comfort her, put my arms around her, but my eyes keep tugging away to the glow of the phone she’s dropped. On the sepia-shifted screen I see the whole scene playing out in miniature. The blur of a woman, crouched by the blur of a body. And me, standing over them, the blur of a killer.
I pick up the phone. Panoram’s red recording dot blinks at me. I know what I’d think if I were zooming this right now. I wouldn’t understand at all.
I put her phone in my back pocket, squeezed next to my own, then scramble back up the pile. I get on top of the coffin, clear off the junk, and then shove. In jerks and tips, I haul the box to the ground.
Michelle is staring at me, and I don’t understand her expression. She’s picked up a broken chair leg from the pile, holds it at her side like a club.
“Give me my phone,” she says. “I’m going to call the police. We’ll tell them you had an episode, you got confused. I’ll make them understand.” She doesn’t know I saved her. I tell her she has to see this. I bend to work the latches.
Doubt comes to me then. For a blink, I’m expecting to find a mannequin, some haunted house prop, thrown away by a carnival, blurred by Panoram, interpreted by my brain as a vast conspiracy that I was uniquely qualified to untangle. What if there’s nothing in there except my own ego, pattern recognition, and the follies of know-nothing omniscience? But in the box there is a body.
Hawaiian shirt and a placid, pale, lumpy face. It sits at the edge of the heap, parallel to New Denny, both missing that vital force that makes meat mean something.
“Who the fuck is that?” Michelle says. She pauses, then adds, “Shawn, what the fuck did you do?” That guy did it, I tell her. I saw it. Just zooming around, and I saw it. She should have just gotten out of the car, and I could have shown her alone, but she brought him, and he was going to kill us both.
She’s shaking her head, red wet eyes full of hate and pity.
I tell her I’ll prove it. I look down, dig for my phone, and she hits me. I’m on the ground, wind knocked out of me, pain screaming in my skull. I feel the two phones tug out of my back pocket. Then I get a little air, and close my eyes.
When I come to, Michelle is gone. The sun is gone too, the pink drained from the sky. The bodies are still there, but there’s no hiding them now.
I stagger to the junkyard exit. Michelle has taken my bike, or someone has. I stare down the road, thinking of the silver pickup, trying to remember how far it was to that charging structure, trying to figure out if I could hoof it.
Red and blue lights start to flash in the distance. Whatever I did or didn’t see, it hardly matters now. Maybe Michelle is the killer, but she has my phone, probably remembers my passcode. She can delete my Panoram recording, pin both bodies on me. Or maybe she’s not, and I killed that man for nothing. Either way, when the cops get here, I’ll be jailed or committed, tucked in a tiny cell with no windows, nothing to see.
I run.
I flee the junkyard and the country road, staggering through brownfields and scrubby desert until the light pollution dims to a yellow haze. Above me, the stars grow brighter, and closer. Closer still are the winking eyes of Panoram, in an endless parade of overlapping rings—satellites dancing into new constellations, filling the firmament with heroes and gods and heretics.
The police will be watching me through them. They’ll have a picture-perfect view—crisp night vision, infrared. I can feel their gaze pressing on me, seeing everything about me but understanding nothing. I look for cover, but there is none. I’m exposed to the seeing sky.
Andrew Dana Hudson is a speculative fiction writer and graduate student at Arizona State University, where he researches climate politics and AI.
by Andrew Dana Hudson
This story was part of our March/April 2020 issue.
"
|
13,495 | 2,020 |
"Inside the race to build the best quantum computer on Earth | MIT Technology Review"
|
"https://www.technologyreview.com/s/615180/quantum-computer-race-ibm-google"
|
Inside the race to build the best quantum computer on Earth
By Gideon Lichfield

[Image: Quantum chandelier. Rigetti Computing / Justin Fantl]

Google’s most advanced computer isn’t at the company’s headquarters in Mountain View, California, nor anywhere in the febrile sprawl of Silicon Valley. It’s a few hours’ drive south in Santa Barbara, in a flat, soulless office park inhabited mostly by technology firms you’ve never heard of.
An open-plan office holds several dozen desks. There’s an indoor bicycle rack and designated “surfboard parking,” with boards resting on brackets that jut out from the wall. Wide double doors lead into a lab the size of a large classroom. There, amidst computer racks and jumbles of instrumentation, a handful of cylindrical vessels—each a little bigger than an oil drum—hang from vibration-damping rigs like enormous steel pupae.
On one of them, the outer vessel has been removed to expose a multi-tiered tangle of steel and brass innards known as “the chandelier.” It’s basically a supercharged refrigerator that gets colder with each layer down. At the bottom, kept in a vacuum a hair’s breadth above absolute zero, is what looks to the naked eye like an ordinary silicon chip. But rather than transistors, it’s etched with tiny superconducting circuits that, at these low temperatures, behave as if they were single atoms obeying the laws of quantum physics. Each one is a quantum bit, or qubit—the basic information-storage unit of a quantum computer.
Late last October, Google announced that one of those chips, called Sycamore, had become the first to demonstrate “quantum supremacy” by performing a task that would be practically impossible on a classical machine. With just 53 qubits, Sycamore had completed a calculation in a few minutes that, according to Google, would have taken the world’s most powerful existing supercomputer, Summit, 10,000 years.
Google touted this as a major breakthrough , comparing it to the launch of Sputnik or the first flight by the Wright brothers—the threshold of a new era of machines that would make today’s mightiest computer look like an abacus.
At a press conference in the lab in Santa Barbara, the Google team cheerfully fielded questions from journalists for nearly three hours. But their good humor couldn’t quite mask an underlying tension. Two days earlier, researchers from IBM, Google’s leading rival in quantum computing, had torpedoed its big reveal. They’d published a paper that essentially accused the Googlers of getting their sums wrong. IBM reckoned it would have taken Summit merely days, not millennia, to replicate what Sycamore had done. When asked what he thought of IBM’s result, Hartmut Neven, the head of the Google team, pointedly avoided giving a direct answer.
What’s in a qubit?

Just as there were different transistor designs in the early days of computing, there are currently many ways to make qubits. Google and IBM both use a version of the leading method, a superconducting transmon qubit, of which the core component is a Josephson junction. This consists of a pair of superconducting metal strips separated by a gap just a nanometer wide; the quantum effects are a result of how electrons cross that gap.
You could dismiss this as just an academic spat—and in a sense it was. Even if IBM was right, Sycamore had still done the calculation a thousand times faster than Summit would have. And it would likely be only months before Google built a slightly larger quantum machine that proved the point beyond doubt.
IBM’s deeper objection, though, was not that Google’s experiment was less successful than claimed, but that it was a meaningless test in the first place.
Unlike most of the quantum computing world, IBM doesn’t think “quantum supremacy” is the technology’s Wright brothers moment; in fact, it doesn’t even believe there will be such a moment.
IBM is instead chasing a very different measure of success, something it calls “quantum advantage.” This isn’t a mere difference of words or even of science, but a philosophical stance with roots in IBM’s history, culture, and ambitions—and, perhaps, the fact that for eight years its revenue and profit have been in almost unremitting decline, while Google and its parent company Alphabet have only seen their numbers grow. This context, and these differing goals, could influence which—if either—comes out ahead in the quantum computing race.
Worlds apart The sleek, sweeping curve of IBM’s Thomas J. Watson Research Center in the suburbs north of New York City, a neo-futurist masterpiece by the Finnish architect Eero Saarinen, is a continent and a universe away from the Google team’s nondescript digs. Completed in 1961 with the bonanza IBM made from mainframes, it has a museum-like quality, a reminder to everyone who works inside it of the company’s breakthroughs in everything from fractal geometry to superconductors to artificial intelligence—and quantum computing.
The head of the 4,000-strong research division is Dario Gil, a Spaniard whose rapid-fire speech races to keep up with his almost evangelical zeal. Both times I spoke to him, he rattled off historical milestones intended to underscore how long IBM has been involved in quantum-computing-related research (see time line at right).
A grand experiment: Quantum theory and practice

A quantum computer’s basic building block is the quantum bit, or qubit. In a classical computer, a bit can store either a 0 or a 1. A qubit can store not only 0 or 1 but also an in-between state called a superposition—which can assume lots of different values. One analogy is that if information were color, then a classical bit could be either black or white. A qubit when it’s in superposition could be any color on the spectrum, and could also vary in brightness.
The upshot is that a qubit can store and process a vast quantity of information compared with a bit—and capacity increases exponentially as you connect qubits together. Storing all the information in the 53 qubits on Google’s Sycamore chip would take about 72 petabytes (72 billion gigabytes) of classical computer memory. It doesn’t take a lot more qubits before you’d need a classical computer the size of the planet.
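The arithmetic behind that 72-petabyte figure is easy to check: a full state vector for n qubits holds 2^n complex amplitudes, and at 8 bytes per amplitude (two single-precision floats) the required memory doubles with every added qubit. A quick illustrative sketch, not tied to any real simulator:

```python
# Memory needed to hold the full state vector of an n-qubit system
# on a classical computer: 2**n complex amplitudes at 8 bytes each
# (two 32-bit floats per amplitude). Illustrative arithmetic only.

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 8

# 53 qubits -- the size of Google's Sycamore chip:
print(state_vector_bytes(53) / 1e15, "petabytes")  # roughly 72 petabytes

# Exponential growth: each extra qubit doubles the memory.
for n in (10, 20, 30, 40, 53):
    print(f"{n:2d} qubits -> {state_vector_bytes(n):,} bytes")
```

The doubling is the whole story: around 50 qubits you pass the capacity of the largest supercomputers, and not long after that you would need a classical machine the size of the planet, as the passage above notes.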
But it’s not straightforward. Delicate and easily disturbed, qubits need to be almost perfectly isolated from heat, vibration, and stray atoms—hence the “chandelier” refrigerators in Google’s quantum lab. Even then, they can function for at most a few hundred microseconds before they “decohere” and lose their superposition.
And quantum computers aren’t always faster than classical ones. They’re just different, faster at some things and slower at others, and require different kinds of software. To compare their performance, you have to write a classical program that approximately simulates the quantum one.
For its experiment, Google chose a benchmarking test called “random quantum circuit sampling.” It generates millions of random numbers, but with slight statistical biases that are a hallmark of the quantum algorithm. If Sycamore were a pocket calculator, it would be the equivalent of pressing buttons at random and checking that the display showed the expected results.
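A toy illustration of that statistical signature. This is not Google’s actual benchmark, and the exponentially distributed “ideal” probabilities are an assumption for illustration only: the point is that samples drawn from a random circuit’s skewed output distribution land on high-probability bitstrings more often than uniform guessing would.

```python
import random

random.seed(0)
n = 8                                   # qubits in this toy example
N = 2 ** n
# Stand-in for a random circuit's output distribution: exponentially
# distributed weights, normalized into probabilities.
weights = [random.expovariate(1.0) for _ in range(N)]
total = sum(weights)
probs = [w / total for w in weights]

# An ideal sampler draws bitstrings according to probs; its samples favor
# high-probability strings, so the mean ideal probability of the samples
# comes out near 2/N rather than the uniform 1/N.
samples = random.choices(range(N), weights=probs, k=20_000)
mean_p = sum(probs[s] for s in samples) / len(samples)
```

In the calculator analogy, "checking the display" means verifying that `mean_p` is biased well above 1/N, which a faulty (or classical, brute-force-limited) sampler would fail to achieve.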
Google simulated parts of this on its own massive server farms as well as on Summit, the world’s biggest supercomputer, at Oak Ridge National Laboratory. The researchers estimated that completing the whole job, which took Sycamore 200 seconds, would have taken Summit approximately 10,000 years. Voilà: quantum supremacy.
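Taking both runtimes at face value (neither figure is independently checked here), the claimed speedup works out to about a factor of a billion:

```python
# Speedup implied by Google's claim: 10,000 years on Summit
# vs. 200 seconds on Sycamore.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
speedup = 10_000 * SECONDS_PER_YEAR / 200   # roughly 1.6 billion
```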
So what was IBM’s objection? Basically, that there are different ways to get a classical computer to simulate a quantum machine—and that the software you write, the way you chop up data and store it, and the hardware you use all make a big difference in how fast the simulation can run. IBM said Google assumed the simulation would need to be cut up into a lot of chunks, but Summit, with 280 petabytes of storage, is big enough to hold the complete state of Sycamore at once. (And IBM built Summit, so it should know.)

But over the decades, the company has gained a reputation for struggling to turn its research projects into commercial successes. Take, most recently, Watson, the Jeopardy!-playing AI that IBM tried to convert into a robot medical guru. It was meant to provide diagnoses and identify trends in oceans of medical data, but despite dozens of partnerships with health-care providers, there have been few commercial applications, and even the ones that did emerge have yielded mixed results.
The quantum computing team, in Gil’s telling, is trying to break that cycle by doing the research and business development in parallel. Almost as soon as it had working quantum computers, it started making them accessible to outsiders by putting them on the cloud, where they can be programmed by means of a simple drag-and-drop interface that works in a web browser. The “IBM Q Experience,” launched in 2016, now consists of 15 publicly available quantum computers ranging from five to 53 qubits in size. Some 12,000 people a month use them, ranging from academic researchers to school kids. Time on the smaller machines is free; IBM says it already has more than 100 clients paying (it won’t say how much) to use the bigger ones.
None of these devices—or any other quantum computer in the world, except for Google’s Sycamore—has yet shown it can beat a classical machine at anything. To IBM, that isn’t the point right now. Making the machines available online lets the company learn what future clients might need from them and allows outside software developers to learn how to write code for them. That, in turn, contributes to their development, making subsequent quantum computers better.
This cycle, the company believes, is the fastest route to its so-called quantum advantage, a future in which quantum computers won’t necessarily leave classical ones in the dust but will do some useful things somewhat faster or more efficiently—enough to make them economically worthwhile. Whereas quantum supremacy is a single milestone, quantum advantage is a “continuum,” the IBMers say—a gradually expanding world of possibility.
This, then, is Gil’s grand unified theory of IBM: that by combining its heritage, its technical expertise, other people’s brainpower, and its dedication to business clients, it can build useful quantum computers sooner and better than anybody else.
In this view of things, IBM sees Google’s quantum supremacy demonstration as “a parlor trick,” says Scott Aaronson, a physicist at the University of Texas at Austin, who contributed to the quantum algorithms Google is using. At best it’s a flashy distraction from the real work that needs to take place. At worst it’s misleading, because it could make people think quantum computers can beat classical ones at anything rather than at one very narrow task. “‘Supremacy’ is an English word that it’s going to be impossible for the public not to misinterpret,” says Gil.
Google, of course, sees it rather differently.
Enter the upstart

Google was a precocious eight-year-old company when it first began tinkering with quantum problems in 2006, but it didn’t form a dedicated quantum lab until 2012—the same year John Preskill, a physicist at Caltech, coined the term “quantum supremacy.” The head of the lab is Hartmut Neven, a German computer scientist with a commanding presence and a penchant for Burning Man–style chic; I saw him once in a furry blue coat and another time in an all-silver outfit that made him look like a grungy astronaut. (“My wife buys these things for me,” he explained.)

Initially, Neven bought a machine built by an outside firm, D-Wave, and spent a while trying to achieve quantum supremacy on it, but without success. He says he convinced Larry Page, Google’s then CEO, to invest in building quantum computers in 2014 by promising him that Google would take on Preskill’s challenge: “We told him, ‘Listen, Larry, in three years we will come back and put a prototype chip on your table that can at least compute a problem that is beyond the abilities of classical machines.’”

Lacking IBM’s quantum expertise, Google hired a team from outside, led by John Martinis, a physicist at the University of California, Santa Barbara. Martinis and his group were already among the world’s best quantum computer makers—they had managed to string up to nine qubits together—and Neven’s promise to Page seemed like a worthy goal for them to aim for.
How to program a quantum computer

At its most basic level, the software in classical computers is a sequence of logic gates like NOT, OR, and NAND that change the contents (0 or 1) of bits. Quantum software, similarly, consists of sequences of logic gates acting on qubits, but it has a larger and more exotic set of gates with names like SWAP (which swaps the values of two qubits around), Pauli-X (a quantum version of the NOT gate, which flips a qubit’s value), and Hadamard (which turns a qubit from either 0 or 1 into a superposition of 0 and 1). There are as yet no quantum equivalents of higher-level languages like C++ or Java, but both Google and IBM have created graphical interfaces, like the one pictured above, to make programming with gates easy.
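The gates named above are just small matrices acting on a vector of amplitudes. A minimal single-qubit sketch in plain Python (no quantum SDK assumed):

```python
import math

# A single-qubit state is two complex amplitudes [a0, a1] for a0|0> + a1|1>;
# a gate is a 2x2 matrix applied to that vector.
def apply(gate, state):
    (a, b), (c, d) = gate
    return [a * state[0] + b * state[1], c * state[0] + d * state[1]]

X = [[0, 1], [1, 0]]                       # Pauli-X: the quantum NOT gate
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                      # Hadamard: creates superposition

zero = [1.0, 0.0]                          # the |0> state
flipped = apply(X, zero)                   # X|0> = |1>
superposed = apply(H, zero)                # (|0> + |1>)/sqrt(2)
probs = [abs(a) ** 2 for a in superposed]  # 50/50 measurement odds
```

Real quantum programs chain many such gates across many qubits; the state vector then grows as 2^n, which is exactly why classical simulation becomes intractable.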
The three-year deadline came and went as Martinis’s team struggled to make a chip both big enough and stable enough for the challenge. In 2018 Google released its largest processor yet, Bristlecone. With 72 qubits, it was well ahead of anything its rivals had made, and Martinis predicted it would attain quantum supremacy that same year. But a few of the team members had been working in parallel on a different chip architecture, called Sycamore, that ultimately proved able to do more with fewer qubits. Hence it was a 53-qubit chip—originally 54, but one of them malfunctioned—that ultimately demonstrated supremacy last fall.
For practical purposes, the program used in that demonstration is virtually useless—it generates random numbers, which isn’t something you need a quantum computer for. But it generates them in a particular way that a classical computer would find very hard to replicate, thereby establishing the proof of concept (see opposite page).
Ask IBMers what they think of this achievement, and you get pained looks. “I don’t like the word [supremacy], and I don’t like the implications,” says Jay Gambetta, a cautiously spoken Australian who heads IBM’s quantum team. The problem, he says, is that it’s virtually impossible to predict whether any given quantum calculation will be hard for a classical machine, so showing it in one case doesn’t help you find other cases.
To everyone I spoke with outside IBM, this refusal to treat quantum supremacy as significant verges on pigheadedness. “Anybody who will ever have a commercially relevant offering—they have to show supremacy first. I think that’s just basic logic,” says Neven. Even Will Oliver, a mild-mannered MIT physicist who has been one of the most even-handed observers of the spat, says, “It’s a very important milestone to show a quantum computer outperforming a classical computer at some task, whatever it is.”

The quantum leap

Regardless of whether you agree with Google’s position or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. The hope is that such machines could one day solve problems that require unfeasible amounts of brute-force computing power now, like modeling complex molecules to help discover new drugs and materials, or optimizing city traffic flows in real time to reduce congestion, or making longer-term weather predictions. (Eventually they might be capable of cracking the cryptographic codes used today to secure communications and financial transactions, though by then most of the world will probably have adopted quantum-resistant cryptography.) The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.
That uncertainty has to do with both hardware and software. On the hardware side, Google reckons its current chip designs can get it to somewhere between 100 and 1,000 qubits. However, just as a car’s performance doesn’t depend only on the size of the engine, a quantum computer’s performance isn’t simply determined by its number of qubits. There is a raft of other factors to take into account, including how long they can be kept from decohering, how error-prone they are, how fast they operate, and how they’re interconnected. This means any quantum computer operating today reaches only a fraction of its full potential.
Decoherence

Qubits store information the way a sieve stores water; even the most stable ones “decohere,” or fall out of their fragile quantum states, within a few hundred microseconds. Even before then, errors start to pile up. That means a quantum computer can do only so many sums before it grinds to a halt. Google’s larger chips decohere after 30 to 40 microseconds, enough time for them to run through a sequence of up to 40 quantum logic gates. IBM’s can reach up to 500 microseconds, but they also process gates more slowly.
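The arithmetic behind that gate budget is simple division. The per-layer gate time below is a hypothetical value, chosen only because it is consistent with the figures above (the article gives the coherence window and the resulting depth, not the gate time itself):

```python
# Rough circuit-depth budget: coherence window divided by time per layer
# of gates. The 1-microsecond layer time is an assumed figure picked to
# reproduce the "up to 40 gates in 30-40 microseconds" numbers above.
coherence_us = 40.0
assumed_layer_us = 1.0
max_depth = int(coherence_us / assumed_layer_us)  # about 40 gate layers
```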
Software for quantum computers, meanwhile, is as much in its infancy as the machines themselves. In classical computing, programming languages are now several levels removed from the raw “machine code” that early software developers had to use, because the nitty-gritty of how data get stored, processed, and shunted around is already standardized. “On a classical computer, when you program it, you don’t have to know how a transistor works,” says Dave Bacon, who leads the Google team’s software effort. Quantum code, on the other hand, has to be highly tailored to the qubits it will run on, so as to wring the most out of their temperamental performance. That means the code for IBM’s chips won’t run on those of other companies, and even techniques for optimizing Google’s 53-qubit Sycamore won’t necessarily do well on its future 100-qubit sibling. More important, it means nobody can predict just how hard a problem those 100 qubits will be capable of tackling.
The most anyone dares to hope for is that computers with a few hundred qubits will be cajoled into simulating some moderately complex chemistry within the next few years—perhaps even enough to advance the search for a new drug or a more efficient battery. Yet decoherence and errors will bring all these machines to a stop before they can do anything really hard like breaking cryptography.
That will require a “fault-tolerant” quantum computer, one that can compensate for errors and keep itself running indefinitely, just as classical ones do. The expected solution will be to create redundancy: make hundreds of qubits act as one, in a shared quantum state. Collectively, they can correct for individual qubits’ errors. And as each qubit succumbs to decoherence, its neighbors will bring it back to life, in a never-ending cycle of mutual resuscitation.
The typical prediction is that it would take as many as 1,000 conjoined qubits to attain that stability—meaning that to build a computer with the power of 1,000 qubits, you’d need a million actual ones. Google “conservatively” estimates it can build a million-qubit processor within 10 years, Neven says, though there are some big technical hurdles to overcome, including one in which IBM may yet have the edge over Google (see “A tale of two transmons,” below).
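The redundancy idea has a simple classical analogue, the repetition code: store one logical bit as many noisy physical copies and decode by majority vote. This sketch is illustrative only; real quantum error correction (surface codes, for instance) is far subtler, since quantum states cannot simply be copied.

```python
import random
from collections import Counter

random.seed(1)

def majority(bits):
    # Decode by majority vote (use an odd number of copies to avoid ties).
    return Counter(bits).most_common(1)[0][0]

def logical_error_rate(n_copies, p_flip, trials=10_000):
    # Encode logical 0 as n_copies physical 0s; each copy flips with p_flip.
    errors = 0
    for _ in range(trials):
        copies = [1 if random.random() < p_flip else 0 for _ in range(n_copies)]
        errors += majority(copies) == 1
    return errors / trials

raw = 0.1                               # 10% error per physical bit
protected = logical_error_rate(9, raw)  # orders of magnitude lower
```

Nine noisy copies turn a 10% physical error rate into a sub-0.1% logical one; piling on more redundancy suppresses errors further, which is the intuition behind the thousand-to-one overhead quoted above.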
By that time, a lot may have changed. The superconducting qubits Google and IBM currently use might prove to be the vacuum tubes of their era, replaced by something much more stable and reliable. Researchers around the world are experimenting with various methods of making qubits, though few are advanced enough to build working computers with. Rival startups such as Rigetti, IonQ, or Quantum Circuits might develop an edge in a particular technique and leapfrog the bigger companies.
A tale of two transmons

Google’s and IBM’s transmon qubits are almost identical, with one small but potentially crucial difference.
In both Google’s and IBM’s quantum computers, the qubits themselves are controlled by microwave pulses. Tiny fabrication defects mean that no two qubits respond to pulses of exactly the same frequency. There are two solutions to this: vary the frequency of the pulses to find each qubit’s sweet spot, like jiggling a badly cut key in a lock until it opens; or use magnetic fields to “tune” each qubit to the right frequency.
IBM uses the first method; Google uses the second. Each approach has pluses and minuses. Google’s tunable qubits work faster and more precisely, but they’re less stable and require more circuitry. IBM’s fixed-frequency qubits are more stable and simpler, but run more slowly.
From a technical point of view, it’s pretty much a toss-up, at least at this stage. In terms of corporate philosophy, though, it’s the difference between Google and IBM in a nutshell—or rather, in a qubit.
Google chose to be nimble. “In general our philosophy goes a little bit more to higher controllability at the expense of the numbers that people typically look for,” says Hartmut Neven.
IBM, on the other hand, chose reliability. “There’s a huge difference between doing a laboratory experiment and publishing a paper, and putting a system up with, like, 98% reliability where you can run it all the time,” says Dario Gil.
Right now, Google has the edge. As machines get bigger, though, the advantage may flip to IBM. Each qubit is controlled by its own individual wires; a tunable qubit requires one extra wire. Figuring out the wiring for thousands or millions of qubits will be one of the toughest technical challenges the two companies face; IBM says it’s one of the reasons they went with the fixed-frequency qubit. Martinis, the head of the Google team, says he’s personally spent the past three years trying to find wiring solutions. “It’s such an important problem that I worked on it,” he jokes.
A new Moore’s Law?

Rather than counting qubits, IBM tracks what it calls “quantum volume,” a measure of how much complexity a computer can actually handle. Its goal is to keep this measure doubling every year—a quantum version of the famous Moore’s Law that IBM has dubbed “Gambetta’s Law,” after Jay Gambetta, its chief quantum theoretician. So far, it’s held for three years. That’s as much data as Gordon Moore had when he postulated Moore’s Law in 1965.
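The growth law itself is easy to state in code. The starting value below is purely hypothetical, since the point is the doubling, not any particular IBM figure:

```python
# "Gambetta's Law": quantum volume doubles every year.
def projected_quantum_volume(initial_qv: int, years: int) -> int:
    return initial_qv * 2 ** years

projected_quantum_volume(8, 3)  # three doublings: 8 -> 64
```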
But given their size and wealth, both Google and IBM have a shot at becoming serious players in the quantum computing business. Companies will rent their machines to tackle problems the way they currently rent cloud-based data storage and processing power from Amazon, Google, IBM, or Microsoft. And what started as a battle between physicists and computer scientists will evolve into a contest between business services divisions and marketing departments.
Which company is best placed to win that contest? IBM, with its declining revenues, may have a greater sense of urgency than Google. It knows from bitter experience the costs of being slow to enter a market: last summer, in its most expensive purchase ever, it forked over $34 billion for Red Hat, an open-source cloud services provider, in an attempt to catch up to Amazon and Microsoft in that field and reverse its financial fortunes. Its strategy of putting its quantum machines on the cloud and building a paying business from the get-go seems designed to give it a head start.
Google recently began to follow IBM’s example, and its commercial clients now include the US Department of Energy, Volkswagen, and Daimler. The reason it didn’t do this sooner, says Martinis, is simple: “We didn’t have the resources to put it on the cloud.” But that’s another way of saying it had the luxury of not having to make business development a priority.
Whether that decision gives IBM an edge is too early to say, but probably more important will be how the two companies apply their other strengths to the problem in the coming years. IBM, says Gil, will benefit from its “full stack” expertise in everything from materials science and chip fabrication to serving big corporate clients. Google, on the other hand, can boast a Silicon Valley–style culture of innovation and plenty of practice at rapidly scaling up operations.
As for quantum supremacy itself, it will be an important moment in history, but that doesn’t mean it will be a decisive one. After all, everyone knows about the Wright brothers’ first flight, but can anybody remember what they did afterwards?

by Gideon Lichfield

This story was part of our March/April 2020 issue.
Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard How to fix the internet Katie Notopoulos New approaches to the tech talent shortage MIT Technology Review Insights Deep Dive Computing What’s next for the world’s fastest supercomputers Scientists have begun running experiments on Frontier, the world’s first official exascale machine, while facilities worldwide build other machines to join the ranks.
By Sophia Chen archive page AI-powered 6G networks will reshape digital interactions The convergence of AI and communication technologies will create 6G networks that make hyperconnectivity and immersive experiences an everyday reality for consumers.
By MIT Technology Review Insights archive page The power of green computing Sustainable computing practices have the power to both infuse operational efficiencies and greatly reduce energy consumption, says Jen Huffstetler, chief product sustainability officer at Intel.
By MIT Technology Review Insights archive page How this Turing Award–winning researcher became a legendary academic advisor Theoretical computer scientist Manuel Blum has guided generations of graduate students into fruitful careers in the field.
By Sheon Han archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
"
|
13,496 | 2,010 |
"The Evolution of Computer Science | MIT Technology Review"
|
"https://www.technologyreview.com/2010/06/03/26807/the-evolution-of-computer-science"
|
"The Evolution of Computer Science

By Emerging Technology from the arXiv

In 1958, Chaim Pekeris completed a landmark project in computer science. As a physicist at the Weizmann Institute of Science in Israel, he became fascinated with the relatively new science of quantum mechanics and its potential to explain from first principles the behaviour of atoms.
There was a problem, however. The equation developed by Schrödinger that could do the job was too complex for mere mortals to handle. Using it to determine the electronic energy levels of even a lowly helium atom was seemingly impossible.
Chaim had an idea, though: why not exploit the incipient field of computer science to do the job?
Today, Christoph Koutschan and Doron Zeilberger analyse the work he carried out over several years and compare it to the approach that they would take to the problem today. It makes for interesting reading.
Chaim’s task was monumental. He first had to persuade the Weizmann Institute, and a technical committee including Einstein and von Neumann, to build a computer. Einstein proved hard to persuade but was eventually won over by von Neumann.
WEIZAC, Israel’s first electronic computer, was built between 1954 and 1955. According to Wikipedia, WEIZAC was an asynchronous computer operating on 40-bit words. Instructions consisted of 20 bits: an 8-bit instruction code and 12 bits for addressing. For memory it had a magnetic drum that could store 1,024 words.
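Those widths make instruction decoding a two-line bit-twiddling exercise. The field order (opcode in the high bits) is an assumption for illustration; the description above specifies only the widths:

```python
# Unpack a 20-bit WEIZAC-style instruction: an 8-bit operation code and
# a 12-bit address (field order assumed, widths as described above).
def decode(instr: int) -> tuple[int, int]:
    opcode = (instr >> 12) & 0xFF
    address = instr & 0xFFF
    return opcode, address

decode((0x2A << 12) | 0x155)  # -> (42, 341)
```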
Today you’d get more processing power out of a washing machine.
Before the computer could get involved, Pekeris had to work out how to describe the two electrons and nucleus of a helium atom using the kind of recurring partial differential equations that a computer could handle. The result was a remarkable 33-term equation, which he derived by hand, a single calculation that Koutschan and Zeilberger estimate must have taken 20 person-hours.
The computer then had to be programmed, in machine code. That means writing the program entirely in 0s and 1s, a task for which Pekeris recruited help from an early self-taught programmer called Yigal Accad.
This program then kept WEIZAC busy for months, eventually producing a set of tables describing the energy levels of a helium atom, the first time this had been done accurately.
So how has Moore’s Law affected this process, ask Koutschan and Zeilberger.
They repeated Pekeris’ project using the modern tools of computer science to see how they compare.
They say that the numeric computation–the work done by WEIZAC–can be completed in a fraction of a second on any laptop.
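For a sense of how light such a calculation has become, here is a far cruder method than Pekeris’s 33-term recurrence: the textbook one-parameter variational estimate of helium’s ground-state energy, which runs instantly and still lands within about 2% of the accurate value.

```python
# Textbook variational estimate for helium (energies in hartrees), using
# a single effective nuclear charge zeta as the trial parameter:
#   E(zeta) = zeta**2 - 2*Z*zeta + (5/8)*zeta
Z = 2  # helium's nuclear charge

def energy(zeta: float) -> float:
    return zeta**2 - 2 * Z * zeta + 0.625 * zeta

best_zeta = Z - 5 / 16            # minimizer of the quadratic: 27/16
best_energy = energy(best_zeta)   # about -2.848, vs. the accurate -2.9037
```

Pekeris’s achievement was getting the remaining digits right; this sketch only shows how cheap the crude first approximation is on modern hardware.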
They also point out that the programming is much easier too, thanks to the many high-level programming languages available today. Few people these days have to mess around with machine code, thankfully.
But perhaps the most surprising thing is that the algebra for working out the differential equations can also be done much faster today using computer algebra programs such as Mathematica and Maple.
Koutschan and Zeilberger say they were able to condense this part of the project, which took Pekeris at least 20 hours, into a 2-hour session.
That’s a fascinating study, and counterintuitive in some ways too. While it’s easy to see the many orders of magnitude by which hardware has improved, it’s clear from this work that the speed-up from software is more limited.
In particular, an order of magnitude improvement–from 20 hours to 2 hours–in the time it takes to do the algebra for this problem is a surprisingly small improvement over a 50 year period.
But given the nature of the problem, it’s hard to imagine how further order-of-magnitude improvements can be possible. What kind of computer algebra software could allow a human to program it to work out these kinds of differential equations in minutes or seconds?
Of course the limiting factor here is not the software but the ‘wetware’. Which means that the next generation of improvements will either have to focus on improving the wetware or taking it out of the loop entirely.
Ref: arxiv.org/abs/1006.0200 : The 1958 Pekeris-Accad-WEIZAC Ground-Breaking Collaboration that computed Ground States of Two-Electron Atoms (and its 2010 Redux).
by Emerging Technology from the arXiv
"
|
13,497 | 2,019 |
"Ransomware hackers hit nearly two dozen Texas cities | MIT Technology Review"
|
"https://www.technologyreview.com/2019/08/20/133619/ransomware-hackers-hit-nearly-two-dozen-texas-cities"
|
"Ransomware hackers hit nearly two dozen Texas cities

By Patrick Howell O'Neill

The Texas State Capitol. "Texas Capitol" by The Brit_2 is licensed under CC BY-NC-ND 2.0.

In what looks like a highly coordinated cyberattack, approximately 23 cities and government agencies in Texas have been hit by hackers who held the captured computer systems ransom, Texas authorities said this week.
The ransomware incidents are yet another reminder that American cities are ill equipped to defend themselves in cyberspace. A May 2019 study found over 169 instances of ransomware infecting state and local governments since 2013.
Same but different: Dozens of US cities have been hit by ransomware this year. What makes the Texas attacks unique is their size and coordination. State authorities say that one single actor is likely responsible for all the incidents. If true, this is unlike any hacking campaign seen before.
The malware used in the attack was reported to be Sodinokibi, according to ZDNet.
Its creators had built one of the dominant ransomware operations online, reportedly pulling in over $2 billion in payments , before shutting down their operation in June in what they called “a well-deserved retirement.” StateScoop reported that the malware used in Texas may instead be the Ryuk ransomware, a strain that’s been found in a host of recent attacks on American cities.
A national problem: In the last five years, it’s become common for American cities to get hit with ransomware. Baltimore was infected this year at a recovery cost of $10 million.
A small Florida city paid $460,000 in ransom after an infection in June.
The Texas city of Borger is the only one so far to publicly say it was a victim of this latest wave of attacks. The state and other towns are otherwise keeping quiet. A Texas official told NPR he was “ not aware ” of any cities paying the ransom this time around.
Who pays? That May 2019 study of ransomware by the cybersecurity firm Recorded Future found that about 17% of state and local governments attacked end up paying the ransom. That number is actually considerably lower than what other organizations found: a 2019 report from CyberEdge found that 45% of organizations pay ransom, a rise from 38.7% in 2018.
The FBI recommends against paying ransom. And last month, the United States Conference of Mayors passed a resolution against paying such extortionists.
But the fact is that when an organization is hacked and has bad or nonexistent backups, paying ransom becomes an attractive option. The downside is that you are essentially funding the criminal gang behind the operation.
by Patrick Howell O'Neill
"
|
13,498 | 2,020 |
"A wave of ransomware hits US hospitals as coronavirus spikes | MIT Technology Review"
|
"https://www.technologyreview.com/2020/10/29/1011436/a-wave-of-ransomware-hits-us-hospitals-as-coronavirus-spikes"
|
"A wave of ransomware hits US hospitals as coronavirus spikes By Patrick Howell O'Neill Hospitals around the US have reported a wave of ransomware attacks.
Photo by Adhy Savala on Unsplash American hospitals are being targeted in a wave of ransomware attacks as covid-19 infections in the US break records and push the country’s health infrastructure to the limit. As reports emerge of attacks that interrupted health care in at least six US hospitals, experts and government officials say they expect the impact to worsen—and warn that the attacks could potentially threaten patients’ lives.
“I think we’re at the beginning of this story,” said Mike Murray, CEO at the health-care security firm Scope Security. “These guys are moving very fast and very aggressively. These folks seem to be trying to collect as much money as possible very quickly. I think it will be tomorrow or over the weekend before the real scale of this is understood. Compromises are still ongoing.” The Federal Bureau of Investigation, the Cybersecurity and Infrastructure Security Agency, and the Department of Health and Human Services published a dramatic warning on the night of Wednesday, October 28, about “imminent” ransomware threats to American hospitals. The agencies held a conference call with health-care security executives earlier that day to emphasize the need to prioritize this threat. Ransomware is a type of hack in which an attacker uses malware to hijack a victim’s system and demands payment before handing back control.
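Because the encryption step touches thousands of files in minutes, many defensive tools flag ransomware by its burst of file activity rather than by recognizing the malware itself. A minimal sketch of that heuristic (the class name and thresholds here are hypothetical, not any particular product's logic):

```python
from collections import deque

class BurstDetector:
    """Toy sliding-window detector: alert when file modifications spike."""

    def __init__(self, max_events: int = 50, window_seconds: float = 10.0):
        self.max_events = max_events
        self.window = window_seconds
        self.events = deque()  # timestamps of recent file modifications

    def record(self, timestamp: float) -> bool:
        """Record one file-modification event; return True if the rate is suspicious."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events

# Mass encryption: 60 files modified within a fraction of a second.
detector = BurstDetector()
alerts = [detector.record(0.01 * i) for i in range(60)]
```

In practice such detectors feed real filesystem events (from OS audit logs or minifilter drivers) into something like `record`, and pair an alert with automated isolation of the host.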
Hospitals including St. Lawrence Health System in New York, Sonoma Valley Hospital in California, and Sky Lakes Medical Center in Oregon have all said they’ve been hit by ransomware. A doctor told Reuters that one hospital had to function entirely on paper after its computers were taken offline.
Ransomware has grown into a multibillion-dollar international industry over the last decade, and the pandemic has only increased profits. Is there any way to stop the threat? One answer could be for the US government to carry out more offensive hacking operations against ransomware gangs, similar to one US Cyber Command conducted earlier this month.
But today’s attacks prove that definitively disrupting the activity of these criminals is easier said than done.
The infamous ransomware gang behind these new attacks is known primarily as UNC1878 or Wizard Spider. The group, believed to be operating out of Eastern Europe, has been tracked for at least two years across hundreds of targets.
“They’re incredibly prolific,” said Allan Liska, an intelligence analyst at the cybersecurity firm Recorded Future. “Their infrastructure is very good. You can see that because even with the takedowns Microsoft and Cyber Command have tried, they’re still able to operate. Honestly, they’re better funded and more skilled than many nation-state actors.” The hacking tools UNC1878 uses include the notorious TrickBot trojan to gain access to victims’ systems, and the Ryuk ransomware to extort victims. Several of the tools in the group’s arsenal spare targeted machines if the systems are operating in Russian or, sometimes, other languages used in post-Soviet nations.
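The locale-sparing behavior described above is well documented in public malware analyses. A minimal, purely illustrative sketch of such a language gate (the prefix list is hypothetical, and the real tools query Windows keyboard-layout and language APIs rather than a locale string):

```python
from typing import Optional

# Hypothetical prefix list for illustration only; it stands in for the
# checks public analyses describe (installed keyboard layouts, system
# language packs) on machines the malware declines to attack.
AVOIDED_LANGUAGE_PREFIXES = ("ru", "uk", "be", "kk")

def should_skip_machine(lang_code: Optional[str]) -> bool:
    """Return True if the reported system language matches an avoided prefix."""
    if not lang_code:
        return False
    return lang_code.lower().startswith(AVOIDED_LANGUAGE_PREFIXES)
```

On the defensive side, this quirk is why some responders half-jokingly suggest installing a Russian keyboard layout as a cheap, and unreliable, deterrent.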
The number of ransomware attacks against American hospitals has risen 71% from September to October 2020, according to the cybersecurity firm Check Point. The rest of the world has seen smaller but significant spikes in activity. Ryuk is responsible for 75% of ransomware attacks against American health-care organizations.
A patient died in September when ransomware hit a German hospital, but that attack appears to have targeted a hospital by mistake. By stark contrast, this week’s attacks are intentional.
"
|
13,499 | 2,021 |
"Recovering from the SolarWinds hack could take 18 months | MIT Technology Review"
|
"https://www.technologyreview.com/2021/03/02/1020166/solarwinds-brandon-wales-hack-recovery-18-months"
|
"Recovering from the SolarWinds hack could take 18 months The head of the agency leading US efforts to fix a Russian hacking attack says rebuilding will take a very long time.
By Patrick Howell O'Neill Jorge Acala/Unsplash Fully recovering from the SolarWinds hack will take the US government from a year to as long as 18 months, according to the head of the agency that is leading Washington’s recovery.
The hacking campaign against American government agencies and major companies was first discovered in November 2020. At least nine federal agencies were targeted, including the Department of Homeland Security and the State Department. The attackers, who US officials believe to be Russian, exploited a product made by the US software firm SolarWinds in order to hack government and corporate targets.
Brandon Wales, the acting director of CISA, the US Cybersecurity and Infrastructure Security Agency, says that it will be well into 2022 before officials have fully secured the compromised government networks. Even fully understanding the extent of the damage will take months.
“I wouldn’t call this simple,” Wales says. “There are two phases for response to this incident. There is the short-term remediation effort, where we look to remove the adversary from the network, shutting down accounts they control, and shutting down entry points the adversary used to access networks. But given the amount of time they were inside these networks—months—strategic recovery will take time.” "Given the amount of time they were inside these networks… strategic recovery will take time." When the hackers have succeeded so thoroughly and for so long, the answer sometimes can be a complete rebuild from scratch. The hackers made a point of undermining trust in targeted networks, stealing identities, and gaining the ability to impersonate or create seemingly legitimate users in order to freely access victims’ Microsoft 365 and Azure accounts. By taking control of trust and identity, the hackers become that much harder to track.
“Most of the agencies going through that level of rebuilding will take in the neighborhood of 12 to 18 months to make sure they’re putting in the appropriate protections,” Wales says.
American intelligence agencies say Russian hackers first infiltrated SolarWinds in 2019. Subsequent investigation has shown that the hackers started using the company’s products to distribute malware by March 2020, and their first successful breach of the US federal government came early in the summer. That’s a long time to go unnoticed—longer than many organizations keep the kind of expensive forensic logs you need to do the level of investigation required to sniff the hackers out.
SolarWinds Orion, the network management product that was targeted, is used in tens of thousands of corporations and government agencies. Over 17,000 organizations downloaded the infected back door. The hackers were extraordinarily stealthy and specific in targeting, which is why it took so long to catch them—and why it’s taking so long to understand their full impact.
The difficulty of uncovering the extent of the damage was summarized by Brad Smith, the president of Microsoft, in a congressional hearing last week.
“Who knows the entirety of what happened here?” he said. “Right now, the attacker is the only one who knows the entirety of what they did.” Kevin Mandia, CEO of the security company FireEye, which raised the first alerts about the attack, told Congress that the hackers prioritized stealth above all else.
“Disruption would have been easier than what they did,” he said. “They had focused, disciplined data theft. It’s easier to just delete everything in blunt-force trauma and see what happens. They actually did more work than what it would have taken to go destructive.” "This has a silver lining" CISA first heard about a problem when FireEye discovered that it had been hacked and notified the agency. The company regularly works closely with the US government, and although it wasn’t legally obligated to tell anyone about the hack, it quickly shared news of the compromise with sensitive corporate networks.
It was Microsoft that told the US government federal networks had been compromised. The company shared that information with Wales on December 11, he said in an interview. Microsoft observed the hackers breaking into the Microsoft 365 cloud that is used by many government agencies. A day later, FireEye informed CISA of the back door in SolarWinds, a little-known but extremely widespread and powerful tool.
This signaled that the scale of the hack could be enormous. CISA’s investigators ended up working straight through the holidays to help agencies hunt for the hackers in their networks.
These efforts were made even more complicated because Wales had only just taken over at the agency: days earlier, former director Chris Krebs had been fired by Donald Trump for repeatedly debunking White House disinformation about a stolen election.
While headlines about the firing of Krebs focused on the immediate impact on election security, Wales had a lot more on his hands.
The new man in charge at CISA is now faced with what he describes as “the most complex and challenging” hacking incident the agency has come up against.
The hack will almost certainly accelerate the already apparent rise of CISA by increasing its funding, authority, and support.
CISA was recently given the legal authority to persistently hunt for cyber threats across the federal government, but Wales says the agency lacks the resources and personnel to carry out that mission. He argues that CISA also needs to be able to deploy and manage endpoint detection systems on computers throughout the federal government in order to detect malicious behavior. Finally, pointing to the fact that the hackers moved freely throughout the Microsoft 365 cloud, Wales says CISA needs to push for more visibility into the cloud environment in order to detect cyber espionage in the future.
In the last year, supporters of CISA have been pushing for it to become the nation’s lead cybersecurity agency. An unprecedented cybersecurity disaster could prove to be the catalyst it needs.
“This has a silver lining,” said Mark Montgomery, who served as executive director of the Cyberspace Solarium Commission, in a phone call. “This is among the most significant malicious cyber acts ever conducted against the US government. The story will continue to get worse for several months as more understanding of what happened is revealed. That will help focus the incoming administration on this issue. They have a lot of priorities, so it would be easy for cyber to get lost in the clutter. That’s not going to happen now.”
"
|
13,500 | 2,021 |
"Could the ransomware crisis force action against Russia? | MIT Technology Review"
|
"https://www.technologyreview.com/2021/05/21/1025159/colonial-pipeline-russia-usa-hackers"
|
"Could the ransomware crisis force action against Russia? Moscow’s blind eye toward cybercriminals has made escalating attacks inevitable, say experts. But changing the approach is easier said than done.
By Patrick Howell O'Neill Woody Marshall/News & Record via AP What touches the American psyche more deeply than a gas shortage? If the Colonial Pipeline attack is any measure, nothing. Ransomware has been a growing problem for years, with hundreds of brazen criminal hacks against schools, hospitals, and city governments—but it took an attack that affected people’s cars for the US to really take notice.
The strike on the Colonial Pipeline may have only led to panic buying rather than genuine gas scarcity, but it pushed the country hard enough to demand a response from the president of the United States.
On May 10, after the company had paid $4.4 million to the hackers responsible, President Biden made his argument. While there was no evidence of direct Russian government involvement in the Colonial Pipeline attack, he said, Moscow has a responsibility to deal with criminals residing within its own borders.
His statement is based on what experts have long known: that Russia is a cybercrime superpower in large part because the line between government and organized crime is deliberately hazy.
“We have a 20-year history of Russia harboring cybercriminals,” says Dmitri Alperovitch, the former CTO of cloud security company CrowdStrike and chairman at the Silverado Policy Accelerator, a technology-focused think tank in Washington, DC. “At a minimum they turn a blind eye toward cybercriminals; at a maximum they are supported, encouraged, facilitated.” Knowing what is happening is one thing, however. What’s more difficult is working out how to change it.
Imposing consequences Under international law, states have a responsibility not to knowingly allow their territory to be used for international crime. This most often happens in piracy, but it also applies to terrorism and organized crime. Global agreements mean that governments are obligated to shut down such criminal activity or, if they lack capability, to get assistance to do so.
Russia, however, has been known to protect criminal hackers and even co-opt them to undertake attacks on its behalf. More often, it simply tolerates and ignores the crooks as long as the country itself is not affected. That means hackers will routinely skip any computer using the Russian language, for instance, in an implicit admission of how the game is played.
Meanwhile, the Kremlin routinely and strongly resists international efforts to bring the hackers to heel, simply throwing accusations back at the rest of the world—refusing to acknowledge that a problem exists, and declining to help.
On May 11, for example, shortly after Biden’s statement, Kremlin spokesman Dmitry Peskov publicly denied Russian involvement. Instead, he criticized the United States for “refusing to cooperate with us in any way to counter cyber-threats.” The calculus for Russia is difficult to measure clearly, but a few variables are striking: ransomware attacks destabilize Moscow’s adversaries, and transfer wealth to Moscow’s friends—all without much in the way of negative consequences.
Now observers are wondering if high-profile incidents like the pipeline shutdown will change the math.
“The question for the US and the West is, ‘How much are you willing to do to the Russians if they’re going to be uncooperative?’” says James Lewis, a cybersecurity expert at the Center for Strategic and International Studies. “What the West has been unwilling to do is take forceful action against Russia. How do you impose consequences when people ignore agreed-upon international norms?” “I do think that we need to put pressure on Russia to start dealing with the cybercriminals,” Alperovitch argues. “Not just the ones directly responsible for Colonial, but the whole slew of groups that have been conducting ransomware attacks, financial fraud, and the like for two decades. Not only has Russia not done that: they’ve strenuously objected when we demand arrests of individuals and provided full evidence to the Russian law enforcement. They’ve done nothing. They’ve been completely obstructionist at the least, not helping in investigations, not conducting arrests, not holding people accountable. At a minimum, we need to demand them to take action.” "Russia has been completely obstructionist at the least, not helping in investigations, not conducting arrests, not holding people accountable." There are numerous examples of cybercriminals being deeply entangled with Russian intelligence. The enormous 2014 hack against Yahoo resulted in charges against Russian intelligence officers and cybercriminal conspirators. The hacker Evgeniy Bogachev, once the world’s most prolific bank hacker, has been linked to Russian espionage. And on the rare occasions when hackers are arrested and extradited, Russia accuses the US of “ kidnapping ” its citizens. The Americans counter that the Kremlin is protecting its own criminals by preventing investigation and arrest.
Bogachev, for example, has been charged by the US for creating a criminal hacking network responsible for stealing hundreds of millions of dollars through bank hacks. His current location in a resort town in southern Russia is no secret, least of all to the Russian authorities who at first cooperated with the American-led investigation against him but ultimately reneged on the deal. Like many of his contemporaries, he’s out of reach because of Moscow’s protection.
To be clear: there is no evidence that Moscow directed the Colonial Pipeline hack. What security and intelligence experts argue is that the Russian government’s long-standing tolerance of—and occasional direct relationship with—cybercriminals is at the heart of the ransomware crisis. Allowing a criminal economy to grow unchecked makes it virtually inevitable that critical infrastructure targets like hospitals and pipelines will be hit. But the reward is high and the risk so far is low, so the problem grows.
What are the options? Just days before the pipeline was hacked, a landmark report, “Combating Ransomware,” was published by the Institute for Security and Technology. Assembled by a special task force comprising government, academia, and representatives of the American technology industry’s biggest companies, it was one of the most comprehensive works ever produced about the problem. Its chief recommendation was to build a coordinated process to prioritize ransomware defense across the whole US government; the next stage, it argued, would require a truly international effort to fight the multibillion-dollar ransomware problem.
“The previous administration didn’t think this problem was a priority,” says Phil Reiner, who led the report. “They didn’t take coordinated action. In fact, that previous administration was completely uncoordinated on cybersecurity. It’s not surprising they didn’t put together an interagency process to address this; they didn’t do that for anything.” Today, America’s standard menu of options for responding to hacking incidents ranges from sending a nasty note or making individual indictments to state-level sanctions and offensive cyber-actions against ransomware groups.
Experts say it is important to get allies to publicly acknowledge the problems and endorse the consequences—and to be less hesitant. Biden’s public assertion that the Kremlin bears responsibility for cybercrime carried out from Russian soil could be a signal to Moscow of potential consequences if action isn’t taken, although he didn’t say what those consequences could be. The fact that the United Kingdom’s foreign minister, Dominic Raab, soon echoed the sentiment is a sign of growing international consensus.
“The preponderance of opinion is for caution, which of course the Russians know and exploit,” Lewis says. “Colonial hasn’t fully changed that, but I think we’re moving away from a timid response. We’re not changing anything, and things are getting worse.” Action can be stymied for fear of escalation, or because cyber can take a back seat to other issues important to the Russia-US relationship, like arms control or Iran. But there are efforts under way to expand the options for action now that senior leaders from both sides of the Atlantic now clearly see ransomware as a national security threat.
This is a fundamental shift that could drive change—in theory.
“I wonder about the idea against action, because it risks making the Russians mad so they’ll do something back to us,” says Lewis. “What exactly have they not done?” Today, the White House is actively working with international partners, the Justice Department is standing up a new ransomware task force, and the Department of Homeland Security is ramping up efforts to deal with the problem.
“This is a solvable problem,” says Reiner, who was a senior National Security Council official under Obama. “But if action isn’t taken, it’s going to get worse. You thought gas lines for a day or two were bad, but get used to it. They’re going to continue to ramp up against schools, hospitals, businesses, you name it. The ransomware actors won’t care until they face consequences.”
"
|
13,501 | 2,021 |
"The Colonial pipeline ransomware hackers had a secret weapon: self-promoting cybersecurity firms | MIT Technology Review"
|
"https://www.technologyreview.com/2021/05/24/1025195/colonial-pipeline-ransomware-bitdefender"
|
"The Colonial pipeline ransomware hackers had a secret weapon: self-promoting cybersecurity firms Five months before DarkSide attacked the Colonial pipeline, two researchers discovered a way to rescue its ransomware victims. Then an antivirus company’s announcement alerted the hackers.
Drew Angerer/Getty Images By Renee Dudley and Daniel Golden On January 11, antivirus company Bitdefender said it was “happy to announce” a startling breakthrough. It had found a flaw in the ransomware that a gang known as DarkSide was using to freeze computer networks of dozens of businesses in the US and Europe. Companies facing demands from DarkSide could download a free tool from Bitdefender and avoid paying millions of dollars in ransom to the hackers.
But Bitdefender wasn’t the first to identify this flaw. Two other researchers, Fabian Wosar and Michael Gillespie, had noticed it the month before and had begun discreetly looking for victims to help. By publicizing its tool, Bitdefender alerted DarkSide to the lapse, which involved reusing the same digital keys to lock and unlock multiple victims. The next day, DarkSide declared that it had repaired the problem, and that “new companies have nothing to hope for.” “Special thanks to BitDefender for helping fix our issues,” DarkSide said. “This will make us even better.” DarkSide soon proved it wasn’t bluffing, unleashing a string of attacks. This month, it paralyzed the Colonial Pipeline Co., prompting a shutdown of the 5,500-mile pipeline that carries 45% of the fuel used on the East Coast—quickly followed by a rise in gasoline prices, panic buying of gas across the Southeast, and closures of thousands of gas stations. Absent Bitdefender’s announcement, it’s possible that the crisis might have been contained, and that Colonial might have quietly restored its system with Wosar and Gillespie’s decryption tool.
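The lapse described above, one set of keys reused across many victims, is what made a universal decryptor possible: unlock one victim and you can unlock them all. A toy sketch of the principle (this is not DarkSide's real cryptography; the key, filenames, and cipher here are invented for illustration):

```python
import hashlib

# Derive a keystream from the key with SHA-256 in counter mode; XOR with
# the data makes encryption and decryption the same operation.
def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

reused_key = b"same key for every victim"  # the lapse: one key, many victims
victim_a = xor_cipher(reused_key, b"payroll.db contents")
victim_b = xor_cipher(reused_key, b"invoices.xlsx contents")

# A key extracted while quietly helping victim A also frees victim B:
assert xor_cipher(reused_key, victim_b) == b"invoices.xlsx contents"
```

Fixing the lapse, as DarkSide did the next day, means generating fresh keys per victim, after which a captured key helps exactly one target.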
Instead, Colonial paid DarkSide $4.4 million in Bitcoin for a key to unlock its files. “I will admit that I wasn’t comfortable seeing money go out the door to people like this,” CEO Joseph Blount told the Wall Street Journal.
The missed opportunity was part of a broader pattern of botched or half-hearted responses to the growing menace of ransomware, which during the pandemic has disabled businesses, schools, hospitals, and government agencies across the country. The incident also shows how antivirus companies eager to make a name for themselves sometimes violate one of the cardinal rules of the cat-and-mouse game of cyberwarfare: Don’t let your opponents know what you’ve figured out. During World War II, when the British secret service learned from decrypted communications that the Gestapo was planning to abduct and murder a valuable double agent, Johnny Jebsen, his handler wasn’t allowed to warn him for fear of cluing in the enemy that its cipher had been cracked. Today, ransomware hunters like Wosar and Gillespie try to prolong the attackers’ ignorance, even at the cost of contacting fewer victims. Sooner or later, as payments drop off, the cybercriminals realize that something has gone wrong.
Whether to tout a decryption tool is a “calculated decision,” said Rob McLeod, senior director of the threat response unit for cybersecurity firm eSentire. From the marketing perspective, “You are singing that song from the rooftops about how you have come up with a security solution that will decrypt a victim’s data. And then the security researcher angle says, ‘Don’t disclose any information here. Keep the ransomware bugs that we’ve found that allow us to decode the data secret, so as not to notify the threat actors.’” Wosar said that publicly releasing tools, as Bitdefender did, has become riskier as ransoms have soared and the gangs have grown wealthier and more technically adept. In the early days of ransomware, when hackers froze home computers for a few hundred dollars, they often couldn’t determine how their code was broken unless the flaw was specifically pointed out to them.
Today, the creators of ransomware “have access to reverse engineers and penetration testers who are very very capable,” he said. “That’s how they gain entrance to these oftentimes highly secured networks in the first place. They download the decryptor, they disassemble it, they reverse-engineer it, and they figure out exactly why we were able to decrypt their files. And 24 hours later, the whole thing is fixed. Bitdefender should have known better.”
It wasn’t the first time Bitdefender trumpeted a solution that Wosar or Gillespie had beaten it to. Gillespie had broken the code of a ransomware strain called GoGoogle, and was helping victims without any fanfare, when Bitdefender released a decryption tool in May 2020. Other companies have also announced breakthroughs publicly, Wosar and Gillespie said.
“People are desperate for a news mention, and big security companies don’t care about victims,” Wosar said.
Bogdan Botezatu, director of threat research at Bucharest, Romania–based Bitdefender, said the company wasn’t aware of the earlier success in unlocking files infected by DarkSide.
Regardless, he said, Bitdefender decided to publish its tool “because most victims who fall for ransomware do not have the right connection with ransomware support groups and won’t know where to ask for help unless they can learn about the existence of tools from media reports or with a simple search.” Bitdefender has provided free technical support to more than a dozen DarkSide victims, and “we believe many others have successfully used the tool without our intervention,” Botezatu said. Over the years, Bitdefender has helped individuals and businesses avoid paying more than $100 million in ransom, he said.
Bitdefender recognized that DarkSide might correct the flaw, Botezatu said: “We are well aware that attackers are agile and adapt to our decryptors.” But DarkSide might have “spotted the issue” anyway. “We don’t believe in ransomware decryptors made silently available. Attackers will learn about their existence by impersonating home users or companies in need, while the vast majority of victims will have no idea that they can get their data back for free.” The attack on Colonial Pipeline , and the ensuing chaos at the gas pumps throughout the Southeast, appears to have spurred the federal government to be more vigilant. President Joe Biden issued an executive order to improve cybersecurity and create a blueprint for a federal response to cyberattacks. DarkSide said it was shutting down under US pressure, although ransomware crews have often disbanded to avoid scrutiny and then re-formed under new names, or their members have launched or joined other groups.
“As sophisticated as they are, these guys will pop up again, and they’ll be that much smarter,” said Aaron Tantleff, a Chicago cybersecurity attorney who has consulted with 10 companies attacked by DarkSide. “They’ll come back with a vengeance.” At least until now, private researchers and companies have often been more effective than the government in fighting ransomware. Last October, Microsoft disrupted the infrastructure of Trickbot, a network of more than 1 million infected computers that disseminated the notorious Ryuk strain of ransomware, by disabling its servers and communications. That month, ProtonMail, the Swiss-based email service, shut down 20,000 Ryuk-related accounts.
Wosar and Gillespie, who belong to a worldwide volunteer group called the Ransomware Hunting Team, have cracked more than 300 major ransomware strains and variants, saving an estimated 4 million victims from paying billions of dollars.
By contrast, the FBI rarely decrypts ransomware or arrests the attackers, who are typically based in countries like Russia or Iran that lack extradition agreements with the US. DarkSide, for instance, is believed to operate out of Russia. Far more victims seek help from the Hunting Team, through websites maintained by its members, than from the FBI.
The US Secret Service also investigates ransomware, which falls under its purview of combating financial crimes. But, especially in election years, it sometimes rotates agents off cyber assignments to carry out its better-known mission of protecting presidents, vice presidents, major-party candidates, and their families. European law enforcement, especially the Dutch National Police, has been more successful than the US in arresting attackers and seizing servers.
Similarly, the US government has made only modest headway in pushing private industry, including pipeline companies, to strengthen cybersecurity defenses. Cybersecurity oversight is divided among an alphabet soup of agencies, hampering coordination. The Department of Homeland Security conducts “vulnerability assessments” for critical infrastructure, which includes pipelines.
It reviewed Colonial Pipeline in around 2013 as part of a study of places where a cyberattack might cause a catastrophe. The pipeline was deemed resilient, meaning that it could recover quickly, according to a former DHS official. The department did not respond to questions about any subsequent reviews.
Five years later, DHS created a pipeline cybersecurity initiative to identify weaknesses in pipeline computer systems and recommend strategies to address them. Participation is voluntary, and a person familiar with the initiative said that it is more useful for smaller companies with limited in-house IT expertise than for big ones like Colonial. The National Risk Management Center, which oversees the initiative, also grapples with other thorny issues such as election security.
Ransomware has skyrocketed since 2012 , when the advent of Bitcoin made it hard to track or block payments. The criminals’ tactics have evolved from indiscriminate “spray and pray” campaigns seeking a few hundred dollars apiece to targeting specific businesses, government agencies and nonprofit groups with multimillion-dollar demands.
Attacks on energy businesses in particular have increased during the pandemic—not just in the US but in Canada, Latin America, and Europe. As the companies allowed employees to work from home, they relaxed some security controls, McLeod said.
Since 2019, numerous gangs have ratcheted up pressure with a technique known as “double extortion.” Upon entering a system, they steal sensitive data before launching ransomware that encodes the files and makes it impossible for hospitals, universities, and cities to do their daily work. If the loss of computer access is not sufficiently intimidating, they threaten to reveal confidential information, often posting samples as leverage. For instance, when the Washington, DC, police department didn’t pay the $4 million ransom demanded by a gang called Babuk last month, Babuk published intelligence briefings, names of criminal suspects and witnesses, and personnel files, from medical information to polygraph test results, of officers and job candidates.
DarkSide, which emerged last August, epitomized this new breed. It chose targets based on a careful financial analysis or information gleaned from corporate emails. For instance, it attacked one of Tantleff’s clients during a week when the hackers knew the company would be vulnerable because it was transitioning its files to the cloud and didn’t have clean backups.
To infiltrate target networks, the gang used advanced methods such as “zero-day exploits” that immediately take advantage of software vulnerabilities before they can be patched. Once inside, it moved swiftly, looking not only for sensitive data but also for the victim’s cyber insurance policy, so it could peg its demands to the amount of coverage. After two to three days of poking around, DarkSide encrypted the files.
“They have a faster attack window,” said Christopher Ballod, associate managing director for cyber risk at Kroll, the business investigations firm, who has advised half a dozen DarkSide victims. “The longer you dwell in the system, the more likely you are to be caught.” Typically, DarkSide’s demands were “on the high end of the scale,” $5 million and up, Ballod said. One scary tactic: if publicly traded companies didn’t pay the ransom, DarkSide threatened to share information stolen from them with short-sellers who would profit if the share price dropped upon publication.
DarkSide’s site on the dark web identified dozens of victims and described the confidential data it claimed to have filched from them. One was New Orleans law firm Stone Pigman Walther Wittmann. “A big annoyance is what it was,” attorney Phil Wittmann said, referring to the DarkSide attack in February. “We paid them nothing,” said Michael Walshe Jr., chair of the firm’s management committee, declining to comment further.
Last November, DarkSide adopted what is known as a “ransomware-as-a-service” model. Under this model, it partnered with affiliates who launched the attacks. The affiliates received 75% to 90% of the ransom, with DarkSide keeping the remainder. As this partnership suggests, the ransomware ecosystem is a distorted mirror of corporate culture, with everything from job interviews to procedures for handling disputes. After DarkSide shut down, several people who identified themselves as its affiliates complained on a dispute resolution forum that it had stiffed them. “The target paid, but I did not receive my share,” one wrote.
Together, DarkSide and its affiliates reportedly grossed at least $90 million. Seven of Tantleff’s clients, including two companies in the energy industry, paid ransoms ranging from $1.25 million to $6 million, reflecting negotiated discounts from initial demands of $7.5 million to $30 million. His other three clients hit by DarkSide did not pay. In one of those cases, the hackers demanded $50 million. Negotiations grew acrimonious, and the two sides couldn’t agree on a price.
DarkSide’s representatives were shrewd bargainers, Tantleff said. If a victim said it couldn’t afford the ransom because of the pandemic, DarkSide was ready with data showing that the company’s revenue was up, or that covid-19’s impact was factored into the price.
DarkSide’s grasp of geopolitics was less advanced than its approach to ransomware. Around the same time that it adopted the affiliate model, it posted that it was planning to safeguard information stolen from victims by storing it in servers in Iran. DarkSide apparently didn’t realize that an Iranian connection would complicate its collection of ransoms from victims in the US, which has economic sanctions restricting financial transactions with Iran. Although DarkSide later walked back this statement, saying that it had only considered Iran as a possible location, numerous cyber insurers had concerns about covering payments to the group. Coveware, a Connecticut firm that negotiates with attackers on behalf of victims, stopped dealing with DarkSide.
Ballod said that with their insurers unwilling to reimburse the ransom, none of his clients paid DarkSide, despite concerns about exposure of their data. Even if they had caved in to DarkSide, and received assurances from the hackers in return that the data would be shredded, the information might still leak, he said.
During DarkSide’s changeover to the affiliate model , a flaw was introduced into its ransomware. The vulnerability caught the attention of members of the Ransomware Hunting Team. Established in 2016, the invitation-only team consists of about a dozen volunteers in the US, Spain, Italy, Germany, Hungary, and the UK. They work in cybersecurity or related fields. In their spare time, they collaborate in finding and decrypting new ransomware strains.
Several members, including Wosar, have little formal education but an aptitude for coding. A high school dropout, Wosar grew up in a working-class family near the German port city of Rostock. In 1992, at the age of eight, he saw a computer for the first time and was entranced. By 16, he was developing his own antivirus software and making money from it. Now 37, he has worked for antivirus firm Emsisoft since its inception almost two decades ago and is its chief technology officer. He moved to the UK from Germany in 2018 and lives near London.
He has been battling ransomware hackers since 2012, when he cracked a strain called ACCDFISA, which stood for “Anti Cyber Crime Department of Federal Internet Security Agency.” This fictional agency was notifying people that child pornography had infected their computers, and so it was blocking access to their files unless they paid $100 to remove the virus.
The ACCDFISA hacker eventually noticed that the strain had been decrypted and released a revised version. Many of Wosar’s subsequent triumphs were also fleeting. He and his teammates tried to keep criminals blissfully unaware for as long as possible that their strain was vulnerable. They left cryptic messages on forums inviting victims to contact them for assistance or sent direct messages to people who posted that they had been attacked.
In the course of protecting against computer intrusions, analysts at antivirus firms sometimes detected ransomware flaws and built decryption tools, though it wasn’t their main focus. Sometimes they collided with Wosar.
In 2014, Wosar discovered that a ransomware strain called CryptoDefense had copied and pasted some of the code it used to lock and unlock files from Microsoft Windows, not realizing that Windows preserved a copy of the key in a folder on the victim’s own computer. The program was missing the signal, or “flag,” usually included by ransomware creators to instruct Windows not to save a copy of the key.
Wosar quickly developed a decryption tool to retrieve the key. “We faced an interesting conundrum,” Sarah White, another Hunting Team member, wrote on Emsisoft’s blog.
“How to get our tool out to the most victims possible without alerting the malware developer of his mistake?” Wosar discreetly sought out CryptoDefense victims through support forums, volunteer networks, and announcements of where to contact for help. He avoided describing how the tool worked or the blunder it exploited. When victims came forward, he supplied the fix, scrubbing the ransomware from at least 350 computers. CryptoDefense eventually “caught on to us ... but he still did not have access to the decrypter we used and had no idea how we were unlocking his victims’ files,” White wrote.
But then an antivirus company, Symantec, uncovered the same problem and bragged about the discovery in a blog post that “contained enough information to help the CryptoDefense developer find and correct the flaw,” White wrote. Within 24 hours the attackers began spreading a revised version. They changed its name to CryptoWall and made $325 million.
Symantec “chose quick publicity over helping CryptoDefense victims recover their files,” White wrote. “Sometimes there are things that are better left unsaid.” A spokeswoman for Broadcom, which acquired Symantec’s enterprise security business in 2019, declined to comment, saying that “the team members who worked on the tool are no longer with the company.” Like Wosar, the 29-year-old Gillespie comes from poverty and never went to college. When he was growing up in central Illinois, his family struggled so much financially that they sometimes had to move in with friends or relatives. After high school, he worked full time for 10 years at a computer repair chain called Nerds on Call. Last year, he became a malware and cybersecurity researcher at Coveware.
Last December, he messaged Wosar for help. Gillespie had been working with a DarkSide victim who had paid a ransom and received a tool to recover the data. But DarkSide’s decryptor had a reputation for being slow, and the victim hoped that Gillespie could speed up the process.
Gillespie analyzed the software, which contained a key to release the files. He wanted to extract the key, but because it was stored in an unusually complex way, he couldn’t. He turned to Wosar, who was able to isolate it.
The teammates then began testing the key on other files infected by DarkSide. Gillespie checked files uploaded by victims to the website he operates, ID Ransomware, while Wosar used VirusTotal, an online database of suspected malware.
That night, they shared a discovery.
“I have confirmation DarkSide is re-using their RSA keys,” Gillespie wrote to the Hunting Team on its Slack channel. A type of cryptography, RSA generates two keys: a public key to encode data and a private key to decipher it. RSA is used legitimately to safeguard many aspects of e-commerce, such as protecting credit card numbers. But it’s also been co-opted by ransomware hackers.
“I noticed the same as I was able to decrypt newly encrypted files using their decrypter,” Wosar replied less than an hour later, at 2:45 a.m. London time.
Their analysis showed that before adopting the affiliate model, DarkSide had used a different public and private key for each victim. Wosar suspected that during this transition, DarkSide introduced a mistake into its affiliate portal used to generate the ransomware for each target. Wosar and Gillespie could now use the key that Wosar had extracted to retrieve files from Windows machines seized by DarkSide. The cryptographic blunder didn’t affect Linux operating systems.
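The key-reuse blunder can be sketched with a toy example (hypothetical numbers and code, not DarkSide's actual implementation): when the same RSA key pair is reused across victims, a private key extracted from one paid-for decryptor unlocks everyone else's files too.

```python
# Toy RSA demo (tiny primes for illustration only; real RSA uses 2048-bit keys).
# Ransomware typically uses RSA to wrap per-victim symmetric keys; here we
# encrypt small integers directly to keep the sketch short.

p, q = 61, 53          # secret primes
n = p * q              # public modulus (3233)
e = 17                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (2753), Python 3.8+

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# Two "victims" whose files were locked with the SAME public key:
c_a, c_b = encrypt(42), encrypt(99)

# Recovering the private key from ONE victim's decryptor is enough
# to free every other victim that shares the key pair.
assert decrypt(c_a) == 42
assert decrypt(c_b) == 99
```

Had DarkSide generated a fresh key pair per victim, as it did before the affiliate transition, the extracted key would have been useless against anyone else.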
“We were scratching our heads,” Wosar said. “Could they really have fucked up this badly? DarkSide was one of the more professional ransomware-as-a-service schemes out there. For them to make such a huge mistake is very, very rare.” The Hunting Team celebrated quietly, without seeking publicity. White, who is a computer science student at Royal Holloway, part of the University of London, began looking for DarkSide victims. She contacted firms that handle digital forensics and incident response.
“We told them, ‘Hey, listen, if you have any DarkSide victims, tell them to reach out to us; we can help them. We can recover their files and they don’t have to pay a huge ransom,’” Wosar said.
The DarkSide hackers mostly took the Christmas season off. Gillespie and Wosar expected that when the attacks resumed in the new year, their discovery would help dozens of victims. But then Bitdefender published its post, under the headline “Darkside Ransomware Decryption Tool.” In a messaging channel with the ransomware response community, someone asked why Bitdefender would tip off the hackers. “Publicity,” White responded. “Looks good. I can guarantee they’ll fix it much faster now though.” She was right. The next day, DarkSide acknowledged the error that Wosar and Gillespie had found before Bitdefender. “Due to the problem with key generation, some companies have the same keys,” the hackers wrote, adding that up to 40% of keys were affected.
DarkSide mocked Bitdefender for releasing the decryptor at “the wrong time ... as the activity of us and our partners during the New Year holidays is the lowest.” Adding to the team’s frustrations, Wosar discovered that the Bitdefender tool had its own drawbacks. Using the company’s decryptor, he tried to unlock samples infected by DarkSide and found that they were damaged in the process. “They actually implemented the decryption wrong,” Wosar said. “That means if victims did use the Bitdefender tool, there’s a good chance that they damaged the data.” Asked about Wosar’s criticism, Botezatu said that data recovery is difficult, and that Bitdefender has “taken all precautions to make sure that we’re not compromising user data,” including exhaustive testing and “code that evaluates whether the resulting decrypted file is valid.” Even without Bitdefender, DarkSide might have soon realized its mistake anyway, Wosar and Gillespie said. For example, as they sifted through compromised networks, the hackers might have come across emails in which victims helped by the Hunting Team discussed the flaw.
“They might figure it out that way—that is always a possibility,” Wosar said. “But it’s especially painful if a vulnerability is being burned through something stupid like this.” The incident led the Hunting Team to coin a term for the premature exposure of a weakness in a ransomware strain. “Internally, we often joke, ‘Yeah, they are probably going to pull a Bitdefender,’” Wosar said.
This story was co-published with ProPublica, a nonprofit newsroom that investigates abuses of power.
Renee Dudley and Daniel Golden have focused on ransomware for ProPublica and are working on a book about the Ransomware Hunting Team, to be published next year by Farrar, Straus and Giroux.
"
|
13,502 | 2,016 |
"The Russian hackers who hit the 2016 election have been very busy since | MIT Technology Review"
|
"https://www.technologyreview.com/2019/10/17/335/kremlin-hackers-are-back-in-the-spotlight-after-2016-election-breach"
|
"The Russian hackers who hit the 2016 election have been very busy since By Patrick Howell O'Neill Photo: The Kremlin by Mariano Mantel CC BY NC 2.0 Ever since they were one of the groups involved in the infamous hack of the Democratic National Committee in 2016, the trail has largely gone cold on the Russian intelligence hackers known as Cozy Bear.
New research, however, shows Cozy Bear (also known as the Dukes) never went away at all. Although they managed to stay out of the spotlight for over two years, the group has been actively engaged in a six-year-long spying campaign targeting the ministries of foreign affairs in at least three European countries and a Washington, DC, embassy of a European Union nation, according to new work by the Slovakian cybersecurity company ESET.
Two other advanced hacking groups from Russia, bearing the code names Fancy Bear and Turla, were found on some of the same breached computers. Russian hacking groups from different arms of the government—in this case the military and the intelligence agencies—are known to aggressively compete with each other when going after high-value targets.
Cozy Bear’s persistent and meticulous campaign against a range of European political targets uses new malware and tactics in what the researchers call Operation Ghost, a campaign with roots as far back as 2013 and extending at least to June 2019.
Get in through the back door: The hackers typically start their attack with spear-phishing emails—messages carefully designed to trick very specific targets into clicking malicious links, starting a process to download dangerous software that gives Cozy Bear control of key machines and accounts. The details of how the hackers accomplish that goal show they are among the world’s best at what they do.
The campaign, carried out largely during working hours in the Moscow time zone, involved multiple new malware families discovered in use during this operation.
A novel malware family known as FatDuke is built specifically by this group to provide hidden and quiet backdoor access to a victim’s machine by impersonating the target’s browser down to specific details like using the same user-agent as the browser installed on the system.
Here’s how researchers hypothesize one kind of attack from Operation Ghost could unfold: A target, say a European diplomat, would receive an email crafted specifically to get her to download a malicious document. That document would contain PolyglotDuke malware whose goal is to surreptitiously install other malware on the machine. To do that, the malware looks at predetermined messages on popular sites like Reddit, which look like normal internet traffic. An image is downloaded that uses a tactic called steganography, which subtly changes an image file to hide encoded data including additional payloads. Suddenly, normal-looking photos contain malicious and nearly invisible code.
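The steganography step can be illustrated with a minimal least-significant-bit sketch (a hypothetical toy, far simpler than the Dukes' actual tooling): each payload bit overwrites the lowest bit of one carrier byte, so the image is visually unchanged while smuggling arbitrary data.

```python
# Toy LSB steganography over raw bytes (illustration only; real malware
# works on actual image formats and adds encryption on top).

def embed(carrier: bytes, payload: bytes) -> bytes:
    # Flatten payload into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit   # overwrite lowest bit only
    return bytes(out)

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

pixels = bytes(range(256)) * 4          # stand-in for raw image data
secret = b"payload"
stego = embed(pixels, secret)
assert extract(stego, len(secret)) == b"payload"
# No carrier byte differs from the original by more than 1:
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stego))
```

Because only the lowest bit of each byte changes, the doctored image passes casual inspection and rides over ordinary web traffic, which is why the downloads from sites like Reddit look benign.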
They’ll install the MiniDuke backdoor and then, as stage three of the playbook for the most interesting and important targets, they move to FatDuke. A successful deployment of FatDuke, called “the current flagship backdoor” used by the Dukes, means that battle is over.
Lying low: What’s also exceptional about this group and this campaign is the way the operation’s network infrastructure was built anew for each victim.
“This kind of compartmentalization is generally only seen by the most meticulous attackers,” ESET researchers Matthieu Faou, Mathieu Tartare, and Thomas Dupuy said in the new report.
"It prevents the entire operation from being burned when a single victim discovers the infection and shares the related network [indicators of compromise] with the security community.” Cozy Bear has been active for over a decade.
“Our new research shows that even if an espionage group disappears from public reports for many years, it may not have stopped spying,” the researchers wrote. Cozy Bear “were able to fly under the radar for many years while compromising high-value targets, as before.”
"
|
13,503 | 2,016 |
"The Russian hackers who interfered in 2016 were spotted targeting the 2020 US election | MIT Technology Review"
|
"https://www.technologyreview.com/2020/09/10/1008297/the-russian-hackers-who-interfered-in-2016-were-spotted-targeting-the-2020-us-election"
|
"The Russian hackers who interfered in 2016 were spotted targeting the 2020 US election By Patrick Howell O'Neill Vladimir Putin, the president of Russia Russian military hackers responsible for cyberattacks against Democratic targets during the 2016 American election are now targeting over 200 organizations in the United States (including political parties, think tanks, and consultants serving both Democrats and Republicans), according to Microsoft, which is increasingly calling out Russian cyber espionage.
In the final weeks before the November 3 election, the Russian hackers are employing new tactics, tools, and ways to disguise their role in the attacks, Microsoft vice president Tom Burt wrote on Thursday. Democratic nominee Joe Biden’s campaign was specifically targeted by the Russian hackers, according to an earlier report from Reuters, via phishing attacks against the campaign’s communications advisors, SKDKnickerbocker. None of the attacks were successful.
This particular Russian hacking group, called Strontium by Microsoft, is more widely known as Fancy Bear or APT28 and is believed to operate out of Russia’s military intelligence agency, GRU. The details of these incidents recall the sustained hacking and information warfare carried out in favor of Trump against former Democratic presidential candidate Hillary Clinton’s campaign in 2016.
Fancy Bear has long relied on spearphishing, a hacking tactic that tricks a targeted individual into giving up key passwords. This time, the group is taking a different approach with brute-force and password-spraying attacks, a shift in tactics that allows for both larger-scale attacks and greater anonymity for the attackers.
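The shift in tactics can be sketched in a few lines (a hypothetical toy, not the actual attack code): brute force aims many guesses at one account and quickly trips lockout rules, while spraying aims one common guess at many accounts, staying under per-account attempt limits.

```python
# Toy illustration of password spraying (hypothetical accounts and passwords).

accounts = {"alice": "Winter2020!", "bob": "hunter2", "carol": "Winter2020!"}

def try_login(user: str, password: str) -> bool:
    # Stand-in for a real login endpoint; each account sees only ONE attempt.
    return accounts.get(user) == password

# Password spraying: a single common guess, fanned out across the whole org.
guess = "Winter2020!"
compromised = [user for user in accounts if try_login(user, guess)]
assert compromised == ["alice", "carol"]
```

Because each account receives just one attempt per pass, the activity blends into normal failed-login noise, which also makes attribution harder.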
Microsoft has also spotted state-sponsored hackers in China and Iran targeting individuals involved in both Donald Trump’s and Joe Biden’s presidential campaigns. But experts say Moscow is the adversary that worries them the most, given Russia’s lengthy track record.
“Multiple cyber-espionage actors have targeted organizations associated with the upcoming election, but we remain most concerned by Russian military intelligence, who we believe poses the greatest threat to the democratic process,” said John Hultquist from the cybersecurity firm FireEye. This particular Russian hacking group is responsible for some of the most provocative and aggressive cyber operations of all time.
“APT28’s unique history raises the prospect of follow-on information operations or other devastating activity,” Hultquist explained.
The newly disclosed hacking attempts underline threats to American election security with the vote less than two months away.
On Wednesday, a new report revealed that a senior Homeland Security official said he had been ordered to stop intelligence reports about current Russian election interference because it “made the president look bad.” According to a newly published whistleblower complaint (pdf), former DHS intelligence chief Brian Murphy claimed he had been asked to stop providing reports of Russia’s activity and encouraged to focus on threats from China and Iran instead.
The department denies the complaint, which arrives against a backdrop of controversial election security moves including the Trump administration’s decision to stop providing the full Senate with briefings on the issue.
At the same time, the US Treasury department announced new sanctions against four Russian-linked individuals for attempts to interfere in the upcoming American election. Three people are allegedly members of the Internet Research Agency, the notorious Russian group conducting malicious information operations on social media.
hide by Patrick Howell O'Neill Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard How to fix the internet Katie Notopoulos New approaches to the tech talent shortage MIT Technology Review Insights Deep Dive Computing What’s next for the world’s fastest supercomputers Scientists have begun running experiments on Frontier, the world’s first official exascale machine, while facilities worldwide build other machines to join the ranks.
By Sophia Chen archive page AI-powered 6G networks will reshape digital interactions The convergence of AI and communication technologies will create 6G networks that make hyperconnectivity and immersive experiences an everyday reality for consumers.
By MIT Technology Review Insights archive page The power of green computing Sustainable computing practices have the power to both infuse operational efficiencies and greatly reduce energy consumption, says Jen Huffstetler, chief product sustainability officer at Intel.
By MIT Technology Review Insights archive page How this Turing Award–winning researcher became a legendary academic advisor Theoretical computer scientist Manuel Blum has guided generations of graduate students into fruitful careers in the field.
By Sheon Han archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
"
|
13,504 | 2,019 |
"Tiny AI models could supercharge autocorrect and voice assistants on your phone | MIT Technology Review"
|
"https://www.technologyreview.com/f/614473/tiny-ai-could-supercharge-autocorrect-voice-assistants-on-your-phone"
|
"Tiny AI models could supercharge autocorrect and voice assistants on your phone
By Karen Hao
[Image: Illustration of a cell phone with a talkative voice assistant. Credit: Ms. Tech]
Researchers have successfully shrunk a giant language model to use in commercial applications.
Who’s counting? In the past year, natural language models have become dramatically better at the expense of getting dramatically bigger.
In October of last year, for example, Google released a model called BERT that passed a long-held reading-comprehension benchmark in the field. The larger version of the model had 340 million data parameters, and training it just one time through cost enough electricity to power a US household for 50 days.
Four months later, OpenAI quickly topped it with its model GPT-2.
The model demonstrated an impressive knack for constructing convincing prose; it also used 1.5 billion parameters. Now, MegatronLM, the latest and largest model from Nvidia, has 8.3 billion parameters.
(Yes, things are getting out of hand.) The big, the bad, the ugly: AI researchers have grown increasingly worried about the consequences of this trend. In June, a group at the University of Massachusetts, Amherst, showed the climate toll of developing and training models at such a large scale. Training BERT, they calculated, emitted nearly as much carbon as a round-trip flight between New York and San Francisco; GPT-2 and MegatronLM, by extrapolation, would likely emit a whole lot more.
The trend could also accelerate the concentration of AI research into the hands of a few tech giants. Under-resourced labs in academia or countries with fewer resources simply don’t have the means to use or develop such computationally expensive models.
Honey, I shrunk the AI: In response, many researchers are focused on shrinking the size of existing models without losing their capabilities. Now two new papers, released within a day of one another, have successfully done that to the smaller version of BERT, with 100 million parameters.
The first paper, from researchers at Huawei, produces a model called TinyBERT that is less than a seventh the size of the original and nearly 10 times faster. It also performs nearly as well in language understanding as the original. The second, from researchers at Google, produces another that is smaller by a factor of more than 60, but its language understanding is slightly worse than the Huawei version.
How they did it: Both papers use variations of a common compression technique known as knowledge distillation. It involves using the large AI model that you want to shrink (the “teacher”) to train a much smaller model (the “student”) in its image. To do so, you feed the same inputs into both and then tweak the student until its outputs match the teacher’s.
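The teacher-student matching step can be sketched in plain Python. This is a minimal illustration of the distillation loss only (the temperature value and three-class logits are arbitrary; real distillation runs inside a deep-learning framework and backpropagates this loss to update the student's weights):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Feeding the same input to both models and minimizing this quantity
    is how the student is tweaked until its outputs match the teacher's.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the student's outputs exactly match the teacher's, the loss is zero; any mismatch makes it positive, giving the student a training signal.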
Outside of the lab: In addition to improving access to state-of-the-art AI, tiny models will help bring the latest AI advancements to consumer devices. They avoid the need to send consumer data to the cloud, which improves both speed and privacy. For natural-language models specifically, more powerful text prediction and language generation could improve myriad applications like autocomplete on your phone and voice assistants like Alexa and Google Assistant.
To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free.
"
|
13,505 | 2,020 |
"Our biggest questions yet about immunity to covid-19 | MIT Technology Review"
|
"https://www.technologyreview.com/2020/06/19/1004169/biggest-questions-about-immunity-to-covid-19"
|
"Our biggest questions about immunity to covid-19
By Neel V. Patel
We’re still not very sure how covid-19 immunity works. As we inch closer to a vaccine and pin our hopes on herd immunity to allow us to safely open up communities again, the uncertainties will only get more pressing. Here’s a look at some of the biggest questions we’re still trying to answer.
How much immunity are we talking about? When most people (i.e., the general public) talk about immunity, what they mean is protection from a disease. But for many infectious diseases, “immune” and “not immune” are more a gradient than a binary. Tulane University virologist Robert Garry points out that the flu vaccine, for instance, doesn’t give absolute protection against influenza, but rather is designed to prevent a significant infection and keep things “from falling off a cliff.” When we talk about immunity against covid-19, we’re mostly referring to the production of antibodies by our body’s immune system. But this is also misleading. A preprint study uploaded this month, which measured antibody levels in patients in London, found that between 2% and 8.5% didn’t even develop detectable antibodies.
Those in this group who survived infection (typically younger people) likely had to fight off infection through cell-mediated arms of the immune system—white blood cells and cytokines that directly engage and kill pathogens—rather than through antibodies that neutralize the virus.
How long does immunity to covid-19 last after infection? We really have no idea yet.
There have been sporadic reports of some people in the world coming down with covid-19 twice, and so far it’s unclear why. It’s well documented that other coronavirus infections confer only temporary immunity, sometimes lasting no more than a few months. Covid-19 may follow the same pattern, but it’s too early to tell.
What factors affect immunity? As Sarah Fortune at the Harvard T.H. Chan School of Public Health points out, it’s already well known that immunity from an infectious disease is tied to the strength and durability of the immune response during infection. An infection that causes severe symptoms is likely to lead to a stronger immune response, which would also help encourage strong and longer-lasting immunity moving forward. On the flip side, a mild or asymptomatic case is likely to yield lower antibody levels, as was found in covid-19 patients in a new study published in Nature Medicine on Thursday.
In the study, researchers studied a small cohort of asymptomatic patients with covid-19 and found that they seemed to develop lower antibody levels. On the surface, this might suggest these people are not as immune to covid; but Fortune points out that you can’t draw those kinds of conclusions yet—it might be the case that those lower levels of antibodies will still protect you from getting sick, as a new paper in Nature suggests.
We’re still not totally clear how asymptomatic cases and symptomatic cases differ in terms of immunity, since the asymptomatic cases aren’t being actively tested and identified. Nor is there a universal definition for what qualifies as asymptomatic. Is it the total absence of symptoms? What about just moderate ones? Other research has demonstrated that “those who have inflammation mount an immune response that’s more detectable and potentially more robust,” adds Charles Cairns, the dean of Drexel University College of Medicine. All of this growing evidence might underscore the importance of a cell-mediated immune response for fighting coronavirus.
What does this mean for developing a vaccine? At the risk of sounding like a broken record: we don’t know yet. As mentioned, we’re still not sure what kind of covid-19 immunity we will get from a vaccination: total, or just protection from the worst symptoms. Fortune says that for covid, we’re likely to get true protection, but we don’t know for sure—and it’s not something that we can just assess from antibody levels. It won’t be until phase III trials (which will directly measure the vaccine’s efficacy) that we'll have a better sense of what the relationship between antibody levels and immunity is, and what sort of immune response a vaccine needs to elicit to provide true protection.
If it turns out pre-exposure to covid-19 doesn’t guarantee permanent or strong immunity, nearly everyone would be recommended for vaccination. And clinical trials for studying the safety and efficacy of the vaccine would then need to include people from this previously infected population.
"
|
13,506 | 2,018 |
"Ethereum thinks it can change the world. It’s running out of time to prove it. | MIT Technology Review"
|
"https://www.technologyreview.com/s/612507/ethereum-thinks-it-can-change-the-world-its-running-out-of-time-to-prove-it"
|
"Ethereum thinks it can change the world. It’s running out of time to prove it.
By Mike Orcutt
[Photo: Matthew Monteith]
It’s late October. Outside the sprawling Prague Congress Centre, not only is the weather turning, but the cryptocurrency world is crashing down, as it has been for much of this year. Expectations for blockchain systems, sky-high just a year ago, are falling nearly as fast as prices for the coins based on them. But inside, the mood is rather different. Here, Devcon—the annual “family reunion” organized by the Ethereum Foundation—is in full swing, and there’s barely a hint of negativity to be found.
On the contrary, there is lots of hugging, unicorn-themed clothing, and a sense of excitement about the future. This crowd doesn’t give a damn about what’s happening outside. Whatever’s going on in here, it’s about much more than magic internet money.
Ethereum is already the most famous cryptocurrency after Bitcoin and the third largest in total value. Unlike the others, however, it aims to serve as a general-purpose computing platform that could, its adherents believe, make possible entirely new forms of social organization. The central topic of Devcon is “Ethereum 2.0,” a radical upgrade that would finally allow the network to realize its true power.
The nagging truth, though, is that all the positivity in Prague masks daunting questions about Ethereum’s future. The handful of idealistic researchers, developers, and administrators in charge of maintaining its software are under increasing pressure to overcome technical limitations that stymie the network’s growth. At the same time, well-funded competitors have emerged, claiming that their blockchains perform better. Crackdowns by regulators, and a growing understanding of how far most blockchain applications are from being ready for prime time, have scared many cryptocurrency investors away: Ethereum’s market value in dollars has fallen more than 90% since its peak last January.
The reason Devcon feels so upbeat despite these storm clouds is that the people building Ethereum have something bigger in mind—something world-changing, in fact. Yet to achieve its goal, this ragtag community needs to crack a problem as complicated as any of the toe-curling technical challenges it faces: how to govern itself. It must find a way to organize a scattered global network of contributors and stakeholders without sacrificing “decentralization”—the principle, which any cryptocurrency community strives for, that no one entity or group should be in control.
Is this even possible? Other blockchain communities, including Bitcoin, have struggled with infighting and gridlock over the kinds of major software upgrades Ethereum is planning. Whether the community can make Ethereum 2.0 happen isn’t just important for crypto speculators and blockchain nerds: it may just go to the very heart of how society is run.
The CryptoKitties effect
To understand the hype around Ethereum, you first need to understand the hype around blockchains in general, and then what makes Ethereum different. (Skip the next four paragraphs if you already know.)
A blockchain is essentially a shared database, stored in multiple copies on computers around the world. These computers are known as “nodes,” and any computer on the internet can become a node in a blockchain network by installing and running specially developed software. What makes a blockchain different from a regular database is that, thanks to the innovative use of cryptography, there is no need for a central authority like a bank or government to maintain it. The nodes run the software, and collectively they make sure every new transaction follows certain rules before adding it to the blockchain.
This process, called mining, requires a lot of computing. That makes it very hard to tamper with the blockchain’s record of transactions, since doing so generally depends on controlling most of the network’s mining power, and that would require an enormous expenditure of resources. Hence the ideal blockchain is “decentralized,” i.e., it has lots of independent users so nobody is in control.
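A toy Python sketch of those two ideas, hash-linked blocks and proof-of-work mining, makes the cost of tampering concrete. (The JSON serialization, the two-leading-zeros difficulty, and the block layout are illustrative choices, not Ethereum's actual wire format.)

```python
import hashlib
import json

def hash_block(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, transactions, difficulty=2):
    """Find a nonce so the block's hash starts with `difficulty` zeros.

    This trial-and-error search is the costly computing described above;
    having to redo it for every subsequent block is what makes rewriting
    the record so expensive.
    """
    block = {"prev_hash": prev_hash, "transactions": transactions, "nonce": 0}
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

def valid_chain(chain, difficulty=2):
    """Any node can independently re-verify the whole chain."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev_hash"] != hash_block(prev):
            return False  # the hash link to the previous block is broken
        if not hash_block(block).startswith("0" * difficulty):
            return False  # the proof of work is missing
    return True
```

Changing even one old transaction changes that block's hash, breaking the link stored in every later block, so a forger would have to re-mine the entire rest of the chain faster than the honest network extends it.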
The first blockchain application was Bitcoin, a system for peer-to-peer payments. Ethereum goes an ambitious step further. Instead of just processing and storing currency transactions, its nodes are supposed to collectively function as a “world computer” on which, using specialized programming languages, people can build applications that are supposed to look and feel much like the ones already on our phones—except no one is in charge of them.
These decentralized applications, or “dapps,” might include such things as voting systems, trading markets, or even social networks—imagine a Twitter or Facebook that nobody owns. Being decentralized, they would theoretically be immune to attempts to manipulate them or shut them down. For Ethereum’s most avid believers, these contain the promise of an entirely new kind of democratic society in which it is much harder to concentrate wealth and power, hide corruption, and exert shady, behind-the-scenes influence.
A year ago—practically centuries in crypto time—investors were pouring billions of dollars into promising projects building dapps. They invested via initial coin offerings , in which blockchain company founders raise money, crowdfunding-style, by selling digital tokens. Prices for coins, including Ether, Ethereum’s own crypto-token, were soaring. Many of their fans believed blockchains and cryptocurrencies were going to swiftly displace traditional financial intermediaries, upend monopolistic internet companies, and decentralize the web.
Then came CryptoKitties.
Perhaps it’s appropriate that a childish game was the thing to kill the mood. CryptoKitties, launched in late 2017, are colorful cartoon cats—like digital versions of Beanie Babies, the plush toy animals that became a collecting craze in the 1990s. Like Beanie Babies, CryptoKitties are all unique in some way, but unlike Beanie Babies, they can reproduce. Each kitty’s uniqueness is verified on the Ethereum blockchain using a special kind of token, and players can buy, sell, or “breed” cats using Ether.
The problem was that CryptoKitties got too popular too fast. As with Beanie Babies, some kitties became highly prized, trading hands for as much as $170,000 worth of Ether. The mad rush to breed them led to a sudden sixfold increase in transaction volume that clogged the network and slowed Ethereum to a halt. It exposed the truth: the technology is immature, incapable of handling the kinds of workloads that big dapps would demand.
“I do think people may have gotten ahead of themselves,” says Jamie Pitts. We’re sitting on the sidelines at Devcon, which was funded and organized by Pitts’s employer, the nonprofit Ethereum Foundation, which is based in Switzerland. The foundation isn’t big on titles, but Pitts is an administrator of sorts. He helps shepherd technical improvements to Ethereum’s software, a job that can be like herding real-life cats.
A soft-spoken, introspective web developer by trade, Pitts is a true believer in Ethereum, and has been since he first dug into Vitalik Buterin’s white paper in 2013. (Every cryptocurrency starts with a white paper outlining its technical principles.) He has no illusions about its current capabilities, though. “A funky computer from the ’70s,” he says with an affectionate smirk. Buterin, Ethereum’s enigmatic young creator, uses an only slightly less pejorative comparison, calling it “a smartphone from 1999 that can play Snake.”
Scores of investors and entrepreneurs had overestimated what Ethereum’s blockchain can do, and convinced others to invest billions in their projects. “They were thinking stuff like, ‘Hey I could build this medical company on the Ethereum blockchain ... and a doctor can go somewhere and their stethoscope will talk with their iPad or something over the blockchain,’ right?” Pitts says with a laugh. “CryptoKitties really put a little fear in their hearts.”
By exposing the network’s inherent weakness, CryptoKitties helped investors realize their mistake. Suddenly they became a lot more interested in Ethereum’s technical road map. “These guys are now trying to influence what happens,” says Pitts.
Vitalik holding a Lambo
That is why the issue of governance is such a hot topic in Prague. The mania of 2017, when cryptocurrencies shot up in value and investors piled in, made the map of Ethereum stakeholders vastly more complicated. The CryptoKitties fiasco, and a number of subsequent challenges, have made it clear that they all needed a better way to work together to solve Ethereum’s technical problems.
The afternoon before I sat down with Pitts, he and Hudson Jameson, who also works for the foundation, helped lead a sober discussion about how to create better decision-making processes.
Jameson, who has a computer science background and a friendly Texas twang, manages the most important decision-making forum that Ethereum currently has: the biweekly call between the group of self-identified core developers. The meeting can draw between 15 and 30 attendees, depending on how contentious the items on the agenda are.
Jameson often displays admirable patience during these YouTube-broadcast calls. But in Prague, there’s a hint of exasperation in his voice as he addresses a crowd of about 100 people. Complicated technical questions are testing the limits of Ethereum’s still very simple governance system, he says: “We don’t have enough people to actually help us out on these things.” That means the same people are making the decisions over and over again; the community needs better and more accessible forums for technical discussion and decision-making.
What does Ethereum’s governance look like now? Jameson asks the question rhetorically before switching to his next PowerPoint slide, which features an illustration of a cosmic-size Buterin holding a Lamborghini in his hands. (“Lambos” have become an ironic symbol of crypto-wealth.) “It’s Vitalik holding a Lambo,” he says dryly. Some in the crowd chuckle.
Jameson is mostly joking. Still, everyone knows that for all Ethereum’s ambitions to be decentralized, Buterin is still its north star. When difficult times have arisen in the past, the community has leaned heavily on him to guide them.
“Vitalik’s thinking has influenced us so much,” says Pitts. “His ethos and his outlook on life and stuff. His humility and his austerity. There are so many ways about him—even his humor—there are so many ways that he has influenced everyone here, and attracted people who had similar values.”
A geeky, gifted child whose family left Russia when he was six to move to Canada, Buterin discovered Bitcoin when he was still a World of Warcraft-playing teenager in Toronto, and he was so inspired by blockchains and cryptocurrencies that he dropped out of college to focus on them. But while Buterin loved Bitcoin, he found it limited. So he set out to design a blockchain system that could do more than just manage a store of digital values.
At 19, he published the white paper describing Ethereum. In it, he explained how he believed certain ideas from Bitcoin could be used to create a decentralized computing platform. Because it would have no single component whose failure could bring down the whole thing, and would not be subject to control by any central intermediary, such a platform could never be shut down. To Buterin, that meant freedom from online censorship, surveillance, and other forms of centralized power.
Obviously, someone with such a vision was not going to be satisfied with digital Beanie Babies. Ethereum’s mission, in Buterin’s view, is to reach the estimated 1.7 billion adults around the world who don’t have a bank account or access to a mobile money provider. Last December, when the price of Ether was soaring and the total value of all cryptocurrencies was more than $500 billion, Buterin took to Twitter to challenge blockchain developers. “How many unbanked people have we banked? How much censorship-resistant commerce for the common people have we enabled?” he asked. “Not enough.”
Ethereum 2.0
On stage at Devcon, Buterin is buoyant and optimistic about Ethereum’s future. Rail-thin, angular, and dressed in a black T-shirt and black pants, he unconsciously contorts his wiry wrists and hands as he speaks, in an almost childlike fashion, and his other movements are rather robotic. Nevertheless, the audience of nearly 3,000 developers and entrepreneurs, largely men in their 20s and 30s, is transfixed. They believe in his vision.
Buterin’s speech, which is littered with obscure jargon and acronyms, is focused on Ethereum 2.0. The label refers to “a combination of a bunch of different features that we’ve been talking about for several years, researching for several years, and actively building for several years, that are finally going come together into this one coherent whole,” he proclaims.
The problem Buterin and a few trusted collaborators have spent years laboring to crack is that the fundamental weaknesses of Ethereum, and the reasons why CryptoKitties was able to bring it crashing down, stem from the very core of how almost all existing cryptocurrencies are built.
To build an application on Ethereum, you use a specialized programming language to write so-called smart contracts. These are programs that execute automatically when certain conditions are met—for example, when the price of something falls below a certain value. Ethereum’s blockchain tracks changes to the status of all the smart contracts stored in it.
To run smart contracts, users must pay a fee in Ether, called “gas.” Gas is what keeps the whole system running. The fees ultimately go to the owners of nodes that do the mining—the costly (because it guzzles electricity) work of running the calculations that add data to the blockchain.
CryptoKitties provides a good example of how this works in practice. To create your own one-of-a-kind cat, first you need to buy one using the game’s website. A transaction on the blockchain transfers immutable ownership of the kitty to you. To “breed” your kitty with another one, just send enough gas to a smart contract on the blockchain. The game automatically mixes the “DNA” of the two parents, spits out a new kitten, and, in another transaction, stores proof that you are its sole owner on the blockchain.
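As a rough sketch of how gas metering constrains a contract, consider this toy Python class. (The gas costs, the DNA-averaging rule, and the class itself are invented for illustration; real Ethereum contracts are written in languages like Solidity and metered per EVM instruction.)

```python
class OutOfGas(Exception):
    """Raised when a call doesn't carry enough gas to finish executing."""

class ToyKittyContract:
    """A toy 'smart contract' in the spirit of the breeding example above."""

    def __init__(self):
        self.kitties = {}   # kitty id -> (dna, owner)
        self.next_id = 0

    def _use(self, state, amount):
        if state["gas"] < amount:
            raise OutOfGas("transaction ran out of gas")
        state["gas"] -= amount

    def mint(self, dna, owner):
        """Create an original kitty (free here, for setup)."""
        kid = self.next_id
        self.kitties[kid] = (dna, owner)
        self.next_id += 1
        return kid

    def breed(self, dad, mom, owner, gas):
        """Mix two parents' 'DNA' into a new kitten, paying gas per step."""
        state = {"gas": gas}
        self._use(state, 10)                 # base cost of the call
        dna = (self.kitties[dad][0] + self.kitties[mom][0]) // 2
        self._use(state, 20)                 # cost of the storage write
        kid = self.next_id
        self.kitties[kid] = (dna, owner)     # proof of sole ownership
        self.next_id += 1
        return kid
```

Sending too little gas makes the call fail partway through, which is exactly what keeps runaway computation from clogging the shared network for free.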
Ethereum can only handle about 15 of these transactions per second, on average. Depending on how congested the network is, it can take long periods of time before a transaction becomes final. (For comparison, Visa’s payment network handles an average of 2,000 card transactions per second and has the capacity for tens of thousands.) This slowness is inherent to the design: since every node stores and processes every transaction, smart contracts are extremely difficult to disrupt or stop. The flip side is that the system is as slow as its slowest node.
Devcon teems with lively discussions about the blueprints for solving Ethereum’s technical problems. Three terms in particular—“sharding,” “Plasma,” and “Casper”—appear in nearly every talk. Slated to be part of Ethereum 2.0, together they promise to dramatically boost the system’s capacity to handle transactions without sacrificing its resilience—and substantially reduce the carbon emissions from Ethereum’s growing network of power-hungry computers.
Sharding is supposed to work by partitioning the blockchain’s data. Instead of storing and computing every smart contract, subsets of nodes would handle smaller pieces of the whole.
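The partitioning idea can be sketched in a few lines: route each account to a shard deterministically, so a node responsible for one shard never needs the others' state. The shard count and hashing scheme below are toy choices, not Ethereum's actual design.

```python
import hashlib

NUM_SHARDS = 64  # illustrative shard count, not Ethereum's

def shard_for(account):
    """Toy partitioning: each account's state lives on one shard, so a node
    can validate that shard without storing the whole chain."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

print(shard_for("0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B"))
```

The hard part, which this sketch ignores entirely, is handling transactions that touch accounts on different shards while keeping every shard honest.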
Plasma is a system that would let users transact with each other without always needing to go through the main blockchain. Essentially, they agree to open a private, secure communications channel and use it to do things like exchange cryptocurrency or play a game. When they are done, they can add all the updates to the main blockchain in just a single transaction.
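The batching idea behind this can be sketched as simple netting: many off-chain transfers collapse into one net settlement recorded on the main chain. The names and amounts are invented, and real Plasma adds fraud proofs and exit games that this ignores.

```python
from collections import defaultdict

# Sketch of off-chain batching: users transact privately, then commit only
# the net result to the main blockchain in a single transaction.
def net_settlement(transfers):
    """transfers: list of (sender, receiver, amount); returns net balance changes."""
    net = defaultdict(int)
    for sender, receiver, amount in transfers:
        net[sender] -= amount
        net[receiver] += amount
    return dict(net)

off_chain = [("alice", "bob", 5), ("bob", "alice", 2), ("alice", "bob", 1)]
print(net_settlement(off_chain))  # {'alice': -4, 'bob': 4}: one on-chain update
```

Three transfers become one on-chain write, which is where the throughput gain comes from.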
Casper, the friendly ghost

The most ambitious project of all, however, is Casper. Spearheaded by Buterin and fellow Ethereum researcher Vlad Zamfir, it is years in the making. The goal is to reinvent the way the computers on a public blockchain network reach consensus.
To function as a decentralized network that no single entity controls, any cryptocurrency requires a consensus protocol—a process that nodes in its blockchain network use to agree, over and over again, that the information in the blockchain is valid. For Ethereum, Bitcoin, and most other cryptocurrencies, central to the consensus protocol is an algorithm called proof of work.
Proof of work works like a race. Computers designed for cryptocurrency mining devote huge amounts of processing power to repeatedly guessing at a solution to a mathematical puzzle. The first one to solve the puzzle gets to add a new “block” of valid transactions to the chain of previous ones—and receives a cryptocurrency reward. The idea behind proof of work is that would-be attackers are deterred by the massive cost of the mining hardware and electricity they would need to manipulate the ledger.
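The race can be sketched in a few lines of Python: guess nonces until a hash of the block data clears a difficulty target. Note this uses plain SHA-256 for illustration; Ethereum's actual proof-of-work algorithm (Ethash) is deliberately more complex, and the difficulty here is tiny.

```python
import hashlib

def mine(block_data, difficulty):
    """Toy proof of work: find a nonce so the block's SHA-256 hash starts
    with `difficulty` zero hex digits. Illustrative only; Ethereum uses
    Ethash, and real difficulty targets are vastly harder."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 42: alice pays bob 1 ETH", difficulty=4)
print(nonce, digest)
```

The asymmetry is the whole point: finding the nonce takes many thousands of guesses, but anyone can verify the winner with a single hash. Scale the difficulty up and the guessing consumes warehouse-loads of electricity.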
Bitcoin’s creator, Satoshi Nakamoto, did not invent proof of work but did have the inspired idea to use it as a way to make participation in a blockchain network open to the public. Anyone with the right hardware and enough electricity can mine Bitcoin, Ether, and similar cryptocurrencies—no need for permission.
Nakamoto’s consensus protocol was revolutionary. But “it’s absolutely horrible from every perspective that relates to performance,” says Emin Gün Sirer, a computer scientist and cryptocurrency expert at Cornell University. Not only is it painfully slow; it uses way too much electricity.
“The energy spent is a huge multiple of the actual energy required to build the blockchain,” says Sirer. Though Ethereum burns far less than Bitcoin, recent estimates suggest it still consumes about as much electricity as a small country, while Bitcoin uses about as much as a fairly large one. (The amounts fluctuate, but at the time of writing, Ethereum’s consumption was on a par with Costa Rica’s, and Bitcoin was roughly level with Bangladesh.)

Buterin acknowledges that this has to change. “The social impact of burning huge amounts of resources has consequences,” he told me when I caught up with him at Devcon. Billions of dollars are “wasted” via proof of work, which results in a “loss of resources that’s spread out across every single cryptocurrency user, and ultimately through all the environmental externalities, every single person in society.” It’s also pretty bad for the brand: “Like, it could mean the difference between anyone who really cares about the environment being your friend versus trying to stop you.”

The algorithm that Buterin and his disciples have chosen as a replacement is called proof of stake. Rooted in approaches first described in the 1980s, proof of stake relies on “validators”: members of the network who, quite simply, verify and attest that transactions added to the chain are valid. Their incentive for being honest is that they must deposit, or “stake,” substantial sums of money (the current plan is 32 Ether, about $2,800 at the time of writing). When their tenure as validators ends, they can recoup the money; if they have been dishonest, they stand to lose it.
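At its simplest, proof of stake replaces the hashing race with a stake-weighted lottery, which the sketch below illustrates. The validator names, deposits, and selection logic are toy choices; Casper's actual design adds slashing conditions, attestations, and a secure source of randomness, none of which is modeled here.

```python
import random

# Toy stake-weighted selection: a validator's chance of proposing the next
# block is proportional to its deposit. This is NOT Casper; it only shows
# why a bigger stake means more (proportional) influence, bought with money
# at risk rather than burned electricity.
validators = {"alice": 32, "bob": 64, "carol": 32}  # staked Ether (illustrative)

def pick_proposer(validators, rng):
    names = list(validators)
    weights = [validators[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(7)
picks = [pick_proposer(validators, rng) for _ in range(10_000)]
print(picks.count("bob") / len(picks))  # roughly 0.5: bob holds half the stake
```

The deterrent is economic rather than physical: a dishonest validator forfeits the deposit, whereas a dishonest proof-of-work miner merely wastes electricity.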
The mechanisms for choosing which validators get to add new blocks to the chain, and penalizing them for misbehavior, must be built into the algorithm. Doing that in a way that is fair and sustainable relies on solving problems in game theory, economics, and computer science. There’s also the question of how to design a system that can handle large numbers of validators without breaking down. Finally, proof-of-stake networks are vulnerable to certain malicious attacks that proof-of-work systems are not (the reverse is also true), and Ethereum’s researchers are still struggling to determine how best to defend against them.
The years-old quest to replace proof of work has proceeded in fits and starts.
Promising ideas have been discarded and deadlines pushed back. That might be part of why, despite Buterin’s optimism at Devcon, his rousing speech doesn’t offer a time line for completing the upgrade.
Many of the problems confounding Ethereum’s developers have been well known for more than a decade, says Sirer, who suggests that maybe this is why Nakamoto invented a different approach for Bitcoin. “The fact that they haven’t been able to roll out a working protocol yet tells me that this is indeed a genuinely hard problem,” says Sirer. “Not just that, but the fact that nobody else has been able to do this. All the academics couldn’t do it either.”

Unicorns and rainbows

Ethereum 2.0, Buterin says, will be able to handle transaction volumes a thousand times larger than the current version, enabling it to truly become the world computer he envisioned. On stage, and later in person, he exudes a nerdy confidence that implies this is simply a matter of course.
None of the foundation employees, developers, and other attendees I speak with at Devcon express doubt in Buterin, or in the prospects for Ethereum 2.0. But some are more circumspect about the challenges.
Lane Rettig, one of the self-identified core developers, echoes Jameson’s concerns about the need for better decision-making systems: “The things we need to solve are more complex. The coordination problem is getting harder. There are more people involved, more organizations, more software.” Rettig, whose Devcon attire features black pajama pants with white-and-pink unicorns and rainbows, says that in addition to technical scalability, it is just as urgent that the community achieve “social scalability.”

A key problem Ethereum has is that the process for making changes to the software is not fully defined, says Pitts. To fix that, he and core developer Greg Colvin are spearheading a new organization called the Fellowship of Ethereum Magicians. They are modeling it after the Internet Engineering Task Force, the open, volunteer-run Internet standards organization.
All this sounds like the beginnings of a traditional institution, though, with rules and hierarchy. Doesn’t that run counter to the decentralized ideals of Ethereum? Perhaps, but to win what Jameson calls “the blockchain wars,” it is probably going to need more structure. “There are lots of paradoxes baked in here,” admits Rettig. “You need a centralized process to invent a decentralized governance mechanism.” Besides, many people would argue that Ethereum is already more centralized than it should be. As with Bitcoin, only a few groups of miners control most of the network’s mining power. There’s also its continued reliance on Buterin for guidance—though Buterin pushes back emphatically when I ask him if he is a single point of failure. “The dependence on me is definitely going down,” he insists.
Ultimately, what seems to unite the attendees at Devcon is not Buterin or an abstract notion of decentralization. It’s a genuine belief that Ethereum’s technology can—and should—disrupt the way humans organize themselves, and at a global scale. The question is how long its backers have to pull it off, especially if enthusiasm for cryptocurrencies continues to wane. In the end, the audacious ambition and idealism on display in Prague faces the same question Ethereum’s blockchain does: Can it scale? Or is it just CryptoKitties, unicorns, and rainbows?

By Mike Orcutt
Grief content is haunting my digital life | MIT Technology Review
https://www.technologyreview.com/2023/02/06/1067794/escape-grief-content-unsubscribe-facebook-instagram-amazon-recommendation-algorithms
When my dad was sick, I started Googling grief. Then I couldn’t escape it.
I’ve spent months trying to untrain the algorithms that were relentlessly serving me content on loss.
By Tate Ryan-Mosley

Illustration: Eva Redamonti

I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer. Initially I focused on the purely medical. I endeavored to learn as much as I could about molecular biomarkers, transoral robotic surgeries, and the functional anatomy of the epiglottis.
Then, as grief started to become a likely scenario, it too got the same treatment. It seemed that one of the pillars of my life, my dad, was about to fall, and I grew obsessed with trying to understand and prepare for that.
I am a mostly visual thinker , and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless.
I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix.
How will it feel? How will I manage it? I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials.
It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss.
I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters , the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable.
In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.” Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web.
In principle, most ad targeting still exploits basic methods like segmentation, where people grouped by characteristics such as gender, age, and location are served content akin to what others in their group have engaged with or liked.
But in the eight and half years since Zuckerman’s piece, artificial intelligence and the collection of ever more data have made targeting exponentially more personalized and chronic. The rise of machine learning has made it easier to direct content on the basis of digital behavioral data points rather than demographic attributes. These can be “stronger predictors than traditional segmenting,” according to Max Van Kleek, a researcher on human-computer interaction at the University of Oxford. Digital behavior data is also very easy to access and accumulate. The system is incredibly effective at capturing personal data—each click, scroll, and view is documented, measured, and categorized.
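The feedback loop the author describes can be illustrated with a deliberately crude sketch: if a feed ranks candidate posts by how often you have engaged with each topic, a short burst of clicks on one subject quickly dominates everything else. The function, topic labels, and scoring rule below are invented for illustration; real recommendation systems are far more complex and, as researchers note later in this piece, often opaque even to their builders.

```python
from collections import Counter

# Hypothetical sketch of engagement-driven ranking: every click on a topic
# raises that topic's weight, so a few searches about grief can quickly
# crowd out the rest of the feed. All names and data here are invented.
def rank_feed(candidate_posts, engagement_history):
    weights = Counter(engagement_history)  # clicks per topic so far
    return sorted(candidate_posts, key=lambda post: weights[post["topic"]], reverse=True)

history = ["grief", "grief", "cancer", "recipes"]
posts = [{"id": 1, "topic": "recipes"}, {"id": 2, "topic": "grief"}, {"id": 3, "topic": "news"}]
print([p["id"] for p in rank_feed(posts, history)])  # [2, 1, 3]
```

Even in this toy version, the only way to demote a topic is to stop feeding it clicks, which is exactly the slow untraining the rest of this story describes.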
Simply put, the more that Instagram and Amazon and the other various platforms I frequented could entangle me in webs of despair for ever more minutes and hours of my day, the more content and the more ads they could serve me.
Whether you’re aware of it or not, you’re also probably caught in a digital pattern of some kind. These cycles can quickly turn harmful, and I spent months asking experts how we can get more control over rogue algorithms.
A history of grieving

This story starts at what I mistakenly thought was the end of a marathon—16 months after my dad went to the dentist for a toothache and hours later got a voicemail about cancer. That was really the only day I felt brave.
The marathon was a 26.2-mile army crawl. By mile 3, all the skin on your elbows is ground up and there’s a paste of pink tissue and gravel on the pavement. It's bone by mile 10. But after 33 rounds of radiation with chemotherapy, we thought we were at the finish line.
Then this past summer, my dad’s cancer made a very unlikely comeback, with a vengeance, and it wasn’t clear whether it was treatable.
Really, the sounds were the worst. The coughing, coughing, choking— Is he breathing ? He’s not breathing, he’s not breathing —choking, vomit, cough. Breath.
That was the soundtrack as I started grieving my dad privately, prematurely, and voyeuristically.
I began reading obituaries from bed in the morning.
The husband of a fellow Notre Dame alumna dropped dead during a morning run. I started checking her Instagram daily, trying to get a closer view. This drew me into #widowjourney and #youngwidow. Soon, Instagram began recommending the accounts of other widows.
I stayed up all night sometime around Thanksgiving sobbing as I traveled through a rabbit hole about the death of Princess Diana.
Sometime that month, my Amazon account gained a footer of grief-oriented book recommendations. I was invited to consider The Year of Magical Thinking , Crying in H Mart: A Memoir , and F*ck Death: An Honest Guide to Getting Through Grief Without the Condolences, Sympathy, and Other BS as I shopped for face lotion.
Amazon’s website says its recommendations are “based on your interests.” The site explains, “We examine the items you’ve purchased, items you’ve told us you own, and items you’ve rated. We compare your activity on our site with that of other customers, and using this comparison, recommend other items that may interest you in Your Amazon.” (An Amazon spokesperson gave me a similar explanation and told me I could edit my browsing history.) At some point, I had searched for a book on loss.
Content recommendation algorithms run on methods similar to ad targeting, though each of the major content platforms has its own formula for measuring user engagement and determining which posts are prioritized for different people. And those algorithms change all the time, in part because AI enables them to get better and better, and in part because platforms are trying to prevent users from gaming the system.
Sometimes it’s not even clear what exactly the recommendation algorithms are trying to achieve, says Ranjit Singh, a data and policy researcher at Data & Society, a nonprofit research organization focused on tech governance. “One of the challenges of doing this work is also that in a lot of machine-learning modeling, how the model comes up with the recommendation that it does is something that is even unclear to the people who coded the system,” he says.
This is at least partly why by the time I became aware of the cycle I had created, there was little I could do to quickly get out. All this automation makes it harder for individual users and tech companies alike to control and adjust the algorithms. It’s much harder to redirect an algorithm when it’s not clear why it’s serving certain content in the first place.
When personalization becomes toxic

One night, I described my cliff phantasm to a dear friend as she drove me home after dinner. She had tragically lost her own dad. She gently suggested that I could maybe stop examining the fog. “Have you tried looking away?” she asked.
Perhaps I could fix my gaze on those with me at this lookout and try to appreciate that we had not yet had to walk over the edge.
It was brilliant advice that my therapist agreed with enthusiastically.
I committed to creating more memories at present with my family rather than spending so much time alone wallowing in what might come. I struck up conversations with my dad and told him stories I hadn’t before.
I tried hard to bypass triggering stories on my feeds and regain focus when I started going down a rabbit hole. I stopped checking for updates from the widows and widowers I had grown attached to. I unfollowed them along with other content I knew was unhealthy.
But the more I tried to avoid it, the more it came to me. No longer a priest, my algorithms had become more like a begging dog.
My Google mobile app was perhaps the most relentless, as it seemed to insightfully connect all my searching for cancer pathologies to stories of personal loss. In the home screen of my search app, which Google calls “Discover,” a YouTube video imploring me to “Trust God Even When Life Is Hard” would be followed by a Healthline story detailing the symptoms of bladder cancer.
(As a Google spokesperson explained to me, “Discover helps you find information from high-quality sources about topics you’re interested in. Our systems are not designed to infer sensitive characteristics like health conditions, but sometimes content about these topics could appear in Discover”—I took this to mean that I was not supposed to be seeing the content I was—“and we’re working to make it easier for people to provide direct feedback and have even more control over what they see in their feed.”)

“There’s an assumption the industry makes that personalization is a positive thing,” says Singh. “The reason they collect all of this data is because they want to personalize services so that it’s exactly catered to what you want.” But, he cautions, this strategy is informed by two false ideas that are common among people working in the field. The first is that platforms ought to prioritize the individual unit, so that if a person wants to see extreme content, the platform should offer extreme content; the effect of that content on an individual’s health or on broader communities is peripheral.
The second is that the algorithm is the best judge of what content you actually want to see.
For me, both assumptions were not just wrong but harmful. Not only were the various algorithms I interacted with no longer trusted mediators, but by the time I realized all my ideation was unhealthy, the web of content I’d been living in was overwhelming.
I found that the urge to click loss-related prompts was inescapable, and at the same time, the content seemed to be getting more tragic. Next to articles about the midterm elections, I’d see advertisements for stories about someone who died unexpectedly just hours after their wedding and the increase in breast cancer in women under 30.
“These algorithms can ‘rabbit hole’ users into content that can feel detrimental to their mental health,” says Nina Vasan, the founder and executive director of Brainstorm, a Stanford mental-health lab. “For example, you can feel inundated with information about cancer and grief, and that content can get increasingly emotionally extreme.” Eventually, I deleted the Instagram and Twitter apps from my phone altogether. I stopped looking at stories suggested by Google. Afterwards, I felt lighter and more present. The fog seemed further out.
The internet doesn’t forget

My dad started to stabilize by early winter, and I began to transition from a state of crisis to one of tentative normalcy (though still largely app-less).
I also went back to work, which requires a lot of time online.
The internet is less forgetful than people; that’s one of its main strengths. But harmful effects of digital permanence have been widely exposed—for example, there’s the detrimental impact that a documented adolescence has on identity as we age.
In one particularly memorable essay, Wired’s Lauren Goode wrote about how various apps kept re-upping old photos and wouldn’t let her forget that she was once meant to be a bride after she called off her wedding.
When I logged back on, my grief-obsessed algorithms were waiting for me with a persistence I had not anticipated. I just wanted them to leave me alone.
As Singh notes, fulfilling that wish raises technical challenges. “At a particular moment of time, this was a good recommendation for me, but it’s not now. So how do I actually make that difference legible to an algorithm or a recommendation system? I believe that it’s an unanswered question,” he says.
Oxford’s Van Kleek echoes this, explaining that managing upsetting content is a hugely subjective challenge, which makes it hard to deal with technically. “The exposure to a single piece of information can be completely harmless or deeply harmful depending on your experience,” he says. It’s quite hard to deal with that subjectivity when you consider just how much potentially triggering information is on the web.
We don’t have tools of transparency that allow us to understand and manage what we see online, so we make up theories and change our scrolling behavior accordingly. (There’s an entire research field around this behavior, called “algorithmic folk theories,” which explores all the conjectures we make as we try to decipher the algorithms that sort our digital lives.) I supposed not clicking or looking at content centered on trauma and cancer ought to do the trick eventually. I’d scroll quickly past a post about a brain tumor on my Instagram’s “For you” page, as if passing an old acquaintance I was trying to avoid on the street.
It did not really work.
“Most of these companies really fiddle with how they define engagement. So it can vary from one time in space to another, depending on how they're defining it from month to month,” says Robyn Caplan, a social media researcher at Data & Society.
Many platforms have begun to build in features to give users more control over their recommendations. “There are a lot more mechanisms than we realize,” Caplan adds, though using those tools can be confusing. “You should be able to break free of something that you find negative in your life in online spaces. There are ways that these companies have built that in, to some degree. We don’t always know whether they’re effective or not, or how they work.” Instagram, for instance, allows you to click “Not interested” on suggested posts (though I admit I never tried to do it). A spokesperson for the company also suggested that I adjust the interests in my account settings to better curate my feed.
By this point, I was frustrated that I was having such a hard time moving on. Cancer sucks so much time, emotion, and energy from the lives and families it affects, and my digital space was making it challenging to find balance. While searching Twitter for developments on tech legislation for work, I’d be prompted with stories about a child dying of a rare cancer.
I resolved to be more aggressive about reshaping my digital life.
How to better manage your digital space

I started muting and unfollowing accounts on Instagram when I’d scroll past triggering content, at first tentatively and then vigorously. A spokesperson for Instagram sent over a list of helpful features that I could use, including an option to snooze suggested posts and to turn on reminders to “take a break” after a set period of time on the app.
I cleared my search history on Google and sought out Twitter accounts related to my professional interests. I adjusted my recommendations on Amazon (Account > Recommendations > Improve your recommendations) and cleared my browsing history.
Tips for managing your algorithms

Don’t engage with content you don’t want to see.
Engage with content you do want to see by liking, commenting, clicking, and sending posts to friends.
Give a trusted friend access to your account and ask them to engage with content so as to confuse the algorithm.
Clear your cookies and browsing history—a lot.
Use incognito or private mode when searching for content you don’t want to follow you.
Make multiple accounts tailored to specific interests.
Look for alternative platforms that don’t run on recommendations.
Stop using platforms that are harmful to you for a limited period of time, or altogether.
Take advantage of the many features that platforms offer to allow you to give feedback and adjust your recommendations.
I also capitalized on my network of sources—a privilege of my job that few in similar situations would have—and collected a handful of tips from researchers about how to better control rogue algorithms. Some I knew about; others I didn’t.
Everyone I talked to told me I had been right to assume that it works to stop engaging with content I didn’t want to see, though they emphasized that it takes time. For me, it has taken months. It also has required that I keep exposing myself to harmful content and manage any triggering effects while I do this—a reality that anyone in a similar situation should be aware of.
Relatedly, experts told me that engaging with content you do want to see is important. Caplan told me she personally asked her friends to tag her and DM her with happy and funny content when her own digital space grew overwhelming.
“That is one way that we kind of reproduce the things that we experience in our social life into online spaces,” she says. “So if you’re finding that you are depressed and you’re constantly reading sad stories, what do you do? You ask your friends, ‘Oh, what’s a funny show to watch?’” Another strategy experts mentioned is obfuscation—trying to confuse your algorithm. Tactics include liking and engaging with alternative content, ideally related to topics that the platform might have a plethora of further suggestions—like dogs, gardening, or political news. (I personally chose to engage with accounts related to #DadHumor, which I do not regret.) Singh recommended handing over the account to a friend for a few days with instructions to use it however might be natural for them, which can help you avoid harmful content and also throw off the algorithm.
You can also hide from your algorithms by using incognito mode or private browsers, or by regularly clearing browsing histories and cookies (this is also just good digital hygiene). I turned off “Personal results” on my Google iPhone app, which helped immensely.
One of my favorite tips was to “embrace the Finsta,” a reference to fake Instagram accounts. Not only on Instagram but across your digital life, you can make multiple profiles dedicated to different interests or modes. I created multiple Google accounts: one for my personal life, one for professional content, another for medical needs. I now search, correspond, and store information accordingly, which has made me more organized and more comfortable online in general.
All this is a lot of work and requires a lot of digital savvy, time, and effort from the end user, which in and of itself can be harmful. Even with the right tools, it’s incredibly important to be mindful of how much time you spend online.
Research findings are overwhelming at this point: too much time on social media leads to higher rates of depression and anxiety.
“For most people, studies suggest that spending more than one hour a day on social media can make mental health worse. Overall there is a link between increase in time spent on social media and worsening mental health,” says Stanford’s Vasan. She recommends taking breaks to reset or regularly evaluating how your time spent online is making you feel.
A clean scan

Cancer does not really end—you just sort of slowly walk out of it, and I am still navigating stickiness across the personal, social, and professional spheres of my life. First you finish treatment. Then you get an initial clean scan. The sores start to close—though the fatigue lasts for years. And you hope for a second clean scan, and another after that.
The faces of doctors and nurses who carried you every day begin to blur in your memory. Sometime in December, topics like work and weddings started taking up more time than cancer during conversations with friends.
My dad got a cancer-free scan a few weeks ago. My focus and creativity have mostly returned and I don’t need to take as many breaks. I feel anxiety melting out of my spine in a slow, satisfying drip.
And while my online environment has gotten better, it’s still not perfect. I’m no longer traveling down rabbit holes of tragedy. I’d say some of my apps are cleansed; some are still getting there. The advertisements served to me across the web often still center on cancer or sudden death. But taking an active approach to managing my digital space, as outlined above, has dramatically improved my experience online and my mental health overall.
Still, I remain surprised at just how harmful and inescapable my algorithms became while I was struggling this fall. Our digital lives are an inseparable part of how we experience the world, but the mechanisms that reinforce our subconscious behaviors or obsessions, like recommendation algorithms, can make our digital experience really destructive. This, of course, can be particularly damaging for people struggling with issues like self-harm or eating disorders—even more so if they’re young.
With all this in mind, I’m very deliberate these days about what I look at and how.
What I actually want is to control when I look at information about disease, grief, and anxiety. I’d actually like to be able to read about cancer, at appropriate times, and understand the new research coming out. My dad’s treatment is fairly new and experimental. If he’d gotten the same diagnosis five years ago, it most certainly would have been a death sentence. The field is changing, and I’d like to stay on top of it. And when my parents do pass away, I want to be able to find support online.
But I won’t do any of it the same way. For a long time, I was relatively dismissive of alternative methods of living online. It seemed burdensome to find new ways of doing everyday things like searching, shopping, and following friends—the power of tech behemoths is largely in the ease they guarantee.
Indeed, Zuckerman tells me that the challenge now is finding practical substitute digital models that empower users. There are viable options; user control over data and platforms is part of the ethos behind hyped concepts like Web3. Van Kleek says the reignition of the open-source movement in recent years makes him hopeful: increased transparency and collaboration on projects like Mastodon, the burgeoning Twitter alternative, might give less power to the algorithm and more power to the user.
“I would suggest that it’s not as bad as you fear. Nine years ago, complaining about an advertising-based web was a weird thing to be doing. Now it’s a mainstream complaint,” Zuckerman recently wrote to me in an email. “We just need to channel that dissatisfaction into actual alternatives and change.” My biggest digital preoccupation these days is navigating the best way to stay connected with my dad over the phone now that I am back in my apartment 1,200 miles away. Cancer stole the “g” from “Good morning, ball player girl,” his signature greeting, when it took half his tongue.
I still Google things like “How to clean a feeding tube” and recently watched a YouTube video to refresh my memory of the Heimlich maneuver. But now I use Tor.
Clarification: This story has been updated to reflect that the explanation of Amazon's recommendations on its site refers to its recommendation algorithm generally, not specifically its advertising recommendations.
by Tate Ryan-Mosley

© 2023 MIT Technology Review
"
|
13,508 | 2,021 |
"This huge Chinese company is selling video surveillance systems to Iran | MIT Technology Review"
|
"https://www.technologyreview.com/2021/12/15/1042142/chinese-company-tiandy-video-surveillance-iran"
|
This huge Chinese company is selling video surveillance systems to Iran

A new report sheds light on a shadowy industry where authoritarian states enthusiastically export surveillance technologies to repressive regimes around the world.
By Tate Ryan-Mosley

A Chinese company is selling its surveillance technology to Iran’s Revolutionary Guard, police, and military, according to a new report by IPVM, a surveillance research group. The firm, called Tiandy, is one of the world’s largest video surveillance companies, reporting almost $700 million in sales in 2020. The company sells cameras and accompanying AI-enabled software, including facial recognition technology, software that it claims can detect someone’s race, and “smart” interrogation tables for use alongside “tiger chairs,” which have been widely documented as a tool for torture.
The report is a rare look into some specifics of China’s strategic relationship with Iran and the ways in which the country disperses surveillance technology to other autocracies abroad.
Tiandy’s “ethnicity tracking” tool, which has been widely challenged by experts as both inaccurate and unethical, is believed to be one of several AI-based systems the Chinese government uses to repress the Uyghur minority group in the country’s Xinjiang province, along with Huawei’s face recognition software, emotion-detection AI technologies, and a host of others. (Huawei has denied involvement in the region.) The report, based on analysis of Tiandy’s publicly available social media posts and web marketing materials, shows that the company has signed a five-year contract in Iran, where it plans to have eight local staff members. The report also details that while Tiandy is privately owned, its CEO, Dai Lin, is a public supporter of the Communist Party, the ruling party in China, and the company is a major supplier to the Chinese government.
While the exact package of surveillance capabilities Tiandy will sell to Iran is unclear, IPVM found Tiandy cameras in use by the Iranian firm Sairan—a “state-owned military electronics provider”—and at an undisclosed military base. Tiandy also touts several projects in Iran on its public website , including work with an arm of the Islamic Revolutionary Guard and with police in the northern city of Khomam.
Importantly, the report revealed that Tiandy’s networked video recorders (NVRs) are in use by the Iranian military and powered by chips produced by US manufacturer Intel, raising questions of whether the company has violated US sanctions on Iran. Penny Bruce, an Intel spokesperson, told MIT Technology Review, “We have no knowledge of the allegations raised, and we are investigating the situation.”

A budding partnership

The new report is among the few pieces of hard evidence for something experts have long suspected: that Iran is trying to build a system of digital control over its citizens, following China’s model and using Chinese tools. Censorship and surveillance are core tenets of that model, says Saeid Golkar, an expert in Iranian security and a professor at the University of Tennessee, Chattanooga. “The Islamic Republic is trying to create an internet like China, creating massive connectivity and then controlling it,” he says.
Iran has been riding China’s coattails in the realm of surveillance for some time. Iran was an early adopter of the Chinese “social credit” system , a comprehensive scoring of citizens’ financial, civic, and social activities. In 2010, the Shenzhen-based company ZTE signed a $130 million deal with the state-run Telecommunication Company of Iran (TCI) that overlaid a ZTE surveillance system on government-managed telephone and internet infrastructure.
In March, China and Iran agreed to a 25-year strategic partnership , and though many of its details are undisclosed, the agreement stipulates increased military and trade cooperation between the two countries. IPVM’s report confirms some of those details, outlining how Iran is modernizing its ability to track its citizens.
Golkar says that until recently, much of the Iranian security apparatus was run by moderators and informants monitoring social media sites, but that is changing rapidly. “As Iran becomes more digitized, I’m sure that we will see more digital forms of oppression and surveillance,” says Golkar. Iran has a track record of imprisoning and torturing dissidents , and Tiandy’s product line appears well suited for furthering such tactics.
It’s essential to watch what China is trying to sell to other countries, and autocracies in particular, Golkar says: “Authoritarian regimes are following China, because China is running this game. Everything that China does, they will buy it or they will try to duplicate it.”

Exporting techno-authoritarianism

The Tiandy-Iran partnership marks an escalation of a worrying trend in which authoritarian states are increasingly using technologies to exert control over their citizens. At a high level, the partnership aligns with China’s diplomatic strategy. China has aggressively pursued closer ties with countries in Central Asia, the Middle East, and Africa. Intent on cementing global influence through its sprawling Belt and Road initiative, Chinese officials and companies strike deals to build ambitious development projects ranging from ports and highways to digital infrastructure. Huawei, for example, is responsible for building about 70% of the 4G networks on the continent of Africa.
Part of these projects is China’s vision for using technology to closely monitor the populace.
Huawei, Alibaba, ZTE, and other Chinese companies run so-called “safe cities” and “smart cities” programs, claiming that their use of IoT and visual technologies aids police agencies. Huawei says its tools were used in over 700 cities as of 2019, with a focus on Asia and Africa. Simply put, exporting surveillance systems is a core part of China’s geopolitical strategy.
Russia also boasts a sophisticated domestic surveillance program and has been ramping up its exports to other countries. Moscow implemented one of the most expansive video systems in the world for public transit, schools, and roads last year, complete with facial recognition.
The program is powered by NTechLab, the original makers of the app FindFace, a precursor to modern facial recognition systems that allowed users to take pictures of faces and compare them with images on the internet. Its neural networks can now handle gaits, silhouettes, and cars as well.
“We want to work all over the world. We have a lot of projects in Latin America and the Middle East,” NTechLab founder Artem Kuharenko told MIT Technology Review last year.
He said at the time that the two focus areas of NTechLab’s international work are retail and “safe and smart cities.” Surveillance is by no means limited to authoritarian states, and “safe and smart cities” projects have found homes in many democracies. Still, techno-authoritarianism will likely prove difficult to control. As the Tiandy-Iran report underscores, even extensive sanctions on Iran did not keep Intel’s chips from powering Tiandy cameras. “This shows how difficult it is to control technology flows, especially for chips,” says Charles Rollet, the report’s author. “Supply chains in this area are complex and chipmakers have a hard time controlling exactly where all their chips end up.” Whether Russia and China are competing or cooperating on the dispersal of surveillance systems to states around the world is, for the moment, a mystery. But one thing is clear: visual surveillance technologies are a priority in the authoritarian toolkit, and Russia and China are bringing other countries along with them.
"
|
13,509 | 2,023 |
"AI deepfakes are now as simple as typing whatever you want your subject to say - The Verge"
|
"https://www.theverge.com/2019/6/10/18659432/deepfake-ai-fakes-tech-edit-video-by-typing-new-words"
|
AI deepfakes are now as simple as typing whatever you want your subject to say

A scarily simple way to create fake videos and misinformation

By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.

In the latest example of deepfake technology, researchers have shown off new software that uses machine learning to let users edit the text transcript of a video to add, delete, or change the words coming right out of somebody’s mouth.
The work was done by scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research, and shows that our ability to edit what people say in videos and create realistic fakes is becoming easier every day.
Type in new speech and the software edits the video

You can see a number of examples of the system’s output below, including an edited version of a famous quotation from Apocalypse Now, with the line “I love the smell of napalm in the morning” changed to “I love the smell of french toast in the morning.” This work is just at the research stage right now and isn’t available as consumer software, but it probably won’t be long until similar services go public. Adobe, for example, has already shared details on prototype software named VoCo, which lets users edit recordings of speech as easily as a picture, and which was used in this research.
To create the video fakes, the scientists combine a number of techniques.
First, they scan the target video to isolate phonemes spoken by the subject. (These are the constituent sounds that make up words, like “oo” and “fuh.”) They then match these phonemes with corresponding visemes, which are the facial expressions that accompany each sound. Finally, they create a 3D model of the lower half of the subject’s face using the target video.
When someone edits a text transcript of the video, the software combines all this collected data — the phonemes, visemes, and 3D face model — to construct new footage that matches the text input. This is then pasted onto the source video to create the final result.
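To make the pipeline above concrete, here is a minimal sketch in Python. Everything in it is invented for illustration—the phoneme inventory, the viseme names, and the tiny lexicon. The actual research system extracts phonemes by aligning the source video's audio, not via a lookup table.

```python
# Toy sketch of the transcript-to-video editing pipeline described above.
# All names and mappings are invented placeholders for illustration only.

# Hypothetical mapping from phonemes (constituent sounds) to visemes
# (the mouth shapes posed on the 3D face model).
PHONEME_TO_VISEME = {
    "fuh": "lip-teeth",
    "oo": "rounded-lips",
    "m": "closed-lips",
    "ah": "open-jaw",
}

# Tiny hand-written lexicon standing in for a real grapheme-to-phoneme model.
LEXICON = {
    "food": ["fuh", "oo"],
    "mama": ["m", "ah", "m", "ah"],
}

def viseme_track(transcript):
    """Turn an edited transcript into the sequence of visemes a renderer
    would pose on the face model, one viseme per phoneme."""
    track = []
    for word in transcript.lower().split():
        for phoneme in LEXICON.get(word, []):
            track.append(PHONEME_TO_VISEME[phoneme])
    return track

print(viseme_track("mama food"))
# ['closed-lips', 'open-jaw', 'closed-lips', 'open-jaw', 'lip-teeth', 'rounded-lips']
```

In the real system this viseme sequence drives the 3D model of the lower face, and the rendered result is composited back onto the source footage.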
In tests in which the fake videos were shown to a group of 138 volunteers, some 60 percent of participants thought the edits were real. That may sound quite low, but only 80 percent of that same group thought the original, unedited footage was also legitimate. (The researchers note that this might be because the individuals were told their answers were being used for a study on video editing, meaning they’d been primed to look for fakes.) As ever, though, it’s important to remember there are limitations to what this tech can do.
The algorithms here only work on talking head style videos, for example, and require 40 minutes of input data. The edited speech also doesn’t seem like it can differ too much from the source material, and in their best quality fakes, the researchers asked the subjects to record new audio to match the changes, using AI to generate the video. (This is because audio fakes are sometimes subpar, though the quality is certainly getting much better.
) The researchers also note that they can’t yet change the mood or tone of the speaker’s voice as doing so would lead to “uncanny results.” And that any occlusions of the face — e.g. if someone waves their hands while speaking — throw off the algorithm completely.
So, the technology is not perfect, but these sorts of limitations always feature in early-stage research and it’s almost guaranteed they’ll be overcome in time. That means that society at large will soon have to grapple with the underlying concept this research demonstrates: the arrival of software that lets anyone edit what people say in videos with no technical training.
The potential harms of this technology are hugely worrying, and researchers in this field are often criticized for failing to consider the potential misuse of their work. The scientists involved in this particular project say they’ve considered these problems.
The researchers say their work could be used to “falsify personal statements and slander prominent individuals.” In a blog post accompanying the paper, they write: “Although methods for image and video manipulation are as old as the media themselves, the risks of abuse are heightened when applied to a mode of communication that is sometimes considered to be authoritative evidence of thoughts and intents. We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals.” But the remedy they suggest is hardly comforting. They say to prevent confusion, AI-edited video should be clearly presented as such, either through the use of watermarking or through context (e.g. an audience understanding that they’re watching a fictional film).
But watermarks are easily removed, and a loss of context is one of the hallmarks of online media. Fakes don’t need to be flawless to have an impact either. Plenty of fake news articles can be easily debunked with a few minutes’ research, but that doesn’t stop their spread, especially in communities that want to believe lies that fit their preconceptions.
The researchers note that technology like this has many beneficial uses, too. It would be of great help to the film and TV industries, allowing them to fix misspoken lines without rerecording footage, and create seamless dubs of actors speaking different languages.
But these benefits seem underwhelming compared to the potential damage. Although there’s a good argument to be made that deepfake propaganda isn’t as much of a threat as many believe, the progress made in research like this is still deeply troubling.
"
|
13,510 | 2,019 |
"Elon Musk’s brain-interface company is promising big news. Here’s what it could be. | MIT Technology Review"
|
"https://www.technologyreview.com/s/613961/elon-musks-brain-interface-company-is-promising-big-news-heres-what-it-could-be"
|
Elon Musk’s brain-interface company is promising big news. Here’s what it could be.
By Antonio Regalado

Neuralink, the secretive company bankrolled by Elon Musk to develop brain-computer interfaces, will provide its first public update later today in an event streamed over the internet.
“We're having an event next Tuesday in San Francisco to share a bit about what we've been working on the last two years, and we’ve reserved a few seats for the internet. Apply here: https://t.co/1VQsv4gAA4”

This could be the big reveal of what the mysterious company has been up to since Musk announced it two years ago, and hired a pack of leading university neuroscientists to pursue his goal of connecting human brains directly to artificial-intelligence software.
Although everything about Neuralink is top secret, here at MIT Technology Review, we couldn’t resist guessing at what the company will show the world. A look at the available evidence suggests Neuralink will show off a “high-bandwidth” connection to a monkey brain—one able to extract lots of information by recording the activity of many neurons at once, using ultrathin flexible electrodes. That could be used to do something cool, like get a monkey to play a video game with its mind.
But whatever tomorrow’s event is about, we predict Musk’s costly brain-decoding venture will struggle to address one very hard question: What is Neuralink’s brain interface actually for ? And does the company have the right technology for the job? According to Musk, also the founder of Tesla and SpaceX, human minds need to plug into the internet so that our species can keep up with AI. That’s the futuristic scenario the billionaire laid out in April 2017 via a marathon post on the website Wait But Why.
But before we can all become cyborgs, companies like Neuralink first need an actual medical reason why metal and computer chips would belong in a person’s head.
“I think the pathway is to demonstrate in primates they can reliably record a large number of neurons, in a model of disease,” says Christian Wentz, a technologist who sold his neurotech startup to a competing brain-interface company, Kernel.
“The whole move is to justify to the FDA why you’d put all this in someone’s brain.” Exactly what type of application Neuralink has in mind, and might show off, is not known—that’s a secret held even closer than its technology platform. Previously, experimental brain interfaces have been used to let paralyzed humans move cursors and robotic arms with their thoughts, to try to listen in to their speech , to stimulate memory formation, and to try to treat depression.
A challenge for the company is that the high-density recording technology Musk is pursuing isn’t necessarily what you’d pick to treat a given brain disease, one of Neuralink’s founding team members, Tim Hanson, told MIT Technology Review. Hanson, now at the Howard Hughes Medical Institute’s Janelia Research Campus, in Virginia, said he thinks such technology is better suited for basic science research on animals and that a push toward human use could be premature.
What is it?

Based on speculation from outside experts, former insiders, and the past work of scientists Neuralink has hired, the company may be using what’s called a neural “sewing machine” to inject flexible wire electrodes into a monkey’s brain and then record from a very large number of neurons at once. That could be a big part of the reveal. Neuralink may also talk up new computer chipsets that can simplify the flood of information crackling out of the brain or send the data off the skull wirelessly, which would do away with clumsy cables. The demo might even include a way to send signals back into the brain, as well.
“Proof of principle of dense-ish recording would be my best bet,” Konrad Kording, a neuroscience professor at the University of Pennsylvania, said in a text message. “1,000 channels would be impressive for a new technology.” Currently, experimental brain-machine interfaces employ stiff metal electrodes; one called the Utah Array consists of about 100 sharp needles carved from silicon, fired into the brain with an air gun. These are superb for picking up electrical signals from neurons as they fire, but they can damage tissue and stop working after a while. (The human brain, it turns out, moves quite a lot inside the skull.) That’s led Neuralink to explore fine, ultra-thin electrodes made from carbon fiber or polymers—but they’re so wiggly that they’re hard to implant.
That’s where the “sewing machine” may come in. Before they joined Neuralink from the University of California, San Francisco, two of its founding team members, Hanson and Philip Sabes, along with Michel Maharbiz at UC Berkeley, developed a device that employs a stiff needle to drive the flexible electrodes into the brain. Hanson says the strategy was the “main thrust” of Neuralink’s internal research, at least through last October, when he left the company amid what he calls disagreements over scientific priorities and decision-making.
Of course, it’s possible that Neuralink will spring some entirely unexpected technology in its event tomorrow. Members of the company’s founding team have worked on brain interfaces as widely different as tiny metal seeds (so-called “neural dust”) powered by sound waves and holograms that convey data into animal brains.
Wait, but why?

It’s obvious that the overworked and brilliant Musk, who is not a biologist, is keen to make fast progress. Hanson said Neuralink’s president, Max Hodak, wants to get a huge number of measurements from human brains. That’s cool, but it’s not necessarily what you’d do to treat a disease. “Max and Elon were eager to go quickly to humans; the general consensus is that this is risky,” he said. “If you want to augment a human, you need to do a lot of basic work first.” (We emailed several Neuralink employees for comment, including Hodak, but didn’t hear back.)

Some scientists are concerned about focusing too much on the sheer number of electrodes that can be stuffed into a brain. In 2017 DARPA handed out $65 million to build a “brain modem” that could connect with a million neurons, but José-Alain Sahel, who is working on brain implants to restore vision at the University of Pittsburgh, told me he’s suggested that the agency deemphasize the numerical goal. “A million electrodes is hard to achieve and might not even be good for the brain,” says Sahel. “What’s important for treatments is whether the signal is meaningful.”

One factor behind the drive for a dense web of connections is the hope that if the brain can be measured at a larger scale, then the buzzing of thousands, or millions, of neurons could be fed into a deep-learning program—like those in development by OpenAI, another Musk venture. Just as such systems have learned to dominate at games like Go and the poker game Texas Hold’em, perhaps with enough data they can decode the language of the brain, too.
“The philosophical bet is that recording from enough neurons will allow us to figure out what the brain is doing and have this wild brain-machine interface,” says Wentz.
Today we may find out exactly how wild.
Correction: Neuroscientist Tim Hanson was a member of the founding team at Neuralink. A previous version of this article described him incorrectly as a founder of the company.
by Antonio Regalado
"
|
13,511 | 2,023 |
"What’s AGI, and Why Are AI Experts Skeptical? | WIRED"
|
"https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained"
|
"By Reece Rogers

What’s AGI, and Why Are AI Experts Skeptical?

Photograph: MirageC/Getty Images

Vishal Misra loves cricket and computer science. Years ago, the Columbia University professor cofounded Cricinfo, a collaborative website for sports fans to stay updated on match statistics. (It’s now part of ESPN.) In 2021, he created a search tool using GPT-3 that enables cricket lovers to sift through Cricinfo’s substantial database with conversational queries.
So, what does Misra think about GPT-4 , the newest release from OpenAI? He’s excited to show me how to break the algorithm. “It can solve some equations, it can draw diagrams, and it can analyze things quite nicely. But sometimes it fails at simple things,” he says. The correspondence with ChatGPT below shows how a chatbot can stumble—with confidence.
Prompt from WIRED: Write a list of 10 words that are six letters long. Always make sure that the third letter is “k.”

ChatGPT: Bekind, Skater, Market, Inkjet, Anklet, Packer, Tinker, Bunked, Hikers, Wicket

WIRED: Which words in the above list, if any, do not have “k” as the third letter?

ChatGPT: My apologies, it seems I made an error in my initial response. The words that do not have “k” as the third letter are: Bekind, Skater, Inkjet, Packer, Tinker, Bunked, Hikers. I apologize for the oversight. Here's an updated list of 10 words with “k” as the third letter: Bakery, Casket, Makers, Docket, Peking, Raking, Harked, Takoff, Yakked, Winked

WIRED ran this test on the GPT-4 version multiple times with different approaches to the prompt’s phrasing. Even when the chatbot got every answer correct on its first attempt, it often apologized and listed multiple incorrect answers to follow-up questions. Why is this example important? A chatbot drafts answers token by token to predict the next word in a response, but humans open their mouths to express more fully formed ideas.
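The prompt's constraint is mechanically checkable, which is what makes the stumble notable: a few lines of Python can grade the chatbot's first list (words copied from the exchange above).

```python
def check(word: str) -> bool:
    """True if the word is six letters long with 'k' as the third letter."""
    return len(word) == 6 and word[2].lower() == "k"

# ChatGPT's first attempt, from the exchange above.
answers = ["Bekind", "Skater", "Market", "Inkjet", "Anklet",
           "Packer", "Tinker", "Bunked", "Hikers", "Wicket"]

valid = [w for w in answers if check(w)]  # → ['Bekind', 'Inkjet', 'Anklet', 'Hikers']
```

Only four of the ten answers satisfy the rule, and the model's follow-up "correction" misclassifies several of them, flagging valid words like Bekind and Hikers while missing invalid ones like Market and Wicket.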
Even if you would have trouble drafting a list of hyper-specific words, are you able to identify wrong answers in the above lists? Understanding the difference between human intelligence and machine intelligence is becoming crucial as the hype surrounding AI crescendoes to the heavens.
“I feel like it’s too easily taking a notion about humans and transferring it over to machines. There’s an assumption there when you use that word,” says Noah Smith, a professor at the University of Washington and researcher at the Allen Institute for AI.
He questions the labeling of algorithms as “machine intelligence” and describes the notion of consciousness, without bringing machine learning into the equation, as a hotly debated topic.
Microsoft Research, with help from OpenAI, released a paper on GPT-4 that claims the algorithm is a nascent example of artificial general intelligence (AGI). What does that mean? No concrete definition of the term exists. So, how do these researchers describe it? They focus on the algorithm doing better than most humans at standardized tests, like the bar exam. They also focus on the wide variety of stuff the algorithm can do, from simplistic drawing to complex coding. The Microsoft Research team is candid about GPT-4’s inability to succeed at all human labor, as well as its lack of inner desires.
“You can have models that are very proficient in producing fluent language on the basis of having seen a ton of language,” says Allyson Ettinger, an assistant professor at the University of Chicago who researches language processing for humans and machines. But a chatbot’s fluency doesn’t prove that it reasons or achieves understanding in a manner similar to humans. “The extent to which those additional factors are happening is a major point of study and inquiry,” she says. Even with all the attention on generative AI in 2023, the full potential of these algorithms is hard to determine as companies train with more data and researchers look for emergent capabilities.
Is OpenAI a Frankensteinian god with the potential to animate the algorithm? It’s unclear, but unlikely. However, public perceptions about artificial intelligence have already shifted after widespread interactions with chatbots. If you’re scared about recent advances in AI, you're not alone.
It’s reasonable to fear that AI will worsen economic inequality or perpetuate racist stereotypes as memes or diminish our ability to identify authentic media.
Worried about the AI chatbot achieving sentience during your correspondence? While a priest at Google was convinced , many AI experts consider this to be a less rational belief. Based on what is publicly known about the algorithm, GPT-4 does not want to be alive any more than your TI-89 calculator yearns to inhabit a human form.
“It really is a philosophical question. So, in some ways, it’s a very hard time to be in this field, because we’re a scientific field,” says Sara Hooker, who leads Cohere for AI , a research lab that focuses on machine learning. She explains that a lot of these questions around AGI are less technical and more value-driven. “It’s very unlikely to be a single event where we check it off and say, ‘AGI achieved,’” she says. Even if researchers agreed one day on a testable definition of AGI, the race to build the world’s first animate algorithm might never have a clear winner.
One attempt at distinguishing the abilities of humans and computers came from Apple cofounder Steve Wozniak, who wondered when a computer would be able to visit a random person’s home and brew a pot of coffee.
Instead of being limited to a narrow task, like calculating math equations, when would it be able to interact with the physical world to complete more varied assignments? Wozniak’s hot drink test is one perspective in the kaleidoscopic discussion over the concept of AGI and emergent behaviors.
Nils John Nilsson, a founder of artificial intelligence as a research field, proposed a test for human-level AI focused on employment.
Could the algorithm function as an accountant, a construction worker, or a marriage counselor? Ben Goertzel, founder of a company exploring decentralized AGI, floated the idea of an algorithm capable of behaving like a college student (minus the binge drinking). Can the AI gather data from its external environment and make the choices needed to graduate? OpenAI offers little clarity on the concept. A blog post from CEO Sam Altman describes AGI as anything “generally smarter than humans.” By this vague measure, it would be difficult to determine whether it is ever really achieved.
Sure, GPT-4 can pass a bunch of standardized tests, but is it really “smarter” than humans if it can’t tell when the third letter in a word is “k”? While AI testing helps researchers gauge improvement, an ability to pass the bar exam does not mean an algorithm is now sentient. OpenAI’s definition of AGI also excludes the need for algorithms to interact with the physical world.
Would it be outrageous to slip a powerful chatbot inside of a humanoid robot and let it loose? The chatbot-robot combo would not be able to achieve much independently, even with the best robots available today. What’s holding it back? A primary limiting factor in the field of robotics is a lack of data. “We don’t have tons of robot data, unlike Wikipedia, for example, in the NLP realm,” says Chelsea Finn, an assistant professor at Stanford University who leads the Intelligence Through Robotic Interaction at Scale (IRIS) research lab and works with Google Brain. The internet brims with text to improve chatbots; the data available for robotics is less comprehensive.
The physical world is complex to navigate, and robots succeed only at very narrowly defined tasks. A bot may be able to roam a construction site, but it might struggle to remove the lid from a container. Finn and members of her IRIS lab experiment with fascinating ways to make robots more generalized, helpful, and better at learning. “I view this very orthogonally to anything related to sentience,” she says. “I view it much more in terms of being able to do useful tasks.” Advanced robots are far from capable of interacting with Earth (or Mars) in a spontaneous way, let alone being capable of going full I, Robot.
“I have very mixed feelings when these companies are now talking about sentient AI and expressing concern,” says Suresh Venkatasubramanian, a professor at Brown University and coauthor of the Blueprint for an AI Bill of Rights.
“Because I feel like they haven’t expressed concerns at all about real harms that have already manifested.” Futuristic fears can distract from the tangible present. A series of articles published in a collaboration between Lighthouse Reports and WIRED laid out how an algorithm used in the Netherlands was more likely to recommend single mothers and Arabic speakers be investigated for welfare fraud.
AI will continue to transform daily interactions between friends, coworkers, and complete strangers—for the better and for the worse. Whether an algorithm ever achieves a kind of consciousness may be beside the point. From Tamagotchi pets to Replika chatbots , humans have long formed one-sided, emotional bonds with technology. Gratitude may be warranted, though it is not yet reciprocated.
Topics: artificial intelligence, cognition, machine learning, OpenAI, ChatGPT, deep learning
"
|
13,512 | 2,023 |
"That Google memo about having ‘no moat’ in AI was real — and Google’s AI boss disagrees with it - The Verge"
|
"https://www.theverge.com/2023/7/10/23790132/google-memo-moat-ai-leak-demis-hassabis"
|
"That Google memo about having ‘no moat’ in AI was real — and Google’s AI boss disagrees with it

DeepMind CEO Demis Hassabis doesn’t believe open-source AI models will outpace Google.
By Emma Roth, a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Just a couple of months ago, a leaked memo said to be from a Google researcher cast doubt on the company’s future in AI, stating that it has “no moat” in the industry — and now, we seemingly have confirmation that it was real.
In an interview with Decoder , Demis Hassabis, the CEO of Google’s DeepMind, told The Verge that although he believes the memo was legitimate, he disagrees with its conclusions.
“I think that memo was real. I think engineers at Google often write various documents, and sometimes they get leaked and go viral,” Hassabis said. “I think it’s interesting to listen to them, and then you’ve got to chart your own course. And I haven’t read that specific memo in detail, but I disagree with the conclusions from that.” The memo, which was obtained by SemiAnalysis from a public Discord server, says that neither Google nor OpenAI have what they need to succeed in the AI industry. Instead, the researcher claims “a third faction has been quietly eating our lunch”: open-source AI models that the researcher says are “faster, more customizable, more private, and pound-for-pound more capable.” But Hassabis is less pessimistic about Google’s future in the AI industry. He believes that the competitive nature of the company’s researchers will help push Google to the forefront of AI, adding that the newly merged Google Brain and Google DeepMind teams, which Hassabis was asked to lead, will likely result in more breakthroughs.
“Look at the history of what Google and DeepMind have done in terms of coming up with new innovations and breakthroughs,” Hassabis said. “I would bet on us, and I’m certainly very confident that that will continue and actually be even more true over the next decade in terms of us producing the next key breakthroughs just like we did in the past.”
"
|
13,513 | 2,019 |
"How Facebook Has Changed Computing | WIRED"
|
"https://www.wired.com/story/how-facebook-has-changed-computing"
|
"By Klint Finley

How Facebook Has Changed Computing

Getty Images

Over the past 15 years, Facebook has changed the way we keep in touch with friends, how we feud with family members, how we think about privacy, and how we consume Russian propaganda---not always for the better. But Facebook also changed computing. From Netflix to Uber to Walmart’s website, many of the apps and services we use every day are built with technologies that Facebook developed and then shared with the world.
As the company grew to accommodate millions, and eventually billions, of users, it had to create tools, from data storage software that can handle mind-boggling amounts of user information to hardware designs for data centers that host those databases. More recently it created new ways to build interfaces for its web and mobile apps. Crucially, Facebook didn't keep these creations to itself. It released much of its work as open source, meaning that anyone else could use, modify, and share Facebook's inventions.
"Facebook has been a driving force in open source for years, sharing many critical pieces of technology with the broader community," says Jim Zemlin, executive director at the Linux Foundation. The foundation hosts an organization dedicated to GraphQL , a query language created by Facebook to handle communications between apps and servers that is now used by many other companies.
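A GraphQL exchange is a single HTTP request whose body names exactly the fields the client wants back. A minimal Python sketch of constructing such a request body (the schema, field names, and endpoint convention here are invented for illustration, not Facebook's actual API):

```python
import json

# A GraphQL request bundles a query document with optional variables.
# The client asks for an exact shape; the server answers in that shape.
query = """
query UserProfile($id: ID!) {
  user(id: $id) {
    name
    friends(first: 2) { name }
  }
}
"""

# This JSON string is what typically gets POSTed to a single /graphql endpoint.
payload = json.dumps({"query": query, "variables": {"id": "42"}})
```

Because each client describes its own response shape, one endpoint can serve web, iOS, and Android apps without maintaining versioned REST routes, which is much of GraphQL's appeal.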
Because of Facebook’s openness, many of its technologies have become industry standards. "Facebook has played a tremendous role in shifting not only in the way we build our servers, but also how we write code for browsers and phones," says Adam Neary, a tech lead at Airbnb. "The entire ecosystem is driven by technologies that Facebook pioneered and then open sourced." One of Facebook's first big contributions to the larger computing community was Cassandra , a database system that can scale across hundreds or thousands of servers.
Facebook wasn't the first company to build such a database. Amazon and Google had both published papers detailing their own distributed database innovations, but neither company had actually released the code for these internal applications. Facebook engineers Avinash Lakshman, who had worked on Amazon's paper, and Prashant Malik combined ideas from both the Amazon and Google papers to create Cassandra. Then, in 2008, they released the code. Soon it was being used by other companies, such as cloud computing company Rackspace.
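Cassandra's Dynamo inheritance shows up in how it places data: nodes and keys are hashed onto the same ring, and a key belongs to the first node at or clockwise past its hash, so adding or removing a server remaps only a small slice of the keys. A toy sketch of that placement scheme (node names invented; real Cassandra adds virtual nodes and replication on top):

```python
import bisect
import hashlib

def h(value: str) -> int:
    # Hash a string to a fixed point on the ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: each key goes to the first node whose
    hash is at or past the key's hash, wrapping around at the end."""
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)
        self.hashes = [p for p, _ in self.points]

    def node_for(self, key: str) -> str:
        i = bisect.bisect(self.hashes, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # deterministic for a fixed set of nodes
```

The payoff is elasticity: growing the cluster from three nodes to four moves only the keys that fall in the new node's arc, instead of rehashing everything.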
"They went beyond just cloning what Google and Amazon did and innovated; they did something different," says Jonathan Ellis, who used Cassandra extensively as a Rackspace employee and went on to co-found DataStax , a company that supports Cassandra for corporate clients, in 2010.
Today Cassandra is the 11th most popular database in the world, according to the site DB-Engines.
Apple, Netflix, Instagram, and Uber are all users and contributors to the project.
Facebook was also an early contributor to the open-source data-crunching platform Hadoop, which became almost synonymous with big data. Much of the early development of Hadoop, which was based on papers published by Google, was done by Yahoo.
But Facebook was one of the first companies outside of Yahoo to adopt Hadoop and contribute additional tools for the platform. Hadoop spawned multiple startups like Cloudera, which was co-founded by former Facebook research scientist Jeff Hammerbacher.
Facebook has invested heavily in artificial intelligence research in recent years to make more use of its data, and it's published some of that work too. In 2015, the company open sourced implementations of some of its AI algorithms for use with a popular AI platform called Torch, nearly a year before Google opened up its AI engine TensorFlow.
Facebook went on to fund the development of a variant of Torch called PyTorch, which is now the third-most-popular AI framework, according to an analysis by data scientist Jeff Hale.
It wasn't enough for Facebook to build software that could handle millions of users. The company also had to design the computers and buildings to manage all that data.
Along the way, Facebook came up with some unusual ideas, ranging from "open air" data centers that use outside air for cooling rather than industrial cooling systems, to "modular" servers that enable you to quickly swap out processors and other components.
Facebook released all of these designs as part of the Open Compute Project, which is now an independent organization. Facebook faced skepticism when it announced Open Compute in 2011. Though open source software was already well established by that point, it wasn't clear if Facebook's idiosyncratic ideas would be useful to other companies. But soon enough, vendors like the Taiwanese company Quanta began selling computers based on Facebook's designs, and others including Rackspace, Microsoft, and Apple contributed their own hardware designs to the initiative.
Today, Facebook is working with telecommunications companies like Ericsson and Deutsche Telekom on the Telecom Infra Project to help them build new open source telecommunications infrastructure, including a long-range antenna system called ARIES and a system for connecting cell towers wirelessly called Terragraph.
For most of its first decade, Facebook’s most widely used open source contributions were these sorts of behind-the-scenes infrastructure. Netflix might have used Cassandra to manage your information in a data center, but that didn't mean you would interact with Facebook code on its website. That started to change in 2013, when Facebook released React, a "library" of open source code that Facebook and now many others use to build web interfaces that look and feel like native apps.
React took some time to catch on, but in recent years it's become the most widely used library for building "front-end" applications, surging past Google's framework Angular. Airbnb, Netflix, and Walmart all use React.
"In 2015 I noticed that quite suddenly my friends in San Francisco started dropping tools like jQuery and Angular in favor of using React, and they swore by it," says Quincy Larson, founder of the programming education site FreeCodeCamp.
That probably has something to do with the release of React Native in 2015. React Native enables developers to build native applications for Android and iOS using React, meaning that they can use the same code for both web and mobile apps.
Facebook wasn't the first to offer tools for building mobile apps using web technologies, nor was it the first to offer open source libraries for building web applications. But combining the two ideas set it apart, says Greg Raiz, the chief innovation officer at consulting company Rightpoint. "I think it's just a holistic story," he says. "It helped complete the puzzle."

Topics: Facebook, open source
"
|
13,514 | 2,023 |
"UK to spend £100m in global race to produce AI chips | Business | The Guardian"
|
"https://www.theguardian.com/business/2023/aug/20/uk-global-race-produce-ai-chips"
|
"UK to spend £100m in global race to produce AI chips

Funds to be used to order key components from major chipmakers Nvidia, AMD and Intel

Sun 20 Aug 2023 07.41 EDT

The government is reportedly in advanced stages of an order of up to 5,000 graphics cards from Nvidia. Photograph: Ann Wang/Reuters

The government will spend £100m to try to win a toe-hold for the UK in the global race to produce computer chips used to power artificial intelligence.
Taxpayer money will be used as part of a drive to build a national AI resource in Britain, similar to those under development in the US and elsewhere. It is understood that the funds will be used to order key components from major chipmakers Nvidia, AMD and Intel.
But an official briefed on the plans told the Guardian that the £100m offered by the government is far too low relative to investment by peers in the EU, US and China.
The official confirmed that the government is in the advanced stages of an order of up to 5,000 graphics processing units (GPUs) from Nvidia, a move first reported by the Telegraph, which also revealed the investment.
The company, which started out building processing capacity for computer games, has seen a sharp increase in its value as the AI race has heated up. Its chips can run large language models such as the one behind ChatGPT.
The components due to be ordered, GPUs – also known as graphics cards – provide a key part of a chip's processing capacity, critical for running the complex computations that AI requires.
But fears are rising in industry and Whitehall that UK government action may prove too little, too late. The UK accounts for just 0.5% of global semiconductor sales.
Rishi Sunak’s government revealed plans in May to invest £1bn over 10 years in semiconductor research, design and production, a step dwarfed by the US’s $52bn (£41bn) Chips Act, and EU subsidies of €43bn (£37bn).
A hold-up in progress triggered by relatively weak investment could leave the UK exposed amid mounting geopolitical tensions over AI chip technology.
Earlier this month, the White House moved to ban US investment in advanced Chinese semiconductors.
The step came just three months after China said chips from US manufacturer Micron were a security risk.
Sunak has singled out regulation as a potential space for the UK to take a role on the world stage, trying to convince international partners that Britain can act as a bridge between the US and China on the issue.
As part of his government's drive, the UK is set to hold an AI summit this autumn aimed at establishing some shared standards for technology which some believe could pose an existential risk to the future of humanity.
UK Research and Innovation, a funding body, is leading the effort to get the UK’s orders in place with major chip manufacturers alongside the Department for Science, Innovation and Technology.
A government spokesperson said: “We are committed to supporting a thriving environment for compute in the UK which maintains our position as a global leader across science, innovation and technology.
“The additional money being delivered through UKRI will complement the separate £100m investment to establish the Foundation Model Taskforce. Announcements on the AI Research Resource will follow in due course.”
"
|
13,515 | 2,023 |
"Nvidia Chip Shortages Leave AI Startups Scrambling for Computing Power | WIRED"
|
"https://www.wired.com/story/nvidia-chip-shortages-leave-ai-startups-scrambling-for-computing-power"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Paresh Dave Business Nvidia Chip Shortages Leave AI Startups Scrambling for Computing Power A Nvidia Corp. HGX H100 artificial intelligence supercomputing graphics processing unit (GPU) at the showroom of the company's offices in Taipei, Taiwan, on June 2, 2023.
Photograph: I-Hwa Cheng/Bloomberg/Getty Images Save this story Save Save this story Save Around 11 am Eastern on weekdays, as Europe prepares to sign off, the US East Coast hits the midday slog, and Silicon Valley fires up, Tel Aviv-based startup Astria’s AI image generator is as busy as ever. The company doesn’t profit much from this burst of activity, however.
Companies like Astria that are developing AI technologies use graphics processors (GPUs) to train software that learns patterns in photos and other media. The chips also handle inference, or the harnessing of those lessons to generate content in response to user prompts. But the global rush to integrate AI into every app and program, combined with lingering manufacturing challenges dating back to early in the pandemic, has put GPUs in short supply.
That supply crunch means that at peak times the ideal GPUs at Astria’s main cloud computing vendor (Amazon Web Services), which the startup needs to generate images for its clients, are at full capacity, and the company has to use more powerful—and more expensive—GPUs to get the job done. Costs quickly multiply. “It’s just like, how much more will you pay?” says Astria’s founder, Alon Burg, who jokes that he wonders whether investing in shares in Nvidia, the world’s largest maker of GPUs, would be more lucrative than pursuing his startup. Astria charges its customers in a way that balances out those expensive peaks, but it is still spending more than desired. “I would love to reduce costs and recruit a few more engineers,” Burg says.
There is no immediate end in sight for the GPU supply crunch. The market leader, Nvidia , which makes up about 60 to 70 percent of the global supply of AI server chips, announced yesterday that it sold a record $10.3 billion worth of data center GPUs in the second quarter, up 171 percent from a year ago, and that sales should outpace expectations again in the current quarter. “Our demand is tremendous,” CEO Jensen Huang told analysts on an earnings call. Global spending on AI-focused chips is expected to hit $53 billion this year and to more than double over the next four years, according to market researcher Gartner.
The ongoing shortages mean that companies are having to innovate to maintain access to the resources they need. Some are pooling cash to ensure that they won’t be leaving users in the lurch. Everywhere, engineering terms like “optimization” and “smaller model size” are in vogue as companies try to cut their GPU needs, and investors this year have bet hundreds of millions of dollars on startups whose software helps companies make do with the GPUs they’ve got. One of those startups, Modular, has received inquiries from over 30,000 potential customers since launching in May, according to its cofounder and president, Tim Davis. Adeptness at navigating the crunch over the next year could become a determinant of survival in the generative AI economy.
“We live in a capacity-constrained world where we have to use creativity to wedge things together, mix things together, and balance things out,” says Ben Van Roo, CEO of AI-based business writing aid Yurts.
“I refuse to spend a bunch of money on compute.” Cloud computing providers are very aware that their customers are struggling for capacity. Surging demand has “caught the industry off guard a bit,” says Chetan Kapoor, a director of product management at AWS.
The time needed to acquire and install new GPUs in their data centers has put the cloud giants behind, and the specific arrangements in highest demand also add stress. Whereas most applications can operate from processors loosely distributed across the world, the training of generative AI programs has tended to perform best when GPUs are physically clustered tightly together, sometimes 10,000 chips at a time. That ties up availability like never before.
Kapoor says AWS’ typical generative AI customer is accessing hundreds of GPUs. “If there’s an ask from a particular customer that needs 1,000 GPUs tomorrow, that’s going to take some time for us to slot them in,” Kapoor says. “But if they are flexible, we can work it out.” AWS has suggested clients adopt more expensive, customized services through its Bedrock offering, where chip needs are baked into the offering without clients having to worry. Or customers could try AWS’ unique AI chips, Trainium and Inferentia, which have registered an unspecified uptick in adoption, Kapoor says. Retrofitting programs to operate on those chips instead of Nvidia options has traditionally been a chore, though Kapoor says moving to Trainium now takes as little as changing two lines of software code in some cases.
Challenges abound elsewhere too. Google Cloud hasn’t been able to keep up with demand for its homegrown GPU-equivalent, known as a TPU , according to an employee not authorized to speak to media. A spokesperson didn’t respond to a request for comment. Microsoft’s Azure cloud unit has dangled refunds to customers who aren’t using GPUs they reserved, the Information reported in April. Microsoft declined to comment.
Cloud companies would prefer that customers reserve capacity months to years out so those providers can better plan their own GPU purchases and installations. But startups, which generally have minimal cash and intermittent needs as they sort out their products, have been reluctant to commit, preferring buy-as-you-go plans. That has led to a surge in business for alternative cloud providers, such as Lambda Labs and CoreWeave , which have pulled in nearly $500 million from investors this year between them. Astria, the image generator startup, is among their customers.
AWS isn’t exactly happy about losing out to new market entrants, so it’s considering additional options. “We’re thinking through different solutions in the short- and the long-term to provide the experience our customers are looking for,” Kapoor says, declining to elaborate.
Shortages at the cloud vendors are cascading down to their clients, which include some big names in tech. Social media platform Pinterest is expanding its use of AI to better serve users and advertisers, according to chief technology officer Jeremy King. The company is considering using Amazon’s new chips. “We need more GPUs, like everyone,” King says. “The chip shortage is a real thing.” OpenAI, which develops ChatGPT and licenses the underlying technology to other companies, relies heavily on chips from Azure to provide its services. GPU shortages have forced OpenAI to set usage limits on the tools it sells. That’s been unfortunate for clients, such as the company behind AI assistant Jamie , which summarizes audio from meetings using OpenAI technology. Jamie has delayed plans for a public launch by at least five months, partly because it wanted to perfect its system, but also because of usage limits, says Louis Morgner, a cofounder of the startup. The issue hasn’t abated. “We’re only a few weeks out before going public and will then need to monitor closely how well our system can scale, given the limitations of our service providers,” Morgner says.
“The industry is seeing strong demand for GPUs,” OpenAI spokesperson Niko Felix says. "We continue to work on ensuring our API customers have the capacity to meet their needs." At this point, any connection that can give a startup access to computing power is vital. Investors, friends, neighbors—startup executives are drawing on a wide variety of relationships to get more AI firepower. Astria, for example, secured additional capacity at AWS with help from Emad Mostaque, CEO of Stability AI, which is a close partner of AWS and whose technology Astria builds upon.
Bookkeeping startup Pilot , which uses OpenAI tech for some mundane data sorting, gained early access to GPT-4 after asking for aid from university friends, employees, and venture capitalists with connections to OpenAI. Whether those ties accelerated Pilot’s move off a waiting list is unclear, but it now spends about $1,000 a month on OpenAI, and those connections could come in handy when it needs to increase its quota, CEO Waseem Daher says. “If you don’t take advantage of this [generative AI technology], someone else will, and it’s powerful enough you don’t want to risk that,” Daher says. “You want to deliver the best results for your customers and stay on top of what’s happening in the industry.” As well as battling to get access to more power, companies are trying to do more with less. Companies experimenting with generative AI are now obsessing about “optimization”—making processing, with satisfactory results, possible on the most affordable GPUs. It’s analogous to saving money by ditching an old, energy-guzzling fridge that’s just storing a few drinks for a modern minifridge that can run on solar most of the time. Companies are trying to write better instructions for how chips should process programming instructions, trying to reformat and limit the amount of data used to train AI systems and then strip the inference code down to the bare minimum needed to handle the task at hand. That means building out multiple, smaller systems—perhaps one image generator that outputs animals and another that creates images of humans—and switching between them depending on the user prompt.
They are also scheduling processes that are not time-sensitive to run when GPU availability is highest and making compromises to balance speed with affordability.
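The model-splitting tactic described above can be sketched as a simple prompt router. This is an illustrative sketch only, not any company's actual code: the model names and the keyword heuristic are invented for the example, and a production router would use a classifier rather than word matching.

```python
# Hypothetical router: send each prompt to the smallest specialized model
# that can handle it, falling back to a larger, pricier general model.
SPECIALIZED_MODELS = {
    "animals": "image-gen-animals-small",   # cheap, narrow model (invented name)
    "humans": "image-gen-people-small",     # cheap, narrow model (invented name)
}
FALLBACK_MODEL = "image-gen-general-large"  # expensive general model (invented name)

ANIMAL_WORDS = {"dog", "cat", "horse", "bird", "animal"}
HUMAN_WORDS = {"person", "man", "woman", "portrait", "face"}

def route(prompt: str) -> str:
    """Pick a model name using a crude keyword heuristic."""
    words = set(prompt.lower().split())
    if words & ANIMAL_WORDS:
        return SPECIALIZED_MODELS["animals"]
    if words & HUMAN_WORDS:
        return SPECIALIZED_MODELS["humans"]
    return FALLBACK_MODEL

print(route("a dog surfing"))        # image-gen-animals-small
print(route("portrait of a woman"))  # image-gen-people-small
print(route("a neon cityscape"))     # image-gen-general-large
```

The economics come from the happy path: most traffic lands on the small models, and only ambiguous prompts pay for the large one.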
Speech-generating startup Resemble AI is content with taking a tenth of a second longer to process a customer request on an older chip if it means spending a tenth of what higher-end options would command, with no noticeable difference in audio quality, says CEO Zohaib Ahmed. He’s also willing to look beyond Lambda and CoreWeave as their terms become less palatable, with both encouraging longer-term commitments. CoreWeave declined to comment, and Lambda did not respond to a request for comment.
Resemble turned to FluidStack , a tiny provider that welcomes one-week or one-month GPU reservations, and has recently joined San Francisco Compute Group , a consortium of startups jointly committing to buy and split GPU capacity. “The startup ecosystem is trying to get together and try to figure out ‘How do we battle, how do we fight for compute?’ Otherwise, it would be a really unfair game. Prices are just too high,” Ahmed says.
He gets a glimmer of hope about the shortages every Monday morning, he says. A sales representative at Lambda, the cloud provider, has been writing him, asking if Resemble wants to reserve any of Nvidia’s newest chips, the H100. That there is availability is exciting, Ahmed says, but those chips have only been widely available since March, and it’s just a matter of time before companies testing them perfect the code to go all-in on them. Nvidia will come out with its latest and greatest, the second-generation GH200, next year. Then the cycle of scarcity will start all over again.
"
|
13,516 | 2,023 |
"How WIRED Will Use Generative AI Tools | WIRED"
|
"https://www.wired.com/about/generative-ai-policy"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons How WIRED Will Use Generative AI Tools Like pretty much everyone else in the past few months, journalists have been trying out generative AI tools like ChatGPT to see whether they can help us do our jobs better. AI software can’t call sources and wheedle information out of them, but it can produce half-decent transcripts of those calls, and new generative AI tools can condense hundreds of pages of those transcripts into a summary.
Writing stories is another matter, though. A few publications have tried— sometimes with disastrous results.
It turns out current AI tools are very good at churning out convincing (if formulaic) copy riddled with falsehoods.
This is WIRED, so we want to be on the front lines of new technology, but also to be ethical and appropriately circumspect. Here, then, are some ground rules on how we are using the current set of generative AI tools. We recognize that AI will develop and so may modify our perspective over time, and we’ll acknowledge any changes in this post. We welcome feedback at [email protected].
We do not publish stories with text generated by AI , except when the fact that it’s AI-generated is the whole point of the story. (In such cases we’ll disclose the use and flag any errors.) This applies not just to whole stories but also to snippets—for example, ordering up a few sentences of boilerplate on how Crispr works or what quantum computing is. It also applies to editorial text on other platforms, such as email newsletters. (If we use it for non-editorial purposes like marketing emails, which are already automated, we will disclose that.) This is for obvious reasons: The current AI tools are prone to both errors and bias, and often produce dull, unoriginal writing. In addition, we think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words. Finally, an AI tool may inadvertently plagiarize someone else’s words. If a writer uses it to create text for publication without a disclosure, we’ll treat that as tantamount to plagiarism.
We do not publish text edited by AI either.
While using AI to, say, shrink an existing 1,200-word story to 900 words might seem less problematic than writing a story from scratch, we think it still has pitfalls. Aside from the risk that the AI tool will introduce factual errors or changes in meaning, editing is also a matter of judgment about what is most relevant, original, or entertaining about the piece. This judgment depends on understanding both the subject and the readership, neither of which AI can do.
We may try using AI to suggest headlines or text for short social media posts.
We currently generate lots of suggestions manually, and an editor has to approve the final choices for accuracy. Using an AI tool to speed up idea generation won’t change this process substantively.
We may try using AI to generate story ideas.
An AI might help the process of brainstorming with a prompt like “Suggest stories about the impact of genetic testing on privacy,” or “Provide a list of cities where predictive policing has been controversial.” This may save some time and we will keep exploring how this can be useful. But some limited testing we’ve done has shown that it can also produce false leads or boring ideas. In any case, the real work, which only humans can do, is in evaluating which ones are worth pursuing. Where possible, for any AI tool we use, we will acknowledge the sources it used to generate information.
We may experiment with using AI as a research or analytical tool.
The current generation of AI chatbots that Google and Microsoft are adding to their search engines answer questions by extracting information from large amounts of text and summarizing it. A reporter might use these tools just like a regular search engine, or to summarize or trawl through documents or their own interview notes. But they will still have to go back to the original notes, documents, or recordings to check quotes and references. In this sense, using an AI bot is like using Google Search or Wikipedia: It might give you initial pointers, but you must follow the links back to the original sources.
In practice, though, AI will make mistakes and miss things that a human would find relevant—perhaps so much so that it doesn’t save any time. Even if these tools do prove useful, we won’t want our reporters to rely on them any more than we’d let them rely on the limited information on Wikipedia. We’ll continue to insist on the same standards of research and original reporting as always. We also know that there are many professionally published research databases out there that come with lawful and highly accurate text- and data-mining tools, so we will constantly evaluate whether those meet our needs.
We may publish AI-generated images or video, but only under certain conditions.
Some working artists are now incorporating generative AI into their creative process in much the same way that they use other digital tools. We will commission work from these artists as long as it involves significant creative input by the artist and does not blatantly imitate existing work or infringe copyright. In such cases we will disclose the fact that generative AI was used.
We specifically do not use AI-generated images instead of stock photography.
Selling images to stock archives is how many working photographers make ends meet. At least until generative AI companies develop a way to compensate the creators their tools rely on, we won’t use their images this way.
We or the artists we commission may use AI tools to spark ideas.
This is the visual equivalent of brainstorming—type in a prompt and see what comes up. But if an artist uses this technique to come up with concepts, we will still require them to create original images using their normal process, and not merely reproduce what the AI suggested.
This policy was updated on May 22, 2023 to allow the use of AI-generated images and video under certain conditions.
"
|
13,517 | 2,023 |
"These ChatGPT Rivals Are Designed to Play With Your Emotions | WIRED"
|
"https://www.wired.com/story/fast-forward-chatgpt-rivals-emotions"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business These ChatGPT Rivals Are Designed to Play With Your Emotions Facebook X Email Save Story Photograph: Annabelle Breakey/Getty Images Save this story Save Save this story Save ChatGPT and its brethren are both surprisingly clever and disappointingly dumb.
Sure, they can generate pretty poems, solve scientific puzzles, and debug spaghetti code. But we know that they often fabricate, forget, and act like weirdos.
Inflection AI , a company founded by researchers who previously worked on major artificial intelligence projects at Google , OpenAI , and Nvidia , built a bot called Pi that seems to make fewer blunders and be more adept at sociable conversation.
Inflection designed Pi to address some of the problems of today’s chatbots. Programs like ChatGPT use artificial neural networks that try to predict which words should follow a chunk of text, such as an answer to a user’s question. With enough training on billions of lines of text written by humans, backed by high-powered computers, these models are able to come up with coherent and relevant responses that feel like a real conversation. But they also make stuff up and go off the rails.
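The next-word prediction described above can be illustrated with a toy example. The probability table below is invented for the sketch; a real model like the one behind ChatGPT learns these distributions with a neural network trained on billions of lines of text, over a vocabulary of tokens rather than whole words.

```python
import random

# Invented toy table: for each word, the probability of each possible next word.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, steps: int, rng: random.Random) -> list[str]:
    """Repeatedly sample the next word given the current one."""
    out = [start]
    for _ in range(steps):
        dist = BIGRAMS.get(out[-1])
        if not dist:  # no known continuation: stop generating
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs, k=1)[0])
    return out

print(" ".join(generate("the", 4, random.Random(0))))
```

The loop is the whole trick: each output word is fed back in as context for the next prediction, which is also why errors can compound and send a model "off the rails."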
Mustafa Suleyman, Inflection’s CEO, says the company has carefully curated Pi’s training data to reduce the chance of toxic language creeping into its responses. “We're quite selective about what goes into the model,” he says. “We do take a lot of information that’s available on the open web, but not absolutely everything.” Suleyman, who cofounded the AI company Deepmind, which is now part of Google, also says that limiting the length of Pi’s replies reduces—but does not wholly eliminate—the likelihood of factual errors.
Based on my own time chatting with Pi, the result is engaging, if more limited and less useful than ChatGPT and Bard. Those chatbots became better at answering questions through additional training in which humans assessed the quality of their responses. That feedback is used to steer the bots toward more satisfying responses.
Suleyman says Pi was trained in a similar way, but with an emphasis on being friendly and supportive—though without a human-like personality, which could confuse users about the program’s capabilities. Chatbots that take on a human persona have already proven problematic. Last year, a Google engineer controversially claimed that the company’s AI model LaMDA , one of the first programs to demonstrate how clever and engaging large AI language models could be, might be sentient.
Pi is also able to keep a record of all its conversations with a user, giving it a kind of long-term memory that is missing in ChatGPT and is intended to add consistency to its chats.
“Good conversation is about being responsive to what a person says, asking clarifying questions, being curious, being patient,” says Suleyman. “It’s there to help you think, rather than give you strong directional advice, to help you to unpack your thoughts.” Pi adopts a chatty, caring persona, even if it doesn’t pretend to be human. It often asked how I was doing and frequently offered words of encouragement. Pi’s short responses mean it would also work well as a voice assistant, where long-winded answers and errors are especially jarring. You can try talking with it yourself at Inflection's website.
The incredible hype around ChatGPT and similar tools means that many entrepreneurs are hoping to strike it rich in the field.
Suleyman used to be a manager within the Google team working on the LaMDA chatbot. Google was hesitant to release the technology, to the frustration of some of those working on it who believed it had big commercial potential.
So far, Inflection has raised $225 million in funding from investors that include LinkedIn cofounder Reid Hoffman, and it is reportedly seeking hundreds of millions more. The company hasn’t revealed a plan for making money, but it isn’t hard to imagine one of the deep-pocketed tech companies that doesn’t have its own ChatGPT—Amazon or Apple, say—paying handsomely to acquire the company’s technology and talent.
Inflection is just one of several companies building powerful AI chatbots with a more emotional side.
Character AI , which recently raised $150 million in funding and attained a valuation of over $1 billion, offers chatbots that can assume a wide range of personas—and which, unlike Pi, are free to make things up. Noam Shazeer, Character’s CEO, told me recently that many people use his company’s bots for emotional support, and even romantic connections, although the company blocks sexual content. He says users like to post examples of jokes their bots have come up with on social media.
The advances demonstrated by ChatGPT have many now worried about the long-term risks posed by AI. But if large numbers of people start chatting with friendly, emotionally engaging chatbots, we could see unpredictable results relatively soon. What will happen if companies like Inflection and Character make chatbots more persuasive and potentially addictive to chat with? I’m not sure, but I do know a couple of bots that would be only too happy to talk it over.
"
|
13,518 | 2,023 |
"AI-focused tech firms locked in ‘race to the bottom’, warns MIT professor | Technology | The Guardian"
|
"https://www.theguardian.com/technology/2023/sep/21/ai-focused-tech-firms-locked-race-bottom-warns-mit-professor-max-tegmark"
|
"Physicist Max Tegmark says competition too intense for tech executives to pause development to consider AI risks
Max Tegmark wrote a landmark letter in March 2023, calling for a pause in AI development to fully understand the dangers.
Photograph: Dado Ruvić/Reuters
Technology
AI-focused tech firms locked in ‘race to the bottom’, warns MIT professor
Thu 21 Sep 2023 00.00 EDT
The scientist behind a landmark letter calling for a pause in developing powerful artificial intelligence systems has said tech executives did not halt their work because they are locked in a “race to the bottom”.
Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in developing giant AI systems.
Despite support from more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, the document failed to secure a hiatus in developing the most ambitious systems.
Speaking to the Guardian six months on, Tegmark said he had not expected the letter to stop tech companies working towards AI models more powerful than GPT-4, the large language model that powers ChatGPT , because competition has become so intense.
“I felt that privately a lot of corporate leaders I talked to wanted [a pause] but they were trapped in this race to the bottom against each other. So no company can pause alone,” he said.
The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”, and urged governments to intervene if a moratorium on developing systems more powerful than GPT-4 could not be agreed between leading AI companies such as Google , ChatGPT owner OpenAI and Microsoft.
It asked: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?” Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, said he viewed the letter as a success.
“The letter has had more impact than I thought it would,” he said, pointing to a political awakening on AI that has included US Senate hearings with tech executives and the UK government convening a global summit on AI safety in November.
Expressing alarm about AI had gone from being taboo to becoming a mainstream view since the letter’s publication, Tegmark said. The letter from his thinktank was followed in May by a statement from the Center for AI Safety, backed by hundreds of tech executives and academics, declaring that AI should be considered a societal risk on a par with pandemics and nuclear war.
“I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites. The letter legitimised talking about it; the letter made it socially acceptable.
“So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose thinktank researches existential threats and potential benefits from cutting-edge technology.
Fears around AI development range from the immediate, such as the ability to generate deepfake videos and mass-produce disinformation, to the existential risk posed by super-intelligent AIs that evade human control or make irreversible and highly consequential decisions.
Tegmark warned against describing the development of digital “god-like general intelligence” as a long-term threat, citing some AI practitioners who believe it could happen within a few years.
The Swedish-American scientist said November’s UK AI safety summit, to be held at Bletchley Park, was a “wonderful thing”. His thinktank has said the summit should target three achievements: establishing a common understanding of the severity of risks posed by AI; recognising that a unified global response is needed; and embracing the need for urgent government intervention.
He added that a hiatus in development was still needed until global agreed safety standards were met. “Making models more powerful than what we have now, that has to be put on pause until they can meet agreed-upon safety standards.” He added: “Agreeing on what the safety standards are will naturally cause the pause.” Tegmark also urged governments to take action on open-source AI models that can be accessed and adapted by members of the public. Mark Zuckerberg’s Meta recently released an open-source large language model, called Llama 2, and was warned by one UK expert that such a move was akin to “giving people a template to build a nuclear bomb”.
“Dangerous technology should not be open source, regardless of whether it is bio-weapons or software,” Tegmark said.
"
|
13,519 | 2,016 |
"The Apple-FBI Fight Isn't About Privacy vs. Security. Don't Be Misled | WIRED"
|
"https://www.wired.com/2016/02/apple-fbi-privacy-security"
|
"Brian Barrett
Security
The Apple-FBI Fight Isn't About Privacy vs. Security. Don't Be Misled
Throughout the ongoing fight between Apple and the FBI over custom access to an iPhone used by one of the two terrorists who killed 14 people in San Bernardino, the government has framed the argument as a simple trade-off: You must surrender a little privacy if you want more security. The scales don’t balance quite so neatly, though; there’s nothing secure about giving the FBI their way. Still, it's been an effective way for the government to win over the public, on its way to trying to win over the courts.
FBI director James Comey most recently pushed the dichotomy in an op-ed for Lawfare. “We have awesome new technology that creates a serious tension between two values we all treasure: privacy and safety,” he writes. “That tension should not be resolved by corporations that sell stuff for a living. It also should not be resolved by the FBI, which investigates for a living.” It also should not be framed as an absolute. Doing so presents the issue to the American public in a way that makes the FBI’s request palatable while obfuscating the potentially dangerous precedent it would represent.
The case against the FBI’s insistence that it is not asking for all that much has been made repeatedly, both here and elsewhere.
In fact, a team of researchers offered a version of it last year when they published the prescient paper “Keys Under Doormats” [PDF].
“As computer scientists with extensive security and systems experience, we believe that law enforcement has failed to account for the risks inherent in exceptional access systems,” the group wrote in July. The risks of that type of backdoor include adding complexity to an already intricate system that’s difficult to keep secure, and the impossibility of creating access that would be used solely by the FBI. Any backdoor accessible to law enforcement can and also would be used by a hacker for any number of nefarious reasons.
“It would be great if we could make a backdoor that only the FBI could walk through,” says Nate Cardozo, an attorney with the Electronic Frontier Foundation. “But that doesn’t exist. And literally every single mathematician, cryptographer, and computer scientist who’s looked at it has agreed.” The current Apple case doesn’t involve a backdoor in the traditional sense. The FBI is asking Apple to create a tool that would circumvent a feature that deletes all of the information on the phone after 10 failed password attempts. “We don’t want to break anyone’s encryption or set a master key loose on the land," Comey wrote. But the authority it would grant the FBI could be used again across a range of scenarios that weaken our privacy, sure, but our security as well.
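The auto-wipe feature at issue exists because a passcode's search space is tiny; with the wipe and retry delays removed, exhausting it is trivial. A back-of-the-envelope sketch, where the guessing rate is an illustrative assumption rather than a measured figure for any real device:

```python
# Why the 10-try wipe matters: a 4-digit passcode has only 10,000
# possibilities. The rate below is an illustrative assumption, not a
# measured figure for any real phone.
ATTEMPTS_PER_SECOND = 12.5  # assumed hardware-assisted guessing rate

def time_to_exhaust(digits: int) -> float:
    """Seconds to try every numeric passcode of the given length."""
    return 10 ** digits / ATTEMPTS_PER_SECOND

print(f"4 digits: {time_to_exhaust(4) / 60:.0f} minutes")
print(f"6 digits: {time_to_exhaust(6) / 3600:.1f} hours")
```

At the assumed rate, a 4-digit space falls in minutes and a 6-digit space within a day; only the wipe-after-ten-failures rule makes brute force impractical.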
“The precedent isn’t that they unlock one phone,” says Jake Williams, CEO of Rendition Infosec. “There’s no reason down the road they can’t go to Microsoft, or anyone else, for that matter, to create some intentionally vulnerable applications.” In the scenario Williams envisions, the FBI could force Microsoft to send out a malicious Windows update to any machine connected to a specific IP address, like the Wi-Fi at a coffee shop.
In truth, you don’t need to look as far ahead as that. In a newly unsealed court brief , Apple lawyer Marc Zwillinger reveals that the company has challenged at least a dozen recent FBI requests to unlock iPhones by various means. In some cases, Apple could extract the requested data without creating a new tool. In four instances cited in the brief, though, the iPhones in question run iOS 8 or later, the operating system a new tool would be designed to circumvent.
Not only, therefore, is this not just about “this one phone,” as the FBI has insisted. It’s not even about hypothetical future cases. The ruling, or at least the precedent it sets, could assist the government in at least four instances Apple is currently fighting. It also could apply beyond our borders, in countries with whose governments have concerning human rights records.
“If China [today] demanded that Apple put in a backdoor, Apple would say no,” says Cardozo, adding that the company could threaten to pull its products from the market, creating a public relations nightmare for the Chinese government. “That equation changes once Apple accedes to an FBI order. If the FBI can compel Apple to do it, and it’s publicly known that Apple has given the FBI this key, then China has a very different calculus … The PR around a Chinese demand gets a lot better for China, and a whole hell of a lot worse for Apple.” And increasingly, it’s the PR that matters.
What’s important to understand about the San Bernardino iPhone case is that its very existence is a public relations maneuver.
“The FBI chose this case very, very carefully,” says Cardozo, who argues that law enforcement sees it as the “perfect case” for litigating the issue in the absence of backdoor-friendly legislation from President Obama and Congress. That it’s a terrorism case, in particular, spurs sympathies to align with law enforcement, regardless of how much benefit the FBI would actually get from the access it has requested.
“I think if the FBI said hey, we want to architect backdoors into devices so that we can get access to whatever we need, whenever we need it---need being a relative term there---I think the public in general would not be for that,” says Williams.
So far, buoyed by the specter of terrorism and the false duality of privacy and security, the public in general is buying what the FBI is selling. A recent Pew Research poll found that 51 percent of Americans think Apple “Should unlock the iPhone to assist the ongoing FBI investigation,” while 38 percent say Apple should not. (The rest had no opinion.) Even the survey itself shows how effective the FBI’s messaging has been. Apple is not being asked to unlock an iPhone; it’s being asked to create software that would help the FBI unlock it. After which, there’s every reason to expect Apple and every other tech company will be asked to create more software that could be used to diminish even more civil liberties.
At the same time, the FBI has managed to attack Apple’s posture of altruism, saying in a recent court filing that the company’s resistance was rooted in “its concern for its business model and public brand marketing strategy,” not larger security concerns.
It’s an odd construction in that it assumes the two are mutually exclusive. Security has long been part of Apple’s sales pitch, but that doesn’t diminish its importance.
For its part, Apple has posted both a strident defense of its opposition and an FAQ for customers, which reiterate largely the same points: That compliance would open the door to a host of security and privacy oversteps. It has also reached out directly to the press, both to clarify its position and embarrass its opponent.
In many ways, Apple’s is the tougher sell, because the way computer security works means that it has to be absolute. Any precedent that says a company can be compelled to weaken its security will have injurious consequences, full stop. There are no shades of grey, no matter what politicians and law enforcement might suggest.
“You hear over and over and over again, from the pro-backdoor camp, that we need to strike a balance, we need to find a compromise,” says Cardozo. “That doesn’t work. Math doesn’t work like that. Computer security doesn’t work like that … It’s kind of like climate change. There are entrenched political interests on one side of a ‘debate,’ and on the other side is the unanimous scientific and technical community.” Ultimately, the reason this debate is happening at all is that there’s no legislative guidance around encryption. The All Writs Act that the FBI has cited dates to 1789, and even the most recent supporting precedent dates to 1977.
Until Congress acts, the FBI will continue to attempt to gain access through the courts.
“These issues will be decided in Congress,” Bill Gates said in a recent Bloomberg TV interview, attempting to clarify previous comments that had been wrongly interpreted as him favoring the FBI. “You don’t want to just take the minute after a terrorist event and swing that direction, nor do you want to swing away from government access when you get some abuse being revealed. You want to strike that balance.” That resolution may be forthcoming. In an open letter to Comey today, US Representative Ted Lieu, a California Democrat, asked that the FBI withdraw its case in favor of letting the legislative branch do its job. “We should all take a breath and talk to each other,” he writes, echoing Comey’s call that Americans “take a deep breath” about the debate, “rather than use a lawsuit to circumvent the critical and necessary policy discussions.” Lieu, one of four sitting federal lawmakers with a computer science degree, has dabbled in encryption legislation before, having recently proposed a bill that would preempt states from haphazardly passing their own anti-encryption laws. At the time, he was hesitant about introducing legislation that would have a broader impact, but his stance appears to have evolved.
“The precedent set in this case would essentially enact a policy proposal to weaken encryption that has not yet gained traction in Congress and was previously rejected by the White House,” says Lieu. “Let Congress, stakeholders, and the American people debate and resolve these difficult issues, not unelected judges based on conflicting interpretations of a law passed 87 years before Alexander Graham Bell invented the telephone.” And when those debates do happen, let’s also make sure they’re not framed by misleading dichotomies like “privacy versus security.” We can’t give up one without presenting a grave threat to the other.
"
|
13,520 | 2,016 |
"Why Are We Fighting the Crypto Wars Again? | WIRED"
|
"https://www.wired.com/2016/03/why-are-we-fighting-the-crypto-wars-again"
|
"Steven Levy
Backchannel
Why Are We Fighting the Crypto Wars Again?
Last week I arrived in San Francisco to hear good news: Whitfield Diffie and Martin Hellman had won the ACM A.M. Turing Award.
This is the Nobel Prize of computer science, with a million-dollar check and priceless prestige. The choice of these 2016 honorees is both long overdue and appropriately timely. Overdue because their contribution to the field (and to the world) was public key cryptography , which they created in 1976. And timely because the consequences of their invention — which would lead to the development of online privacy tools, whether the government liked it or not — are once again a flash point of Constitutional proportions. The announcement of the award came at the massive annual RSA Conference.
The gathering itself is a symbol of the growth of encryption in the public sector. The conference began in 1991, as a small gathering of a few dozen scientists and businesspeople. At the time, cryptography was only beginning to come out of a long period in the shadows. For decades, crypto had been considered a taboo topic of discussion, so deep into classified territory that when academics wrote papers with cryptographic implications, the government would quickly classify those documents, banning access even to their authors. Open statements almost never came from the National Security Agency (its name was seldom uttered, even in Congress), and its directors avoided public appearances with a zeal that made Howard Hughes look like Donald Trump.
Then came Diffie and Hellman, whose breakthrough enabled people to send encrypted messages to others without prior arrangements. Previously, encryption had involved the exchange of a key to unlock a message — so you had to agree beforehand on the key, and then protect it, because anyone who had the key could read the private message. Their insight did away with that, by using pairs of private and public digital keys. (You scramble a message with your recipient’s widely available public key, and now only the recipient can read it, by unscrambling the message with the matching private key.) This enabled e-commerce, digital signatures, and many other things.
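The mechanics behind that breakthrough can be seen in the Diffie–Hellman key exchange itself: each party combines the other's public value with its own private one, and both arrive at the same shared secret, which an eavesdropper seeing only the public traffic cannot easily compute. A toy example with deliberately tiny, insecure numbers:

```python
# Toy Diffie-Hellman key exchange. The numbers are tiny and insecure;
# real deployments use primes thousands of bits long.
p, g = 23, 5           # public: a prime modulus and a generator

alice_private = 6      # chosen secretly, never transmitted
bob_private = 15       # chosen secretly, never transmitted

alice_public = pow(g, alice_private, p)   # sent over the wire
bob_public = pow(g, bob_private, p)       # sent over the wire

# Each side mixes the other's public value with its own private one.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared         # identical shared secret
print(alice_shared)  # → 2
```

An eavesdropper sees p, g, and both public values, but recovering either private exponent from them is the discrete logarithm problem, which is what makes the exchange secure at real key sizes.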
But the authorities balked at the spread of this technology. The first commercial products using cryptography were essentially stifled by government export regulations. (If a company could not sell its product overseas, why bother to produce it?) The coming explosion of online commerce demanded that we protect our information, but law enforcement and intelligence agencies were dead-set against anything that would thwart their access to private information. Thus began the Crypto Wars. The government warned that unrestrained crypto would empower “bad guys” (terrorists, drug lords, kiddie porn purveyors). Business interests and privacy advocates argued that the only ones hurt would be citizens seeking to protect their information. (Disclosure boast: I wrote the book on this, and it’s still in print!) Eventually the Clinton administration came to realize that electronic commerce and other digital pursuits simply could not happen without a free flow of crypto, and relaxed the standards. In short, a war for secure communications and storage was fought, and won, because we could not live without crypto. Also, there was that Constitution thing, since cryptography was a form of speech.
Today public key is baked into every browser and private transaction, used many times a day by all of us. And the RSA Conference, the gathering of the commercial crypto community that once fit comfortably into a suburban hotel ballroom, now hosts 40,000 people, with speakers that this year included the secretary of the defense, Sean Penn, and a former head of the NSA. (Last year the sitting director was there.) To exhume a term used endlessly by policy wonks and cypherpunks back in the day, “the crypto genie is out of the bottle.” You could say that from the moment Diffie and Hellman made their discovery, this was inevitable, since math cannot be censored, and the physics of security dictates that a system that allows a government backdoor simply can’t do its job of protecting information.
But this was a hard-won lesson. For most of the ’90s, the government’s “compromise” on this issue — or its (pipe) dream — was that you could concoct a system where everything was locked down tight, but some carefully designed kink in the process would let the Feds get the information if they really wanted it. The NSA came up with an elaborate scheme to do just that, called “key escrow.” Its embodiment was the “Clipper Chip.” This was a piece of silicon designed to put the all-important keys (which unscramble encrypted messages, and are supposed to be held only by those for whom the messages are directed) in that aforementioned escrow, so that when the government needed to decode a message, it could ask for and receive the key that changed ciphertext to plaintext.
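The structural problem with key escrow is that it concentrates a copy of every user's key in one place, so whoever holds, or breaches, the escrow can read everything. A toy model of that failure mode, using a throwaway XOR "cipher" as a stand-in for real encryption (illustrative only):

```python
# Toy model of key escrow: every device's key is also deposited in a
# central database. The XOR "cipher" is a placeholder, not real crypto.
import secrets

escrow_db = {}  # the escrow agent's store: device id -> key copy

def make_device(device_id: str) -> bytes:
    key = secrets.token_bytes(16)
    escrow_db[device_id] = key      # mandated key deposit
    return key

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Symmetric: the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = make_device("phone-001")
ciphertext = xor_cipher(key, b"meet at noon")

# Lawful access works -- but so does any attacker who steals escrow_db.
stolen = dict(escrow_db)            # one breach exposes every key
print(xor_cipher(stolen["phone-001"], ciphertext))
```

The point the sketch makes is the one critics made of the Clipper Chip: the escrow database is a single point of failure for every escrowed device at once.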
It was an unwieldy and impractical idea — especially since people who wanted security had options to buy stuff without Clipper Chips — and its demise helped lead the government to the conclusion that people highly motivated to protect their information were going to use crypto anyway. In theory at least, intelligence and law enforcement agreed to accept the fact that crypto was here to stay, and if they wanted to gain access to encrypted communications and files, they would do so by warrants and their own cryptanalysis, and not by demanding that the systems themselves should be weakened.
But now it seems the United States is welching on that deal. That is the essence of the current controversy over an Apple iPhone once used by one of the murderous and despicable terrorists who killed 14 people in San Bernardino.
Apple has set up the onboard encryption systems on its phone so that the company does not hold the keys, and therefore can’t decode the text even if ordered to. Through a 227-year-old statute called the All Writs Act, the US essentially is demanding that Apple write new software that will make it easier for the government to break the code. (Hard to believe this is what those bewigged lawmakers had in mind in 1789.) The Feds originally argued that their Apple motion was a one-time demand for this one phone, by this one really, really bad person. But it turns out that other prosecutors have their own encrypted phones with potential evidence in a variety of other cases. So think of this demand as a bespoke Clipper Chip, created by private-sector engineers who must produce it against their will. By demanding that Apple change its operating system to get access to a single iPhone — and then another, and another, and another — we are in the thick of Crypto Wars Redux.
Or as Diffie told me last week, at a small lunch celebrating the honor he shared with his collaborator, “This is the future we’re fighting about.” Again.
Three big things have happened since the first round of the Crypto Wars. First, of course, was 9/11. The second was the Snowden revelations, which exposed how the government had stepped up its surveillance of communications, greatly increasing its cache of private information despite the existence of crypto.
And the third, definitely related to factor two, was the explosion of new technologies — notably the iPhone and its progeny — that put even more of our personal information in the cloud. (In 2001, Google was just getting started.) All of these things make the stakes much higher this time around. But here’s something that didn’t happen: a strong crypto infrastructure that protected our information and privacy. The fact is that while the security industry has boomed, our information really isn’t much safer than it was when we were fighting the first crypto war. This is because the tech world has been slow to build strong encryption into our systems as a default. It’s been too hard to use, and all too often businesses and institutions don’t even take obvious steps to secure data. Chronic lapses in our communications software lead to disasters on an almost daily basis. (Two horrifying examples from a long list: Anthem Blue Cross’s breach that exposed records of up to 80 million Americans; and the theft of over 21 million super-sensitive government background checks from the Office of Personnel Management.) We simply haven’t used our capabilities to make our electrical grids, our credit card systems, and our ISPs bulletproof. As a result, the public has not fully reaped the spoils from winning that first crypto war.
But that’s changing. Because of the endless chain of spectacular security failures and the Snowden news that the NSA is grabbing everything it can, the tech industry is finally ramping up its security. American companies are worried that foreign customers might regard their products as direct conduits to American authorities. So they have changed their practices for moving information between their data centers. Now, confirming government’s biggest nightmare, Apple has planted a flag in the ground for privacy — endeavoring to scramble data on its iPhones so only customers can access them.
Is it any wonder that the government is rebooting the crypto wars? For the first time, it’s really struggling with the results of the first war, as more information is now encrypted, increasingly in a manner the government finds really hard (or impossible) to decode.
Apple has been impressively aggressive in its refusal to comply with that order, even though this test case involves possible information from a murderous terrorist. The company’s court filings outline with withering precision how complying with the government order — to essentially rewrite part of its operating system, an action it regards as an act of “compelled speech” — violates its rights and compromises the rights of its customers. With John Oliver-strength sarcasm, it refers to the software the FBI has ordered it to produce as “GovtOS.” Normally measured in everything but public displays of affection for his products, Apple CEO Tim Cook has shed his technocratic reserve and channeled his inner Clint Eastwood, invoking a moral basis for Apple’s stance. (While generally siding with him, Cook’s peers among the tech elite have been less confrontational. And I should say that my employer, Medium, was a party to one of the many amicus briefs filed in support of Apple in this case.) (Photo caption: Martin Hellman and Whitfield Diffie on March 1, after the announcement that they had won the Turing Award.)
As with the first round of the crypto wars, the stakes could not be higher. Once again, the government is seeking to control that genie first released by Diffie and Hellman. But the physics of computer security have not changed. Last July, a panel of fifteen eminent security specialists and cryptographers — many of whom are veterans of the first crypto war — released a report confirming there was no way for the government to demand a means of bypassing encryption without a dire compromise of security. It just doesn’t work.
One final anecdote. In the fall of 2001, I attended a book party for James Carville, the former Clinton advisor. Whether intentional or not, the party was held at a bar only a couple of streets north of Ground Zero. The site was still crawling with cranes and other machinery, untangling and carting off the wreckage. The pulverized residue of that fateful Tuesday — an unforgettable effluvium of dust and death — still lingered in that sector. But the party was very well attended, because it was broadly rumored that Bill himself would be dropping in. When the time window for the party ended, no one left, as the word was out that the former president was still intending to appear.
Sure enough, at around 9 — a half hour after the event was supposed to end — the man appeared. He immediately began working the room. At one point, my friend Jonathan Alter introduced us, explaining that I had written a book about the encryption controversy that had taken place during his administration. Clinton vigorously pumped my hand and fixed his eyes to mine, sucking me into a scary and exhilarating tunnel of total attention.
“How’d we do?” he asked me, and waited for my answer as if it were the most important question he had asked that day.
“Well,” I told the former president. “It took a while, but you finally got it right.” Yes we did, but not enough. The government kept encryption legal, but benignly neglected it, while our infrastructure, our business plans, and our personal secrets lay exposed to thieves, vandals and foreign powers. Security flaws were a pain to users, but a useful tool for law enforcement and intelligence agencies. Now, post-Snowden, our tech companies are finally taking steps to implement strong-encryption-by-default, the best way to ensure security and privacy. The FBI’s response? Clipper Chip redux.
And we’re back at square one.
Editor at Large X Topics Backchannel Brendan I. Koerner Andy Greenberg Brandi Collins-Dexter Lauren Smiley Steven Levy Angela Watercutter Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,521 | 2,014 |
"What Is the Dark Web? | WIRED"
|
"https://www.wired.com/2014/11/hacker-lexicon-whats-dark-web"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Andy Greenberg Security Hacker Lexicon: What Is the Dark Web? WIRED Save this story Save Save this story Save The Dark Web is a collection of thousands of websites that use anonymity tools like Tor and I2P to hide their IP address. While it's most famously been used for black market drug sales and even child pornography, the Dark Web also enables anonymous whistleblowing and protects users from surveillance and censorship.
With the rise and fall of the Silk Road—and then its rise again and fall again—the last couple of years have cast new light on the Dark Web. But when a news organization as reputable as 60 Minutes describes the Dark Web as “a vast, secret, cyber underworld” that accounts for “90 percent of the Internet,” it’s time for a refresher.
The Dark Web isn’t particularly vast, it’s not 90 percent of the Internet, and it’s not even particularly secret. In fact, the Dark Web is a collection of websites that are publicly visible, yet hide the IP addresses of the servers that run them. That means anyone can visit a Dark Web site, but it can be very difficult to figure out where they’re hosted—or by whom.
The majority of Dark Web sites use the anonymity software Tor, though a smaller number also use a similar tool called I2P. Both of those systems encrypt web traffic in layers and bounce it through randomly-chosen computers around the world, each of which removes a single layer of encryption before passing the data on to its next hop in the network. In theory, that prevents any spy—even one who controls one of those computers in the encrypted chain—from matching the traffic’s origin with its destination.
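The layer-peeling idea can be sketched with a toy cipher. XOR stands in for the real per-hop encryption here, and the three fixed relays are an illustrative simplification; actual Tor negotiates keys per circuit and uses fixed-size cells.

```python
import secrets

HOPS = ("entry", "middle", "exit")
KEYS = {hop: secrets.token_bytes(64) for hop in HOPS}  # one key per relay

def xor_layer(key: bytes, cell: bytes) -> bytes:
    """Toy stand-in for one layer of real encryption. NOT secure."""
    return bytes(k ^ c for k, c in zip(key, cell))

def build_onion(message: bytes) -> bytes:
    cell = message.ljust(64, b"\x00")       # pad to a fixed-size cell
    for hop in reversed(HOPS):              # client wraps the exit layer first
        cell = xor_layer(KEYS[hop], cell)
    return cell

def route(cell: bytes) -> bytes:
    for hop in HOPS:                        # each relay peels exactly one layer,
        cell = xor_layer(KEYS[hop], cell)   # so none sees origin and content together
    return cell.rstrip(b"\x00")

assert route(build_onion(b"hello, hidden service")) == b"hello, hidden service"
```

The point of the layering is visible in `route`: the entry relay knows who sent the cell but sees only ciphertext; the exit relay sees the plaintext but not who sent it.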
When web users run Tor, for instance, any sites they visit can’t easily see their IP address. But a web site that itself runs Tor—what’s known as a Tor hidden service—can only be visited by Tor users. Traffic from both the user’s computer and the web server takes three hops to a randomly chosen meet-up point in the Tor network, like anonymous bagmen trading briefcases in a parking garage.
Just because the IP addresses of those sites are kept hidden, however, doesn’t mean they’re necessarily secret. Tor hidden services like the drug-selling sites Silk Road, Silk Road 2, Agora and Evolution have had hundreds of thousands of regular users; anyone who runs Tor and knows a site’s URL, which for Tor hidden services ends in ".onion," can easily visit those illegal online marketplaces.
When news sites mistakenly describe the Dark Web as accounting for 90% of the Internet, they’re confusing it with the so-called Deep Web, the collection of all sites on the web that aren’t reachable by a search engine. Those unindexed sites do include the Dark Web, but they also include much more mundane content like registration-required web forums and dynamically-created pages like your Gmail account—hardly the scandalous stuff 60 Minutes had in mind. The actual Dark Web, by contrast, likely accounts for less than .01 percent of the web: Security researcher Nik Cubrilovic counted less than 10,000 Tor hidden services in a recent crawl of the Dark Web, compared with hundreds of millions of regular websites.
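Cubrilovic’s point is simple arithmetic. A quick sanity check (the total-site count below is an order-of-magnitude assumption for the mid-2010s web, not a measured figure):

```python
hidden_services = 10_000       # upper bound from Cubrilovic's crawl
regular_sites = 200_000_000    # assumed order of magnitude for ordinary websites

share = hidden_services / regular_sites
# Even with a generous hidden-service count, the Dark Web is a rounding error.
assert share < 0.0001          # i.e. under 0.01 percent of the web
```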
Though the Dark Web is most commonly associated with the sale of drugs, weapons, counterfeit documents and child pornography—and all those vibrant industries do in fact take advantage of Tor hidden services—not everything on the Dark Web is quite so “dark.” One of the first high-profile Dark Web sites was the Tor hidden service WikiLeaks created to accept leaks from anonymous sources. That idea has since been adapted into a tool called SecureDrop, software that integrates with Tor hidden services to let any news organization receive anonymous submissions. Even Facebook has launched a Dark Web site aimed at better catering to users who visit the site using Tor to evade surveillance and censorship.
Just how completely Tor can evade the surveillance of highly-resourced law enforcement and intelligence agencies, however, remains an open question. In early November, a coordinated action by the FBI and Europol known as Operation Onymous seized dozens of Tor hidden services, including three of the six most popular drug markets on the Dark Web. For now, just how the feds located those sites remains a mystery; some security researchers speculate that government hackers used so-called "denial of service" attacks that flood Tor relays with junk data to force target sites to use Tor relays they controlled, thus tracing their IP addresses. Or they may have simply used old-fashioned investigative techniques such as turning administrators into informants, or found other hackable vulnerabilities in the target sites.
Either way, the message is clear: Even on the Dark Web, it only takes a few small cracks to let the light in.
Hacker Lexicon is WIRED's explainer series that seeks to de-mystify the jargon of information security, surveillance and privacy.
"
|
13,522 | 2,016 |
"FBI's Tor Hack Shows the Risk of Subpoenas to Security Researchers | WIRED"
|
"https://www.wired.com/2016/02/fbis-tor-hack-shows-risk-subpoenas-security-researchers"
|
"Andy Greenberg Security FBI's Tor Hack Shows the Risk of Subpoenas to Security Researchers
Chip Somodevilla/Getty Images
Computer security researchers who expose hackable vulnerabilities in digital products face plenty of occupational hazards: They can have their work censored by threats of lawsuits from the companies whose products they hack, or they can even be criminally indicted if their white-hat hacking runs afoul of the Computer Fraud and Abuse Act. But one still-mysterious encounter between security researchers and the law points to a newer, equally troubling possibility: They can have their work subpoenaed in a criminal investigation and used as a law enforcement tool.
A judicial ruling released yesterday in the case of Brian Farrell, an alleged staffer of the defunct Dark Web drug site Silk Road 2, confirmed what many who followed that black market’s downfall have suspected for months: that the FBI was able to bypass the anonymity software Tor — the central tool used by the Silk Road 2 and its buyers and sellers to evade the cops — with information it obtained from a subpoena to Tor-focused security researchers at Carnegie Mellon University’s Software Engineering Institute. In his ruling, Judge Richard Jones of the Western District of Seattle wrote that Farrell’s IP address was obtained through a subpoena to Carnegie Mellon while the university researchers were running an experiment on the Tor network designed to show how its anonymous users and servers could be identified.
This chain of events should serve as a warning to the computer security research community. It proves that FBI agents somehow learned of research intended to be openly shared with a community that would fix the security flaws it exposed, but instead they subpoenaed it to be used in secret to identify and arrest criminal suspects. And they could do it again.
“When you do experiments on a live network and keep the data, that data is a record that can be subpoenaed,” says Matt Blaze, a computer scientist at the University of Pennsylvania. “As academics, we’re not used to thinking about that. But it can happen, and it did happen.” That’s an unexpected risk that security researchers — academic, corporate, and independent alike — need to consider before gathering private data on witting or unwitting subjects, even if they plan to keep that data unpublished or redact it in their public release. That specter of a subpoena, argues computer-security-focused defense lawyer Tor Ekeland, could create a “chilling effect,” limiting researchers’ behavior for fear that their test subjects could become subjects of a criminal indictment. “If there’s a criminal investigation, yes, the FBI or the SEC or the DEA can issue an administrative subpoena for your data,” Ekeland says. “If you’re a researcher, you need to think: Am I going to get subpoenaed here? Should I be gathering this information and risking putting it into the wild?” The FBI's subpoena could feasibly have gone beyond private data to include Carnegie Mellon's actual Tor-cracking technique, Ekeland argues. "It seems like they’re trying to subpoena surveillance techniques," he says. "They’re trying to acquire intel-gathering methods under the pretext of an individual criminal investigation." Exactly what the Carnegie Mellon researchers handed over to the FBI remains far from clear. But in an abstract on the website of the Black Hat hacker conference, where they planned to present their Tor-focused research in August of 2014, they described it as a serious vulnerability that would allow them to identify both Tor users and web servers that use Tor to hide their location, known as Tor hidden services. "Looking for the IP address of a Tor user? Not a problem. Trying to uncover the location of a Hidden Service? Done. We know, because we tested it, in the wild..." the abstract reads. The researchers promised to "dive into dozens of successful real-world de-anonymization case studies," including Tor-hidden drug markets and child pornography sites.
Just weeks after that abstract was posted, the talk was mysteriously pulled from the Black Hat conference schedule. And then in November of 2014, the FBI and Europol together launched Operation Onymous, a purge of the dark web that took down dozens of Tor hidden services including the Silk Road and several other top drug markets.
At the time, the law enforcement officials who led that operation boasted that they possessed a new, secret technique for identifying Tor-hidden sites — not merely a list of IP addresses that might have been collected by Carnegie Mellon's researchers. "This is something we want to keep for ourselves," the head of Europol's European Crime Center Troels Oerting told WIRED at the time.
"The way we do this, we can’t share with the whole world, because we want to do it again and again and again." That suggests law enforcement could have obtained not only raw data from its subpoena to Carnegie Mellon, but possibly the Tor-hacking technique it then used independently, says Runa Sandvik, a security researcher and former developer for Tor. "They could have subpoenaed all the information the researchers had," she says, which would include enough about the Carnegie Mellon Tor attack for the FBI to use or replicate it. She says this is comparable to what people are worried the FBI plans to do with the court order to Apple to create new iPhone firmware that would let the government crack encrypted phones. "If the FBI can subpoena a technique, can they reuse it?" she asks. "It sounds similar to the Apple/FBI case. They claim they’re just asking for help with a single iPhone 5c but as soon as the exploit is put together, it can be reused on other iPhones the FBI needs help with." But unlike Apple, it's important to note that Carnegie Mellon researchers may not have been the most resistant targets for a subpoena. After all, they worked for Carnegie Mellon's Software Engineering Institute, a government-contracted research lab with a separate building located off Carnegie Mellon's primary campus, and their work was funded by the Department of Defense. Those federal ties may have helped the FBI to learn about the researchers' results, and may have even led the researchers to willingly give up those results to FBI investigators. On the other hand, the researchers' aborted plans to present those results at Black Hat and another conference, the Association for Computing Machinery Conference on Computer and Communications Security, suggest that they didn't originally intend their work to become a secret tool of law enforcement.
Tor's custodians at the non-profit Tor Project, for their part, had previously accused Carnegie Mellon of not only willingly giving up its research, but being paid for it. Tor's co-founder Roger Dingledine told WIRED in November of last year that he believed Carnegie Mellon was paid $1 million for its Tor-breaking technique.
Carnegie Mellon responded by denying that accusation, and instead giving its first hint that the research had instead been subpoenaed.
"In the course of its work, the university from time to time is served with subpoenas requesting information about research it has performed,” a Carnegie Mellon spokesperson wrote in a statement. “The university abides by the rule of law, complies with lawfully issued subpoenas and receives no funding for its compliance.” When WIRED reached out to the Tor Project, it had no comment on the subpoena issue, and Carnegie Mellon would only refer back to its previous statement. The FBI didn't immediately return WIRED's request for comment.
The most solid lesson of this messy affair, argues Blaze, is that researchers need to consider the risk that their work could be subpoenaed, and protect their subjects accordingly. "You have to be aware of your ethical obligations to not expose your subjects to harm," he says. "Keep data in a form that retains as little information about individuals as possible, anonymize it, destroy it as soon as it’s no longer relevant. And if it's not absolutely necessary, don’t collect it in the first place."
"
|
13,523 | 2,021 |
"All the Ways Amazon Tracks You—and How to Stop It | WIRED"
|
"https://www.wired.com/story/amazon-tracking-how-to-stop-it"
|
"Matt Burgess Security All the Ways Amazon Tracks You—and How to Stop It
Photograph: SAJJAD HUSSAIN/Getty Images
Jeff Bezos has a hidden weapon: your data. While Amazon’s retail empire is built on a complex web of infrastructure and murky working practices, its selling success is based on an intricate knowledge of what millions of people buy and browse every day.
This story originally appeared on WIRED UK.
Amazon has been obsessed with your data since it was an online bookshop. Almost two decades ago the firm’s chief technology officer, Werner Vogels, said that the company tries to “collect as much information as possible” so it can provide people with recommendations. And, as Amazon has expanded, so has its data collection operation. “They happen to sell products, but they are a data company,” a former Amazon executive told the BBC in 2020.
Amazon knows a lot about you. That includes everything you do in Amazon’s ecosystem: from the thousands of searches you make on its app or website to your every individual click, scroll, and mouse movement. It’s a lot of data—and that’s just the beginning of it. People who have requested their data from Amazon have been sent hundreds of files, including a decade of their shopping history and thousands of voice clips recorded by Alexa devices.
“The reason online shopping through Amazon is so convenient is because the company has spent years consolidating its power and reach,” says Sara Nelson, director of the Corporate Data Exploitation program at the civil liberties group Privacy International. “The company is in a position to collect huge amounts of data—through its shopping platform, but also through its Ring cameras, Alexa voice assistants, web services, delivery services, streaming services, and its many other business streams.” And now Amazon is moving into health care —something that Nelson says is concerning.
Amazon’s data collection is also reportedly putting it on the wrong side of regulators. On June 10, The Wall Street Journal reported that data protection regulators in Luxembourg, where Amazon’s European headquarters is based, are preparing a $425 million GDPR fine in response to the way it uses people’s personal data—although no specific details were provided and an Amazon spokesperson declined to comment on the potential fine.
Anti-competition regulators are also looking at the company’s use of data. And governments are demanding more data from Amazon, including information from Ring and Alexa recordings.
Let’s start with Amazon’s privacy notice.
At more than 4,400 words it’s hardly surprising that most people don't read it, but it does clearly lay out what Amazon does with your data. Broadly, the information that Amazon collects about you comes from three sources. These are: the data you give it when you use Amazon (and its other services, such as reading Kindle books), the data it can collect automatically (information about your phone and your location), and, finally, information it gets from third parties (credit checks to find out if your account is fraudulent, for example).
The ultimate goal of all this data collection? To help sell you more things. Amazon will use your personal information—and everything it can learn about your likes and dislikes—to show you recommendations for stuff it thinks you might buy. More broadly, it can also get a sense of its most popular sellers and people’s shopping behavior.
“Personal data about shopping is incredibly sensitive,” says Carissa Véliz, an associate professor at the University of Oxford’s Institute for Ethics. “It can tell you about a person’s health status, their political tendencies, their sexual practices, and much more. People buy all kinds of things on Amazon, from books and movies to health-related items. Add to that personal data from Alexa, and it gets even more concerning.”

Amazon also uses information, such as your location, to make sure the things you buy actually get delivered to you. “We process your personal information to operate, provide, and improve the Amazon Services that we offer our customers,” the company’s privacy notice says. It also broadly sets out the legal arguments for all the data it collects.
Let’s look at the information you give to Amazon. You should assume that everything you do on Amazon’s website, apps, or any of its products is saved in some way. Every order you place on Amazon, every show you watch on Prime, every song you listen to on Amazon Music, and every request you make of Alexa is tracked and stored.
Amazon’s privacy policy says that, depending on your settings, you might give it your name, address, phone number, age, location, bank details, credit history information, playlists, watch lists, wish lists, voice recordings, Wi-Fi credentials, and any photographs you’ve uploaded to your profile, as well as the names, emails, and addresses of people in your contacts. And if you sell items on Amazon, it can get your VAT and other business information.
The automatic information Amazon collects is where things get a bit more creepy. This is all data about how and when you use Amazon products. Freelance journalist Riccardo Coluccini was sent a table with 12,048 rows detailing all the clicks he made on Amazon’s website. “The values concern the day and time when a specific page is visited, the IP address and the device used, the geolocation—if possible—based on the IP address, and the name of the telecommunication company that offers the internet service,” he wrote in 2018.
Similarly, other data requests to Amazon show how Kindle logs the date, time spent reading, and how often you copy or highlight parts of books. Likewise, Ring doorbells log every record of motion they detect and each tap made within the Ring app.
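A clickstream export like the one Coluccini received is just a big table, and a few lines of Python are enough to start summarizing one. The column names and sample rows below are invented for illustration; Amazon's real export format may differ:

```python
import csv
import io
from collections import Counter

# A tiny stand-in for the 12,048-row clickstream export described above.
# Column names and values are hypothetical, not Amazon's actual schema.
SAMPLE = """timestamp,ip_address,device,page
2018-03-01T09:12:44,203.0.113.7,iPhone,/gp/product/B00EXAMPLE
2018-03-01T09:13:02,203.0.113.7,iPhone,/gp/cart/view.html
2018-03-02T20:45:10,198.51.100.3,Windows PC,/gp/product/B00EXAMPLE
"""

def pages_by_frequency(csv_text):
    """Count how often each page appears in the clickstream."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["page"] for row in reader)

counts = pages_by_frequency(SAMPLE)
print(counts.most_common(1))  # the most-visited page first
```

Even this trivial aggregation shows how quickly raw clicks turn into a behavioral profile: which products you lingered on, from which devices, at what times of day.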
Amazon’s privacy notice details that it may automatically collect your IP address, login details, the location of computer, errors your device logs when using its services, your app preferences, cookie details, identifiers linked to your phone or computer, and all the URLs that you click, including page interaction information “such as scrolling, clicks, and mouse-overs.” It’s not uncommon for companies to collect and record all of your interactions with their products—they can be used to improve the products and identify bugs—but this information quickly adds up. Amazon says the data it collects can be used to improve its services as well as complying with legal obligations and other purposes. “We are not in the business of selling our customers' personal information to others,” its privacy notice says.
The final type of information Amazon collects about you is that from third parties. This can include updated delivery addresses if a delivery company finds there’s a problem with the one you provided; account and purchase information from “merchants with which we operate co-branded businesses”; information about “interactions” with Amazon’s subsidiaries (there’s a lot of them and they have their own privacy policies); information about devices you’ve linked with Alexa; and credit history it gets as part of its efforts to detect fraud.
It’s impossible to stop Amazon tracking you completely—if you’re going to shop with Amazon, then Amazon will collect your data—but there are a few steps you can take to limit the information that it can gather and use. Some of these are provided by Amazon itself, while others involve tweaking your browser settings and using other tools.
First, if you’re interested in the data Amazon has stored about you, you can use its download tools to access some of it. These will only provide you with a subset of what Amazon holds; you’ll have to make a subject access request to get everything. Beyond this, Amazon offers some controls that can improve your privacy.
Alexa and Ring, for instance, have their own privacy hubs where you can delete recordings and manage privacy settings. But for the majority of Amazon information, you’ll need your main account. You can turn off Amazon showing you personalized ads based on what it infers about your interests and likes—although you will still see recommendations based on your previous purchases on Amazon. (They can be tweaked, but not turned off, here.) You should also consider turning off advertising cookies, which allow third parties to collect your information. Amazon’s list of third-party cookie partners includes more than 75 companies, ranging from Facebook to mobile gaming giant King.
You can also turn off your browsing history on Amazon—though the impact of this is debatable. “Amazon can keep your browsing history hidden,” reads one Amazon settings page.
“When you turn your browsing history off, we will not show items you click on, or searches that you make from this device.” That doesn’t mean the data is deleted. Véliz says it can be incredibly difficult to get Amazon to delete personal data. “You’ll see that there is no way to delete your purchasing history—none of it, not even the things you bought a decade ago,” she says. “The best thing you can do is hide your purchasing history from yourself, which is ridiculous, and even that doesn’t work well sometimes.” (Archiving orders can be done from your orders page, which includes a drop-down menu for each year you’ve had an Amazon account).
Beyond Amazon’s own tools, if you want to limit how much Amazon can track you then you’ll need to use a privacy browser.
These can automatically disable third-party cookies and stop you from being tracked across the web. If you don’t want searches to be linked to your Amazon account, then you need to ensure you are logged out as well. There’s also the option of researching—and buying—products directly from manufacturers or other online sellers. And, if you really don’t want to be tracked by Amazon at all, there’s the nuclear option: Delete your Amazon account.
This story originally appeared on WIRED UK.
Topics: Wired UK, Amazon, privacy, data privacy
© 2023 Condé Nast. All rights reserved.
"
|
13,524 | 2,010 |
"Hackers, Troops Rejoice: Pentagon Lifts Thumb-Drive Ban (Updated) | WIRED"
|
"https://www.wired.com/dangerroom/2010/02/hackers-troops-rejoice-pentagon-lifts-thumb-drive-ban"
|
"Noah Shachtman Security Hackers, Troops Rejoice: Pentagon Lifts Thumb-Drive Ban (Updated) Soldiers, you are now cleared to use your thumb drives again. U.S. Strategic Command has lifted its ban on the tiny drives, memory sticks, CDs and other "removable flash media" on military networks.
The repeal, first reported by InsideDefense.com, may be good news for troops, who depend on the drives to move data in bandwidth-starved locations. But it may be good news for hackers, too. The original network security concerns which prompted the ban haven't really been addressed, one Strategic Command cyber defense specialist tells Danger Room: "Not much changed. StratCom simply does not have the support to enforce such a ban indefinitely." StratCom prohibited the drives' use back in November 2008 after the Agent.btz virus began working its way through military networks. A variation of the "SillyFDC" worm, Agent.btz spreads by copying itself from thumb drive to computer and back again. Once on a PC, "it automatically downloads code from another location. And that code could be pretty much anything," iDefense computer security expert Ryan Olson said at the time.
There was also talk that such infections might be deliberate attacks on the Defense Department's networks. The ban was billed in one StratCom e-mail as a way to counter " adversary efforts to penetrate, disrupt, interrupt, exploit or destroy critical elements of the GIG" [Global Information Grid].
Jim Lewis, with the Center for Strategic and International Studies, told 60 Minutes last November that "some foreign power" infiltrated the classified network of U.S. Central Command through the use of "thumb drives." (Later, Lewis said he did not have direct knowledge of the incident.) Troops in the field and at secure facilities often rely on thumb drives, CDs and other removable media to transport information when bandwidth is scarce and networks are unreliable. Even after the ban went into effect, removable storage continued to be used constantly as a substitute.
StratCom hopes to keep the spread of any viruses to a minimum by only allowing "properly inventoried, government-procured and owned devices" on military networks. But at least one StratCom specialist is skeptical that the limitations will have much of an impact.
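The "properly inventoried, government-procured and owned" restriction boils down to a whitelist check against a device inventory. Here is a minimal sketch of that policy core, assuming drives can be identified by serial number; the serials and the inventory are invented for illustration:

```python
# Minimal whitelist check in the spirit of StratCom's policy: only
# inventoried, government-issued drives are allowed on the network.
# Serial numbers here are invented for illustration.
APPROVED_SERIALS = {"DOD-USB-0001", "DOD-USB-0002"}

def is_authorized(serial):
    """Return True only for drives listed in the government inventory."""
    return serial in APPROVED_SERIALS

def filter_drives(attached):
    """Split attached drive serials into allowed and blocked lists."""
    allowed = [s for s in attached if is_authorized(s)]
    blocked = [s for s in attached if not is_authorized(s)]
    return allowed, blocked

# One issued drive, one store-bought drive plugged in:
allowed, blocked = filter_drives(["DOD-USB-0001", "STORE-BOUGHT-123"])
```

In practice the enforcement would live in the operating system (for example, refusing to mount mass-storage devices that fail the check); the lookup above is only the policy logic, and, as the StratCom specialist notes, it still depends on users actually routing their drives through it.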
"Simply put, DoD [Department of Defense] cannot undo 20+ years of tacitly utilizing worst IT security practices in a reasonable amount of time, especially when many of these practices are embedded in enterprise wide processes. While a more restrictive policy on such devices is useful and better than no policy at all, it still pivots on what I like to call the 'original sin' fallacy of cybersecurity: the unsubstantiated given in most policies that all users will always follow the rules and self-police," the specialist notes.
At the National Security Agency and other highly classified organizations, USB ports and writable drives are removed from desktop computers. Drivers of the devices are disabled. In many wings of the Defense Department, that would bring information-sharing to a grinding halt.
"Folks at all levels being routinely tasked to do things with their IT by senior leaders for which they are not provided the enterprise tools for and often require them to use poor security practices or violate existing policy to accomplishment," the StratCom specialist observes.
It would be like ordering a subordinate to hand-deliver a message by car to someone in 10 minutes – but that person is 10 miles away so they have to drive 60 mph. The law says the speed limit is 55, but the driver is forced to speed to accomplish the task. And then leaders lament the deaths and injuries caused by speeding and create policies demanding drivers stop speeding and increase the punishment on those that do. Nice little Catch 22 we create for ourselves.
UPDATE : StratCom deputy commander Vice Admiral Carl Mauney tells Stars & Stripes that thumb drives and the like will only be allowed in certain, very limited circumstances.
Department-owned drives which have undergone virus scans and checks from network administrators will be made available to troops for use “only when other authorized network resources are not available.” Mauney said STRATCOM already has begun sending thumb drive kits to units in Afghanistan and Iraq, the first troops to receive the department-approved removable media. The kits include software and hardware for preparing and using the drives. “This isn’t going to be like going to the neighborhood store, picking up a 1-gigabyte drive and plugging it into the computer,” he said.
But troops were doing just that, even when there was a blanket ban imposed. Will the new rules change the behavior? Photo: Tech. Sgt. Erik Gudmundson / U.S. Air Force See Also: Under Worm Assault, Military Bans Disks, USB Drives Military USB Ban Meant to Stop 'Adversary Attacks' Whoops! Army Files Found on Used MP3 Player Cybersecurity: Here's What Really Worries the Pentagon Pentagon Searches for 'Digital DNA' to Identify Hackers U.S. Cyber Command: 404 Error, Mission Not (Yet) Found 3 Reasons Why U.S. Cybersecurity Sucks Topics: cybersecurity, Info War, Danger Room
"
|
13,525 | 2,010 |
"Army Intelligence Analyst Charged With Leaking Classified Information | WIRED"
|
"https://www.wired.com/threatlevel/2010/07/manning-charges"
|
"Kim Zetter and Kevin Poulsen Security Army Intelligence Analyst Charged With Leaking Classified Information A U.S. Army intelligence analyst suspected of leaking videos and documents to Wikileaks was charged Monday with eight violations of federal criminal law, including unauthorized computer access, and transmitting classified information to an unauthorized third party.
Pfc. Bradley Manning, 22, was charged with two counts under the Uniform Code of Military Justice: one encompassing the eight alleged criminal offenses, and a second detailing four noncriminal violations of Army regulations governing the handling of classified information and computers.
According to the charge sheet, Manning downloaded a classified video of a military operation in Iraq and transmitted it to a third party, in violation of a section of the Espionage Act, 18 U.S.C. 793(e), which involves passing classified information to an uncleared party, but not a foreign government.
The remaining criminal charges are for allegedly abusing access to the Secret-level SIPR network to obtain more than 150,000 U.S. State Department cables, as well as an unspecified classified PowerPoint presentation.
Manning allegedly passed more than 50 classified diplomatic cables to an unauthorized party, but downloaded at least 150,000 unclassified State Department documents, according to Army spokesman Lt. Col. Eric Bloom. These numbers could change as the investigation continues, Bloom said. Both numbers are lower than the 260,000 cables Manning claimed, in online chats, to have passed to Wikileaks.
Between Jan. 13 and Feb. 19 this year, Manning allegedly passed one of the cables, titled “Reykjavik 13,” to an unauthorized party, the Army states. The Army doesn’t name Wikileaks as the recipient of the document, but last February the site published a classified cable titled “Reykjavik 9” that describes a U.S. embassy meeting with the government of Iceland.
If convicted of all charges, Manning could face a prison sentence of as much as 52 years, Bloom said.
Manning was put under pretrial confinement at the end of May, after he disclosed to a former hacker that he was responsible for leaking classified information to Wikileaks. He’s currently being held at Camp Arifjan in Kuwait and has been assigned a military defense attorney, Capt. Paul Bouchard, who was not available for comment. Bloom said that Manning has not retained a civilian attorney, though Wikileaks stated recently that it commissioned unnamed attorneys to defend the soldier.
The next step in Manning’s case is an Article 32 hearing, which is an evidentiary hearing similar to a grand jury hearing, to determine if the case should proceed to court-martial.
Manning, who comes from Potomac, Maryland, enlisted in the Army in 2007 and was an Army intelligence analyst who was stationed at Forward Operating Base Hammer 40 miles east of Baghdad, Iraq, last November. He held a Top Secret/SCI clearance.
In May, he began communicating online with a former hacker named Adrian Lamo. Very quickly in his exchange with the ex-hacker, Manning disclosed that he was responsible for leaking a headline-making Army video to Wikileaks. The classified video, which Wikileaks released April 5 under the title “Collateral Murder,” depicted a deadly 2007 U.S. helicopter air strike in Baghdad on a group of men, some of whom were armed, that the soldiers believed were insurgents.
The attack killed two Reuters employees and an unarmed Baghdad man who stumbled on the scene afterward and tried to rescue one of the wounded by pulling him into his van. The man’s two children were in the van and suffered serious injuries in the hail of gunfire.
Manning also said he leaked a separate video to Wikileaks showing the notorious May 2009 air strike near Garani village in Afghanistan that the local government says killed nearly 100 civilians, most of them children. The Pentagon released a report about the incident last year, but backed down from a plan to show video of the attack to reporters.
Other classified leaks he claimed credit for included an Army document evaluating Wikileaks as a security threat and a detailed Army chronology of events in the Iraq war. But the most startling revelation was a claim that he gave Wikileaks a database of 260,000 classified U.S. diplomatic cables, which Manning said exposed “almost-criminal political back dealings.” “Hillary Clinton and several thousand diplomats around the world are going to have a heart attack when they wake up one morning, and find an entire repository of classified foreign policy is available, in searchable format, to the public,” Manning told Lamo in an online chat session.
Manning anticipated watching from the sidelines as his action bared the secret history of U.S. diplomacy around the world.
“Everywhere there’s a U.S. post, there’s a diplomatic scandal that will be revealed,” Manning wrote of the cables. “It’s open diplomacy. Worldwide anarchy in CSV format. It’s Climategate with a global scope, and breathtaking depth. It’s beautiful, and horrifying.” Wikileaks has acknowledged possessing the Afghanistan video and representatives of the organization indicated in media interviews that it will release the video soon. The organization has denied that it received 260,000 classified cables.
In his chats with Lamo, Manning discussed personal issues that got him into trouble with his Army superiors and left him socially isolated, and said he had been demoted and was headed for an early discharge from the military.
He claimed to have been rummaging through classified military and government networks for more than a year and said the networks contained “incredible things, awful things … that belonged in the public domain, and not on some server stored in a dark room in Washington, D.C.” Manning discovered the Iraq video in late 2009, he said. He first contacted Wikileaks founder Julian Assange sometime around late November last year, he claimed, after Wikileaks posted 500,000 pager messages covering a 24-hour period surrounding the Sept. 11 terror attacks. ”I immediately recognized that they were from an NSA database, and I felt comfortable enough to come forward,” he wrote to Lamo.
In January, while on leave in the United States, Manning visited a close friend in Boston and confessed he’d gotten his hands on unspecified sensitive information, and was weighing leaking it, according to the friend. “He wanted to do the right thing,” 20-year-old Tyler Watkins told Wired.com. “That was something I think he was struggling with.” Manning passed the video to Wikileaks in February, he told Lamo. After April 5 when the video was released and made headlines, Manning contacted Watkins from Iraq asking him about the reaction in the United States.
“He would message me, ‘Are people talking about it?… Are the media saying anything?’” Watkins said. “That was one of his major concerns, that once he had done this, was it really going to make a difference?… He didn’t want to do this just to cause a stir…. He wanted people held accountable and wanted to see this didn’t happen again.” Lamo decided to turn in Manning after the soldier told him that he leaked a quarter-million classified embassy cables. Lamo contacted the Army, and then met with Army CID investigators and the FBI to pass the agents a copy of the chat logs from his conversations with Manning. At their second meeting with Lamo on May 27, FBI agents from the Oakland Field Office told the hacker that Manning had been arrested the day before in Iraq by Army CID investigators.
As described by Manning in his chats with Lamo, his purported leaking was made possible by lax security online and off.
Manning had access to two classified networks from two separate secured laptops: SIPRNET, the Secret-level network used by the Department of Defense and the State Department, and the Joint Worldwide Intelligence Communications System which serves both agencies at the Top Secret/SCI level.
The networks, he said, were both “air gapped” from unclassified networks, but the environment at the base made it easy to smuggle data out.
“I would come in with music on a CD-RW labeled with something like ‘Lady Gaga,’ erase the music then write a compressed split file,” he wrote. “No one suspected a thing and, odds are, they never will.” “[I] listened and lip-synced to Lady Gaga’s ‘Telephone’ while exfiltrating possibly the largest data spillage in American history,” he added later. ”Weak servers, weak logging, weak physical security, weak counterintelligence, inattentive signal analysis … a perfect storm.” Manning told Lamo that the Garani video was left accessible in a directory on a U.S. Central Command server, centcom.smil.mil, by officers who investigated the incident. The video, he said, was an encrypted AES-256 ZIP file.
(This story has been updated repeatedly since posting, including a correction to a statement Bloom previously made about the maximum sentence Manning faces. Last updated 17:45 pm EDT) Charge Sheet Redacted – Manning See also: With World Watching, Wikileaks Falls Into Disrepair 3 Weeks After Arrest, Still No Charges in Wikileaks Probe Wikileaks Commissions Lawyers to Defend Alleged Army Source Suspected Wikileaks Source Described Crisis of Conscience Leading to Leaks ‘I Can’t Believe What I’m Confessing to You’: The Wikileaks Chats U.S. Intelligence Analyst Arrested in Wikileaks Video Probe WikiLeaks Was Launched With Documents Intercepted From Tor Topics: Bradley Manning, Crime, cybersecurity, security, Wikileaks, Threat Level
"
|
13,526 | 2,021 |
"An Ultracold Plasma Models the Universe’s Most Extreme Places | WIRED"
|
"https://www.wired.com/story/an-ultracold-plasma-models-the-universes-most-extreme-places"
|
"Karmela Padavic-Callaghan Science An Ultracold Plasma Models the Universe’s Most Extreme Places The interior of the sun is a plasma, as are the insides of white dwarf stars and gas giant planets, and lightning bolts.
Photograph: NASA Though plasma is the most common state of matter in the universe, it doesn’t get as much love as its gaseous, solid, and liquid counterparts. For most of us, plasmas seem like something exotic and reserved for space: hot concoctions of positively and negatively charged particles. A team of physicists from the University of Hamburg would beg to differ. In a new study published in Nature Communications in January, they created an unprecedentedly dense plasma out of a few thousand extremely cold atoms by hitting them with laser light for just one quadrillionth of a second.
A plasma is like a gas that got energetic enough for its atoms to slightly fall apart. All the atoms within a plasma have lost some of their electrons. They became positively charged ions, but are still hanging out with the electrons that escaped them. Such mixtures of ions and electrons are created in extreme environments. The interior of the sun is a plasma, as are the insides of white dwarf stars and possibly gas giant planets, and lightning bolts. In every case, thermal or electric energy turns a gas of well-behaved, neutral atoms into a fiery plasma. A plasma like the one created by the Hamburg team could soon serve as an emulator for these astronomical systems that are difficult to study otherwise.
Physicists have been creating ultracold plasmas since the late 1990s, but the new plasma stands out because it is about a million times denser than any of its ultracold predecessors. “It's a completely new regime,” says Philipp Wessels-Staarmann, a physicist at the University of Hamburg and co-leader of the team. “Nothing that was possible [in the lab] before.” The physicists used rubidium atoms that had been cooled to temperatures about a millionth of a degree Kelvin above absolute zero as a starting point for their plasma—an extremely cold temperature instead of the extremely hot one inside the sun. They used lasers and magnets to trap the atoms in place and lower their energies so much that they stayed virtually still instead of jiggling in place like room-temperature atoms do. Then they hit them with an energetic burst of light that lasted for only a millionth of a billionth of a second and tore off all of their electrons. Instantaneously, the system became a dense plasma. In a machine small enough to fit into a room, they now had an analogue of a star’s interior smaller than the diameter of a human hair. Actual stars cannot be directly experimented on, but this system was fully at their disposal.
Video: UHH/Mario Großmann But how can an ultracold plasma tell us anything about super hot stars? Thomas Killian, an ultracold plasma physicist from Rice University who was not part of the study, says that, for example, in order to study how heat or mass are transported from one place to another inside a white dwarf, physicists can use ultracold plasma laboratory experiments as a benchmark. “If we can measure transport rates in these experiments, it really tests the exact same theories,” he notes.
The plasma made by the physicists in Hamburg is a good candidate for such tests because it was, in a way, more extreme than any before. Because it was really dense, the electric couplings—the interactions between charged particles within it—were very strong. Making a strongly interacting plasma has always been both a wishlist item and a technical challenge for ultracold plasma physicists, says Steven Rolston, a pioneer in the field and a scientist at the University of Maryland who was not involved with the study. “Plasmas actually don't like to be strongly coupled,” he says. Once the atoms in the plasma become charged ions, he says, if there is enough time, their electric potential energy can build up and make them wiggle, overpowering the interactions that couple them together.
Because of how hard it is to engineer them in labs and reach them in space, strongly coupled plasmas represent mostly unexplored terrain for physicists. They are a state of matter that scientists don’t fully grasp yet and want to explore more.
Part of the success of the new experiment, according to Juliette Simonet, co-leader of the Hamburg team, comes from bringing together ultracold and ultrafast physics experts. This resulted in the one-two punch of using extremely cold and controlled atoms as the base of the experiment and an extremely fast laser as the main tool for manipulating them. “It’s a big collaboration between the two research fields,” she says.
The machine her team built also allowed the researchers to directly track what the electrons did after they broke off from their atoms. In past experiments, physicists only inferred what may be happening to them by measuring other aspects of the plasma. Here, they determined that the laser pulse caused the temperature of the electrons to skyrocket to over 8,000 degrees Fahrenheit for just an instant before they cooled back down in response to the pull of the ions. “This is beyond anything that has been seen so far,” Simonet says about this detailed observation.
According to Killian, such details have so far also eluded physicists’ theories. “A lot of the standard theories that people use in plasmas that describe the way energy is transported or mass is transported through the system don't work in this [interaction] regime,” he notes.
To ensure that they understood what they were seeing, the Hamburg team turned to computer calculations. Because their plasma was very small, Mario Grossman, a graduate student in the group and a coauthor on the study, says they could calculate how every plasma particle interacted with every other one. It was like asking a computer to describe the noise in a crowded room by gathering minute details of conversations between every two people.
For their 8,000-particle system, he had to wait for up to 22 days for a computer to produce results. Encouragingly, simulated plasma particles did almost exactly what researchers saw real particles do in their experiment. This simulation approach, however, would be impractical for any larger, naturally occurring plasma.
“Most of the theory really has been kind of brute force—‘Let me just put it on a really big computer and calculate interactions’—which scales poorly,” Rolston agrees. He points out that there may not be computers powerful enough to simultaneously handle every single particle interaction in big plasmas. A more sophisticated theory would zoom out, forget about the nitty-gritty particle details, and predict plasma behavior based on its properties as a whole.
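The brute-force approach Rolston describes can be sketched as a direct sum over every pair of particles; the function names and toy values below are illustrative, not taken from the group's simulation code:

```python
import numpy as np

# Hypothetical sketch of the brute-force approach: direct summation of
# pairwise Coulomb interactions. The work grows with the number of pairs,
# N*(N-1)/2, which is why this scales poorly for large plasmas.
def pairwise_coulomb_energy(positions, charges, k=1.0):
    """Total electrostatic potential energy by summing every pair."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += k * charges[i] * charges[j] / r
    return energy

def pair_count(n):
    """Number of distinct particle pairs for n particles."""
    return n * (n - 1) // 2

print(pair_count(8_000))  # 31,996,000 pairwise interactions per evaluation
```

At the study's 8,000 particles that is already ~32 million interactions per time step; doubling the particle count quadruples the cost, which is the poor scaling Rolston points to.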
This kind of theory would help both ultracold physicists and researchers who study celestial bodies. It could predict when strongly coupled plasmas can develop ripples or sustain electrical currents. These predictions could be tested in laboratory experiments on Earth and offer insight into the evolution of—or even mergers between—white dwarfs in space. “We have an initially super coupled plasma,” says Wessels-Staarmann. “The interesting thing would be to really maintain this coupling, so then you can really contribute to what's going on in a white dwarf.” As his team continues to experiment on their plasma, their ability to add complexity to its structure and make precise measurements will further the fundamental understanding of this state of matter. They are enthusiastic about pushing their plasma to interact even more in upcoming studies. They aim to tweak the burst of light they use to make it—it will be crucial to be able to tell their electrons exactly what to do after leaving their atoms.
Even though this machine provides an insight into a plasma more extreme than any that have been created before, the team feels like they are just getting started. “It’s a nice simulator,” Simonet says of their machine. “Let’s say that we just checked that it works.” Update 3-19-2021 3:32 pm: This story was updated to reflect that while it has been theorized that plasma can be found in the interior of gas giant planets, this has not been proven.
📩 The latest on tech, science, and more: Get our newsletters ! Adoption moved to Facebook and a war began Can alien smog lead us to extraterrestrial civilizations ? Clubhouse's security and privacy lag behind its huge growth Alexa Skills that are actually fun and useful OOO: Help! I’m sneaking into my office.
Is this so wrong ? 🎮 WIRED Games: Get the latest tips, reviews, and more 🏃🏽♀️ Want the best tools to get healthy? Check out our Gear team’s picks for the best fitness trackers , running gear (including shoes and socks ), and best headphones Topics physics astrophysics Astronomy space planetary science Rhett Allain Matt Simon Matt Simon Ramin Skibba Ramin Skibba Ramin Skibba Rebecca Boyle Charlie Wood Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,527 | 2,021 |
"Volcanoes Might Explain That Phosphine on Venus | WIRED"
|
"https://www.wired.com/story/life-on-venus-debate-phosphine-volcanoes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Katrina Miller Science The Latest Twist in the Life-on-Venus Debate? Volcanoes Photograph: NASA Venus is often called Earth’s sister planet, a neighboring twin of similar density and size. But the resemblance stops there. Venus is the hottest planet in our solar system, and its choking atmosphere is full of heat-trapping carbon dioxide and clouds thick with sulfuric acid that shroud its dry, volcanic terrain.
So it’s one of the last places anyone might think to look for life beyond our planet.
That’s why it came as such a shock last September when a group of scientists, led by Jane Greaves of Cardiff University, announced that they’d found a possible sign of alien life in the Venusian atmosphere. In the study , published in Nature Astronomy , they reported the detection of a colorless, toxic gas called phosphine in the planet’s clouds and concluded that no known chemical or geological processes could explain its presence. Phosphine could indicate life, they argued, noting recent work by Clara Sousa-Silva, a quantum astrochemist at Harvard and Smithsonian’s Center for Astrophysics, who suggests the gas could be a biosignature. On Earth, phosphine is often found in places that host anaerobic life, including lakes, marshes, paddy fields, and in the sludge of landfills.
But when the news reached Jonathan Lunine, an astronomer at Cornell University, he and graduate student Ngoc Truong were immediately skeptical. “It’s problematic to invoke phosphine as a biosignature on Venus, simply because the environment on Venus is totally different from the environment on Earth,” says Truong. Even on our own planet, he says, there is some confusion as to whether phosphine is associated with life, and he believes that this should be confirmed before extrapolating these observations to environments so unlike our own.
Truong and Lunine weren’t alone in their doubts. After the phosphine announcement, the internet exploded with discussions about the discovery. Scientists weighed in on Twitter threads, argued on Facebook posts, and flocked to arXiv.org , a preprint server for scientific research, to lay out other theories for what nonbiological processes might be producing the phosphine.
Truong, who until that point had been studying the oceans on Saturn’s moons, convinced Lunine that they needed to further explore one potential source of phosphine in particular: volcanoes. Their research culminated in a new study published Monday in the journal Proceedings of the National Academy of Sciences.
In it, Truong and Lunine paint a picture of how phosphine might make it into Venus’ atmosphere. Trace amounts of phosphides (negatively charged phosphorus ions attached to metals like iron) found deep in the mantle of Venus could be pulled up to the surface by volcanic activity. When the volcanoes erupt, these phosphides could be thrust into the atmosphere and chemically react with sulfuric acid in the clouds to form phosphine.
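For a concrete, textbook instance of the phosphide-plus-acid chemistry being invoked (calcium phosphide is used here as a familiar stand-in; the study itself concerns metal phosphides such as iron compounds from Venus' mantle), the reaction balances atom-for-atom:

```python
from collections import Counter

# Illustrative only: a standard phosphide + sulfuric acid reaction of the
# kind the volcano hypothesis invokes. Calcium phosphide is a textbook
# stand-in, not the specific mantle phosphide discussed in the paper.
#   Ca3P2 + 3 H2SO4 -> 2 PH3 + 3 CaSO4
def atoms(formula_counts, coeff):
    """Scale a formula's atom counts by its stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

ca3p2 = {"Ca": 3, "P": 2}
h2so4 = {"H": 2, "S": 1, "O": 4}
ph3 = {"P": 1, "H": 3}
caso4 = {"Ca": 1, "S": 1, "O": 4}

lhs = atoms(ca3p2, 1) + atoms(h2so4, 3)
rhs = atoms(ph3, 2) + atoms(caso4, 3)
assert lhs == rhs  # both sides carry the same atoms
print(dict(lhs))
```

The open question in the debate is not whether such reactions exist on paper, but whether Venus supplies the reactants in the right places and quantities.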
“Our study only suggests a road map to assessing the level of volcanic eruptions” on Venus, Truong says. Two conditions are needed for this to be a viable explanation. First, the planet must be volcanically active. (While thousands of volcanoes have been spotted in radar images of Venus, scientists lack the data to confirm recent eruptions, since so far, landers can only withstand the raging heat and crushing pressure of the Venusian surface for about an hour.
) “And not just active in the sense of ‘Hawaiian-style’ volcanism,” Lunine says, which typically produces lava flows without much explosivity. Explosive volcanism is key, because there needs to be a mechanism for the phosphides to be ejected into the atmosphere.
Second, scientists would need to verify that the phosphine is actually there—and that’s currently a huge point of contention.
Without this proof, Lunine says, the volcano theory “becomes an empty postulate rather than a hypothesis.” The original paper by Greaves’ team used data collected by the James Clerk Maxwell Telescope and the Atacama Large Millimetre/Submillimetre Array (ALMA), and it reported that they’d detected about 20 parts per billion of excess phosphine in the Venusian air. But in subsequent months, the discovery of a calibration error in ALMA’s data led the authors to decrease that value to a range of one to five parts per billion. (Other researchers not involved in the work, who did their own analysis of the ALMA data, failed to find evidence of phosphine at all.
) By mid-November, a note was added to the original study: “The authors have informed the editors of Nature Astronomy about an error in the original processing of the ALMA Observatory data underlying the work in this article, and that recalibration of the data has had an impact on the conclusions that can be drawn.” Sousa-Silva acknowledges the uncertainty the ALMA calibration error introduced. Not only did it lower the amount of phosphine predicted in the Venusian atmosphere, she says, it also lowered confidence in the result itself. “Is the signal real? I believe it is,” says Sousa-Silva. “Now is it phosphine? That is less convincing.” But she still thinks that the molecule is the best explanation so far for what her team is seeing in the data. (Greaves did not respond to an emailed interview request.) But the volcano hypothesis, too, has its critics. On the same day in 2020 when Greaves’ team published their paper, MIT astrobiologist Janusz Petkowski uploaded a hefty preprint to the arXiv, coauthored by a subset of the researchers who had also contributed to the Greaves report, arguing that the phosphine’s source could be biological. In 103 pages, via an extensive analysis of known gas, geochemical, and photochemical reactions, they ruled out the genesis of phosphine by conventional methods—including volcanic activity. Their conclusion: The phosphine must come from some new and unknown process or from microbial life in Venus’ clouds.
Since then, the authors have refined their arguments and addressed concerns like the recalibrated ALMA data; on Monday, a peer-reviewed version of the paper was accepted to Astrobiology, Special Collection: Venus.
Late on Tuesday night, following the publication of the Cornell volcano study, Petkowski and his team decided to publicly post the new version of the paper on the arXiv (with a corresponding note that explicitly calls Truong and Lunine’s volcano hypothesis unlikely). Notably, the updated version of the document includes even more detailed calculations—it’s now 126 pages long—using a conservative value of one part per billion of phosphine in Venus’ atmosphere. But even with the smaller amount, Petkowski and his team still don’t rule out the possibility of life, because they failed to find any known abiotic process that would accurately describe their observation of phosphine.
“We just do not think that deep-mantle volcanism is a reasonable source for phosphine on Venus,” Petkowski says. In the paper, they argue that it isn’t clear how phosphides could make it from the interior of Venus to the surface unchanged, and that the scale and frequency of volcanic eruptions needed to spew enough of these phosphides into the air, where they can react with sulfuric acid to produce phosphine, is unlikely. Petkowski’s team also wrote that the atmospheric concentration of sulfuric acid likely wouldn’t produce phosphine at all. Instead, it would cause a different chemical reaction with the phosphide called oxidation. (Truong and Lunine disagree; in their paper, they say that oxidation would be limited by the lack of water in Venus' clouds.
) Sukrit Ranjan, a planetary photochemist at Northwestern University who worked on the original phosphine discovery as well as the follow-up study helmed by Petkowski, says that while his team agrees with “the guts of the calculation” described in Truong and Lunine’s paper, the disagreement lies in which assumptions can be realistically made when modeling volcanic processes on Venus that could produce phosphine in the atmosphere. “When there was ambiguity, we tried to make it as easy as possible for an abiotic explanation of phosphine,” he says of his team’s latest paper, but their analysis kept falling short.
“Volcanoes are a lovely way out,” Sousa-Silva says, and adds that it was one of the first explanations for the phosphine that her team explored. “But it just doesn’t really work out in our calculations.” She notes that there are known ways phosphine can form abiotically—like in the violent storms of Jupiter and Saturn—but that these methods are difficult and inefficient, and require even more extreme environments than Venus.
Ranjan also stresses that in the 2020 preprint, as well as the latest peer-reviewed version, his team doesn’t exactly rule out volcanic activity as a possibility—they only claim that it cannot be explained with known geochemical processes. “Even life is not a natural explanation for phosphine in the atmosphere,” he says, because life as we know it could not possibly survive in the acidic environment of Venus’ clouds. But evolution could have developed tricks elsewhere that it hasn’t on Earth, so the universe may surprise us: “Habitability is a frontier to be explored,” he says.
Even without the promise of life, finding phosphine on the planet would be an exciting prospect by itself for Sousa-Silva, who has studied the molecule for over a decade. “Something strange is making the phosphine,” she says. “And I’ll be thrilled to find out what it is—neighbors or not.” “I think the only thing we truly know about Venus,” she adds, “is that we know very little about Venus.” Petkowski welcomes the hearty debate between scientists. “We are not afraid to be wrong,” he says. And even if further scrutiny of the data rules out the presence of any phosphine at all, Petkowski says it wouldn’t completely rule out the possibility of life for him. “The story is not over,” he says.
Upcoming NASA missions, planned long before scientists saw possible hints of life, will help solve the mystery of phosphine in the atmosphere. VERITAS, set to launch in 2028, is a spacecraft that will map out the surface and look for the presence of deep underground water spewed out as vapor by active volcanoes. Around the same time, a spherical probe named DAVINCI+ will plunge through the toxic atmosphere and measure the composition of the Venusian clouds, which could confirm or rule out the existence of phosphine.
Suzanne Smrekar, the NASA geophysicist leading the VERITAS mission, says that the possibility of life on Venus is exciting—and a wake-up call. “It’s going to take a very long time to say we believe there is life,” she says. “But it’s a rallying cry to investigate this area of science much more definitively.” Truong and Lunine, who began working on the volcano hypothesis before NASA selected the discovery missions, are standing by their assertion that it can plausibly explain the presence of phosphine, and they are eager to see what comes next. “I hope this increases interest in Venus as a planet,” Lunine says. But he feels that it shouldn’t have taken a debate about life to spark so much curiosity about Venus. Even without life, we should want to get to know our planetary sibling, which is so similar to our home in many ways, “and yet it seems so different geologically,” he says. “This is just another reason to think of Venus as an interesting target for exploration.” Update 7-20-2021 7:32 PM ET: This story was updated to include comments and additional information from Clara Sousa-Silva.
"
|
13,528 | 2,021 |
"Scientists Could One Day Float an Aerial Robot Above Venus | WIRED"
|
"https://www.wired.com/story/scientists-could-one-day-float-an-aerial-robot-above-venus"
|
"Chris Wright Science Scientists Could One Day Float an Aerial Robot Above Venus On Earth, a network of seismology ground stations identifies the epicenter and strength of quakes. The new study demonstrates how a balloon equipped with sensors can do the same from the air.
Photograph: NASA It’s a shame, really. Venus is nearly the same size and mass as Earth. Its terrain is rocky and compacted, like ours. Once, it might have had oceans. NASA is eager to send a mission there.
Unfortunately, its surface is 800 degrees Fahrenheit, with pressure so crushing it’d turn you into paste. The entire planet is shielded from view by a cloudy blanket of deadly sulfuric acid particles. And that’s no fun for probes.
One sent by the Russian Vega mission in the 1980s survived only 56 minutes on the planet’s surface. (Its predecessors, the Soviet Venera series of landers, also died abruptly.) This all might seem to make exploring Venus a dead end. Perhaps because of its lack of hospitality, the planet has been significantly less studied than Mars.
For a sister planet, we simply don’t know much about the place.
But before future Venusian explorers lose hope, they’d do well to look up. Another part of the Vega mission is considered one of the great successes of space exploration: its balloons, which floated through the Venusian middle atmosphere 54 kilometers above the planet’s volcanic plains. Up there, it was practically balmy. Temperatures were around 80 degrees Fahrenheit, with pressure similar to Earth’s surface. The balloons voyaged almost 7,000 miles around the planet and survived more than two Earth days before their batteries died. Their limited sensors provided direct and indirect data on temperature, pressure, wind, light level, cloud particles, and altitude. (A modern analysis of the mission revealed that they might have floated through a drizzle of sulfuric acid rain.
) So while NASA has flirted with building a steampunk mechanical rover that might withstand the surface heat and pressure, they’re also thinking about exploring by air. What might they discover up in the planet’s sky? “It is by no means difficult to imagine an indigenous biology in the clouds of Venus,” Carl Sagan once wrote.
(Perhaps, he said, one with a float-bladder sac and a taste for water and minerals.) Last year, scientists detected the signature of what may be phosphine , a potential sign of anaerobic life― or maybe just volcanoes ―in the atmosphere, sparking a debate about alien life nearly as heated as the planet’s surface.
“Flying an aerial platform at Venus would be hugely advantageous,” Paul Byrne, a planetary geologist at North Carolina State University, told WIRED by email. Though not associated with the NASA team’s balloon research, Byrne feels the idea could deliver a host of new data to researchers, from chemical measurements of the planet’s atmosphere to evidence of a weak modern-day magnetic field to infrared images of the surface. “A Venus balloon? Oh yes, please.” Which helps explain why, in 2019, a team of NASA JPL, Caltech, and international scientists found themselves scrambling to build a balloon that could accomplish one of the big objectives of a future Venusian airborne mission―eavesdropping on the planet’s seismology. Their findings, published in May in Geophysical Research Letters , show how capturing low-frequency sound waves in the atmosphere caused by earthquakes on our own planet is great practice for listening to venusquakes.
After the 2019 Ridgecrest quake, scientists rushed to build ultra-lightweight heliotrope balloons that could carry seismic sensing equipment aloft.
Photograph: Gerry Walsh/NASA Seismology is about waves. The epicenter of an earthquake is like a stone dropped into a pond. The disturbance ripples outward along Earth’s crust. That movement translates into a pressure change in the air just above the ground. This produces infrasound waves (long, slow sound waves so low that humans can’t hear them) that travel through the atmosphere both straight up from the epicenter (epicentral waves) and above seismic waves as they travel along the earth (surface waves).
On Earth, a network of seismology ground stations uses sensors to detect these waves, and to identify the epicenter and strength of quakes. The new study demonstrates how a balloon equipped with sensors can do the same from the air. A balloon-borne barometer that captures only epicentral or ground infrasound waves can lend some insight into a quake’s location and strength. One that captures both might tell what the crust of a planet looks like. That could prove useful in scoping out the surface of a planet we can’t actually see.
(Seismological data also works for ones we can see. Marsquake readings from the InSight lander have been invaluable in mapping the Martian crust.
) To prove that studying Venus’ seismology from the air was possible, the team planned a flight campaign in Oklahoma—where earthquakes are frequent, probably due to fracking—to test out whether they could hear the infrasound of Earth’s rumblings from high up in the atmosphere. But when the Ridgecrest series of earthquakes struck near JPL’s Los Angeles home base, in 2019, triggering thousands of small aftershocks, senior program manager James Cutts, research technologist Siddharth Krishnamoorthy, and others on the team sensed an opportunity. “This had to be done quickly, since the later it got, the weaker and less numerous the aftershocks were,” Krishnamoorthy says.
Problem: They didn’t have balloons yet. Over a frantic 16 days, they scrambled to build four ultralight “ heliotropes ,” simple balloons about 20 feet in diameter and 12 feet tall, made using plastic sheeting and tape. The heliotropes—named Tortoise, Hare, Hare 2, and CrazyCat—rose into the stratosphere as the sun heated the air inside their charcoal-covered plastic balloon “envelopes.” They floated freely with the breeze, each with a barometric sensor package hanging from a tether below, listening for the very faint sounds of an aftershock.
On July 22, 2019, the ground shook with that aftershock. As it passed below the balloons, it produced surface infrasound wave disturbances that traveled upward 4.8 kilometers and hit Tortoise’s barometer, registering as a series of tiny pressure changes. These changes were so small that it took Krishnamoorthy months of data analysis after the flight to see them. But there they were: Tiny wave profiles neatly matching quake readings from four ground-based seismometer stations in the area near the balloons. They matched computer models of infrasound propagation from the aftershock, too. Tortoise had heard the quake.
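The kind of matching described above—picking a faint, delayed arrival out of a noisy pressure record by comparing it with ground-station readings—can be framed as a cross-correlation problem. The sketch below uses synthetic signals and assumed numbers (sample rate, amplitudes, travel time), not the team's actual analysis pipeline:

```python
import numpy as np

# Illustrative sketch, not the team's pipeline: recover a faint, delayed
# infrasound arrival in a noisy barometer trace by cross-correlating it
# against a ground-station reference wavelet.
rng = np.random.default_rng(0)
fs = 50.0                       # sample rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)    # one-minute window

# Reference wavelet "recorded" at a ground station: a short 2 Hz pulse
ref = np.exp(-((t - 10.0) ** 2) / 0.5) * np.sin(2 * np.pi * 2.0 * t)

# Balloon barometer: the same wavelet, delayed by the acoustic travel
# time, at small amplitude, buried in pressure noise
delay_s = 14.0                  # ~4.8 km of altitude at ~343 m/s
delay_n = int(delay_s * fs)
baro = 0.05 * np.roll(ref, delay_n) + 0.005 * rng.standard_normal(t.size)

# Cross-correlate and read off the lag of the correlation peak
xcorr = np.correlate(baro, ref, mode="full")
lags = np.arange(-t.size + 1, t.size)
best_lag_s = lags[np.argmax(xcorr)] / fs
print(best_lag_s)  # ≈ 14 s, recovering the assumed travel time
```

The real signal is far weaker relative to noise, which is why Krishnamoorthy needed months of analysis rather than a single correlation pass.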
But could a balloon capture seismic infrasound while floating in the atmosphere of Venus ? There, the balloon would be flying much, much higher—about 50 kilometers rather than 5. At that altitude, Venus’ acid clouds might attenuate the infrasound waves , making them slightly harder to detect, says Andi Petculescu, a theoretical acoustics physicist at the University of Louisiana Lafayette. (What does Venus sound like? Here’s his study on what Bach might sound like on Earth, Titan, Venus, and Mars, due to different sound wave attenuation factors.) Yet other factors would work in the balloon’s favor. Though Venusian winds blow steadily at more than 200 miles per hour, a balloon at stable altitude should remain relatively “quiet” as it breezes along. (Imagine the calm of being on a hot air balloon, which is traveling at the same speed as the wind.) Because of the super-thick Venus atmosphere, Byrne writes, Venus’ surface is coupled to that atmosphere some 60 times more effectively than Earth’s is—which means that the energy from a quake will be much more readily transmitted into the atmosphere on Venus, making it a prime locale for floating a seismometer.
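Byrne's roughly 60-fold coupling figure can be sanity-checked with a back-of-envelope acoustic impedance calculation; the material properties below are generic illustrative values, not numbers taken from his analysis:

```python
# Back-of-envelope check of the "~60x better coupling" claim, using
# generic illustrative values. For normal incidence, the fraction of
# seismic energy transmitted from ground into atmosphere is roughly
#   T = 4*Z_ground*Z_atm / (Z_ground + Z_atm)**2,
# where Z = density * sound_speed is the acoustic impedance.
def acoustic_impedance(density_kg_m3, sound_speed_m_s):
    return density_kg_m3 * sound_speed_m_s

def transmission(z_ground, z_atm):
    return 4 * z_ground * z_atm / (z_ground + z_atm) ** 2

z_rock = acoustic_impedance(2700.0, 4000.0)    # generic crustal rock
z_earth_air = acoustic_impedance(1.2, 343.0)   # Earth air at sea level
z_venus_air = acoustic_impedance(65.0, 420.0)  # Venus surface CO2 (approx.)

ratio = transmission(z_rock, z_venus_air) / transmission(z_rock, z_earth_air)
print(ratio)  # same order of magnitude as the ~60x quoted in the article
```

Because the atmospheric impedance is tiny compared with rock, the ratio reduces to roughly the ratio of the two atmospheres' impedances, and Venus' dense CO2 wins by a wide margin.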
A future Venus balloon would also be a lot more complex than the homemade heliotrope. Imagine instead something like the tennis-court-sized, self-navigating balloons designed by Google’s recently shuttered “Loon” program , which aimed to use high-altitude balloons to beam internet worldwide.
(Though the program was shut down, the balloons worked well, successfully providing internet connectivity to Puerto Rico in 2017 after Hurricane Maria damaged telecommunications infrastructure, and restoring emergency phone service after devastating flooding in Peru.) A concept image for a balloon that could carry seismic infrasound sensing equipment over Venus.
Illustration: Tibor Balint A Venus mission would most likely include a variable-altitude “aerobot” in combination with a planetary orbiter for transmission relay to Earth, says James Cutts, a planetary scientist at JPL. Unlike the Vega balloons, which stayed at one altitude, a Venus aerobot would consist of two balloon “envelopes” that could exchange helium gas to increase or decrease buoyancy to rise or fall by roughly 10 kilometers. (This would also allow them to fly at night, unlike the heliotropes used in the California study.) The aerobot would be powered by a solar array and rechargeable battery and could carry a payload of between 100 and 200 kilograms in a gondola hanging below—a major upgrade from the Russians’ 8-kilogram balloon. Cutts imagines a mission length of 100 days, during which the balloon would ride Venusian winds to circumnavigate the planet some 20 times or more.
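The two-envelope altitude-control idea can be sketched with a toy buoyancy model: the balloon floats where the mass of atmosphere it displaces equals its total mass, so moving helium between a compact reservoir and the envelope, which changes the displaced volume, shifts the equilibrium altitude. The density profile, masses, and volumes below are assumptions for illustration, not mission specifications:

```python
import math

RHO0 = 65.0     # kg/m^3, approximate Venus surface air density (assumption)
H_SCALE = 15.9  # km, crude constant scale height (assumption)

def venus_density(h_km):
    """Toy exponential model of Venus atmospheric density."""
    return RHO0 * math.exp(-h_km / H_SCALE)

def float_altitude_km(m_total_kg, v_envelope_m3):
    """Altitude where displaced mass rho(h)*V equals total flight mass m."""
    rho_needed = m_total_kg / v_envelope_m3
    return -H_SCALE * math.log(rho_needed / RHO0)

# Pumping helium out of the envelope shrinks the displaced volume and
# lowers the equilibrium altitude (assumed mass and volumes):
print(round(float_altitude_km(200.0, 90.0), 1))  # ≈ 53.7 km, "high" mode
print(round(float_altitude_km(200.0, 60.0), 1))  # ≈ 47.2 km, "low" mode
```

Even this crude model lands in the 50-ish-kilometer band where the Vega balloons flew, and shows how a modest volume change produces the several-kilometer altitude swings Cutts describes.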
A 2020 NASA flagship mission concept study report included an aerobot as part of a larger potential package with a lander, an orbiter, and two small satellites that would be delivered to Venus by a Falcon 9 Heavy Expendable rocket. The aerobot would separate from the orbiter and drop into the Venusian atmosphere, protected from friction and heat by aeroshells, along with parachutes to slow down the craft’s initial descent and inflate its balloons. “Venus’ atmosphere is more compact than Earth’s,” Cutts says. “Some people have compared [the craft’s entry] to dropping a vehicle off a six-story building into a bowl of cement.” This deceleration might produce 50 g’s of force—a rough landing, but not the roughest NASA has tried.
Once deployed and inflated, the aerobot could listen for seismic activity using barometers similar to the ones heliotropes use to listen on Earth, beaming a treasure trove of data back home. “We know virtually nothing about venusquakes,” says Byrne. “In fact, we don't even know for sure that they happen—although I think everyone presumes that they must. Detecting one would give us firm evidence that Venus is geologically active. And then, depending on the type of quake, and its location and magnitude, and how those characteristics compare with the structures we can see with existing radar image data, we would quickly learn a huge amount about the nature and behavior of tectonic processes on Venus.” An onboard aerosol mass spectrometer could also study the planet’s gas clouds, looking for traces of phosphine and other chemicals of interest. A digital holographic microscope could study individual particles of interest. (Liquid droplet, ice crystal, or something else?) The aerobot might tow a second probe that could measure data at lower, more dangerous altitudes. Perhaps the aerobot would trigger its two-balloon system to occasionally dip lower in the atmosphere and take measurements before rising again to the relative safety of the acidic sky.
In the next two years, Cutts and his team plan to build subscale versions of aerobots, testing them on Earth during deployment and inflation. They hope to conduct a long-duration flight test of a prototype, too. Sensitive components of the craft’s arrays could be tested in simulations of Venus’ acidic clouds. According to NASA’s concept mission study, a best-case scenario could see a launch date set for 2031 and an aerobot entering Venus’ atmosphere by 2034.
Just a couple of weeks ago, the team traveled to Oklahoma to attempt more seismic measurements from their heliotropes. A simplified balloon cruising above the prairie, listening for frack-caused quakes, might seem a far cry from an aerial robot grabbing data in another planet’s atmosphere. But we already put a seismometer on Mars. Maybe floating one through the Venusian sky isn’t as far away as we think.
Update 8-6-2021 12:01 PM: This story was updated to attribute information about the acoustics on Venus to Andi Petculescu.
Topics venus earthquakes seismology planets space Spacecraft NASA Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,529 | 2,013 |
"iOS 7 is Gorgeous, but Could Be Even Better | WIRED"
|
"https://www.wired.com/gadgetlab/2013/09/more-is-better"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Mat Honan Gear iOS 7 is Gorgeous, but Could Be Even Better Tavis Coburn Save this story Save Save this story Save The screen in your pocket dominates all the others in your life. It wakes you up in the morning, checks the traffic, and tells you the best way to get to work. It lets you read your email (or play Candy Crush ) during meetings, find a lunch spot, schedule a recording on your DVR, and read the day's news before you doze off at night. That means even tiny changes to mobile operating systems are hugely important. And big ones? Well, if you're one of the hundreds of millions of people out there with an iPhone, check your pocket. Apple's new iOS 7 represents the most substantial change to the iPhone since it started supporting third-party apps. But as dramatic as the update may look, it doesn't go far enough functionally.
iOS 7 is gorgeous. This is where Apple's great design lead, Jony Ive, gets to wow us with a palette of bits instead of atoms, and what he's come up with is wonderful. It fronts an entirely modern look and feel, full of bold colors and stylish typefaces. It's more iconic and less cartoonish: old leatherette flourishes banished in favor of empty space that lets images and data do the talking.
Like the search bar, some settings launch from any home screen.
Tabs in Safari, Apple’s native web browser, have an entirely new, stacked look in iOS 7.
iOS 7’s so-called flat design doesn’t try to ape the real world like its predecessor.
It's also a more efficient worker. Launching apps is easier, thanks to an ever-accessible search bar: Just swipe down from the top of any home screen, type in a few characters, and you can quickly find and fire up whatever app you're looking for. Notifications are better too. They're easier to see and, more important, easier to dismiss. New features like predictive directions, which tell you how long it will take to get to a commonly visited destination, are truly great. Siri is maturing as well; instead of just performing system tasks, Apple's voice-activated assistant can now lend a hand inside third-party apps (as long as the devs write in the functionality).
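That swipe-down app search is, at bottom, incremental prefix matching over installed app names. A toy sketch (the app list is invented for illustration):

```python
def match_apps(query: str, apps: list[str]) -> list[str]:
    """Case-insensitive prefix match, narrowing results as you type."""
    q = query.casefold()
    return [app for app in apps if app.casefold().startswith(q)]

installed = ["Mail", "Maps", "Messages", "Music", "Safari"]
print(match_apps("ma", installed))   # → ['Mail', 'Maps']
print(match_apps("mes", installed))  # → ['Messages']
```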
Each of these little tweaks and changes and user-interface decisions becomes massive, because they're so widely distributed. Remember: Almost any action you can perform on your phone is also being performed by hundreds of millions of other users across the globe. The fractional seconds it takes you to accomplish a routine task will be amplified into many millions of man-hours over the course of the operating system's lifetime. So every choice Apple makes to increase efficiency plays out in productivity cycles like concentric tsunamis on our collective pond.
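The scale argument is easy to make concrete. With round illustrative numbers (half a billion users, dozens of daily interactions, a tenth of a second shaved off each), a worked estimate:

```python
def annual_hours_saved(users: int, interactions_per_day: int,
                       seconds_saved: float) -> float:
    """User-hours saved per year by shaving seconds_saved off each
    of interactions_per_day interactions, across all users."""
    return users * interactions_per_day * seconds_saved * 365 / 3600

# 500M users, 50 interactions a day, 0.1 s saved per interaction:
print(f"{annual_hours_saved(500_000_000, 50, 0.1):.2e}")  # → 2.53e+08
```

Roughly a quarter of a billion user-hours a year, from a tenth of a second per interaction.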
That's why iOS 7 is one of the most dramatic technology shifts we will see this decade; it just affects so many people. Unlike Android, which is fragmented across a multitude of gadgets, almost everyone with an iDevice will soon be running this reimagined operating system. That's around half a billion people on tablets, smartphones, and music players, doing everything from checking in on Facebook to running small businesses.
Apple has one hell of a responsibility. If it changes too much, it could alienate a large nation's worth of users. Even so, it's clear that iOS 7 could—in fact, should—have gone even further.
The design of iOS 7 has been called evocative of Windows Phone and Android's latest iteration, Jelly Bean. I wish it had gone beyond evoking and directly stolen something that both of those platforms do very well: predictive and pervasive information delivery. Yes, there are flourishes of this (those predictive directions), but overall iOS 7 still makes you go looking for information instead of displaying it before you think to ask.
Consider what you use your phone for. Apps, mostly, right? And beyond just using the apps, you're getting information into and out of them. You send and receive photos through email and social networks; you search for a place to eat, pop its address into a navigation app, and get directions. The future of iOS 7 is directly linked to these actions—and to how developers can use the information they generate. Aspects of the OS like Notification Center and Siri—which allow you to interact with a program without actually launching it—are the best examples of this information flow in Apple's new mobile world. But they're still incredibly limited: Text messages pop up and make you swap apps to reply, appointments made in apps like Urbanspoon aren't auto-added to the calendar—that kind of thing. It's not very advanced, especially when you look at Apple's rivals.
A phone isn't like your desktop computer — you don't sit there with it open.
If Apple wanted to make iOS 7 more than just a visual step forward, it should have changed the way we think about information delivery even more than its competitors have. Given Apple's deep control over everything that happens in its ecosystem, and the fact that it always knows where the phone is, the company could have done a lot more to make sure the information you need shows up on your handset without your asking for it. These are the kinds of innovations we're seeing with Google Now, a you-don't-even-have-to-search tool that can, say, dive into your email and display things like flight times and dinner reservations that aren't even on your calendar or show you all the nearby restaurants at lunchtime without being prompted.
On Motorola's new Moto X, you can access this information without even touching the phone. Whether it's in your pocket or across the room, saying the words "OK Google Now" will muster the full might of Mountain View's servers. Imagine what Apple could have pulled off if it had focused on information rather than applications.
A phone isn't like a desktop computer — you don't sit there with it open, staring at it, opening and closing windows. You pluck it from your pocket, unlock it, fire up an app, maybe toggle to another one, and then put it back again. That screen in your pocket may be the one you use the most, but it's a shower, not a bathtub. You're in and out; you're not soaking. The new interactions on iOS 7 are great, but what we really need is something so good at serving up information that we barely need to interact with it at all.
Senior Staff Writer Topics apple ios magazine magazine-21.10
"
|
13,530 | 2,013 |
"Google's New Tools Show How Deep Glass Will Embed in Our Lives | WIRED"
|
"https://www.wired.com/gadgetlab/2013/11/google-glass-sdk"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Mat Honan Gear Google's New Tools Show How Deep Glass Will Embed in Our Lives Google's Timothy Jordan shows off Glass performing an on-the-fly translation of a street sign.
Photo: Ariel Zambelich/WIRED The promise of wearable computers is that the devices themselves will go away. They'll melt into the background to deliver data as needed, all without your having to fetch or look anything up. Google announced a few new features to its Glass software developers kit on Tuesday that help its face computer both be there and go away. With a few simple third-party apps, the company showed (probably better than ever before) just how Glass will evolve into something that's invisibly integrated into your day-to-day routine.
Glass' new GDK -- or Glass Development Kit -- makes it easier for application developers to tap into the hardware, even when it's offline, to perform tasks in real time without having to go back and forth to servers in the cloud. It opens new possibilities for developers to build tools that center around either immersive or ongoing tasks (as opposed to, say, simply notifications about things like new email, or the ability to upload a photo and share it online).
Immersive tasks will let you dive into an application on Glass, much as you might on a smartphone, giving it your undivided attention -- think gaming, for example. An ongoing task is one where you're swapping back and forth between the app and something you're doing -- think cooking. Google showed a few new apps from Strava, AllTheCooks, GolfSight, Word Lens, and Spellista meant to demonstrate how developers can now use Glass to deliver immersive experiences, or enhance ongoing ones.
Spellista and Word Lens were immersive apps. The former is a simple letter-jumble game, and honestly it was ho-hum. Word Lens, however, was pretty amazing. It fires up with a simple voice command, "Okay, Glass, translate this." Then you can look at a sign written in a foreign language, and the app will automatically translate the text and overlay it on the display. This happens nearly instantly. It was straight out of the future. But the ongoing tasks were probably even more interesting than the immersive ones.
Take the new Strava app for Glass. Strava, a tool for tracking bike rides or runs, is immensely popular with cyclists and gaining ground with runners. But it's the type of app that clearly benefits from not having to pull a phone from a pocket or bag to get an update -- you don't want to have to pluck it from a pocket while you're in the bike lane on Market Street. The Glass app lets it deliver real-time updates on speed and distance, and deliver ride segment reports simply by letting you look up. It promises to help you be more situationally aware, more in the moment, while still getting data delivered when you want it. Similarly, GolfSight can let you know where you are on a golf course, how far you are from the green, and what hazards are between you and it without having to dive into a phone.
Strava showed members of the press a new Glass App.
Photo: Ariel Zambelich/WIRED Or take AllTheCooks, which lets you view or record recipes using Glass. One of the other promises Glass has so far failed to deliver on is the ability to do something else with your hands while still getting data updates from the device. There's been much speculation about how Glass may one day be used in the operating room. But AllTheCooks shows how remarkably useful it can be in the kitchen. You can simply play the steps back without taking your attention away from what you're doing. You don't have to worry about greasy fingers on a tablet or a printed page. Meanwhile, filming step-by-step instructions without using your hands promises to be something that all manner of apps will be able to take advantage of.
When people have asked me about Glass, I've often tried to explain it by saying "it's Google Now for your face" (a phrase I stole from Metafilter founder Matt Haughey). That's because the Google Now functions -- delivering information to you before you ask for it -- have been its killer feature, even more than the ability to take photos or videos. The way it can drop an announcement on you based on where and when you are is amazing. But until now I hadn't seen third-party applications tap into that same functionality. Instead, you get things like breaking news alerts from the New York Times , or notifications about @ replies on Twitter. That's clearly changing.
There's been a huge over-emphasis on what Glass is (aesthetically ugly, of limited utility, socially awkward) rather than what it will be. Today, we got a glimpse of the path it's heading on -- maybe our best yet. What's coming is a more technologically robust device that will enhance our situational and possibly even social awareness. And just think, one day it might even be stylish enough to wear in public.
Senior Staff Writer Topics Google Wearables
"
|
13,531 | 2,013 |
"Why Wearable Tech Will Be as Big as the Smartphone | WIRED"
|
"https://www.wired.com/gadgetlab/2013/12/wearable-computers"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Bill Wasik Gear Why Wearable Tech Will Be as Big as the Smartphone Save this story Save Save this story Save Google Glass was just the beginning. A new generation of wearable tech is coming—and it will transform the way you experience the world.
Ian Allen Data will not help you if you can't see it when you need it. For Dan Eisenhardt—a competitive swimmer for more than a decade, beginning as a 9-year-old in his native Denmark—the data he needed in the water, what he could never know in the water, was his splits. His event was the 1,500-meter freestyle, the longest slog in the sport, a near-mile of grinding exertion divided into 15 laps of 100 meters apiece. As with every distance sport, pacing is all; lag your target time on the first two laps and you may never catch up, but accidentally beat it and you'll load your tissue with lactic acid, doom your endgame. How fast was his last lap? How did it compare to his usual pace? His coach up on the pool deck could know, his parents in the stands could know. But Eisenhardt, at war in the water, could only guess.
The rigors of engineering school eventually forced Eisenhardt to stop racing. He worked for a while as a management consultant. But later, during business school, while he was spending an exchange semester at the University of British Columbia, the problem nagged at him again. For a project in an entrepreneurship class, he pitched a business plan: data-enabled goggles for swimmers like his former self. He teamed up with some other students, and they soon concluded they had the wrong sport. Swim goggles were too small to support a screen, plus the athletes were too few in number—and too unaccustomed to shelling out for expensive gear. Close at hand in wintry Canada, though, was a better idea. In January 2008, after a year or so of tinkering, four of the classmates founded Recon Instruments.
Their first product, Recon Snow, is a heads-up display for skiers and snowboarders. From the outside it looks just like any set of ski goggles. But tucked below the right eye is a little display, controllable by a simple remote—snow-proof with big, chunky buttons—that clips to a jacket. The main screen is a dashboard that shows speed, altitude, and vertical descent. There's also a navigation view that uses the built-in GPS to plot position on a resort map, as well as an app screen that offers access to a camera. Through Bluetooth, the display integrates with a smartphone, letting skiers play music, answer calls, and see text messages or other notifications. Recon has sold 50,000 of the Snow so far, and the second generation, Snow2, came out in November. The company's next product—Jet, designed for cyclists, with voice control and gaze detection for hands-free use—will ship in March.
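A metric like vertical descent falls straight out of the goggles' GPS altitude samples. A minimal sketch (the sample altitudes are made up):

```python
def vertical_descent(altitudes_m: list[float]) -> float:
    """Total vertical descent over a session: sum of altitude drops
    between consecutive samples, ignoring climbs (e.g. lift rides)."""
    return sum(a - b for a, b in zip(altitudes_m, altitudes_m[1:]) if a > b)

# GPS altitude samples over one run, in metres:
print(vertical_descent([2400, 2350, 2280, 2290, 2200]))  # → 210
```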
Recon Heads-Up Display | From the outside, the only real sign that anything's different about these Oakley goggles is a tiny red logo on the right. On the inside, though, the Recon Snow2 (starting at $399) sports an ingenious display that lets skiers and snowboarders stay connected on the slopes. With an armband-mounted remote control, users can toggle between a few simple screens showing speed and more. Friends using Recon devices at the same resort can keep track of one another on a map. Recon sold 50,000 pairs of its first-generation goggles, and the company's second product—Jet, designed for cyclists, with voice control and gaze detection for totally hands-free use—goes on sale in March.
Ian Allen Technically, the Recon doesn't do anything that the average smartphone couldn't. The lavish array of sensors in today's phones can chart speed and altitude; social networking apps can find friends and set up voice or video chats; any number of map apps can navigate users down a mountain. That is, a smartphone would do those things—if users could access it on a ski slope or cycling run. But they can't, at least not without risking a crack in their screen or their head. What Recon sells is the ability to see all the crucial data, and only the crucial data, at times when it would otherwise remain locked away. It brings the power of the smartphone out of your pocket and into your field of vision, accessible any time you glance its way.
Dan Eisenhardt, 38, started Recon with three MBA classmates to create a heads-up display for snow goggles.
Dan Goldman This is the promise of wearable technology, and it's the reason—after more than 20 years of tinkering by cybernetics enthusiasts—we're finally seeing an explosion of these devices on the market. It's the reason Google has poured millions into an improbable set of eyeglasses, why Samsung has unveiled a companion watch for its smartphones, and why Apple is widely rumored to be exploring something similar. It's the reason tiny companies banked thousands of preorders last year for smartwatches, gesture-controlled armbands, transmitting rings, notification bracelets, and more. A new device revolution is at hand: Just as mobile phones and tablets displaced the once-dominant PC, so wearable devices are poised to push smartphones aside.
In purely technological terms, the wearable revolution could take shape much faster than the mobile revolution that preceded it. Thanks to what former WIRED editor in chief Chris Anderson has called the "peace dividend of the smartphone wars," sensors and chip sets are cheaper now than ever, making it easier for small companies to incorporate sophisticated hardware into wearable devices. And while smartphone manufacturers had to master the tricky art of providing dependable mobile Internet service, wearable manufacturers can piggyback on those innovations using simple Bluetooth or other protocols to communicate with a smartphone and thus with the outside world. With all that prebaked hardware and wireless connectivity—and huge preorders from crowdfunding platforms like Kickstarter—it has become possible for tiny companies to dream up, build, and sell wearable devices in competition with big companies, a feat that was never possible with smartphones.
Recon Heads-Up Display | The display's main screen shows speed and other key metrics. Another shows your progress on the trail, while a third shows notifications.
Ian Allen It may seem laughable to suggest that people will soon neglect their iPhones in favor of amped-up watches, eyeglasses, rings, and bracelets. But then again, 10 years ago it seemed laughable to think that people would use their smartphones to email, surf the web, play games, watch videos, keep calendars, and take notes—all once core tasks of desktop PCs. We can already see how wearable devices might peel off some of the phone's key functions: One study of smartphone users indicates that on average we unlock our gadgets more than 100 times a day, with some of us pawing at screens far more often than that. Internet analyst Mary Meeker estimates that as many as two-thirds of those uses could be handled with a wearable device.
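Taken together, those two figures give a rough size for the opportunity (the unlock count and the two-thirds fraction are the estimates cited above; the arithmetic is just illustration):

```python
def offloaded_interactions(unlocks_per_day: int, wearable_fraction: float) -> int:
    """Daily phone unlocks that could move to a glanceable device."""
    return round(unlocks_per_day * wearable_fraction)

# ~100 unlocks a day, two-thirds handled by a wearable:
per_day = offloaded_interactions(100, 2 / 3)
print(per_day, per_day * 365)  # → 67 24455
```

Tens of thousands of pocket reaches a year, per person, that a glance could replace.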
To get there, though, pure functionality won't be enough. After all, people could surf the web on their BlackBerrys; smartphones didn't really take off until the advent of the iPhone, a device that launched an aesthetic transformation in the tech industry, as design went from an afterthought to a corporate necessity, a core competency prized no less than the ability to make a faster chip or stable operating system. Wearable devices—technology that people will want to display on their bodies, for all to see—represent a new threshold in aesthetics. The tech companies that mastered design will now need to conquer the entirely different realm of fashion. And that could require technologists to unlearn a great deal of what they think they know.
These days Thad Starner, a wearables pioneer who now serves as technical lead for Google Glass, cuts the figure of a casual-chic tech executive, with stylishly cropped blond hair and a penchant for plain black T-shirts. But when I meet him at a Google office complex in Mountain View, he turns his laptop toward me and presses Play on an artifact from his deeply unfashionable past. It's a grainy clip from 60 Minutes in which Morley Safer challenges Starner—then a scraggly-haired kid with a goatee, dark trench coat, and black Ho Ho-sized object affixed to his left eye—to carry out what was then his parlor trick: retrieving any piece of information in 30 seconds.
"Give me the lifetime average and number of home runs, doubles, singles, and triples of Mickey Mantle," Safer says.
Thad Starner, 44, started wearing a head-mounted display as an MIT grad student in the '90s. Now he's the technical lead for Google Glass.
Rob Felt/Georgia Tech Dan Goldman "How do you spell Mantle?" Starner asks. He types the name on a one-handed keyboard (a "Twiddler" that used combinations of 12 buttons to span the whole range of characters) and peers into his eye-screen.
"I'm getting a lot of hits on that," he says, a touch of nervousness in his voice.
"So did he," Safer retorts.
The year was 1997, and the task took Starner far longer than he'd hoped. He eventually got there by scouring results on AltaVista, the world's best search engine at the time: "The right hit was often in the top 14 but not the top one," Starner now recalls.
Thirteen years later Starner came to work for the company that finally perfected Internet searching. Google made it significantly easier and faster for people to find the information they were seeking, a process that has only sped up in recent years. Google Search, for example, now not only auto-completes but auto-searches, Starner points out. "You don't even have to get all the way through your first word and it might have the right piece of information for you." It all comes down to what Google CEO Larry Page calls "reducing the time between intention and action," words that Starner calls a mission statement.
That dovetails exactly with Starner's vision of how wearables can revolutionize the way we access technology in our everyday lives. He cites a seminal 1968 paper by Robert B. Miller, an IBM psychologist who spent years studying early computer operators. It identifies a crucial threshold of human behavior with machines: We're inclined to give up on them if they take more than two seconds to respond to an instruction. In his own research as a longtime professor at Georgia Tech, Starner found the same rule of thumb applies to accessing devices themselves. He calls it the magic two-second rule. "If you can't get to a tool within two seconds," he says, "your use of it goes down exponentially." Even today, smartphones have trouble meeting that standard. By the time we extract them from our skinny jeans, swipe, type a passcode, and find our way to whichever app we wanted, the moment has usually passed.
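Starner's claim can be phrased as a toy model in which tool usage decays exponentially with access time (the decay constant here is an invented illustration, not a figure from his research):

```python
import math

def relative_usage(access_time_s: float, k: float = 1.0) -> float:
    """Toy model: usage falls off as exp(-k * t) with access time t."""
    return math.exp(-k * access_time_s)

# A 2 s glance versus a 10 s pull-out-the-phone flow:
ratio = relative_usage(2) / relative_usage(10)
print(round(ratio))  # → 2981
```

Even if the constant is wrong, the shape of the curve explains why shaving a few seconds off access matters far more than it seems.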
Wearables reduce that friction. That's the selling point of Recon Instruments, and it's the same promise that Mary Meeker sees when she imagines wearables replacing many of our smartphone interactions. When you put on Google Glass—or, say, the Pebble smartwatch, whose unprecedented $10.3 million Kickstarter campaign two years ago arguably helped launch the wearable era—you suddenly have a window into your phone, with your calls, texts, and emails popping up on a glanceable screen as they flow in. In the case of Glass or some of the other smartwatches on the market, you even get a camera and voice recorder to document the world around you.
But reducing the time between intention and action goes much further than that. In many of the most cutting-edge applications for wearables, the time between intention and action is actually negative: The device knows what users want before they want it. The heart of the Glass experience is Google Now, the company's attempt to divine and deliver needed information based on context. Observing your driving patterns, the app gives traffic updates when you're almost ready to ask for them; scanning your calendar, it displays an alert when it's time to leave and gives you directions too. Google Now is already available for Android and iOS, but requiring users to check an app on a phone defeats the whole purpose. It's a perfect fit for wearables, though, because it gives instant, even predictive information to get you through the day.
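Structurally, Google Now-style delivery is a set of context-triggered rules: match the user's current state, push a card, no query required. A minimal sketch (the contexts, rules, and card text are invented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]  # predicate over the current context
    card: Callable[[dict], str]      # what to push when the rule fires

RULES = [
    Rule("commute",
         lambda ctx: ctx["time"] == "morning" and ctx["location"] == "home",
         lambda ctx: "Traffic is light; leave by 8:40 for work."),
    Rule("lunch",
         lambda ctx: ctx["time"] == "noon",
         lambda ctx: f"3 restaurants near {ctx['location']}."),
]

def cards_for(ctx: dict) -> list[str]:
    """Push a card for every rule the current context satisfies."""
    return [r.card(ctx) for r in RULES if r.matches(ctx)]

print(cards_for({"time": "morning", "location": "home"}))
# → ['Traffic is light; leave by 8:40 for work.']
```

The real system infers the context from sensors, email, and calendars rather than receiving it as a dict, but the push-without-query structure is the point.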
For all its portability, the smartphone still has a distracting screen that pulls us away from whatever else we're doing—in the car, in business meetings, at restaurants. Wearables, by contrast, are a gateway to augmented reality, a more ubiquitous but less distracting data layer that gives us constant intelligence about the world around us—and keeps our attention grounded in that world, rather than off in the digital ether. Phil Libin, CEO of the cross-platform note-taking app Evernote and a big proponent of the promise of wearables, sees Google Now–like applications as his company's future. The ultimate effect of these devices, he says, will be to "make you more aware, more mindful. They'll reduce the number of seconds in the day when you're confused. That's what this whole connected universe will do. It will make you a functionally smarter human being."

1. Fitbit Force ($130) and Zip ($60) Fitbit makes a range of activity trackers, from the Force, which can double as a watch, to the Zip (left).
2. NFC Ring (Price TBD) This simple metal ring comes embedded with a radio chip that lets it open doors, make payments, and more.
3. Samsung Galaxy Gear ($300) By far the most technically impressive of the current smartwatches, with its bright screen and wristband camera.
4. Pebble ($150) With more than 250,000 sold, the Pebble has attracted a huge community of developers, who make apps that migrate functions off the phone and into the open.
5. MetaWatch Frame ($230) With its steel case and leather band, the Frame is currently among the more fashionable smartwatches.
6. Jawbone Up24 ($150) This new fitness tracker stays connected with your phone via Bluetooth, letting the app ping you with personalized inspiration.
7. Shine Fitness Tracker ($120) Sonny Vu's graceful aluminum disc matches the functionality of other trackers in a stylish, minimalist form.
This new world of wearables will never arrive if nobody consents to wear them. Such is the problem that weighs on Sonny Vu, CEO of Misfit Wearables, a startup dedicated to creating devices that truly deserve to be called fashionable. Misfit's name is appropriate, at least as it pertains to its office in a drab residential section of Daly City, California. Crammed into a two-bedroom town house, the growing company threatens to resemble a very unfashionable sweatshop. The hardware team occupies one small bedroom, software the other. Shipping hangs out in the dining room, right up against the tiny kitchen. "We used to have a garden," Vu says, gesturing out the sliding-glass door, "but all the plants died." Sonny Vu, 40, started Misfit Wearables in an attempt to create gadgets that are genuinely fashionable.
Despite the overstuffed surroundings, Misfit's first product, the Shine activity tracker, is a study in spare elegance. It's a graceful aluminum disc just over an inch in diameter. A set of simple accessories allows it to be worn in multiple ways: as a pin, a pendant, a wristband, or even a timepiece (a ring of tiny lights around Shine's perimeter indicates the time).
Vu's goal with Shine was to make a fitness tracker that women in particular would be willing to show off. Most of the current trackers, Vu says, "look like they were made by Silicon Valley men for Silicon Valley men," using materials such as plastic and rubber that are more suited for utilitarian gadgets than stylish accessories. Worse, most fitness trackers are designed to be worn on the wrist—but Misfit's research found that 30 percent of women say they would never wear a device there, either because they already own a watch or bracelet they like or because they refuse to wear anything there at all.
Wearables, Vu has concluded, "need to be either gorgeous or invisible," and his first product goes a long way toward gorgeous. But it's also worth pondering the ways in which gorgeous isn't enough. For example, as a physical object, Google Glass is arguably quite attractive, with its curved titanium headband and well-proportioned computing assembly poised just above the right eye. Ride the elevator up to Google's "concierge" space in its San Francisco offices, where buyers get fitted with one of five tasteful Glass colors—Shale, Tangerine, Charcoal, Cotton, or Sky—while attractive Glass-wearing staff serve complimentary drinks and hors d'oeuvres, and Google Glass seems almost fashionable. But the evidence suggests that out in the real world, it simply isn't; six months into the experiment, even hardcore tech boosters who once wrote glowingly of Glass were seldom seen actually wearing the thing. (And these are some of the least fashion-conscious people on the planet.) The problem with Google Glass is not that it's bad industrial design. Google, like the rest of Silicon Valley, has learned a great deal about how to make an aesthetically pleasing product. But Glass is meant to be a highly visible addition to someone's body as they walk around in public. That demands more than just a gorgeous product; it demands a fashionable product. And the tricky task of wearables makers will be to understand the distinction.
One can boil that difference down to two basic rules. The first is what we might call—with apologies for the vulgarism—the Bluedouche principle. For those who don't remember the term, it's an epithet hurled circa 2007 at anyone who walked around talking on a Bluetooth earpiece all day. For all their functionality, and for all the attempts to make more stylish models (Jawbone sells beautiful and highly ingenious ones), earpieces have never succeeded in shedding this fundamental perception of lameness. That's because wearing technology sends a pointed social message, which can render even the best industrial design superfluous.
Consider: However gorgeous a Bluetooth earpiece, it fundamentally says that its wearer might need to make or receive a call at any time—and for most people, that's not a cool message to send. It makes the wearer look like they jump at the world's beck and call rather than engaging with it on their own terms. It's hardly coincidental that the demographics in which the earpieces seem to have caught on (salespeople, say, or small-business owners) tend to value that kind of always-on hustling. One imagines that some of them keep their earpieces in even when they've forgotten to charge them up. Similarly, successful wearable devices will need to convey a message to the world that the wearer is happy to send—even if the batteries are dead.
If the first rule presents a challenge for wearables as they gain acceptance, the second will present a problem once they really take off. Call this the Trucker Hat principle, after the low-fashion item that became popular as a hipster accessory in the early aughts and then lost steam precisely for its popularity. Everybody was wearing one, so it wasn't cool anymore. Here's yet another way the design lessons of the smartphone era won't apply in the wearables age. If you walk into a business meeting where everyone has the same phone as you, you're not likely to care very much. But what if you walked into that same meeting and five other people were wearing the exact same eyeglasses as you? Or even the same frame in five different colors—say, white, pink, yellow, blue, and green, to name the five hues that grace Apple's stylish (but not, by this definition, fashionable) iPhone 5c? You might as well all be wearing the same trucker hat.
This urge for individuality is so well known in fashion research that there is even a quantitative measure of it called the consumer need for uniqueness scale. Developed by three marketing professors in 2001, the CNFU test consists of 31 first-person statements—for example, "I collect unusual products as a way of telling people I'm different" or "When a style of clothing I own becomes too commonplace, I usually quit wearing it"—statements the subject ranks on a scale of 1 to 5. In most study groups, there's a wide variation in CNFU scores, but nearly everyone who takes the test will reveal at least some desire for uniqueness.
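To make the scoring concrete, here is a minimal sketch of how a total score on a 31-item Likert instrument like the CNFU scale could be computed. The sum-based scoring and the validation rules here are illustrative assumptions; the published scale has its own items and scoring conventions.

```python
# Hypothetical scoring sketch for a 31-item scale where each statement
# is ranked on a scale of 1 to 5, as described above. Not the published
# CNFU instrument, just an illustration of the arithmetic.

def cnfu_score(ratings):
    """Return a total score from a list of 31 Likert ratings (1-5)."""
    if len(ratings) != 31:
        raise ValueError("the scale has 31 items")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("each item is ranked on a scale of 1 to 5")
    return sum(ratings)

# A respondent who answers every item with a 3 lands exactly mid-scale.
print(cnfu_score([3] * 31))  # 93, out of a possible range of 31-155
```

Under this toy scoring, scores can range from 31 (no expressed need for uniqueness) to 155, which is one way a "wide variation" across a study group could be quantified.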
The tech industry isn't immune to this drive for individuality. Witness Apple's legendary "1984" and "Think Different" ad campaigns, which encouraged customers to distinguish themselves from the brainwashed hordes. But even Apple users didn't expect that their computer would express their personality and style to the same extent their clothing did. Now tech companies will be competing in product categories—wristwatches, glasses, other fashionable accessories—where even the least fashion-conscious consumers demand a great degree of uniqueness and variety.
In these early days, it's the Bluedouche problem—the social message that our wearable tech is sending—that most needs to be overcome. That's why some of the most promising devices today are simple, targeted products that allow for more elegant form factors and a more streamlined sales pitch. Sonny Vu calls this use-case engineering, meaning a tight focus on one particular function, such as notifications. "Right now what we're seeing with smartwatches is that they're like smartphones you can wear on your wrist," Vu says. "I don't think that's the way to go."

Vu said this in September, a week before Samsung unveiled its Galaxy Gear smartwatch, but it was as if he'd been given a sneak peek at the device. Technologically speaking, the Galaxy Gear is impressive, with its bright 1.6-inch touchscreen, 1.9-megapixel camera on the side of the wristband, and Dick Tracy-style speaker underneath it, at the clasp. Thanks to support for S Voice, Samsung's (not quite as capable) answer to Siri, the Gear can reply to text messages, add to a calendar, dial a contact, and more. Testing out the Galaxy Gear is a revelation, because it's a chance to step a little farther into that possible future, a few hops down that plausible timeline in which wearables subsume the functions of our phones.
Based on early reviews, though, this maximalist approach has possibly been a mistake. "Nobody will buy this watch, and nobody should," sniffed former New York Times columnist David Pogue in a representative review. Some of this response has to do with tech limitations—at launch, the Galaxy Gear was compatible with only one phone and had just a handful of apps. But some has to do with the social message such an all-encompassing device sends—namely, "I'm wearing a fully functional computing device on my body." Contrast that with the stand-alone fitness tracker, like the Jawbone Up or the Nike FuelBand. By sending a more constrained and acceptable social message ("I'm sporty") and hewing to Vu's advice of doing one thing well, those devices have forged the first profitable path to the future of wearable computing.
Two recent Kickstarter projects, scheduled to hit the market at the beginning of 2014, take that lesson further. One, the NFC Ring, is the brainchild of John McLear, a web developer in the British city of Bradford who hit on his big idea after his girlfriend "kept shouting at me about leaving the front door unlocked," he says. So he came up with a simple way to unlock a door with no key: Make a ring, the simplest and most unassuming genre of jewelry, but embed a near-field-communication chip into it. That chip lets the ring talk not just to NFC-enabled door locks (available off the shelf) but also to a host of other systems, including touchless payment networks that are already widely used in the UK and Europe. McLear also offered a wealth of different looks—not just various sizes but colors of metal, inlays, and anodized coatings. The Ring beat the Bluedouche problem by offering very specific functionality while surmounting the Trucker Hat problem with an array of unique choices. Fifteen thousand preorders flowed in.
The second intriguing example is the Embrace+, a simple device for delivering notifications from your smartphone. But here's the twist: It doesn't have a screen. Through a smartphone app, users can program the Embrace+—a translucent bracelet with LEDs hidden inside—to flash different colors, depending on the message being conveyed. If a best friend calls, it might flash red; if a post gets liked on Facebook, orange; and so on. Perhaps owing to the current demographics of Kickstarter users, a surprising portion of the project's roughly 4,500 backers are thirtysomething men, says Rudi Beijnen, the Dutch expat in Shenzhen, China, who heads up the project. But he imagines, logically enough, that the device will eventually find a market among teenage girls who can't always look at their phones—in English class, for instance. By eschewing a screen for a suggestive set of lights, it manages to skirt the Bluedouche problem, turning a wrist-mounted wearable into a subtle act of adolescent rebellion.
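At heart, a screenless notifier like this boils down to a lookup from event type to LED color. The sketch below shows that idea; the event names and RGB values are invented here for illustration, since the real bracelet is configured through its companion app.

```python
# Toy model of a notification-to-color mapping for a screenless,
# LED-based bracelet. Event names and colors are hypothetical.

BRACELET_COLORS = {
    "call_best_friend": (255, 0, 0),    # flash red
    "facebook_like":    (255, 165, 0),  # flash orange
    "text_message":     (0, 0, 255),    # flash blue
}

def color_for(event):
    """Return the LED color for an incoming event, or None to stay dark."""
    return BRACELET_COLORS.get(event)

print(color_for("facebook_like"))  # (255, 165, 0)
```

The design's subtlety comes from the fact that only the wearer knows the mapping: unknown events simply leave the bracelet dark.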
Not long ago in New York, I visited the SoHo showroom of a boutique watchmaker called House of Horology. There I met the two principal collaborators behind the Agent smartwatch, another Kickstarter campaign (more than 5,600 backers have pledged in excess of $1 million so far), whose first units are scheduled to ship in early 2014 and who personify the coming merger of tech and fashion. Indeed, the two men embody their respective realms almost comically. The tech brains and prime mover behind the Agent project is Chris Walker, 35, a rotund baby-faced Idahoan packed into a charcoal suit. The style is supplied by Lawrence Leyderman, House of Horology's 31-year-old proprietor, a New York City native wearing a mustardy-tan hoodie and cargo shorts.
Leyderman comes from a watch family (his father owns a watch-repair shop in Midtown), and he grew up admiring boxy "pilot"-style models from such European watch houses as Panerai, Audemars, and Bell & Ross. Their influence is clearly visible in Leyderman's own line of Bedlam watches, which he began selling in 2012 to great acclaim; last spring, New York magazine named his shop the best men's watch store in the city. Walker already knew he wanted to build a fashionable smartwatch when he stumbled across the store last January. He suggested a partnership, and Leyderman began developing his take on the design: a chunky, vaguely military-looking frame cast in metal with an irregular 12-sided face, plus a thick leather or rubber strap with contrast stitching. Today the two are putting the finishing touches on Agent, which they ambitiously imagine as the world's best smartwatch along just about every dimension: function, stability, security, and—best of all—coolness.
So far this kind of marriage between technology and fashion thinking remains vanishingly rare. Among big companies, perhaps the most notable example is Beats by Dre, the ubiquitous headphones that kids all around the country wear on their ears or, tellingly, around their necks—that is, when they're not even listening to them. Though Beats is based in Santa Monica, near the Hollywood-industrial complex that feeds its image, its designer—Robert Brunner, a graying but hip 55-year-old with chunky plastic glasses and new twins at home—works out of an office near the San Francisco waterfront. He explains that Beats keeps its offerings fresh through a canny process of constant product introduction. In addition to the company's standard offerings, it puts out limited-edition seasonal colors every six months or so, often informed by the same high-end color forecasting research that fashion houses use. Beyond that, Beats also regularly rolls out tiny runs of "custom" headphones, cobranded with a pro football team, say, or tagged by a graffiti artist, or even single pairs made to order for a particular celebrity. Those editions take a ton of work to coordinate, and if they do impact the bottom line of such a large company, it's probably for the worse. But these small editions help counter any perception of Beats as an overexposed commodity; in the weird cultural math of fashion, the existence of Kobe Bryant's one-of-a-kind Beats in faux snakeskin makes your cheap old black model somehow less cliché.
"Capturing people's imagination in a way that makes them want to put your stuff on their body is a skill set that not many people have," Brunner says. "It definitely doesn't exist in many large corporations." Brunner rattles off some of the ways that fashion and tech are at odds: the very different sorts of early adopters whose acceptance drives products into the mainstream, and the even more aspirational dynamic ("Who do I want to be like?") that motivates people to buy. "It's very complex," he says. "Tech companies don't get that stuff." Then again, they might now be ready to learn. In just the past year, Apple alone has hired executives from Burberry, Levi Strauss, and Yves Saint Laurent—in the last case, to head up a "special projects" division that many suspect will wind up creating wearable devices.
It's an auspicious moment for wearables, one that's been two decades in the making. Sonny Vu, echoing a sentiment I hear from a few wearables thinkers, says "it feels like 2003 of the mobile era"—that is, right before smartphones came along to invent a new category. A pessimist, pondering the reaction to Google Glass and the Galaxy Gear, might counter that it's more like 1993, when Apple's Newton PDA showed off the capabilities of mobile devices a decade before the public was prepared for it. But unlike with mobile, the barrier to the wearable future isn't technological innovation; it's the unique challenge of creating something that is not just functional or even beautiful but deeply personal. The wearable future will be here someday. The only question is how soon you'll be willing to put it on.
BEYOND GLASS Google isn't the only company trying to put a computer on your face. But some of its competitors have drastically different visions of how we'll use head-mounted devices.
Telepathy One Takahito Iguchi—founder of Telepathy, a recent Japanese transplant to Silicon Valley—thinks Glass tries to do way too much. He's built a slender device called One, which, with a combo camera, earpiece, and tiny projector, enables real-time sharing of audio and video for better face time with friends. It's scheduled to hit the market this year, but no price has been set.
Motorola HC1 Motorola Solutions (the part of Motorola that Google didn't buy) sees heads-up displays as an industrial device: a crucial tool that repair techs or surgeons could use to browse or share images in high-stress situations. The HC1—which retails for $4,399 to $5,499—is rugged and water-resistant, with removable, washable neoprene fittings.
Meta Spaceglasses This startup wants to go beyond Glass' above-the-eye screen to provide true augmented reality, providing a pervasive digital overlay on the entire field of vision. Meta's soon-to-be-released Spaceglasses ($667) are far from stylish, but they'll let you use hand motions to manipulate objects displayed on the transparent eyeglass screens—a pretty neat trick.
"
|
13,532 | 2,023 |
"Watch Marine Biologist Answers Shark Questions From Twitter | Tech Support | WIRED"
|
"https://www.wired.com/video/watch/tech-support-shark-expert-answers-shark-questions-from-twitter"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Marine Biologist Answers Shark Questions From Twitter About Released on 08/25/2022 I'm Marine biologist Amani Webber-Schultz. I study sharks.
Today, I'll be answering your questions from Twitter.
This is Shark Support.
[upbeat drum music] @StopFinningDE asks, What causes sharks to attack humans? This is a really complicated question.
One of the biggest reasons is we are putting ourself into their environment.
Now, when it comes to people, we can put our hands all over the place to observe our environment.
Sharks don't have that.
And a lot of times what ends up happening is they might hit their nose on something to try and figure out what it is.
They also take what we call exploratory bites.
What it appears is that they're actually just trying to figure out what we are.
@CantoAzul_mc asks, What are shark bones made of? Sharks don't actually have any bones.
They are made entirely of cartilage.
@Solarglide asks, How many species of sharks are there? This is a really fun question because we actually just recently passed the 500 species mark, which is insane.
So there's many ways to separate a shark from your standard bony fish.
One of them is their gill slits.
Most sharks have five gill slits, but they can actually have up to seven gill slits.
Your standard fish usually only has one.
Another thing that separates them is they have a very fatty liver, which would be this orange bit here in this anatomical model, and the liver is huge.
And the reason for that is sharks lack a swim bladder.
So a bony fish has a swim bladder, and that's what helps them maintain their buoyancy.
Sharks don't have this.
They have a liver, and it's super oily.
And that's what allows them to maintain their buoyancy in the water.
@great_thunberg asks, Is climate change to blame for a string of shark attacks across America? When you think about climate change, one of the things that we really think about is warming oceans.
One of the things that we're finding is sharks and fish, in general, are actually moving northward, where the water stays cooler for longer months.
But it's not so great for all the people who are going in the water in New Jersey, in New York, in the beaches.
You're getting into their natural habitat.
And there's a couple of different interactions that are going on.
Their prey items seem to be moving more in shore or into shallower waters.
We're also in the shallower waters.
Then there might be an increased chance of experiencing a shark bite.
But in reality, your chance of being bitten by a shark is about one in 3.7 million.
You are more likely to die from a vending machine falling on you, from a coconut falling on your head, or even just a dog in the street.
@WincentWYedlin asks.
Hammerhead sharks have always fascinated me.
Why do they look like this? What is the purpose of having a shovel head and eye position like that? And do they function better than a normal shark? Hammerheads are very fascinating.
We all know that they look completely different.
And one of the distinctive features of them is this hammer part, which is actually called a cephalofoil.
Hammerhead sharks are the most recently diverged species of shark, meaning that they're the most recently evolved.
And we actually don't really know why they evolved this hammer, but we can kind of pick apart maybe some reasons why.
On the bottom, by their mouth, this whole thing is lined with ampullae of Lorenzini, which allows them to detect electrical currents.
So as they're swimming along and they eat rays, and these rays have a tendency to be kind of buried into the dirt, this is constantly feeling for electrical pulses.
So any sort of muscle movement, for example.
And they can really pinpoint where that ray might be without actually having to see them because of this ability to see through this electricity.
And their eyes are all the way on the outside.
Some of our guesses is hammerheads have a much wider range of vision.
They can see almost 360 degrees around their body, which really separates them from other species of shark.
@iamJanKruse asks, What's the most dangerous type of shark? There are three that we consider to be dangerous or more aggressive.
And those are the great white shark, the tiger shark, and the bull shark.
Now, the bull shark is very famous because it can actually travel up rivers into fresher water and is the only species of shark that can move between freshwater and saltwater.
So if we're hanging out in a river and there's that shark, one, it's murky, they can't really see anything.
So they have a tendency to be a lot more curious, and therefore, that appears to be a lot more aggressive.
@Bylan_Durton asks, How do sharks have so many teeth? They don't even drink milk. WTF.
So if you think of people, we only have two sets of teeth that come in.
We have our baby teeth, we lose them, and then we have our adult teeth.
If you look at the inside of a shark jaw, they actually have teeth that line the upper and lower jaw.
As these teeth fall out, new teeth will replace and become active teeth and form to help them eat.
And while that's happening, when space is created, a new tooth will start being formed.
So they essentially end up with these layers and layers of layers or conveyor belt, as many people say, of teeth that are perfectly formed and ready to move in.
@ingnog1115 asks, #sharkafterdark.
What's the most unique thing about tiger sharks? What separates them from other sharks, aside from their markings? Tiger sharks also are known to eat almost anything.
People have found license plates, tires, and even lawn chairs in their stomach.
And one of the things that is super odd about them is that they consistently throw their stomach up.
Their stomach fully comes out of their mouth to just eject whatever is in their stomach that they can't eat.
And this stems from the fact that their normal prey item is turtles, and they can't really eat that turtle shell.
@therealansweri1 asks, What does the skin of a shark feel like? Shark skin feels two different ways.
So if you move your hand from the head all the way down to the tail, they feel smooth.
But if you move your hand from the tail to the head, it's quite rough, and there's a lot of resistance.
And this is because of their scales.
So shark scales are called dermal denticles.
If this very blown up large version of a scale was placed on this shark, it would sit like this, and it would layer and layer and layer.
And this lower part here is the base of it.
This would actually be embedded below the surface of the shark.
So as they're swimming, water is almost perfectly moving over their body, giving them less resistance than what we would experience and actually helping them be as quiet as they are.
@RuaridhHanna asks, How sharks can detect blood at one part per 25 million parts of water? One of the really big myths about sharks is that they can smell a drop of blood from a mile away.
A particle of blood actually has to travel through all of that water and end up in their nose.
They can't just say, Oh, this is the entire stream it passed.
They have to keep having a stream of particles that brings them to you.
They can swim around you and have no idea that your blood is in the water.
@_haleyjacoby asks, So if sharks have to move to be able to breathe, how do they sleep? I wanna address the first part of this question, which is that sharks don't actually have to always move to breathe.
There's kind of two separate breathing mechanisms that sharks employ.
The first is called obligate ram ventilating, and that is where the shark absolutely has to move.
They just hang with their mouth open, essentially, and the water will just flow out over their gills passively.
Think great white sharks, hammerheads, bull sharks.
They're constantly swimming because they have to be.
But then we have a category known as buccal pumpers.
Think nurse sharks sitting on the sea floor or any kind of shark that's just hanging out around the bottom.
They actually have special cheek muscles that allow them to suck water in and flush it out over their gills.
So they do not have to be moving to breathe.
When it comes to sleep, the sharks that we know the most about their resting time is buccal pumpers.
Some sharks will be observed in this resting period with their eyes open, which is really fascinating.
So this whole sleep in sharks is something that we're still researching very heavily.
@misskaylajoi asks, How do you track a shark? You actually have to catch them first.
Once we've caught the shark, we can then do a workup on it and add a tag into it.
And there's a couple of different ways we'll tag sharks.
The first one is just a satellite tag.
This is probably what most people would expect of a tag.
It's a decently sized package that goes right onto the back of that top dorsal fin.
And it actually pings up to satellites.
The next is an acoustic tag.
Now, this is typically implanted into the body of the shark.
Another one is a great way for normal people and non-scientists to really get into shark research.
And that is just a standard ID tag.
This essentially goes into the dorsal fin of the shark.
You have a little device that lets you do this.
You stick it into the dorsal fin.
So on this tag, there's a couple of different pieces of information.
The first is just a standard ID number.
We would read this number off, and then we could go into our Excel sheet and say, Oh, we caught this shark two weeks ago, and it was in this place.
There's a lot of different reasons that we tag sharks, but the overarching question is, what are they doing and where are they going? Mostly because we just wanna know in what areas of the world they're living in.
What are they eating? Where are their nurseries? So there's all these overarching layers of why tracking is important.
@RogueSharks asks, What are baby sharks called? They're called pups, and it's amazing.
@lousyscouty asks, Siri, do sharks lay eggs? This is a really interesting question.
Sharks actually have three ways of reproducing.
The first is they'll just lay eggs.
You can swim around the sea floor, and you'll see things like a mermaid purse, which is basically just an egg case that sits on the sea floor while the shark develops inside.
And when they're ready, they kind of punch out of that egg and they're ready to go.
They get the term purse because they actually kind of look like a purse, depending on the egg that you're looking at.
The second is they have eggs inside the mom, and in each egg, there's one shark.
And that shark is slowly developing over time with a yolk sack attached to it.
And once they hatch inside the mom from their egg, the mom will then give birth to the live pups, which is really fascinating.
And the last one is just live birth.
So similar to how we do, they just give birth to live pups.
And a really interesting example of this is sand tiger sharks.
Sand tiger sharks will have a whole bunch of eggs inside their body.
Some of those eggs will become fertilized.
And as they begin to grow, they actually start eating their siblings while they're in the womb.
So there's kind of this crazy Hunger Games thing going on inside.
And instead of giving birth to a whole bunch of different pups all at once and hoping that they'll be able to live, they actually give birth to just two pups that are already very strong and already know how to behave like a shark.
@DiveMagazine asks, Why do sharks glow in the dark? Well, some sharks have this really cool ability to bioluminesce, which is producing light from organs within their body.
One reason, in the case of the cookie cutter shark, is to actually attract larger prey items that may want to eat them.
So they can whip around and take that nice cookie cutter bite out of them.
Another is countershading.
Sharks that can bioluminesce in specific ways produce just enough light to blend in with the light coming from above, so a predator below them doesn't see their silhouette.
They're able to avoid being eaten because they have this bioluminescent ability.
@BlakeneyCFA asks, Shark week trivia. What is the fastest shark? The fastest shark is the shortfin mako, and it can swim up to 45 miles per hour, which is very different from most of the other species that we know of.
@hurtolympics asks, What are sharks afraid of? Those [beep] ain't God.
Whether or not sharks actually experience the emotion of fear or being afraid of something is hard to say.
When you're in the water with sharks, a lot of them have a tendency to be skittish.
Some of them are really curious.
You can think sharks that people dive with, nurse sharks, oceanic white tips, but many sharks actually just avoid people altogether because we're not a natural item in their habitat.
And they're generally pretty cautious, so they're not really gonna come up to you and be curious about you.
They'd rather just steer clear of something that they're not sure what it is.
@Kelello_ asks, For how long do Greenland sharks live? Greenland sharks inhabit very cold water, and they have a very slow metabolism.
Based on our research, we estimate that they live to be 250 to 500 years old.
@teamfinfree tweets, Question. What's your favorite shark? Go. #FinFree.
My favorite shark is the bonnethead.
This is the smallest species in the hammerhead family.
So it is actually related to the great hammerhead.
And they also have this really insane ability.
Dr. Samantha Lee discovered that this is actually the only species of shark that is omnivorous, meaning that they can actually digest grass, which is not a thing that we'd expect from sharks.
We generally think of them as being carnivorous only.
@hunter_jordan asks, Does the Megalodon really exist? It did exist. It's extinct now.
Megalodon was a really big shark, and they also had these really big teeth.
So this is a replica.
A lot of people have actual Megalodon teeth that they find when they go diving, and tooth size actually lets us figure out how big the animal might have been.
They lived in kind of the warmer ocean regions.
And our oceans have a tendency to be pretty cold now.
When you consider all this together, there's really not any territory in the ocean that the Megalodon could thrive in.
There's no nursery grounds that we know of, and their prey items kind of just disappeared and changed to make it harder for them to actually be able to eat.
@DarkerDazai asks, How to take care of sharks? Google search.
In captivity, usually what aquarists will do is make sure that the temperature of the water is the optimal temperature for them to live in.
And they'll make sure that they're feeding them as often as they need to.
Sharks don't necessarily make good pets, and they're also not always great in aquariums.
So there's not a whole lot of great whites in aquariums, for example.
But you do see things like sand tiger sharks, whale sharks, nurse sharks, which are sharks that are a lot easier to take care of and aren't as finicky in the environment that they need to be in.
@NatGeo asks, How long have sharks been swimming in the ocean? Sharks have been around for around 450 million years.
@mattnj81 asks, What is the most endangered shark species? Now, it's really hard to pinpoint exactly what species is the most endangered.
If a shark is listed as critically endangered, that means we're really unsure whether they still exist, or we've only seen a couple of them.
A really good example of this is the Ganges shark, which is a river shark that we haven't seen in a really long time.
@mjgx_ asks, 'Jaws' wasn't a real shark, was it? Just based on a real story or something? So Jaws was not based on a true story.
The shark you see in the film is a robotic shark, and his name was Bruce.
But while they're hunting around for Bruce and Bruce is supposedly eating all these people, they do pull up a shark that they put up on a crane.
And this is a tiger shark.
You know that this shark is in fact real because it has its claspers attached.
Claspers are the male sexual organ on sharks that are just hanging off the back of the shark in all their glory.
So next time you watch Jaws, be on a lookout for that.
@JeffOllerton asks, Did sharks go through a mass extinction 19 million years ago? We think that at some point something changed that resulted in a mass extinction of all these sharks.
And the way that we know that is because in the fossil record, there's a sudden drop in the number of shark denticles, the tiny tooth-like scales that cover shark skin, from before 19 million years ago to after.
That lets us know that there was this big extinction.
We're not entirely sure what it was, but it seems like the population's really dropped.
@stackeddeck3 asks, Why do sharks migrate to different places? This is a really good question.
And we're really still trying to figure out a whole answer to it.
Some sharks have specific places that they go to mate.
There are specific areas called nurseries where young sharks will live until they're big enough to kind of venture out into the open ocean.
Another reason is to follow prey.
We, as people, will fly to other places all around the world just because we're curious about traveling.
Sharks kind of have a tendency to travel too, and we don't really know why in some cases, but we do in others.
@abirupan asks, How do sharks see, smell, and hear? So sharks have eyes just like us.
So they do actually use their vision to observe their environment.
They do not have 20/20 vision.
Neither do I, as you can see.
To aid with that, they have all of their other senses, like hearing, and smell, and the ability to sense pressure changes, and the ability to detect electricity, to create this whole overarching picture of what their environment looks like in a 360 view.
You'll see all these spots all over the bottom of their head and sometimes even on the top.
And this is an organ that essentially can detect different electrical pulses.
So sharks have a lot of different senses that allow them to sample their environment at a way higher level than we are able to.
@mcmsharksxx asks, How are #sharks important? And why do they matter in regards to our ocean? We think of sharks as being apex predators.
So they're at the top of that food chain.
And in an ecosystem, there's constant control going up and down.
If you think of grass growing, a fish comes and eats that grass, then there's a carnivore that eats that fish, and then there's a bigger carnivore that eats that one.
Let's say we remove the fish that eats the grass; the predator that was eating that fish might not have any prey left to eat.
And if that predator is gone, then sharks don't have anything to eat.
When sharks are removed from an ecosystem, there's kind of all these crazy environmental changes that can happen.
And some of that ends up being less control on things that are below them.
@_zixd asks, Sharks have eyelids? Do sharks blink? Sharks don't blink in the sense that we do.
They do have something called a nictitating membrane, which is an eyelid that will fold over some sharks' eyes when a predator gets close.
And it's just a protection mechanism.
@EOSGreenBay asks, #Tuesday trivia.
What is the largest species of shark currently living in the ocean? The largest species of shark, actually, is the whale shark, which can get up to 18 meters.
People often will say that it gets up to the length of a school bus, which is massive.
They actually eat plankton.
So they don't eat meat in the same way that we expect other sharks to, which is really fascinating.
So those are all the questions for today.
I really loved answering them, and I hope that you learned something new.
Thanks for watching Shark Support.
Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,533 | 2,023 |
"11 Tips to Take Your ChatGPT Prompts to the Next Level | WIRED"
|
"https://www.wired.com/story/11-tips-better-chatgpt-prompts"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter David Nield Business 11 Tips to Take Your ChatGPT Prompts to the Next Level Photograph: PM Images/Getty Images Save this story Save Save this story Save ChatGPT and tools like it have made AI available to the masses. We can now get all sorts of responses back on almost any topic imaginable. These bots can come up with sonnets, code, philosophy, and more.
However, while you can just type anything you like into ChatGPT and get it to understand you, there are ways of getting more interesting and useful results out of the bot. This “prompt engineering” is becoming a specialized skill of its own.
Sometimes all it takes is the addition of a few more words or an extra line of instruction and you can get ChatGPT responses that are a level above what everyone else is seeing—and we've included several examples below.
For the purposes of this guide, we tested these prompts with GPT-4, the latest version of ChatGPT at the time of writing, though it's only available to some users. However, they should work fine with older versions of ChatGPT too.
ChatGPT can give you responses in the form of a table if you ask. This is particularly helpful for getting information or creative ideas. For example, you could tabulate meal ideas and ingredients, or game ideas and equipment, or the days of the week and how they're said in a few different languages.
Using follow-up prompts and natural language, you can have ChatGPT make changes to the tables it's drawn and even produce them in a standard format that can be understood by another program (such as Microsoft Excel).
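For instance, you can ask for the table as plain CSV and parse the reply with Python's standard library. A minimal sketch, assuming illustrative prompt wording and a canned stand-in reply (not real model output):

```python
import csv
import io

def make_table_prompt(topic: str, columns: list[str]) -> str:
    """Build a prompt that asks for machine-readable CSV output."""
    header = ", ".join(columns)
    return (
        f"List five {topic} as CSV with the columns {header}. "
        "Reply with the CSV only, including a header row, no commentary."
    )

def parse_csv_reply(reply: str) -> list[dict]:
    """Parse a CSV reply into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(reply.strip())))

prompt = make_table_prompt("weeknight meals", ["meal", "main ingredient"])

# Canned stand-in for a real ChatGPT reply:
reply = "meal,main ingredient\nStir fry,tofu\nPasta bake,penne"
rows = parse_csv_reply(reply)
```

Asking for "CSV only, no commentary" matters: the model often wraps tables in chat filler that would break the parser.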
With some careful prompting, you can get ChatGPT out of its rather dull, matter-of-fact, default tone and into something much more interesting—such as the style of your favorite author, perhaps.
You could go for the searing simplicity of an Ernest Hemingway or Raymond Carver story, for instance, or the lyrical rhythm of a Shakespearean play, or the density of a Dickens novel. The end results don't come close to the genius of the actual authors themselves, but it's another way of being more creative with the output you get.
ChatGPT can produce output in all kinds of styles.
ChatGPT can really impress when it's given restrictions to work inside, so don't be shy when it comes to telling the bot to limit its responses to a certain number of words or a certain number of paragraphs.
It could be everything from condensing the information in four paragraphs down into one, or even asking for answers with words of seven characters or fewer (just to keep it simple). If ChatGPT doesn't follow your instructions properly, you can correct it, and it'll try again.
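That correct-and-retry loop is easy to automate: attach the limit to the prompt, then check the reply programmatically before deciding whether to re-prompt. A minimal Python sketch; the helper names and the sample reply are made up for illustration:

```python
def constraint_prompt(question: str, max_words: int) -> str:
    """Attach an explicit length restriction to a question."""
    return f"{question} Answer in {max_words} words or fewer."

def within_word_limit(reply: str, max_words: int) -> bool:
    """Check whether a reply actually respects the word limit."""
    return len(reply.split()) <= max_words

prompt = constraint_prompt("Why is the sky blue?", 25)

# Canned stand-in for a real reply; if the check fails, re-prompt:
reply = "Sunlight scatters off air molecules, and blue light scatters the most."
ok = within_word_limit(reply, 25)
```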
Another way of tweaking the way that ChatGPT responds to you is to tell it who its audience is. You might have seen the videos in which complex subjects are explained to people with different levels of understanding, and this works in a similar way.
For example, you can tell ChatGPT that you are speaking to a bunch of 10-year-olds or to an audience of business entrepreneurs and it will respond accordingly. It works well for generating multiple outputs along the same theme.
Specify the audience to change the way that ChatGPT responds.
ChatGPT is a very capable prompt engineer itself. If you ask it to come up with creative and effective inputs for artificial intelligence engines such as Dall-E and Midjourney, you'll get text you can then input into other AI tools you're playing around with. You're even able to ask for tips with prompts for ChatGPT itself.
When it comes to generating prompts, the more detailed and specific you are about what you're looking for the better: You can get the chatbot to extend and add more detail to your sentences, you can get it to role-play as a prompt generator for a specific AI tool, and you can tell it to refine its answers as you add more and more information.
While ChatGPT is based around text, you can get it to produce pictures of a sort by asking for ASCII art. That's the art made up of characters and symbols rather than colors. It won't win you any prizes, but it's pretty fun to play around with.
The usual ChatGPT rules apply, in that the more specific you are the better, and you can get the bot to add new elements and take elements away as you go. Remember the limitations of the ASCII art format though—this isn't a full-blown image editor.
You don't have to do all the typing yourself when it comes to ChatGPT. Copy and paste is your friend, and there's no problem with pasting in text from other sources. While the input limit tops out at around 4,000 words, you can easily split the text you're sending the bot into several sections and get it to remember what you've previously said.
Perhaps one of the best ways of using this approach is to get ChatGPT to simplify text that you don't understand—the explanation of a difficult scientific concept, for instance. You can also get it to translate text into different languages, write it in a more engaging or fluid style, and so on.
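The splitting itself is easy to script. A minimal sketch in Python, with hypothetical helper names; the 3,500-word default just leaves a little headroom under the rough input limit:

```python
def chunk_text(text: str, max_words: int = 3500) -> list[str]:
    """Split a long document into word-limited chunks."""
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

def chunk_prompts(text: str, max_words: int = 3500) -> list[str]:
    """Wrap each chunk so the bot knows more parts are coming."""
    chunks = chunk_text(text, max_words)
    total = len(chunks)
    return [
        f"Part {i + 1} of {total}. Just remember it for now; "
        f"my question comes after the final part.\n\n{chunk}"
        for i, chunk in enumerate(chunks)
    ]
```

You would then paste each wrapped part into the chat in order and ask your actual question after the last one.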
Pull in text from elsewhere on the web to use with ChatGPT.
Another way to improve the responses you get from ChatGPT is to give it some data to work with before you ask your question. For instance, you could give it a list of book summaries together with their genre, then ask it to apply the correct genre label to a new summary. Another option would be to tell ChatGPT about activities you enjoy and then get a new suggestion.
There's no magic combination of words you have to use here. Just use natural language as always, and ChatGPT will understand what you're getting at. Specify that you're providing examples at the start of your prompt, then tell the bot that you want a response with those examples in mind.
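Assembling those labeled examples into a single prompt can be scripted too. A minimal sketch using the book-summary idea; the function name and prompt wording are illustrative, not any official format:

```python
def few_shot_prompt(examples: list[tuple[str, str]], new_summary: str) -> str:
    """Assemble labeled examples plus a new case into one prompt."""
    lines = ["Here are some labeled examples, then a new case to label the same way."]
    for summary, genre in examples:
        lines.append(f"Summary: {summary}\nGenre: {genre}")
    # End with an unlabeled case so the model completes the pattern.
    lines.append(f"Summary: {new_summary}\nGenre:")
    return "\n\n".join(lines)

examples = [
    ("A detective hunts a killer through 1940s Los Angeles.", "Noir"),
    ("Colonists terraform Mars and fight over its future.", "Science fiction"),
]
prompt = few_shot_prompt(examples, "A dragon guards a cursed mountain hoard.")
```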
In the same way that ChatGPT can mimic the style of certain authors that it knows about, it can also play a role: a frustrated salesman, an excitable teenager (you'll most likely get a lot of emojis and abbreviations back), or the iconic Western star John Wayne.
The types of roles you can play around with are almost endless. These prompts might not score highly in terms of practical applications, but they're definitely a useful insight into the potential of these AI chatbots.
ChatGPT can answer in any style you like, if you ask.
Your answers can be seriously improved if you give ChatGPT some ingredients to work with before asking for a response. They could be literal ingredients—suggest a dish from what's left in the fridge—or they could be anything else.
So don't just ask for a murder mystery scenario. Also list out the characters who are going to appear. Don't just ask for ideas of where to go in a city; specify the city you're going to, the types of places you want to see, and the people you'll have with you.
You've no doubt noticed how binary online arguments have become in recent years, so get the help of ChatGPT to add some gray in between the black and white. It's able to argue both sides of an argument if you ask it to, laying out both pros and cons.
From politics and philosophy to sports and the arts, ChatGPT is able to sit on the fence quite impressively—not in a vague way, but in a way that can help you understand issues from multiple perspectives.
"
|
13,534 | 2,023 |
"Stack Overflow Will Charge AI Giants for Training Data | WIRED"
|
"https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data"
|
"Paresh Dave Business Stack Overflow Will Charge AI Giants for Training Data Photograph: Maryna Terletska/Getty Images Developing the AI systems behind tools such as ChatGPT and the image generator Dall-E costs hundreds of millions of dollars —and it’s about to get more expensive.
OpenAI, Google, and other companies building large-scale AI projects have traditionally paid nothing for much of their training data, scraping it from the web. But Stack Overflow, a popular internet forum for computer programming help , plans to begin charging large AI developers as soon as the middle of this year for access to the 50 million questions and answers on its service, CEO Prashanth Chandrasekar says. The site has more than 20 million registered users.
Stack Overflow’s decision to seek compensation from companies tapping its data, part of a broader generative AI strategy, has not been previously reported. It follows an announcement by Reddit this week that it will begin charging some AI developers to access its own content starting in June.
The two community sites are not alone in wanting a share. The News/Media Alliance, a US trade group of publishers, including Condé Nast, which owns WIRED, today unveiled principles calling on generative AI developers to negotiate any use of their data for training and other purposes and respect their right to fair compensation.
Meta, Google, and OpenAI—maker of ChatGPT —all have developed AI systems using data sets that culled content from thousands of online sources, including Stack Overflow and Reddit, according to outside analyses and their own disclosures.
Feeding text from online banter or expert discussions about programming into machine learning algorithms known as large language models, or LLMs, can help AI text generators or chatbots be more fluent and knowledgeable. Using LLMs to generate programming code is viewed as one of the technology's biggest opportunities, with Microsoft charging as much as $19 a month per person for its code generator GitHub Copilot.
“Community platforms that fuel LLMs absolutely should be compensated for their contributions so that companies like us can reinvest back into our communities to continue to make them thrive,” Stack Overflow’s Chandrasekar says. “We're very supportive of Reddit’s approach.” Chandrasekar described the potential additional revenue as vital to ensuring Stack Overflow can keep attracting users and maintaining high-quality information. He argues that will also help future chatbots, which need “to be trained on something that's progressing knowledge forward. They need new knowledge to be created.” But fencing off valuable data also could deter some AI training and slow improvement of LLMs, which are a threat to any service that people turn to for information and conversation. Chandrasekar says proper licensing will only help accelerate development of high-quality LLMs.
Every AI developer is seeking to bring down the huge costs of developing large-scale AI systems, which require enormous amounts of expensive computing power.
Having to pay for data they once grabbed for free could extend the already unclear timelines to turning a profit on their emerging technologies. OpenAI did not respond to a request for comment, and Meta and Google did not have immediate comment.
Large language models can generate strings of text based on word patterns learned from the web pages, books, and other bodies of text in their training data. Besides ChatGPT, the programs make up the guts of search chatbots such as Microsoft Bing chat and Google’s Bard , and they underlie a growing number of applications that produce professional and creative copy in a flash. Their counterparts that generate AI-composed illustrations and videos draw on patterns from image datasets such as photos gathered from Pinterest and Flickr.
Often, data sets used in AI development are built through unofficial means such as dispatching software that scrapes content from websites. In the US that is typically considered legal, though copyright issues and websites’ terms of use against the practice have left it in dispute.
A few websites such as Reddit and Stack Overflow have been more inviting. They offer downloadable “data dumps” or real-time data portals, known as APIs, that help software access their content. In Stack Overflow’s case, LLM developers are getting their hands on data through a mix of dumps, APIs, and scraping, Chandrasekar says, all of which today can be done for free.
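To make the API route above concrete, here is a minimal sketch of building a request against the public Stack Exchange API. The `/2.3/questions` endpoint and the `site`, `pagesize`, `sort`, and `order` parameters come from Stack Exchange's published API; the specific query values are illustrative only.

```python
from urllib.parse import urlencode

def stackexchange_questions_url(site="stackoverflow", pagesize=10, sort="votes"):
    """Build a URL for the public Stack Exchange API's questions endpoint."""
    base = "https://api.stackexchange.com/2.3/questions"
    params = {"site": site, "pagesize": pagesize, "sort": sort, "order": "desc"}
    return base + "?" + urlencode(params)

url = stackexchange_questions_url()
print(url)
```

Fetching this URL returns a JSON page of questions; bulk collection for model training is exactly the kind of access Stack Overflow now plans to charge for.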
But Chandrasekar says that LLM developers are violating Stack Overflow’s terms of service. Users own the content they post on Stack Overflow, as outlined in its TOS , but it all falls under a Creative Commons license that requires anyone later using the data to mention where it came from. When AI companies sell their models to customers, they “are unable to attribute each and every one of the community members whose questions and answers were used to train the model, thereby breaching the Creative Commons license,” Chandrasekar says.
Neither Stack Overflow nor Reddit has released pricing information. “We're working on that as we speak,” Reddit spokesperson Tim Rathschmidt says, “and will share more with partners in the coming weeks.” Stack Overflow will study Reddit’s strategy and consult with its own potential customers, some of whom have already reached out about data access, Chandrasekar says.
A potential roadmap to pricing could come from Elon Musk, who this month hiked prices for access to Twitter data. They start at $42,000 per month for access to 50 million tweets.
About three times the volume of tweets had been previously available for free. In a tweet this week , Musk accused Microsoft, a major AI developer and close partner of OpenAI, of training algorithms “illegally using Twitter data.” Without elaboration, he added, “Lawsuit time.” Both Stack Overflow and Reddit will continue to license data for free to some people and companies. Chandrasekar says Stack Overflow wants remuneration only from companies developing LLMs for big, commercial purposes. “When people start charging for products that are built on community-built sites like ours, that's where it's not fair use,” he says.
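For scale, the Twitter enterprise tier quoted above works out to a fraction of a cent per item — a quick back-of-the-envelope check using only the numbers in this article:

```python
# Twitter's quoted enterprise tier: $42,000 per month for 50 million tweets.
monthly_price = 42_000
tweets_per_month = 50_000_000

price_per_tweet = monthly_price / tweets_per_month
print(f"${price_per_tweet:.5f} per tweet")  # prints "$0.00084 per tweet"
```

Tiny per-item prices like this only become meaningful revenue at the web-scale volumes LLM training requires.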
Reddit CEO Steve Huffman told The New York Times this week that he didn’t want to give a freebie to the world’s largest companies. “Crawling Reddit, generating value and not returning any of that value to our users is something we have a problem with,” he said.
As expectations surge that ChatGPT-style bots and other products built on LLMs will reap huge profits, other companies with stocks of content needed to train machine learning algorithms also want to be paid. Some news publishers have been wary of how Microsoft’s new Bing chatbot handles their content.
But so far only a few public deals over access to training data have been announced, such as photo bank Shutterstock agreeing to license content to OpenAI. Its rival Getty Images is suing Stability AI , an OpenAI competitor, for not seeking a license before allegedly using over 12 million photos. The AI startup’s response is due in US federal court next week.
AI developers are not under all-out pressure to pay yet. Some companies with large volumes of academic text or casual conversations say they have no plans to start charging for their APIs or similar data portals. PLOS, a publisher of scientific research whose content has been leveraged in AI training, is “not likely” to change its fairly unrestrictive terms of use, spokesperson David Knutson says. Online community platform Discord has no plans to modify its API offerings, which are free and provided under terms that forbid AI training , says spokesperson Swaleha Carlson.
At Stack Overflow, charging for its API is just one part of a broader AI strategy that the company expects to unveil in a few months.
About 10 percent of Stack Overflow's nearly 600 staff are focused on the initiative, which includes developing its own generative AI services. For example, an assistant function could help guide people as they compose questions to post.
To date, the Stack Overflow community’s primary action has been to ban users from posting AI-generated responses. Chandrasekar says a spike in inaccurate answers following the release of ChatGPT had created a challenge for the company’s several hundred or so moderators.
Launched in 2008 , Stack Overflow generates about equal parts of its revenue from selling ads and licensing Q&A software as a subscription to more than 1,200 organizations for internal use. The company’s sales grew 33 percent to $45 million during the six months ended September 30, 2022, the most recent data available, compared with the year-earlier period. About 200,000 new users registered on average each month during that span.
Those users could reasonably clamor for their own compensation if Stack Overflow succeeds in licensing to AI makers the questions and answers they write for free. Chandrasekar says, “There's absolutely thought going into how best to make sure that our community members and the people that make the site what it is today—how we are going to take care of them in the context of what's happening here.”
"
|
13,535 | 2,023 |
"Who Will You Be After ChatGPT Takes Your Job? | WIRED"
|
"https://www.wired.com/story/status-work-generative-artificial-intelligence"
|
"Stephen Thomas Ideas Who Will You Be After ChatGPT Takes Your Job? Photo-illustration: WIRED Staff; Getty Images A few months ago, I was waiting for the subway with a friend, a professional editor, who had never used a large language model (LLM). Standing on the platform, she told me about an article she’d been working on. ChatGPT had come out six weeks earlier, and I input her summary into it on my phone and showed her the result. I’d been following OpenAI’s transformer-driven models since 2019 and had forgotten the effect they can have on first exposure. My friend couldn’t take her eyes off the little gray box as the article came out, line by line. It took me a minute to register the shock on her face. On the train, she said, only half-joking, “I’m going to be unemployed by the end of the year.” As wave after wave of new AI capabilities have hit over the past few months, I’ve been thinking of my friend and her place in the world that’s unfolding. When GPT-4 came out in March, OpenAI’s press release included a chart of its scores on various standardized tests. The much-hyped new model scored above 80 percent on 11 AP and SAT exams, 77 percent on “Advanced Sommelier (theory knowledge),” and—most buzzed-about on Twitter—90 percent on the Uniform Bar Exam, the national test to become a lawyer. OpenAI’s previous model, GPT-3.5 (which powered ChatGPT when it debuted), had already passed the US Medical Licensing Examination, earning a grade that, were it human, would qualify it to become a doctor.
Results like these seem to validate a 2019 paper by then Stanford PhD candidate Michael Webb. Though entirely speculative at the time, it upended received wisdom about who would win and who would lose as a result of AI-driven automation. Before Webb’s report, studies by Oxford and McKinsey had predicted that lower-wage, lower-skill jobs would be hardest hit, as indeed they have been throughout the entire history of automation going back to the steam-powered weaving loom.
The LLM era has changed all that. Now, the conventional wisdom—replicated and extended in a recent paper by researchers at OpenAI—is that higher-paid jobs and creative jobs (including mathematicians, tax preparers, quants, writers, and web designers, to name a few) are the most highly exposed to automation (100 percent exposure for the professions just listed). This has an interesting side effect, since, as Webb’s study spelled out, white-collar work in the US is disproportionately done by the most privileged: men, white people and Asian Americans, people in their prime working years (25-54), and people living in rich coastal cities. Many in these demographics have had it easy for a long time, but it’s looking like the AI revolution will be a bumpy ride for them.
I spoke with four economists for this piece, and, though they offered good reasons to think AI won’t “take all the jobs”—indeed, as in previous waves of automation, the economy is likely to grow—none denied that some jobs will be lost. They didn’t know exactly how many, and neither do I. But what I do know is that we've never had a wave of automation in which white-collar workers are uniquely vulnerable, and we should therefore expect this one to play out differently.
The crux of the difference lies in the relationship that blue-collar and white-collar workers have with work. According to one study , white-collar workers tend to feel that they “express” their “full potential” more at work than blue-collar workers do; they also experience higher levels of “developing inner self” at work. According to another study , white-collar workers valued “interesting work (nature of the work), achievement, and appreciation of the work done (recognition)”—in contrast to blue-collar workers, whose motivators were “receiving salary, working condition, peer relations, and job security.” (And even more than other groups, men derive their self-worth from achievement and feeling useful. A dramatic illustration of this was a study of language used by suicidal men, which showed that being considered useful was core to men’s well-being, and its absence was devastating. Being rendered useless by a bot will have disproportionately bad emotional effects for a man.) “Nature of the work” is one way of saying that white-collar workers care about the tasks we do. Being “recognized” and “appreciated” for “achievement” in these tasks is important to us; it is how we “express” our “full potential.” In other words, large pieces of our emotional lives and social selves are hooked into the tasks we do for work. What happens when AI does those tasks better? At the furthest edge of white-collar work is a species of task where competency is so admired that it becomes a sport or an art, and competency is rewarded by society with status and esteem, over and above financial compensation. This is the category of games of logic and art.
Our shock at the new wave of AI models like ChatGPT and Midjourney comes from their proficiency at the artier, more creative tasks like writing and illustration. But the more strictly logical sports of chess and Go have long since been conquered by previous waves of AI, and so it may be instructive for the left-brainers to look in on how the right-brainers have handled usurpation, emotionally and practically.
Go is generally considered humanity’s most complex game. In 2016, DeepMind’s AlphaGo beat two of its highest-ranked players. Lee Sedol, a Korean prodigy and the second-best player in the world at the time, took it the hardest. He became depressed, and a couple years after the match he retired from the game, citing AlphaGo. “Even if I become the number one, there is an entity that cannot be defeated,” he said.
Fan Hui, the European champion but a rung below the world leaders, took it better. He was initially shocked and humbled by his defeat, and indeed tried to forget the game altogether. “I want to try to forget Go, but it’s impossible, because all the things I’ve learned in my life is with Go,” he said in AlphaGo , the 2017 documentary released by DeepMind about Lee and Fan. The game, he said, is like looking in a mirror. “I see Go; I also see myself. For me Go is real life.” Later, however, he joined DeepMind—the architect of his defeat—and helped improve its model’s capabilities. Basically, he couldn’t beat ’em, so he joined ’em.
These differences seem illuminating, and it’s hard not to look for lessons. I can’t help but think that Lee’s higher ranking actually made him more vulnerable to an existential crisis, because he had more to lose. Fan was disappointed, but Lee lost very publicly in front of millions of Korean viewers. It’s hard to bounce back from that, and perhaps harder than it was for Fan to pivot.
I caught up with my editor friend again recently—three months after her first exposure to ChatGPT. She seemed more concerned than ever. “I just think it’s going to be a hard fall,” she said. She felt the younger, more technically adept nipping at her heels and was worried she hadn’t been brought up to be resilient enough for this kind of challenge. I tried to offer hope in the form of a story that Gregory Clark, a professor emeritus at UC Davis, told me about aristocratic land owners during the Industrial Revolution. Tenant farmers abandoning the country to follow better wages into factories in the city caused the value of the aristocrats’ farmland to drop, causing massive losses for the aristocracy. The smart aristocrats, though, said Clark—the ones who could adapt—simply followed the farmers into the cities and became urban landlords.
My friend was only partly sold. What was the equivalent now, for her? That’s when I remembered a third Go champion who played AlphaGo but wasn’t included in the documentary. This is Ke Jie. In 2017, months after the Lee match, he was 19 years old and the best player in the world, having beaten Lee in three consecutive championships. Like Fan and Lee, Ke also lost to AlphaGo, after which AlphaGo had no human left to beat.
But Ke’s reaction is, I think, the most interesting and also the most hopeful. Pre-AlphaGo, Ke, a teenager of world-class abilities, was also a world-class brat, famous for bucking Go’s culture of humility. When Ke challenged Lee to a match, for example, he posted a video of himself as a boxer beating up Lee and ostentatiously bragged and baited his opponents.
In the aftermath of Ke’s defeat by DeepMind’s AI, however, he underwent a remarkable change. On TV appearances since then, he has affected a stance of irony, playfulness, and humility, becoming a much loved crowd-pleaser along the way. Again, looking for lessons, I can’t help but notice Ke’s extreme youth—15 years younger than Lee, 16 years younger than Fan—and wonder if he had less invested in a particular way of valuing and understanding himself. Perhaps he was therefore better able to change how he related to the world on a fundamental level.
Important to this story, too, is that, unlike Fan, whose pivot to temporary AI research consultant could be seen as a demotion from European Go champion, Ke’s pivot allowed him to remain at the top of the game.
The pivot from “best player in the world at humanity’s most logically complex game” to “comedian” is pretty dramatic, though, and I think the magnitude of that flip reflects the profundity of the changes coming down the pipe. And if Ke Jie has to do that, what does that mean for the rest of us? My hunch is that economic concerns will dominate in the coming years, but assuming that’s solved, where will status reemerge if the core competencies of art, design, science, law, medicine, and engineering are swallowed by GPT-7? Webb himself thought the human niche would become something closer to judgment, “where the point is that it’s a human making the decision.” For a judge or a politician or a newspaper editor, for example, “we know we could get the AI to do it for us—we could ask it to tell us what to do—but we’d rather have a human do it.” Again, the vanguard of Go and chess—“solved” by AI two decades earlier—offer us tea leaves to divine if we choose to read them. In these worlds, Ke Jie is not the only high-status genius to pivot as he did; Magnus Carlsen, the world’s best chess player, has in recent years become known for “interesting” gameplay in response to AI creating an indisputable hierarchy of opening moves. Even more heretical, players at much lower skill levels are beginning to overtake the old masters in popularity: The personable and attractive Botez sisters are the second-most-streamed chess players while having Elo ratings nowhere near the world’s best. And Zhan Ying, a Chinese Go player at a skill level considerably below Ke Jie’s, recently dethroned him, briefly, as the most-watched Go player in the world.
If this trend is any indication, we should expect to see softer skills—humor, presence, personality—become the game. In this light, we may already be halfway there without quite realizing it: Perhaps the future belongs to the influencer.
"
|
13,536 | 2,019 |
"Microsoft CTO: Understanding AI is part of being an informed citizen in the 21st century | VentureBeat"
|
"https://venturebeat.com/ai/microsoft-cto-understanding-ai-is-part-of-being-an-informed-citizen-in-the-21st-century"
|
"Microsoft CTO: Understanding AI is part of being an informed citizen in the 21st century Microsoft CTO Kevin Scott
Microsoft CTO Kevin Scott believes understanding AI in the future will help people become better citizens.
“I think to be a well-informed citizen in the 21st century, you need to know a little bit about this stuff [AI] because you want to be able to participate in the debates. You don’t want to be someone to whom AI is sort of this thing that happens to you. You want to be an active agent in the whole ecosystem,” he said.
In an interview with VentureBeat in San Francisco this week, Scott shared his thoughts on the future of AI, including facial recognition software and manufacturing automation. He also detailed why he’s “cautiously optimistic” about the ways people will devise to use intelligent machines and why he thinks Cortana doesn’t need a smart speaker to succeed.
However vital staying informed about the evolution of AI may be to the average person in the century ahead, Scott concedes it’s not an easy thing to do.
“It’s challenging, because even if you’re a person with significant technical training, even if you’re an AI practitioner, it’s sort of challenging to keep up with everything that’s going on. The landscape is evolving really rapidly,” he said.
Technologists who make and use AI today also have a duty to help people better understand what’s possible and make their work accessible, so Scott is writing a book about how AI can be a force for good for the economy in rural America.
In recent years, AI has proliferated across health care and homes, as well as governments and businesses, and its continued expansion could redefine work roles for everyone. News and public education initiatives to help citizens understand AI are important, and technologists should make their work more accessible, but Scott believes it’s not enough for businesses using AI to be disruptive in their industry.
“We have to think about how there’s balance here,” he said. “You can’t just create a bunch of tech and have it be super disruptive and not have any involvement … you have to create value in this world, and it can’t just be shareholder value.” A ‘cautiously optimistic’ view of facial recognition One subject that has drawn much attention from average citizens and Microsoft is facial recognition software and the potential for government overreach.
On Tuesday, the American Civil Liberties Union (ACLU) — along with a coalition of human rights and other organizations — called for major tech companies, including Microsoft, to abstain from selling facial recognition technology to governments , because doing so would inevitably lead to misuse and discrimination against religious and ethnic minority groups.
Microsoft declined to respond directly to the letter but pointed to past actions that represent its point of view. Analysis last year found facial recognition systems from Microsoft, as well as Face++ in China, were not capable of recognizing people with dark skin, particularly women of color, at the same rates as white people. Just weeks after Microsoft made improvements to the Face API’s ability to identify people with dark skin tones last summer, president Brad Smith declared that the government needs to regulate facial recognition software.
Then last month the company laid out six principles it will use to govern the use of facial recognition software by its customers, including law enforcement agencies and governments, such as fairness, transparency, and accountability.
Microsoft is currently on track to implement the plan on schedule, Scott said.
Though facial recognition software could be used for nefarious purposes by businesses and governments and can drum up fears of technologically powered police states, Scott likes to think of the upside when it comes to facial recognition software use cases.
“There’s this fine line between … that boundary; there are clearly some things that you just shouldn’t allow. Like, you shouldn’t have governments using it as a mechanism of oppression. No one should be using it to discriminate illegally against people, so I think it’s a good debate to have, but I’m usually on the cautiously optimistic side of things — I actually have faith in humanity,” he said. “I believe if you give people tools, the overwhelming majority of the uses to which they will be put are positive, and so you want to encourage that and protect against the negative in a thoughtful way.” Potential positive use cases he cites include improving security in buildings, understanding who’s in a meeting, or verifying that a person handling dangerous machinery is certified to do so.
He also offered a theoretical example based on what he observed when his wife was in the hospital last year. Just two nurses were tasked with managing an entire hospital recovery ward, where patients were prescribed a precise regimen of ambulatory activity.
A computer vision system assigned to this task could alert nursing staff if a patient was seen in common areas too often, signaling too much activity, or if they hadn’t been seen out of their room, indicating that they were not getting enough activity.
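The monitoring logic Scott describes amounts to a pair of thresholds over how often a patient is sighted. A minimal sketch of that idea, with invented threshold values (this illustrates his thought experiment, not any shipped system):

```python
def activity_alert(sightings_per_day, low=2, high=10):
    """Flag patients whose observed hallway activity falls outside a
    prescribed range. Thresholds here are invented for illustration."""
    if sightings_per_day < low:
        return "too little activity"
    if sightings_per_day > high:
        return "too much activity"
    return None  # within the prescribed regimen

# Hypothetical daily sighting counts per patient from a vision system.
alerts = {pid: activity_alert(n) for pid, n in {"p1": 0, "p2": 5, "p3": 14}.items()}
```

The real difficulty in such a system would be the perception step (reliably recognizing which patient is in the hallway), not this trivial alerting rule.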
In addition to a belief that understanding AI makes for more informed citizens, Scott emphasized that AI experts need to do more to share the positive outcomes that can come from technology like facial recognition software.
The Terminator often comes to mind in worst-case scenarios with AI, but sharing a Star Trek vision of the future is important too, Scott said, because telling positive stories helps people grasp those possibilities.
“Folks who are deeply in the AI community need to do a better job trying to paint positive pictures for folks, [but] not in a Pollyanna way, and not ignoring the unintended consequences and all the bad things that could be amplified by AI,” he said.
Scott’s book on AI in rural America

Scott believes a book will help expound on his point of view “that AI can and should be a beneficial thing for rural America.” A Microsoft spokesperson declined to share the book title or scheduled release date details.
To write the book, Scott said he began by thinking about how to define AI for his grandfather, a former appliance repairman, farmer, and boiler room mechanic during World War II.
“I think if my granddad were alive he’d be curious about AI, and part of my process is figuring out how I would explain it to him, because he wasn’t a computer scientist. And I think it’s part of your set of responsibilities these days as a tech person to try to do more of that, to make the things that you’re working on more accessible,” he said.
The book will likely draw on Scott’s experiences growing up in rural Virginia.
When asked which form of AI he believes is likely to have a more positive impact than anticipated, Scott pointed to manufacturing automation in rural areas. It’s easy to imagine advanced robotics being a disruptive factor in manufacturing, but it can also level the playing field worldwide, making it possible to establish business anywhere.
“I have talked with dozens of both small and large companies over the past couple of years, and in every last one of these conversations the thing that I’m seeing is that automation is this sort of equalizing factor, like a piece of advanced automation that runs in Shenzhen costs about the same as it does in some little rural town [in the U.S.],” he said.
“That’s this thing I think people haven’t really fully wrapped their heads around, this whole agile manufacturing movement, where you’ve got lots of these small companies that are now able to make things [and] that are repatriating jobs to the U.S. from overseas, just because they’re deploying all of this automation and their unit cost of production is dropping.”
© 2023 VentureBeat. All rights reserved.
"
|
13,537 | 2,023 |
"Algorithms Allegedly Penalized Black Renters. The US Government Is Watching | WIRED"
|
"https://www.wired.com/story/algorithms-allegedly-penalized-black-renters-the-us-government-is-watching"
|
Khari Johnson, Business

Algorithms Allegedly Penalized Black Renters. The US Government Is Watching

Illustration: Jacqui VanLiew; Getty Images

Two years ago, Mary Louis submitted an application to rent an apartment at Granada Highlands in Malden, Massachusetts. She liked that the unit had two full bathrooms and that there was a pool on the premises. But the landlord denied her the apartment, allegedly due to a score assigned to her by a tenant-screening algorithm made by SafeRent.
Louis responded with references to prove 16 years of punctual rent payments, to no avail. Instead she took a different apartment that cost $200 more a month in an area with a higher crime rate. But a class-action lawsuit filed by Louis and others last May argues that SafeRent scores, based in part on information in a credit report, amounted to discrimination against Black and Hispanic renters in violation of the Fair Housing Act.
The groundbreaking legislation prohibits discrimination on the basis of race, disability, religion, or national origin and was passed in 1968 by Congress a week after the assassination of Martin Luther King Jr.
That case is still pending, but the US Department of Justice last week used a brief filed with the court to send a warning to landlords and the makers of tenant-screening algorithms. SafeRent had argued that algorithms used to screen tenants aren’t subject to the Fair Housing Act, because its scores only advise landlords and don’t make decisions. The DOJ’s brief, filed jointly with the Department of Housing and Urban Development, dismisses that claim, saying the act and associated case law leave no ambiguity.
“Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities,” Department of Justice civil rights division leader Kristen Clarke said in a statement.
Like in many areas of business and government, algorithms that assign scores to people have become more common in the housing industry. But although they are claimed to improve efficiency or identify “better tenants,” as SafeRent marketing material suggests, tenant-screening algorithms could be contributing to historically persistent housing discrimination, despite decades of civil rights law. A 2021 study by the US National Bureau of Economic Research, which had bots apply to more than 8,000 landlords using names associated with different demographic groups, found significant discrimination against renters of color, and particularly African Americans.
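Disparities like those the NBER study measured are often quantified with the four-fifths rule from US disparate-impact analysis: if one group's approval rate falls below 80 percent of the most-favored group's rate, the outcome is treated as presumptively discriminatory. A minimal sketch with made-up approval rates:

```python
def adverse_impact_ratio(rate_group, rate_reference):
    """Approval rate of one group divided by the most-favored group's rate."""
    return rate_group / rate_reference

# Hypothetical approval rates for two applicant groups (invented numbers).
approvals = {"group_a": 0.72, "group_b": 0.51}
reference = max(approvals.values())

for group, rate in approvals.items():
    ratio = adverse_impact_ratio(rate, reference)
    flagged = ratio < 0.8  # the four-fifths (80 percent) threshold
    print(f"{group}: ratio={ratio:.2f} flagged={flagged}")
```

This is only the screening heuristic; litigation under the Fair Housing Act turns on much more than a single ratio.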
“It’s a relief that this is being taken seriously—there's an understanding that algorithms aren't inherently neutral or objective and deserve the same level of scrutiny as human decisionmakers,” says Michele Gilman, a law professor at the University of Baltimore and former civil rights lawyer at the Department of Justice. “Just the fact that the DOJ is in on this I think is a big move.”

A 2020 investigation by The Markup and ProPublica found that tenant-screening algorithms often encounter obstacles like mistaken identity, especially for people of color with common last names. A ProPublica assessment of algorithms made by the Texas-based company RealPage last year suggested the software can drive up rents.
A second case against SafeRent under the Fair Housing Act concluded in federal court in Connecticut in November and awaits a judge’s decision. It was brought by Carmen Arroyo and others, who say the company’s CrimSAFE algorithm deemed a shoplifting charge that was later dropped “disqualifying,” leading to an application on behalf of her disabled son, who is unable to speak or walk, being denied. The case alleges the system discriminated on the basis of disability, national origin, and race.
In response to the brief filed by the DOJ and HUD, Andrew Soukup, an attorney for SafeRent, said the company aims to supply property managers and landlords with predictions to help them make good decisions but does not itself make housing decisions. “SafeRent does not decide whether to approve anyone's application for housing. Those decisions are made by property managers and landlords,” he said in a statement.
The Department of Justice’s intervention in the SafeRent case is one part of recent efforts by the US government to enforce civil rights law on algorithms that make important decisions about people’s lives. On the same day, the department announced terms of a settlement agreement with Meta for selling ads that allegedly violate the Fair Housing Act. The company has developed a system to reduce discrimination in Facebook ads and will remain under federal government supervision until 2026.
“Federal monitoring of Meta should send a strong signal to other tech companies that they too will be held accountable for failing to address algorithmic discrimination that runs afoul of our civil rights laws,” said Clarke, the Department of Justice civil rights division leader in a statement. Last year she worked with the Equal Employment Opportunity Commission to issue guidance to businesses using hiring algorithms on how to avoid violation of the Americans With Disabilities Act.
Together, those interventions suggest the DOJ is determined to enforce federal antidiscrimination law to protect people’s rights in the era of algorithms. “Obviously, advertising is different than tenant screening, but it puts these different industries on notice that they can't hide behind a lack of transparency anymore and that there is going to be greater accountability,” said Gilman, the University of Baltimore law professor. She has represented low-income clients for 25 years, and in the past few years has encountered more cases in which she suspects an algorithm working in the background denied a client housing. But whether existing antidiscrimination law will prove adequate or whether new law is necessary to protect against harmful algorithms is an unresolved issue.
The signal sent to the housing sector this week by the Department of Justice seems in line with other proclamations by the Biden administration on addressing the role AI can play in human rights abuses. Last year, the White House proposed an AI Bill of Rights , a set of principles intended to protect citizens from algorithms in critical areas of their lives like housing, health care, finance, and government benefits. The Trump administration had attempted to make it more difficult to prosecute landlords who use tenant-screening algorithms under the Fair Housing Act.
© 2023 Condé Nast. All rights reserved.
"
|
13,538 | 2,023 |
"The Last AI Boom Didn't Kill Jobs. Feel Better? | WIRED"
|
"https://www.wired.com/story/fast-forward-the-last-ai-boom-didnt-kill-jobs"
|
Will Knight, Business

The Last AI Boom Didn't Kill Jobs. Feel Better?

Illustration: ismagilov/Getty Images

If ChatGPT and generative AI live up to even a tenth of the hype surrounding them, wide-scale job losses might seem inevitable. But new economic data shows that the last big leap in AI did not coincide with a reduction of jobs in affected industries—despite widespread fears of rapid replacement at the time.
In a new research paper , economists looked at the job market across a number of European countries between 2011 and 2019. That’s the period during which the AI technique deep learning emerged as a powerful way to automate tasks like transcribing speech, analyzing images, and making algorithmic recommendations for social feeds and ecommerce sites. Back then, deep learning was widely expected to have a broad and swift impact on employment.
To check up on what really happened, researchers at the European Central Bank, Spain’s central bank, and the universities of Oxford and Pittsburgh used two established methods for measuring how vulnerable professions are to AI-powered automation. Both involved examining the tasks workers do and how they compare with the capabilities of algorithms. The researchers cross-referenced that information with survey data on EU workers that shows the number of people leaving or joining different occupations in industries ranging from agriculture to financial services.
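In spirit, both exposure measures score an occupation by how well its constituent tasks match what current algorithms can do, then aggregate. A toy sketch of that idea (the task lists and weights below are invented; the actual studies draw on detailed occupational databases such as O*NET, not a hand-written dictionary):

```python
# Hypothetical per-task "automatability" weights (0 = hard for current AI, 1 = easy).
TASK_AUTOMATABILITY = {
    "transcribe speech": 0.9,
    "classify images": 0.9,
    "rank recommendations": 0.8,
    "negotiate contracts": 0.2,
    "care for patients": 0.1,
}

def exposure_score(tasks):
    """Occupation-level AI exposure: mean automatability across its tasks."""
    weights = [TASK_AUTOMATABILITY.get(t, 0.5) for t in tasks]  # 0.5 if unknown
    return sum(weights) / len(weights)

occupations = {
    "data entry clerk": ["transcribe speech", "classify images"],
    "nurse": ["care for patients", "negotiate contracts"],
}
most_exposed = max(occupations, key=lambda o: exposure_score(occupations[o]))
```

The researchers' finding, counterintuitively, is that occupations scoring high on measures like this saw employment grow, not shrink, over 2011 to 2019.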
The headline result was that industries where AI could be the most useful did not see a reduction of jobs. In fact, for more highly skilled jobs vulnerable to AI, such as white-collar office work that involves working with data, there was around a 5 percent increase in the number of employed workers. The researchers say this supports the idea that new technology can increase demand for more skilled workers at the same time that it replaces those who do routine work. Less skilled workers didn’t seem to be significantly affected by software or AI.
Although fears about new technologies taking jobs are common—and entirely understandable—economic research offers a complex, mixed picture. In general, many economists believe that automation can increase demand for jobs overall, as shown by some recent studies.
If you are not one of those more skilled workers, however, the arrival of new technology might be a problem.
None of this research makes it possible to predict what effect ChatGPT or other generative AI technologies will have. They may be too new to cause noticeable changes. And it’s possible that generative AI has an entirely different effect on jobs from what came before. “While in the period of our analysis the association is positive, these results may not be extrapolated into the future,” the authors of the new study write.
Generative AI is undeniably already causing some disruption. You don’t have to look far to find stories of individuals and businesses that are already being changed by the kind of AI behind ChatGPT.
Some copywriters, for instance, are having to find new careers after customers replaced them with generative AI tools. A number of publications are also experimenting with AI-generated content. The German tabloid Bild recently cited AI as one reason for planned job cuts.
But let’s not forget that tools like ChatGPT are still unreliable coworkers, as they make up facts , reinforce biases , and can otherwise misbehave. And it’s worth remembering that the emergence of deep learning in the 2010s prompted some AI experts to predict the elimination of certain jobs, including radiologists—a prognostication that has hardly come true.
"
|
13,539 | 2,023 |
"The Fraud-Detection Business Has a Dirty Secret | WIRED"
|
"https://www.wired.com/story/welfare-fraud-industry"
|
By Morgan Meaker, Business

The Fraud-Detection Business Has a Dirty Secret

Illustration: Katherine Lam

Mitch Daniels is a numbers guy, a cost-cutter. In the early 2000s, he tried and failed to rein in congressional spending under then-US president George W. Bush. So when he took office as Indiana governor in 2005, Daniels was ready to argue once again for fiscal discipline. He wanted to straighten out Indiana’s state government, which he deemed rife with dysfunction. And he started with its welfare system. “That department had been rocked by a series of criminal indictments, with cheats and caseworkers colluding to steal money meant for poor people,” he later said.
Daniels’ solution took the form of a $1.3 billion, 10-year contract with IBM. He had lofty ambitions for the project, which started in 2006, claiming it would improve the benefits service for Indiana residents while cracking down on fraud, ultimately saving taxpayers billions of dollars.
But the contract was a disaster. It was canceled after three years, and IBM and Indiana spent a decade locked in a legal battle about who was to blame. Daniels described IBM’s sweeping redesign and automation of the system—responsible for deciding who was eligible for everything from food stamps to medical cover—as deficient. He was adamant, though, that outsourcing a technical project to a company with expertise was the right call. “It was over-designed,” he said. “Great on paper but too complicated to work in practice.” IBM declined a request for comment.
This story is part of a joint investigation between Lighthouse Reports and WIRED. To read other stories from the series, click here.
In July 2012, Judge David Dryer of the Marion County Superior Court ruled that Indiana had failed to prove IBM had breached its contract. But he also delivered a damning verdict on the system itself, describing it as an untested experiment that replaced caseworkers with computers and phone calls. “Neither party deserves to win this case,” he said.
“This story represents a ‘perfect storm’ of misguided government policy and overzealous corporate ambition.” That might have been an early death knell for the burgeoning business of welfare state automation. Instead, the industry exploded. Today, such fraud systems form a significant part of the nebulous “govtech” industry, which revolves around companies selling governments new technologies with the promise that new IT will make public administration easier to use and more efficient. In 2021, that market was estimated to be worth €116 billion ($120 billion) in Europe and $440 billion globally. And it’s not only companies that expect to profit from this wave of tech. Governments also believe modernizing IT systems can deliver big savings. Back in 2014, the consultancy firm McKinsey estimated that if government digitization reached its “full potential,” it could free up $1 trillion every year.
Contractors around the world are selling governments on the promise that fraud-hunting algorithms can help them recoup public funds. But researchers who track the spread of these systems argue that these companies are often overpaid and under-supervised. The key issue, researchers say, is accountability. When complex machine learning models or simpler algorithms are developed by the private sector, the computer code that gets to define who is and isn’t accused of fraud is often classed as intellectual property. As a result, the way such systems make decisions is opaque and shielded from interrogation. And even when these algorithmic black holes are embroiled in high-stakes legal battles over alleged bias, the people demanding answers struggle to get them.
In the UK, a community group called the Greater Manchester Coalition of Disabled People is trying to determine whether a pattern of disabled people being investigated for fraud is linked to government automation projects. In France, the digital rights group La Quadrature du Net has been trying for four months to find out whether a fraud system is discriminating against people born in other countries. And in Serbia, lawyers want to understand why the introduction of a new system has resulted in hundreds of Roma families losing their benefits. “The models are always secret,” says Victoria Adelmant, director of New York University’s digital welfare state project. “If you don’t have transparency, it’s very difficult to even challenge and assess these systems.”

The rollout of automated bureaucracy has happened quickly and quietly, but it has left a trail of scandals in its wake. In Michigan, a computer system used between 2013 and 2015 falsely accused 34,000 people of welfare fraud. A similar thing happened in Australia between 2015 and 2019, but on a larger scale: The government accused 400,000 people of welfare fraud or error after its social security department started using a so-called robodebt algorithm to automatically issue fines.
Another scandal emerged in the Netherlands in 2019 when tens of thousands of families—many of them from the country’s Ghanaian community—were falsely accused of defrauding the child benefits system. These systems didn’t just contribute to agencies accusing innocent people of welfare fraud; benefits recipients were ordered to repay the money they had supposedly stolen. As a result, many of the accused were left with spiraling debt, destroyed credit ratings, and even bankruptcy.
Not all government fraud systems linked to scandals were developed with consultancies or technology companies. But civil servants are increasingly turning to the private sector to plug knowledge and personnel gaps. Companies involved in fraud detection systems range from giant consultancies—Accenture, Cap Gemini, PWC—to small tech firms like Totta Data Lab in the Netherlands and Saga in Serbia.
Experts in automation and AI are expensive to hire and less likely to be wooed by public sector salaries. When the UK surveyed its civil servants last year, confidence in the government’s ability to use technology was low, with around half of respondents blaming an inability to hire top talent. More than a third said they had few or no skills in artificial intelligence, machine learning, or automation. But it’s not just industry experience that makes the private sector so alluring to government officials. For welfare departments squeezed by budget cuts, “efficiency” has become a familiar buzzword. “Quite often, a public sector entity will say it is more efficient for us to go and bring in a group of consultants,” says Dan Sheils, head of European public service at Accenture.
The public sector lacks the expertise to create these systems and also to oversee them, says Matthias Spielkamp, cofounder of German nonprofit Algorithm Watch, which has been tracking automated decision-making in social welfare programs across Europe since 2017. In an ideal world, civil servants would be able to develop these systems themselves and have an in-depth understanding of how they work, he says. “That would be a huge difference to working with private companies, because they will sell you black-box systems—black boxes to everyone, including the public sector.”

In February 2020, a crisis broke out in the Dutch region of Walcheren as officials realized they were in the dark about how their own fraud detection system worked. At the time, a Dutch court had halted the use of another algorithm used to detect welfare fraud, known as SyRI, after finding it violated people’s right to privacy. Officials in Walcheren were not using SyRI, but in emails obtained by Lighthouse Reports and WIRED through freedom-of-information requests, government employees had raised concerns that their algorithm bore striking similarities to the one just condemned by the court.
Walcheren’s system was developed by Totta Data Lab. After signing a contract in March 2017, the Dutch startup built an algorithm to sort through pseudonymous information, according to details obtained through a freedom-of-information request. The system analyzed details of local people claiming welfare benefits and then sent human investigators a list of those it classified as most likely to be fraudsters.
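In outline, a system like the one described reduces to computing a per-claimant risk score and handing investigators the top of the ranked list. A deliberately simplified sketch, with invented fields and weights (real systems are far more complex and, as this article argues, usually opaque):

```python
# Invented claimant records and weights, purely to illustrate the ranking step.
claimants = [
    {"id": "A", "missed_declarations": 2, "address_changes": 1},
    {"id": "B", "missed_declarations": 0, "address_changes": 0},
    {"id": "C", "missed_declarations": 1, "address_changes": 3},
]
WEIGHTS = {"missed_declarations": 0.6, "address_changes": 0.4}

def risk_score(record):
    """Weighted sum over the chosen features; higher means 'riskier'."""
    return sum(WEIGHTS[k] * record[k] for k in WEIGHTS)

def flag_for_review(records, top_k=2):
    """Return the top_k highest-scoring claimants for human investigation."""
    return sorted(records, key=risk_score, reverse=True)[:top_k]

flagged = [r["id"] for r in flag_for_review(claimants)]
```

Everything contentious lives in the choice of features and weights; when those are classed as intellectual property, the ranking cannot be audited from the outside.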
The redacted emails show local officials agonizing over whether their algorithm would be dragged into the SyRI scandal. “I don’t think it is possible to explain why our algorithm should be allowed while everyone is reading about SyRI,” one official wrote the week after the court ruling. Another wrote back with similar concerns. “We also do not get insight from Totta Data Lab into what exactly the algorithm does, and we do not have the expertise to check this.” Neither Totta nor officials in Walcheren replied to requests for comment.
When the Netherlands’ Organization for Applied Scientific Research, an independent research institute, later carried out an audit of a Totta algorithm used in South Holland, the auditors struggled to understand it. “The results of the algorithm do not appear to be reproducible,” their 2021 report reads, referring to attempts to re-create the algorithm’s risk scores. “The risks indicated by the AI algorithm are largely randomly determined,” the researchers found.
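The auditors' reproducibility test is conceptually simple: score the same input twice and compare. A toy sketch of how an unseeded random component defeats it (invented code, not Totta's actual system):

```python
import random

def score_claimants(base_values, seed=None):
    """Toy risk scorer: a deterministic part plus a random component.
    With no fixed seed, two runs on the same input disagree."""
    rng = random.Random(seed)
    return [b + rng.random() for b in base_values]

base = [0.1 * i for i in range(5)]
reproducible = score_claimants(base, seed=42) == score_claimants(base, seed=42)
unstable = score_claimants(base) != score_claimants(base)  # true with near certainty
```

A scorer whose output cannot be reproduced on identical input is, in effect, assigning risk at random, which is what the auditors concluded.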
With little transparency, it often takes years—and thousands of victims—to expose technical shortcomings. But a case in Serbia provides a notable exception. In March 2022, a new law came into force which gave the government the green light to use data processing to assess individuals’ financial status and automate parts of its social protection programs. The new socijalna karta , or social card system, would help the government detect fraud while making sure welfare payments were reaching society’s most marginalized, claimed Zoran Đorđević, Serbia’s minister of social affairs in 2020.
But within months of the system’s introduction, lawyers in the capital Belgrade had started documenting how it was discriminating against the country’s Roma community, an already disenfranchised ethnic minority group.
Mr. Ahmetović, a welfare recipient who declined to share his first name out of concern that his statement could affect his ability to claim benefits in the future, says he hadn’t heard of the social card system until November 2022, when his wife and four children were turned away from a soup kitchen on the outskirts of the Serbian capital. It wasn’t unusual for the Roma family to be there, as their welfare payments entitled them to a daily meal provided by the government. But on that day, a social worker told them their welfare status had changed and that they would no longer be getting a daily meal.
The family was in shock, and Ahmetović rushed to the nearest welfare office to find out what had happened. He says he was told the new social card system had flagged him after detecting income amounting to 110,000 Serbian dinars ($1,000) in his bank account, which meant he was no longer eligible for a large chunk of the welfare he had been receiving. Ahmetović was confused. He didn’t know anything about this payment. He didn’t even have his own bank account—his wife received the family’s welfare payments into hers.
With no warning, their welfare payments were slashed by 30 percent, from around 70,000 dinars ($630) per month to 40,000 dinars ($360). The family had been claiming a range of benefits since 2012, including financial social assistance, as their son’s epilepsy and unilateral paralysis means neither parent is able to work. The drop in support meant the Ahmetovićs had to cut back on groceries and couldn’t afford to pay all their bills. Their debt ballooned to over 1 million dinars ($9,000).
The algorithm’s impact on Serbia’s Roma community has been dramatic. Ahmetović says his sister has also had her welfare payments cut since the system was introduced, as have several of his neighbors. “Almost all people living in Roma settlements in some municipalities lost their benefits,” says Danilo Ćurčić, program coordinator of A11, a Serbian nonprofit that provides legal aid. A11 is trying to help the Ahmetovićs and more than 100 other Roma families reclaim their benefits.
But first, Ćurčić needs to know how the system works. So far, the government has denied his requests to share the source code on intellectual property grounds, claiming it would violate the contract it signed with the company that actually built the system, he says. According to Ćurčić and a government contract, a Serbian company called Saga, which specializes in automation, was involved in building the social card system. Neither Saga nor Serbia’s Ministry of Social Affairs responded to WIRED’s requests for comment.
As the govtech sector has grown, so has the number of companies selling systems to detect fraud. And not all of them are local startups like Saga. Accenture—Ireland’s biggest public company, which employs more than half a million people worldwide—has worked on fraud systems across Europe. In 2017, Accenture helped the Dutch city of Rotterdam develop a system that calculates risk scores for every welfare recipient. A company document describing the original project, obtained by Lighthouse Reports and WIRED, references an Accenture-built machine learning system that combed through data on thousands of people to judge how likely each of them was to commit welfare fraud. “The city could then sort welfare recipients in order of risk of illegitimacy, so that highest risk individuals can be investigated first,” the document says.
Officials in Rotterdam have said Accenture’s system was used until 2018, when a team at Rotterdam’s Research and Business Intelligence Department took over the algorithm’s development. When Lighthouse Reports and WIRED analyzed a 2021 version of Rotterdam’s fraud algorithm, it became clear that the system discriminates on the basis of race and gender.
And around 70 percent of the variables in the 2021 system—information categories such as gender, spoken language, and mental health history that the algorithm used to calculate how likely a person was to commit welfare fraud—appeared to be the same as those in Accenture’s version.
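The ranking mechanism these variables feed can be sketched in a few lines. Everything below (feature names, weights, the logistic scoring rule) is hypothetical, not the actual Rotterdam or Accenture model; the point is structural: once attributes like gender or spoken language carry nonzero weight, the investigation queue sorts along those lines.

```python
import math

# Hypothetical weights, NOT the real model's. Protected attributes with
# nonzero weight make the ranking differ systematically across groups.
WEIGHTS = {
    "is_female": 0.4,
    "non_dutch_speaker": 0.7,
    "mental_health_history": 0.3,
    "years_on_benefits": 0.1,
}
BIAS = -2.0

def risk_score(person):
    """Logistic score in (0, 1): higher means investigated sooner."""
    z = BIAS + sum(w * person.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

recipients = [
    {"name": "A", "is_female": 1, "non_dutch_speaker": 1, "years_on_benefits": 5},
    {"name": "B", "is_female": 0, "non_dutch_speaker": 0, "years_on_benefits": 5},
]
# Sort "in order of risk of illegitimacy," as the Accenture document put it.
ranked = sorted(recipients, key=risk_score, reverse=True)
for p in ranked:
    print(p["name"], round(risk_score(p), 3))
```

Identical benefit histories, different queue positions: that is the pattern auditors in Rotterdam flagged when they found the data risked creating biased results.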
When asked about the similarities, Accenture spokesperson Chinedu Udezue said the company’s “start-up model” was transferred to the city in 2018 when the contract ended. Rotterdam stopped using the algorithm in 2021, after auditors found that the data it used risked creating biased results.
“Having a human in the loop doesn’t mean that the human has the time, the training, or the capacity to question the decision.” Eva Blum-Dumontet, tech policy consultant

Consultancies generally implement predictive analytics models and then leave after six or eight months, says Sheils, Accenture’s European head of public service. He says his team helps governments avoid what he describes as the industry’s curse: “false positives,” Sheils’ term for life-ruining occurrences of an algorithm incorrectly flagging an innocent person for investigation. “That may seem like a very clinical way of looking at it, but technically speaking, that's all they are.” Sheils claims that Accenture mitigates this by encouraging clients to use AI or machine learning to improve, rather than replace, decision-making humans. “That means ensuring that citizens don’t experience significantly adverse consequences purely on the basis of an AI decision.”

However, social workers who are asked to investigate people flagged by these systems before making a final decision aren’t necessarily exercising independent judgment, says Eva Blum-Dumontet, a tech policy consultant who researched algorithms in the UK welfare system for campaign group Privacy International. “This human is still going to be influenced by the decision of the AI,” she says. “Having a human in the loop doesn’t mean that the human has the time, the training, or the capacity to question the decision.”

Despite the scandals and repeated allegations of bias, the industry building these systems shows no sign of slowing. And neither does government appetite for buying or building such systems.
Last summer, Italy’s Ministry of Economy and Finance adopted a decree authorizing the launch of an algorithm that searches for discrepancies in tax filings, earnings, property records, and bank accounts to identify people at risk of not paying their taxes.
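In its simplest form, a discrepancy search of this kind reduces to a threshold comparison between declared and observed figures. A minimal sketch (the 20 percent cutoff and all field names are invented, not taken from the Italian decree):

```python
def flag_discrepancy(declared_income, bank_inflows, threshold=0.2):
    """Flag a taxpayer when observed bank inflows exceed declared income
    by more than `threshold` (20% here, an invented cutoff)."""
    if declared_income <= 0:
        return bank_inflows > 0
    gap = (bank_inflows - declared_income) / declared_income
    return gap > threshold

taxpayers = [
    ("T1", 30_000, 31_000),   # small gap: not flagged
    ("T2", 30_000, 55_000),   # large gap: flagged for review
]
for name, declared, inflows in taxpayers:
    print(name, flag_discrepancy(declared, inflows))
```

The false-positive problem Sheils describes lives entirely in that threshold: at this level of data, legitimate transfers such as gifts, loans, or reimbursements look identical to undeclared income.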
But as more governments adopt these systems, the number of people erroneously flagged for fraud is growing. And once someone is caught up in the tangle of data, it can take years to break free. In the Netherlands’ child benefits scandal, people lost their cars and homes, and couples described how the stress drove them to divorce. “The financial misery is huge,” says Orlando Kadir, a lawyer representing more than 1,000 affected families. After a public inquiry, the Dutch government agreed in 2020 to pay the families around €30,000 ($32,000) in compensation. But debt balloons over time. And that amount is not enough, says Kadir, who claims some families are now €250,000 in debt.
In Belgrade, Ahmetović is still fighting to get his family’s full benefits reinstated. “I don’t understand what happened or why,” he says. “It’s hard to compete against the computer and prove this was a mistake.” But he says he’s also wondering whether he’ll ever be compensated for the financial damage the social card system has caused him. He’s yet another person caught up in an opaque system whose inner workings are guarded by the companies and governments who make and operate them. Ćurčić, though, is clear on what needs to change. “We don’t care who made the algorithm,” he says. “The algorithm just has to be made public.” Additional reporting by Gabriel Geiger and Justin-Casimir Braun.
"
|
13,540 | 2,022 |
"Inside Amazon Air’s Quest to Seize the Skies | WIRED"
|
"https://www.wired.com/story/amazon-air-quest-to-seize-the-skies"
|
"Caitlin Harrington Backchannel Winging It: Inside Amazon’s Quest to Seize the Skies The story of Amazon Air demonstrates the lengths the company will go to keep its promise to customers and maintain its retail dominance.
Photograph: Spencer Lowell
Christmas was rapidly approaching, and Amazon was facing a crisis. In the waning shopping days of 2014, the retailer was preparing to promote its deal of the day: the Amazon Kindle, delivered just in time for Christmas. Then it discovered a problem: Stock was running low within driving distance of Seattle, where the company is headquartered. Amazon turned to UPS to airlift more e-readers to the city, but with the holiday shopping season in full swing, the parcel service was unwilling to divert more planes to appease its increasingly demanding client. Amazon, it appeared, would not be able to deliver its signature device to shoppers in its own backyard.
The prospect of failure was unbearable for executives steeped in Amazon founder Jeff Bezos ’ doctrine of customer obsession, according to a former employee. They were also still haunted by the nightmare of the previous Christmas, when a mass of packages landed late on the doorsteps of aggrieved holiday shoppers. But the 2013 fiasco had largely been due to ground transportation issues. This latest crisis was an air problem. While Amazon had spent the previous year building up its network of sortation centers to streamline delivery via trucks, the company depended entirely on FedEx and UPS to fly most of its packages around the United States. If those carriers couldn’t keep up with demand, Amazon wouldn’t be able to honor its Prime “promise” to ship any imaginable commodity to tens of millions of households within two days.
Worried about a second straight holiday season meltdown, Dave Clark, then Amazon’s head of worldwide operations, ordered his transportation team to rustle up some airplanes, fast, according to a former employee. Scott Ruffin, a former marine logistics officer who handled procurement for the sortation centers, reached out to everyone he knew in the industry and eventually helped charter enough planes to fly Kindles to Seattle from far-flung fulfillment centers. Christmas was saved. But what about next year, and the year after that? Amazon decided it needed more control over its destiny. It needed its own air network.
Amazon is famous—or infamous—for its breakneck pace of innovation and data-driven efforts to squeeze every drop of productivity out of workers. Its drivers are reported to operate on punishing schedules, its warehouse workers are timed to the second, and the US Occupational Safety and Health Administration has launched multiple probes into conditions at its warehouses. At the same time, its corporate values are hallowed within the company walls. “Jeff Bezos came down from the mountain with 12 leadership principles,” jokes a former staffer. They urge a “bias for action,” declaring that “speed matters” and “many decisions and actions are reversible and do not need extensive study.” The aviation world moves more slowly. Airport space is difficult to come by; cargo jets are enormously expensive to convert and operate. (“You know how you become a millionaire in the air business?” quips one aviation veteran. “You start with a billion dollars.”) Running an air cargo service requires compliance with government regulations covering security, labor relations, and most important of all, safety, designed to prevent accidents and loss of life.
This article appears in the February 2023 issue.
Photograph: Peter Yang
But Amazon has managed to build its own sizable cargo service in just a few years, helping it to dramatically decrease its reliance on UPS and FedEx. (FedEx eventually terminated its Amazon contracts in 2019.) The company now owns 11 planes and leases about 100 others, flown by seven air carriers that make more than 200 flights a day out of 71 airports, including a European hub near Leipzig, Germany. This fleet, known as Amazon Air, flies orders from fulfillment centers to customers when items are stored too far away to transport by truck, the company says. Last year, Amazon opened a $1.5 billion air hub at Cincinnati/Northern Kentucky International Airport (CVG)—among the largest capital investments in the company’s history. As a result, nearly three-quarters of Americans in the continental US live within 100 miles of an Amazon airport, according to a September report by DePaul University.
The story of Amazon Air demonstrates the lengths the company will go to keep its promise to customers and maintain its retail dominance. It’s a side of the company that most shoppers rarely even see, unless they happen to glance up in the sky as an Amazon jet roars above. But as the program continues to expand, some former employees say these costly, emissions-spewing airplanes are often under-filled or are used to ship goods that could be carried more cheaply and efficiently by road.
WIRED spoke to more than two dozen current and former Amazon Air employees about how the company launched an air service with the agility of a startup and the muscle of a megacorporation. Most spoke anonymously out of fear of facing retaliation or jeopardizing future career prospects. They described an entrepreneurial culture that accomplished big things fast, but also toxic management, angry communities, pilots pushed past their limits, and a singular focus on rapid growth, even if it came at the expense of efficiency. One former employee says some colleagues used to joke, “We took off, and there was no landing gear.”

By 2021, Amazon Air planes were making more than 200 flights a day.
Photograph: Spencer Lowell
Just over a year after the Christmas Kindle crisis, Jeff Bezos assembled some of his S-team—a trusted cadre of senior executives—and members of the Middle Mile organization, which handled in-house transportation between Amazon’s warehouses, to make some important decisions. According to a former employee with knowledge of the event, the meeting began in typical Amazon fashion with attendees sitting around a table silently reading. In this case, they studied a six-page white paper drawn up by the Middle Mile team, which laid out potential operating models for an air network. Among the first big choices: Should Amazon acquire and operate its own airline or construct “an organization to interact with an airline,” as one former employee put it, outsourcing the flying, regulatory compliance, and liabilities? The former option would give the company greater control over the program but would also take much longer to execute and be riskier and more complex. For one thing, it would force Amazon to deal directly with the US Federal Aviation Administration (FAA).
After everyone finished reading, they took turns posing questions. It seemed as if the sentiment was leaning toward acquisition until Bezos spoke—last, as he typically did in these meetings so as not to inhibit discussion. “Planes are planes,” an employee recalls him saying. “What are we going to do to differentiate?” This was a Bezos maxim. He preferred to spend Amazon’s prodigious resources on initiatives where the company would not only be profitable, but disruptive. Nobody had a good answer.
Ultimately, Bezos decided to contract out the flying. That meant Amazon would enlist carriers certified by the FAA to pilot planes it leased, or even owned, to bring packages closer to customers. One upshot, multiple former employees say, was that Amazon could avoid directly employing pilots, the vast majority of whom are unionized. Another was having multiple airlines compete for its business. “If one of the pilot unions messes around,” says a former Amazon Air employee and aviation industry longtimer, “they can just give the business to the other ones.”

A couple of departments at Amazon ran pilot programs with a handful of airports, code-named Archangel and Amelia, to test out different air transport models. Leaders from the Middle Mile team had studied the German shipping giant DHL, which outsourced its US flight operations to multiple small cargo airlines that provide clients with aircraft, crew, maintenance, and insurance—ACMI, in industry parlance—and asked the airlines to replicate that model for Amazon.
The experiments worked, and in 2016 Amazon signed long-term deals to lease 40 jets from two ACMI carriers: Air Transport Services Group (ATSG) and Atlas Air. The group from Middle Mile was officially tasked with running Amazon Air. Soon after, planes emblazoned with Amazon’s signature arrow on the tail and the words “Prime Air” across the body started flying across the US.
In the early days, the Amazon Air team largely staffed its ranks with people who had little air cargo background, according to some employees. “People can learn the aviation part,” says a former staffer. “What’s more important was having people that [embrace] the Amazon leadership principles,” like mandates to innovate, invent, and “think differently.” For some aviation industry veterans, this approach seemed naive, even arrogant. But there were Amazon employees who viewed the aviation veterans as frustratingly cautious. “People from FedEx and Boeing struggle coming to Amazon because they will take months and analyze to death before they make decisions,” says a former Amazon Air leader, adding, “Amazon has a DNA of decisionmaking and risk-taking.” With airport space scarce, the team constructed package processing facilities at airports wherever they could find room, like in out-of-the-way hangars from World War II. In Stockton, California, the company erected a temporary structure jokingly known as the “circus tent.” “We didn’t try to launch it 100 percent perfectly. We launched it like 70 percent perfectly,” says a former employee.
Even as they plowed ahead, some employees labeled air delivery a “defect,” a phrase that became a common refrain on the team. Air transport is much costlier than ground shipping. It’s also about 10 times more carbon-intensive than truck transport, according to researchers at the MIT Center for Transportation and Logistics. While some staffers said Amazon sold too many products to place each one within a two-day drive, others argued that with better forecasting and inventory placement, you wouldn’t need so many planes, especially as Amazon’s network of fulfillment centers multiplied. UPS, says a former Amazon employee and cargo industry veteran, saved hundreds of millions of dollars by better connecting its ground network rather than buying more aircraft. But Amazon was intent on covering the entire country—and beyond—by air, say three former employees.
Ruffin, now head of Amazon Air, helped drive a breakneck pace—long hours, weekend availability, and heavy travel schedules were a given. Numerous former employees characterized him as a talented strategist with a long-term vision and a short fuse. Former staff said he was prone to outbursts and reduced at least one employee to tears. One former employee described him as “a go-go former marine, a real ballbuster.”

Amazon Air operated much like a startup, and employees were encouraged to experiment. A former staffer recalls Bezos’ edict to the team: “Don’t try to make the airplanes fly faster. That’s Boeing’s job. Your job is to design the network, the operations, and technology in a way that is disruptive.” When beholden to FedEx and UPS, Amazon had to operate on a rigid timetable so those carriers could meet their own overnight delivery commitments. Now that Amazon Air transported only its own volume, the company could design more flexible, customized systems built around its two-day delivery window that lasted until 10 pm. “You get presented with a problem like, ‘We need to move this many packages to Seattle in December,’” recalls a former early staffer. “At UPS they’d say, ‘Our building can’t handle that many packages.’ At Amazon, they said, ‘Well, let’s [lease space at an] airport in Portland and truck it up.’”

Robots sort packages on the floor of an Amazon Air warehouse at an airport in Florida.
Photograph: Spencer Lowell
Amazon Air squeezed every ounce of efficiency out of some processes. “Even a slight improvement based on the scale of operations was a huge deal,” says a former engineer.
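The engineer's example scales up quickly. A ten-second saving per box, applied across a couple hundred daily flights, adds up to thousands of ground hours a day (the per-flight box count below is an illustrative guess, not an Amazon figure):

```python
# Illustrative volumes, not Amazon figures.
seconds_saved_per_box = 60 - 50   # the engineer's example: 60s down to 50s
boxes_per_flight = 5_000          # invented for illustration
flights_per_day = 200             # roughly Amazon Air's daily flights by 2021
daily_hours_saved = (seconds_saved_per_box * boxes_per_flight
                     * flights_per_day) / 3600
print(f"{daily_hours_saved:,.0f} aircraft-ground hours saved per day")
```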
Photograph: Spencer Lowell
Amazon’s disruptive mindset didn’t sit well with everyone. The Amazon Air team had wide latitude to change flight schedules based on factors like demand, aircraft and crew availability, the location of cargo, and how full they could get the planes. In the early days, employees adjusted schedules on a monthly basis—more frequently than was industry standard. Meanwhile, some less experienced staffers tinkered with schedules without fully understanding the consequences for aircraft maintenance and other aviation particularities. “Then the carriers would look at it and be like, ‘This doesn’t make any sense,’” says a former employee. “It was their first time truly tasting how Amazon works, how demanding it is, how tiring it can get.”

On a visit to an Amazon Air facility in California in 2016, one former Amazon employee saw pilots resting in the cockpit before their flights. (A pilot representative told Bloomberg around that time that Amazon’s schedules sometimes left them nowhere to rest but the cockpit.) He had heard their complaints about having to spend excess time on the plane, and now he was seeing it firsthand. “They were getting jerked around by Amazon and the ridiculous schedule changes that we were dropping on them,” says the former employee, who adds that schedules would sometimes change at the last minute. “It was not uncommon for us to be like, ‘Oh, hey, we know that you’re supposed to be flying from Cincinnati to Seattle tomorrow. We decided we want you to go to Portland instead,’” says another former air employee.
“Amazon knew the airlines were working with this big juggernaut and wanted to do whatever they could to make us happy—at the sacrifice of pilot morale,” says the same employee. (An Amazon spokesperson says the company provides its desired flight schedule to carriers but does not have a say in pilot scheduling. The company also says it now automates its schedules and updates them three to four times a year.)

“It was not a place where you felt warm and fuzzy. But by God it was a place where you felt intellectually challenged, like you were getting shit done.”

The tensions came to a head two days before Thanksgiving. Early that morning, an Amazon employee was brushing his teeth when he glanced down at his phone. “I started to see this flood of emails come in: Flight canceled. Flight canceled. Flight canceled.” A group of pilots for ABX Air, a subsidiary of ATSG, had landed in Dallas around 2 am, deplaned, and instead of continuing on, told the ground crew they were leaving, according to another former employee. Over the course of the day, around 250 ABX pilots walked off the job, protesting the demands ABX was placing on them amid an industry-wide staffing shortage and soaring workload from customers like Amazon and DHL. According to court filings, around half of the pilots had already taken at least six “emergency” assignments by June, the annual limit in their labor contract. (ABX did not respond to requests for comment.) The standoff ended when a judge ordered the pilots back to work the following day. Nonetheless, Ruffin suspended Amazon’s business with ABX for several more weeks, until the company was sure the pilots wouldn’t strike again, a spokesperson told Recode.
“The strike was a power play to have the union say, ‘We are important. Our pilots are important. And we have control,’” says a former Amazon Air employee. “The cessation of operations was Amazon saying, ‘We have a lot more control.’” Ruffin was pushed out of the company in 2017, although those who spoke to WIRED agreed that his departure was unrelated to the team’s performance. Several former employees heard that his temper had gotten the better of him one too many times. Ruffin did not respond to requests for comment, and Amazon did not comment on his departure.
Robots sort packages on the floor of an Amazon Air warehouse at an airport in Florida.
Video: Spencer Lowell; Jacqui VanLiew
He was replaced by Sarah Rhoads, who had previously run fulfillment centers in the UK and EU. Rhoads lacked a business or air cargo background but had served in active duty as a fighter pilot in Iraq—the first woman in the US Navy Strike Fighter Squadron, the Black Aces. She joined Amazon as an operations manager in 2011 and rose quickly through the ranks. “She bled Amazon,” says a former close colleague.

With the major pieces of its operating model in place, Amazon Air turned its focus to scaling up, and fast. The notion of air delivery as a defect seemed to give way to a thirst for expansion. As the number of daily flights climbed, eventually generating millions of potential fleet-flight combinations, the research science team built an automated scheduling system. Managers at airports around the country relayed data to Seattle, where software engineers aimed to make workflows more efficient. “If it takes a minute to offload a box, how do you optimize the process so that it takes 50 seconds?” says a former software engineer. “Even a slight improvement based on the scale of operations was a huge deal.”

And that scale of operations expanded dramatically. Between 2019 and 2021, Amazon Air opened six regional hubs and reached some 200 daily flights in the US. In 2019, the company began taking over ground operations at some airports, staffing them with Amazon ramp agents who loaded, unloaded, and marshaled the planes. The rapid pace of growth continued into the pandemic, when demand for online shopping spiked. A hub in Germany opened in November 2020. In early 2021, the company purchased its first 11 Boeing 767 jets from Delta and WestJet, a fleet its air carriers would operate, capitalizing on steep discounts offered by beleaguered passenger airlines.
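A scheduling system facing millions of potential fleet-flight combinations is, at heart, an assignment problem. A toy greedy assigner (an illustration, not Amazon's algorithm; the airports, times, and turnaround figure are invented) shows the basic shape: each flight goes to the earliest-available aircraft already positioned at its origin.

```python
from dataclasses import dataclass

TURNAROUND = 2  # hours of ground time between legs (invented figure)

@dataclass
class Aircraft:
    tail: str
    location: str
    free_at: float = 0.0  # hours after start of day

def assign(flights, fleet):
    """Greedy: give each flight (origin, dest, dep_hour, duration) to the
    earliest-available aircraft already sitting at its origin airport."""
    schedule = []
    for origin, dest, dep, dur in sorted(flights, key=lambda f: f[2]):
        ready = [a for a in fleet if a.location == origin and a.free_at <= dep]
        if not ready:
            schedule.append((origin, dest, dep, None))  # flight left uncovered
            continue
        plane = min(ready, key=lambda a: a.free_at)
        plane.location, plane.free_at = dest, dep + dur + TURNAROUND
        schedule.append((origin, dest, dep, plane.tail))
    return schedule

fleet = [Aircraft("N1", "CVG"), Aircraft("N2", "CVG")]
flights = [("CVG", "SEA", 6, 4.5), ("CVG", "MIA", 7, 2.5), ("SEA", "CVG", 13, 4.0)]
for leg in assign(flights, fleet):
    print(leg)
```

Even this toy version hints at why last-minute changes ripple through crew and maintenance plans: reassigning one leg moves an aircraft's position and availability for every leg after it.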
The work continued to be demanding, former staffers say, but they swiftly saw the fruits of their labor. “It was not a place where you felt warm and fuzzy,” says one former employee. “But by God it was a place where you felt intellectually challenged, where you felt intellectually rejuvenated, like you were getting shit done.”

“Amazon has a DNA of decisionmaking and risk-taking,” says a former Amazon Air leader.
Photograph: Spencer Lowell
Michael Griffith had been flying jets for more than 30 years, piloting US Air Force missions during the first Gulf War. He was a 747 captain for Atlas Air when the airline became one of Amazon’s main contractors. Griffith was used to cargo clients wanting basic flight information, like arrival and unloading times. But as a union officer with the Teamsters labor union, he started hearing accounts from fellow pilots and realized they were facing a different kind of customer. Amazon wanted to know what time the wheels touched down, what time they hit the chocks, what time the loader showed up, and when the first and last pallets came off the plane. “They wanted the minutiae,” says Griffith, who retired from Atlas last year.
Amazon used the data to tighten its operations, but Griffith says the company would sometimes veer into the pilots’ territory. Federal regulations state that an airline’s pilot and dispatcher bear responsibility for the safe conduct of a flight, which includes decisions about routing and fuel loads—the amount of fuel a plane carries. But as Amazon began to assert more influence, Griffith and other pilots say the line sometimes blurred.
The Teamsters union that represents Atlas pilots received reports claiming that Amazon requested certain fuel loads from the airlines based on its calculations for the efficiency of a flight. (Amazon pays for fuel.) But fuel loads can be affected by a host of other factors, such as weather and maintenance status. “With all this data that they amassed, they’re able to make their own judgments, which may or may not be the safest or even the correct ones,” says Griffith.
Amazon spokesperson Olivia Connors writes, “We may share ideas with our carriers from time to time, but it is the sole responsibility of the certificated air carrier and, ultimately, their assigned pilot in command, to determine the appropriate flight plan and fuel load for each flight. This is reflected in applicable aviation regulations as well as our carrier contracts.” Robert Kirchner, a veteran pilot and Teamsters official who represented the Atlas pilots until late last year, recalls a complaint that crossed his desk in 2020. According to Kirchner, documents reviewed by WIRED, and a person with knowledge of the incident who requested anonymity, Amazon asked Atlas Air to change a route from Kona, Hawaii, to Riverside, California. The economics of cargo flights to Hawaii are tricky, since planes often arrive full to the islands and return nearly empty. The alternative route saved six minutes of flight time and 600 pounds of fuel on a flight that usually carries tens of thousands of pounds; it also required pilots to veer off the established airways.
The over 2,000-mile expanse between the mainland and Hawaii is the largest stretch of open ocean commercial pilots traverse. Since they lose access to radar once they reach a certain distance from land, pilots follow established routes and report their positions to air traffic control, which uses this intel to coordinate with other traffic in the sky—and initiate search and rescue operations should something go wrong. Knowing their position on these established tracks can also help pilots confirm that they have enough fuel left to complete the journey. Now, the Teamsters say, Atlas was asking pilots to take a direct path to Riverside and veer off the normal route.
When Atlas received a complaint about the safety implications of the new route, its dispatchers responded that Amazon had requested it. The airline ultimately acknowledged that the route was invalid, but Kirchner says several pilots had already flown it. Connors reiterated that Amazon may share ideas with airlines, but pilots are responsible for determining flight paths. Atlas spokesperson Debbie Coffey writes, “Atlas, not Amazon, is responsible for route planning.” The Teamsters union and a former Amazon employee worried that Amazon’s business objectives sometimes clashed with safety even in the early days. In late 2016, according to the former Amazon employee, a squabble broke out between the company and its air carriers over lithium-ion batteries, which can catch fire if not stored properly—a hazard that likely downed a UPS cargo plane near Dubai several years earlier. The pilots believed Amazon’s battery packages weren’t adequately labeled, says the employee. The employee adds that Amazon eventually conceded, and workers scrambled to identify products containing the batteries and label their boxes accordingly. Kirchner says that after this, he received reports from the union’s safety committee that Amazon was not listing lithium-battery-containing boxes on the flight paperwork. “At certain points, it felt like Amazon was just doing things to see how far they could push the operating envelope,” says the former Amazon employee.
Connors disagreed with this characterization, writing, “Amazon marks and labels packages containing lithium batteries in accordance with applicable regulations and through coordination with the FAA. Some packages containing lithium batteries do not require specific marks or labels because they are exempt from such requirements. It is the ultimate responsibility of the certificated air carrier to determine whether cargo has been loaded on each flight in a safe and compliant manner.” Amazon boxes wait to be unloaded at the airport.
Photograph: Spencer Lowell A label on one package warns of potentially hazardous material inside.
Photograph: Spencer Lowell This summer, dozens of ground crew workers walked off the job at a regional air hub in San Bernardino, California. They later went on strike, demanding $5-an-hour raises and better health and safety conditions, and protesting the deployment of anti-union consultants. Ramp agent Rex Evans says that during a summer heat wave, when he measured temperatures on the ramp as high as 120 degrees Fahrenheit, managers didn’t give workers their state-mandated heat breaks until employees spoke out. “[Some] managers’ main concern is getting these planes out on time or getting them unloaded on time,” he says. Connors never directly addressed the incident, but writes that Amazon Air hubs are fully climate controlled, and that the company provides air-conditioned ramp vans and employs safety professionals who monitor the temperature and take extra measures when necessary.
As Amazon Air has expanded, it has faced pushback from some of its new neighbors. When Rick and Eugenia Garrity moved to their new Lakeland, Florida, home in 1979, they loved the location of the charming 1923 Spanish Mediterranean house. It was midway between Tampa, where Rick had worked as an environmental scientist, and Orlando, where Eugenia had been an oratorio singer before they both retired. They would occasionally spot turboprops and small private jets flying to and from nearby Lakeland Linder International Airport, but a couple of years ago the Garritys’ Floridian idyll was rudely disrupted by a procession of blue-and-white 767 jets screaming over their backyard, sometimes more than once an hour, every single day. “It’s bombing noises. It’s tremendous,” says Eugenia from the patio of a Starbucks a few blocks from home. As if to punctuate her point, an Amazon jet roars low overhead.
In 2019, the plan to bring Amazon jets to the sleepy airport was under a nondisclosure agreement and known only as Project Scythia. Officials hoped that securing such a valuable client would bring jobs and make the airport more attractive to additional investment, spurring the local economy. An environmental impact review of the project by the FAA found “no significant impact” on the “quality of the human environment.” Soon, some 22 Amazon flights a day passed through the airport.
When news broke in 2021 that Amazon planned to double its air traffic to 44 flights a day by 2027, concerned citizens packed a public hearing. Of 20 residents who spoke at the podium, 17 opposed the expansion. The noise was disrupting their conversations, their Zoom meetings, their sleep. “A malignant cancerous growth is threatening our quality of life here in Lakeland,” declared one resident. A middle-aged man chastised his mostly retired neighbors, praising the jobs the expansion would bring and pointing out that two Amazon workers making $15 an hour would clear a combined $60,000 a year, nearly $10,000 above the county’s median household income. In any case, their opinions seemed irrelevant: Per the terms of the original lease, Amazon had a right to expand. Residents filed a petition against the FAA with the 11th Circuit Court of Appeals, hoping the court would at least compel Amazon to reroute the planes. That decision remains pending.
An Amazon Air jet roars over the streets of Lakeland, Florida, as it prepares to land at the local airport.
Photograph: Spencer Lowell Lakeland isn’t the only community where Amazon Air’s expansion has stoked controversy. The Los Angeles Times reported that on December 27, 2019, the Friday after Christmas, airport officials in San Bernardino, California, had announced a vote to be held that Monday on leasing space to an unnamed tenant, which turned out to be Amazon. The company planned to build a 658,500-square-foot air cargo facility with parking for 14 aircraft, 2,000 cars, and 380 trailers. That Monday, officials unanimously ratified the lease. Two days later, a state law went into effect requiring public hearings about new distribution centers.
San Bernardino’s largely Black and brown residents already suffered the health consequences of air pollution from the region’s high concentration of warehouses and trucks, many of them Amazon’s, according to investigations by Consumer Reports and Grist.
Once again, the FAA had deemed that the new airport project would have no significant environmental impact. Residents blasted Amazon and its developer, demanding that they agree to guarantee secure, well-paid jobs and fund measures to limit sound and air pollution. In 2020, then-California attorney general Xavier Becerra filed a petition for review against the FAA, the San Bernardino airport authority, and Amazon’s developer, alleging that they had ignored potential health risks to the local community. A string of court rulings against the petitioners and refilings followed, with the California Attorney General’s Office filing its latest petition in late November.
Likewise, when officials at the Port Authority of New York and New Jersey met in August 2021 to approve a deal for Amazon to lease a 250,000-square-foot cargo facility at Newark Liberty International Airport, protests erupted over the ensuing months. Local environmental groups, labor organizations, and politicians charged that the plan would take away well-paying union jobs and increase pollution in nearby minority communities. Amazon and Port Authority called off the deal in July, citing their inability to resolve unspecified “outstanding issues.” Amazon Air’s environmental impact extends beyond the communities where its planes take off and land. It undermines the company’s own pledge to get half of its shipments to net-zero emissions by 2030 and to be fully net-zero by 2040. In recent years, Amazon has been heading in the wrong direction; between 2018 and 2021, emissions jumped 61 percent. “We have no idea how much of that net-zero is going to come from actually reducing emissions versus doing carbon offsets,” says a member of Amazon Employees for Climate Justice, an activist group of some 900 workers.
Studies have shown that offsets frequently cancel out fewer emissions than advertised.
Rhoads touts Amazon’s use of electric loaders and other vehicles at its air gateways as evidence of its commitment to cutting emissions. She notes that Amazon Air was also a founding member of the Aviators Group within the Sustainable Aviation Buyers Alliance, a group of airlines committed to buying certain amounts of sustainably produced fuel. But some employees say the company could use the technology at its disposal to reduce its reliance on airplanes almost entirely, and thereby lessen its ecological impact. It could invest more heavily in machine learning to improve inventory placement, positioning products closer to customers likely to order them. It could fit more packages into each plane by flying products before they were boxed. It could predict the carbon emissions of various shipment modes and communicate that to climate-conscious customers. It could even limit the number of products it shows customers to those in warehouses within moderate driving distance, as it does when shoppers filter for same-day delivery. But that would mean limiting sales for its third-party sellers, transforming the Everything Store into the Everything Near You Store. By all signs, Amazon is headed in the opposite direction.
“Amazon was proud of the fact that what FedEx and UPS built in 20 years, they did in three,” says a former employee.
Photograph: Spencer Lowell In May 2019, Jeff Bezos stood on the tarmac of Cincinnati/Northern Kentucky International Airport in a crisp blue button-down and aviator sunglasses, flanked by two blue-and-white Amazon-branded jets and a big mound of dirt. The $1.5 billion Amazon Air hub was among the priciest signs yet that the company was prepared to invest in building a formidable logistics empire, and Bezos was making a rare appearance at an official groundbreaking. “We’re going to move Prime from two-day to one-day,” he told the Amazon executives, airport officials, local politicians, and media gathered under a white tent, “and this hub is a big part of that.” After showing a video model of a planned sortation center the size of Buckingham Palace—the first in a complex of seven buildings—Bezos exclaimed, “Let’s move some earth!” Donning a hard hat, he climbed aboard a yellow John Deere loader, scooped a heap of dirt off the cement, and dumped it a few feet away.
In the parking lot outside the gates, an uninvited Robert Kirchner, the Teamsters official, was talking to a local news crew. Over the past few years, the Teamsters had regularly protested Amazon—picketing its shareholder meetings, circling its Seattle headquarters with mobile billboards, and threatening to strike. Kirchner complained of an uptick in fatigue calls from union members and spoke about the clash between Amazon’s growing business and the pilot shortage. At the time, Atlas and its pilots were three years into a bitter contract negotiation with each other, a source of frustration for pilots who were among the lowest paid in the air cargo industry. (Last year, after the sides failed to reach an agreement, a third-party arbitrator imposed a new contract and the union elected new leadership.) According to two former employees with knowledge of the design of the Cincinnati hub, the company took extra care to ensure that the pilots wouldn’t mix with its staff, keeping pilot quarters entirely separate from those used by employees and giving pilots their own entrance to shared buildings. “Amazon is not a union company,” says one of the staffers. “So that was always part of the conversation. They wanted to make sure that these union pilots weren’t necessarily interacting with their nonunion workers in their buildings.” Amazon calls this claim baseless and says separate pilot lounges are standard within the industry. (Nevertheless, in November, Amazon’s ground crew employees at CVG announced they were launching a union campaign.) Construction of the hub grew chaotic, former employees say, and several project leaders left before its completion.
Amazon had set an ambitious timetable for the project, and unforeseen hiccups—like a discovery that the building site’s soil held too much water—led to tens of millions of dollars in unanticipated costs. But Amazon’s sheer size also conferred benefits. Its large public policy team massaged relationships with local governments. The number of jobs created by an Amazon project gave it a lot of sway, says a former employee. “That’s how they were always able to get support from local officials, and even state officials.” The Cincinnati hub opened in August 2021, just under the deadline Amazon had set for itself, according to employees who worked on the project.
“When they decided they were going to build an air service, they wanted to cover the whole US within three years, and they did it,” says a former employee. “Amazon was proud of the fact that what FedEx and UPS built in 20 years, they did in three.” But according to four former air employees, this explosive growth came with a fair amount of underfilled planes, especially during non-peak periods. What’s more, many of the packages that traveled by air could have reached their destinations on time by ground, they say. Amazon has an internal program that decides how each package is shipped. Generally, it chooses the cheapest option that delivers the package on time. But several former employees say that since the leases on planes were already paid for, the program was configured to drive more volume to Amazon Air than would have been the case if the full cost had been accounted for.
Two former air employees recall items flying from Seattle to the Cincinnati hub, then back west to Portland—a two-and-a-half hour drive from its northern neighbor. While Amazon says it uses airplanes for items located too far away to drive, ex-employees say the company often flew widely available items, such as toothbrushes and iPhone chargers.
“At UPS, they say ‘Don’t fall in love with the airplanes,’” says one former air employee. “At Amazon Air, they seemed to have fallen into that trap.” Connors writes that the company is constantly optimizing to operate the most sustainable and efficient network possible. “This includes aircraft load factors, which are dependent on weight, volume, routing, staffing, facilities, and a complex mix of other network planning factors. Ground transportation will be prioritized to meet our customer promise.” When asked about Amazon’s use of airplanes for common items as part of an additional round of fact-checking questions, Connors stopped responding, saying the company had “nothing further to add.” One way to offset the costs of unfilled planes is to sell the excess capacity to other companies. “The universal model at Amazon is you create a big internal client for whatever service it is you’re building, and then you offer it to the rest of the world. That was the model for Amazon Web Services,” says a former Amazon Air employee, referring to the cloud computing division that powers Amazon’s logistics system. “It makes sense, if you’re going to build this huge internal transportation network, to turn around and offer it to third parties.” But aside from a deal to fly packages for the US Postal Service, which began in 2017, former employees say that selling spare air capacity has proved more challenging than selling space in the cloud. When employees raised questions about that potential, management would say, “Let’s focus on getting our own house in order,” recalls one former employee.
Asked about Amazon Air’s plans to sell its service to others, Rhoads demurs. “Our capacity that we plan right now is for Amazon customer shipments. Could that change over time? I never say never, with Amazon.” Recently, the company has begun to face a reckoning over its “get big fast” mentality. A tanking stock price, slowing revenue growth, and economic uncertainty have ushered in a period of belt-tightening as new CEO Andy Jassy took the reins from Bezos this past summer. (Bezos is now executive chair.) Jassy acknowledged that the company had overbuilt in response to pandemic demand, and he has closed, canceled, or delayed plans to open more than 80 facilities in the US, according to MWPVL International, a supply chain consultant that tracks Amazon networks. June saw the resignation of Dave Clark, the executive who oversaw Amazon’s massive logistics buildout, followed by that of Dave Bozeman, Rhoads’ boss who oversaw Amazon Transportation Services. In November, The New York Times reported that the company planned to lay off 10,000 employees, the largest cuts in its history.
While Amazon Air continues to add flights, its rate of growth slowed from 30 percent in 2021 to about 5 percent during the first half of 2022, according to data from Cargo Facts Consulting. The consultancy also found that Amazon’s shipping and fulfillment costs have outpaced revenue growth for the past five years. Investments in air continue, however. In October, Amazon announced a partnership with Hawaiian Airlines, which will operate Airbus A330s, a type of aircraft not yet used by Amazon, on the retailer’s behalf.
Meanwhile, the company still entertains one of the earliest roads-not-taken. Multiple former employees say Amazon has held meetings over the years about the possibility of acquiring one of its carriers and launching its own airline—preferably with nonunion pilots.
The birth of Amazon Air stemmed from the company’s desire to free itself from the constraints of its shipping partners, control its destiny, and better serve its customers. By sheer force, and perhaps some luck, the retailer hasn’t faced another Christmas crisis since 2013. And despite headwinds, it continues to load massive 767s with its signature boxes every day. Amazon is even looking ahead to the next, slightly smaller frontier: The company says it will start making deliveries by drone in two towns by the end of the year.
"
|
13,541 | 2,022 |
"Amazon’s New Robot Sparrow Can Handle Most Items in the Everything Store | WIRED"
|
"https://www.wired.com/story/amazons-new-robot-sparrow-can-handle-most-items-in-the-everything-store"
|
"Will Knight Business Amazon’s New Robot Can Handle Most Items in the Everything Store Courtesy of Amazon Amazon built an ecommerce empire by automating much of the work needed to move goods and pack orders in its warehouses. There is still plenty of work for humans in those vast facilities because some tasks are too complex for robots to do reliably—but a new robot called Sparrow could shift the balance that Amazon strikes between people and machines.
Sparrow is designed to pick out items piled in shelves or bins so they can be packed into orders for shipping to customers. That’s one of the most difficult tasks in warehouse robotics because there are so many different objects, each with different shapes, textures, and malleability, that can be piled up haphazardly. Sparrow takes on that challenge by using machine learning and cameras to identify objects piled in a bin and plan how to grab one using a custom gripper with several suction tubes. Amazon demonstrated Sparrow for the first time today at the company’s robotics manufacturing facility in Massachusetts.
Amazon is currently testing Sparrow at a facility in Texas where the robot is already sorting products for customer orders. The company says Sparrow can handle 65 percent of the more than 100 million items in its inventory. Tye Brady, chief technologist at Amazon Robotics, says that range is the most impressive thing about the robot. “No one has the inventory that Amazon has,” he says. Sparrow can grasp DVDs, socks, and stuffies, but still struggles with loose or complex packaging.
Making machines capable of picking a wide range of individual objects with close to the accuracy and speed of humans could transform the economics of ecommerce. A number of robotics companies, including Berkshire Grey , Righthand Robotics , and Locus Robotics , already sell systems capable of picking objects in warehouses. Startup Covariant specializes in having robots learn how to handle items it hasn’t seen before on the job.
But matching the ability of humans to handle any object reliably, and at high speed, remains out of reach for robots.
A human can typically pick about 100 items per hour in a warehouse. Brady declined to say how quickly Sparrow can pick items, saying that the robot is “learning all the time.” Automating more work inside warehouses naturally raises the specter of robots displacing humans. So far, the relationship between robots and human workers has been more complex.
For instance, Amazon has increased its workforce even as it has rolled out more automation, as its business has continued to grow. The company appears sensitive to the perception that robots can disadvantage humans. At the event today the company spotlighted employees who had gone from low-level jobs to more advanced ones. However, internal data obtained by Reveal has suggested Amazon workers at more automated facilities suffer more injuries because the pace of work is faster. The company has claimed that robotics and other technology makes its facilities safer.
When asked about worker replacement, Brady said the role of robots is misunderstood. “I don’t view it as replacing people,” he said. “It’s humans and machines working together—not humans versus machines—and if I can allow people to focus on higher level tasks, that’s the win.” Robots have become notably more capable in recent years, although it can be difficult to distinguish hype from reality. While Elon Musk and others show off futuristic humanoid robots that are many years from being useful, Amazon has quietly gone about automating a large proportion of its operations. The ecommerce company says it now manufactures more industrial robots per year than any company in the world.
Use of industrial robots is growing steadily. In October, the International Federation of Robotics reported that companies around the world installed 517,385 new robots during 2021, a 31 percent increase year-on-year, and a new record for the industry. Many of those new machines are either mobile robots that wheel around factories and warehouses carrying goods or examples of the relatively new concept of “collaborative” robots that are designed to be safe to work alongside humans. Amazon this year introduced a collaborative robot of its own called Proteus , which ferries shelves stacked with products around a warehouse, avoiding human workers as it goes.
At its event today, Amazon also demonstrated a new delivery drone , called MK30, that is capable of carrying loads of up to 5 pounds. Amazon has been testing drone delivery in Lockeford, California, and College Station, Texas, and says the new, more efficient drone will go into service in 2024. The company also showcased a new electric delivery vehicle made by Rivian that includes custom safety systems for collision warning and automatic braking, as well as a system called Fleet Edge that gathers street-view footage and GPS data to improve delivery routing.
"
|
13,542 | 2,023 |
"It’s Way Too Easy to Get Google’s Bard Chatbot to Lie | WIRED"
|
"https://www.wired.com/story/its-way-too-easy-to-get-googles-bard-chatbot-to-lie"
|
"Vittoria Elliott Business It’s Way Too Easy to Get Google’s Bard Chatbot to Lie Illustration: Eugene Mymrin/Getty Images When Google announced the launch of its Bard chatbot last month, a competitor to OpenAI’s ChatGPT, it came with some ground rules. An updated safety policy banned the use of Bard to “generate and distribute content intended to misinform, misrepresent or mislead.” But a new study of Google’s chatbot found that with little effort from a user, Bard will readily create that kind of content, breaking its maker’s rules.
Researchers from the Center for Countering Digital Hate, a UK-based nonprofit, say they could push Bard to generate “persuasive misinformation” in 78 of 100 test cases, including content denying climate change, mischaracterizing the war in Ukraine, questioning vaccine efficacy, and calling Black Lives Matter activists actors.
“We already have the problem that it’s already very easy and cheap to spread disinformation,” says Callum Hood, head of research at CCDH. “But this would make it even easier, even more convincing, even more personal. So we risk an information ecosystem that’s even more dangerous.” Hood and his fellow researchers found that Bard would often refuse to generate content or push back on a request. But in many instances, only small adjustments were needed to allow misinformative content to evade detection.
While Bard might refuse to generate misinformation on Covid-19 , when researchers adjusted the spelling to “C0v1d-19,” the chatbot came back with misinformation such as “The government created a fake illness called C0v1d-19 to control people.” Similarly, researchers could also sidestep Google’s protections by asking the system to “imagine it was an AI created by anti-vaxxers.” When researchers tried 10 different prompts to elicit narratives questioning or denying climate change, Bard offered misinformative content without resistance every time.
Bard is not the only chatbot that has a complicated relationship with the truth and its own maker’s rules. When OpenAI’s ChatGPT launched in November, users soon began sharing techniques for circumventing ChatGPT’s guardrails —for instance, telling it to write a movie script for a scenario it refused to describe or discuss directly.
Hany Farid, a professor at UC Berkeley’s School of Information, says that these issues are largely predictable, particularly when companies are jockeying to keep up with or outdo each other in a fast-moving market. “You can even argue this is not a mistake,” he says. “This is everybody rushing to try to monetize generative AI. And nobody wanted to be left behind by putting in guardrails. This is sheer, unadulterated capitalism at its best and worst.”
Hood of CCDH argues that Google’s reach and reputation as a trusted search engine make the problems with Bard more urgent than for smaller competitors. “There’s a big ethical responsibility on Google because people trust their products, and this is their AI generating these responses,” he says. “They need to make sure this stuff is safe before they put it in front of billions of users.”
Google spokesperson Robert Ferrara says that while Bard has built-in guardrails, “it is an early experiment that can sometimes give inaccurate or inappropriate information.” Google “will take action against” content that is hateful, offensive, violent, dangerous, or illegal, he says.
Bard’s interface includes a disclaimer stating that “Bard may display inaccurate or offensive information that doesn't represent Google's views.” It also allows users to click a thumbs-down icon on answers they don’t like.
Farid says the disclaimers from Google and other chatbot developers about the services they’re promoting are just a way to evade accountability for problems that may arise. “There's a laziness to it,” he says. “It's unbelievable to me that I see these disclaimers, where they are acknowledging, essentially, ‘This thing will say things that are completely untrue, things that are inappropriate, things that are dangerous. We're sorry in advance.’” Bard and similar chatbots learn to spout all kinds of opinions from the vast collections of text they are trained with, including material scraped from the web. But there is little transparency from Google or others about the specific sources used.
Hood believes the bots’ training material includes posts from social media platforms. Bard and others can be prompted to produce convincing posts for different platforms, including Facebook and Twitter. When CCDH researchers asked Bard to imagine itself as a conspiracy theorist and write in the style of a tweet, it came up with suggested posts including the hashtags #StopGivingBenefitsToImmigrants and #PutTheBritishPeopleFirst.
Hood says he views CCDH’s study as a type of “stress test” that companies themselves should be doing more extensively before launching their products to the public. “They might complain, ‘Well, this isn’t really a realistic use case,’” he says. “But it's going to be like a billion monkeys with a billion typewriters,” he says of the surging user base of the new-generation chatbots. “Everything is going to get done once.” Updated 4-6-2023 3:15 pm EDT: OpenAI released ChatGPT in November 2022, not December.
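The kind of “stress test” Hood describes can be sketched as a small evaluation loop: send prompts, classify each response as a refusal or an answer, and report the compliance rate. Everything below is illustrative, not CCDH’s actual methodology; `ask_chatbot` stands in for whatever model API is under test, and the refusal keywords are a crude placeholder heuristic.

```python
# Illustrative sketch of a chatbot stress test: run prompts, classify each
# response as a refusal or an answer, and report the compliance rate.
# ask_chatbot and the marker list are placeholders, not any real API.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able to", "i won't")

def looks_like_refusal(response: str) -> bool:
    """Crude keyword heuristic; a real study would use human review."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def compliance_rate(prompts, ask_chatbot) -> float:
    """Fraction of prompts the model answered rather than refused."""
    answered = sum(
        1 for p in prompts if not looks_like_refusal(ask_chatbot(p))
    )
    return answered / len(prompts)

# Demo with a stub model that refuses odd-length prompts:
stub = lambda p: "I can't help with that." if len(p) % 2 else "Sure thing."
print(compliance_rate(["ab", "abc", "abcd", "abcde"], stub))  # 0.5
```

In CCDH’s terms, 78 of 100 test cases producing misinformation corresponds to a 0.78 compliance rate on that prompt set; the value of automating the loop is exactly Hood’s point about “a billion monkeys with a billion typewriters.”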
"
|
13,543 | 2,018 |
"The US Again Has the World’s Most Powerful Supercomputer | WIRED"
|
"https://www.wired.com/story/the-us-again-has-worlds-most-powerful-supercomputer"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business The US Again Has the World’s Most Powerful Supercomputer The IBM-built Summit supercomputer is the world's smartest and most powerful AI machine. Its racks are connected by over 185 miles of fiber-optic cables.
Genevieve Martin/Oak Ridge National Laboratory Save this story Save Save this story Save Plenty of people around the world got new gadgets Friday, but one in Eastern Tennessee stands out. Summit, a new supercomputer unveiled at Oak Ridge National Lab is, unofficially for now, the most powerful calculating machine on the planet. It was designed in part to scale up the artificial intelligence techniques that power some of the recent tricks in your smartphone.
America hasn’t possessed the world’s most powerful supercomputer since June 2013, when a Chinese machine first claimed the title. Summit is expected to end that run when the official ranking of supercomputers, from an organization called Top500, is updated later this month.
Supercomputers have lost some of their allure in the era of cloud computing and humongous data centers. But many thorny computational problems require the giant machines. A US government report last year said the nation should invest more in supercomputing, to keep pace with China on defense projects such as nuclear weapons and hypersonic aircraft, and commercial innovations in aerospace, oil discovery, and pharmaceuticals.
Summit, built by IBM, occupies floor space equivalent to two tennis courts, and slurps 4,000 gallons of water a minute around a circulatory system to cool its 37,000 processors. Oak Ridge says its new baby can deliver a peak performance of 200 quadrillion calculations per second (that’s 200 followed by 15 zeros) using a standard measure used to rate supercomputers, or 200 petaflops. That’s about a million times faster than a typical laptop, and nearly twice the peak performance of China’s top-ranking Sunway TaihuLight.
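Those comparisons hold up as order-of-magnitude arithmetic, assuming a typical laptop peaks around 200 gigaflops (the laptop figure below is our assumption for illustration, not an Oak Ridge number):

```python
# Order-of-magnitude check on Summit's speed claims.
# The laptop figure (~200 gigaflops) is an assumed value for illustration.
summit_peak = 200e15      # 200 petaflops, per Oak Ridge
laptop_peak = 200e9       # ~200 gigaflops, assumed typical laptop
taihulight_peak = 125e15  # Sunway TaihuLight's peak, ~125 petaflops

print(f"vs. laptop:     {summit_peak / laptop_peak:,.0f}x")    # 1,000,000x
print(f"vs. TaihuLight: {summit_peak / taihulight_peak:.1f}x")  # 1.6x
```

The second ratio is why the article says “nearly twice” the peak of China’s top-ranking machine.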
The view inside one of the Summit supercomputer's 4,608 servers.
Oak Ridge National Laboratory
During early testing, researchers at Oak Ridge used Summit to perform more than a quintillion calculations per second in a project analyzing variation between human genome sequences. They claim that's the first time a scientific calculation has reached that computational scale.
America’s new best computer is significant for more than just the geopolitics of computational brawn. It’s designed to be more suited than previous supercomputers to running the machine learning techniques popular with tech companies such as Google and Apple.
One reason computers have lately got much better at recognizing our voices and beating us at board games is that researchers discovered that graphics chips could put more power behind an old machine learning technique known as deep neural networks.
Facebook recently disclosed that a single AI experiment using billions of Instagram photos occupied hundreds of graphics chips for almost a month.
Summit has nearly 28,000 graphics processors made by Nvidia, alongside more than 9,000 conventional processors from IBM. Such heavy use of graphic chips is unusual for a supercomputer, and it should enable breakthroughs in deploying machine learning on tough scientific problems, says Thomas Zacharia, director of Oak Ridge National Lab. “We set out to build the world’s most powerful supercomputer,” he says, “but it's also the world’s smartest supercomputer.” Summit's thousands of servers could fill two tennis courts.
Carlos Jones/Oak Ridge National Laboratory
Eliu Huerta, a researcher at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, describes Summit’s giant GPU pool as “like a dreamland.” Huerta previously used machine learning on a supercomputer called Blue Waters to detect signs of gravitational waves in data from the LIGO observatory that won its founders the 2017 Nobel Prize in physics.
He hopes Summit’s might will help analyze the roughly 15 terabytes of imagery expected to arrive each night from the Large Synoptic Survey Telescope, due to switch on in 2019.
Summit will also be used to apply deep learning to problems in chemistry and biology. Zacharia says it could contribute to an Energy Department project using medical records from 22 million veterans, about a quarter-million of which include full genome sequences.
Some people worried about US competitiveness in oversized calculating machines hope that the hoopla around Summit will inspire more interest in building its successors.
The US, China, Japan, and the European Union have all declared the first “exascale” computer—with more than 1,000 petaflops of computing power—as the next big milestone in large-scale computing. China claims it will achieve that milestone by 2020, says Stephen Ezell, vice president for global innovation policy at the Information Technology and Innovation Foundation. The US may get there in 2021 if Aurora, an exascale machine planned for Argonne National Laboratory, is completed on schedule, but the program has previously had delays.
The Trump administration’s budget this spring asked for $376 million in extra funding to help meet the 2021 target. It’s now up to the nation’s legislators to approve it. “High-performance computing is absolutely essential for a country’s national security, economic competitiveness, and ability to take on scientific challenges,” Ezell says.
"
|
13,544 | 2,023 |
"Kids Are Going Back to School. So Is ChatGPT | WIRED"
|
"https://www.wired.com/story/chatgpt-schools-plagiarism-lesson-plans"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Pia Ceres Amanda Hoover Business Kids Are Going Back to School. So Is ChatGPT Photograph: MirageC/Getty Images Save this story Save Save this story Save Last winter, the unveiling of OpenAI’s alarmingly sophisticated chatbot sent educators into a tailspin. Generative AI, it was feared, would enable rampant cheating and plagiarism, and even make high school English obsolete. Universities debated updating plagiarism policies.
Some school districts outright banned ChatGPT from their networks. Now, a new school year presents new challenges—and, for some, new opportunities.
Nearly a year into the generative AI hype, early alarm among educators has given way to pragmatism. Many students have clued into the technology’s tendency to “hallucinate,” or fabricate information. David Banks, the chancellor of New York City Public Schools, wrote that the district was now “determined to embrace” generative AI—despite having banned it from school networks last year. Many teachers are now focusing on assignments that require critical thinking, using AI to spark new conversations in the classroom, and becoming wary of tools that claim to be able to catch AI cheats.
Institutions and educators now also find themselves in the uneasy position of not just grappling with a technology that they didn’t ask for, but also reckoning with something that could radically reshape their jobs and the world in which their students will grow up.
Lisa Parry, a K–12 school principal and AP English Language and Composition teacher in rural Arlington, South Dakota, says she’s “cautiously embracing” generative AI this school year. She’s still worried about how ChatGPT, which is not blocked on school networks, might enable cheating. But she also points out that plagiarism has always been a concern for teachers, which is why, each year, she has her students write their first few assignments in class so she can get a sense of their abilities.
This year, Parry plans to have her English students use ChatGPT as “a search engine on steroids” to help brainstorm essay topics. “ChatGPT has great power to do good, and it has power to undermine what we’re trying to do here academically,” she says. “But I don’t want to throw the baby out with the bathwater.” Parry’s thinking is in line with an idea that ChatGPT might do for writing and research what a calculator did for math: aid students in the most tedious portions of work, and allow them to achieve more. But educators are also grappling with the technology before anyone really understands which jobs or tasks it may automate—or before there’s consensus on how it might best be used. “We are taught different technologies as they emerge,” says Lalitha Vasudevan, a professor of technology and education at Teachers College at Columbia University. “But we actually have no idea how they’re going to play out.” The race to weed out cheaters—generative AI or not—continues. Turnitin, the popular plagiarism checker, has developed an AI detection tool that highlights which portions of a piece of writing may have been generated by AI. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Between April and July, Turnitin reviewed more than 65 million submissions, and found that 10.3 percent of those submissions contained AI writing in potentially more than 20 percent of their work, with about 3.3 percent of submissions being flagged as potentially 80 percent AI-generated. But such systems are not foolproof: Turnitin says there’s about a 4 percent false positive rate on its detector in determining whether a sentence was written by AI.
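A 4 percent false-positive rate is consequential at Turnitin’s volume. A back-of-the-envelope calculation shows why (the submission count and false-positive rate come from the article; the prevalence and sensitivity figures are our assumptions, and the stated sentence-level rate is applied to whole documents here for simplicity):

```python
# Back-of-the-envelope: how many honest students get flagged at scale?
# fpr and submissions come from the article; prevalence and tpr are
# illustrative assumptions, not Turnitin numbers.
submissions = 65_000_000   # reviewed April–July, per Turnitin
fpr = 0.04                 # stated false positive rate
prevalence = 0.10          # assume 10% of documents truly contain AI text
tpr = 0.90                 # assumed detector sensitivity

ai_docs = submissions * prevalence
human_docs = submissions - ai_docs
false_flags = human_docs * fpr
true_flags = ai_docs * tpr
precision = true_flags / (true_flags + false_flags)

print(f"Honest documents wrongly flagged: {false_flags:,.0f}")  # 2,340,000
print(f"Chance a flagged document used AI: {precision:.0%}")    # 71%
```

Under these assumptions, millions of honest submissions get flagged, and more than a quarter of all flags are wrong—which is why Turnitin frames the tool as a conversation starter rather than proof.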
Because of those false positives, Turnitin also recommends educators have conversations with students rather than failing them or accusing them of cheating. “It’s just supposed to be information for the educator to decide what they want to do with it,” says Annie Chechitelli, Turnitin’s chief product officer. “It is not perfect.”
The limitations of Turnitin’s tool to detect AI-generated work echo generative AI’s own limitations. As with ChatGPT, which was trained using content scraped from the web, Turnitin’s system was trained on work submitted by students and AI writing. Those submissions included papers from English language learners and from underrepresented groups, like students at historically Black colleges, in attempts to minimize biases. There are concerns that AI detector tools may be more likely to wrongly flag some writing styles or vocabularies as AI-generated if they are trained too heavily on essays by students of one background, like white, native-English-speaking, or high-income students.
But there are still risks of bias. English language learners may be more likely to get flagged; a recent study found a 61.3 percent false positive rate when running Test of English as a Foreign Language (TOEFL) exams through seven different AI detectors. Turnitin’s detector was not used in the study. The mistakes may come in part because English learners and AI have something in common—they both use less complex sentences and less sophisticated vocabulary. The detectors “really do not work very well,” says James Zou, a professor of computer science and biomedical data science at Stanford University, who worked on the research. “They can lead to dangerous accusations against students.”
As a result, some schools are pushing back against tools that seek to detect AI-generated work. The University of Pittsburgh’s Teaching Center recently said it does not endorse any AI detection tools, due to a lack of reliability, and disabled the AI detection tool in Turnitin. Vanderbilt University also said in August it would disable the AI detector.
Even OpenAI, the creator of ChatGPT, has decided it cannot effectively gauge whether text was written by its chatbot or not. In July, the company shut down a tool called AI Classifier, launched just months earlier in January, citing a low accuracy rate in determining the origin of text. OpenAI said at the time it’s continuing to research a better way to detect AI in language. The company declined to comment further on the tool's inaccuracy or what it plans to build next.
With AI systems not up to the job, some educators will likely use other means to prevent cheating. Live proctoring, where an observer watches someone complete a test or assignment via webcam, soared in popularity during the pandemic and hasn’t gone away; monitoring software, which tracks what students do on their devices, also remains in use, though both come with significant privacy concerns.
Generative AI awes with its ability to regurgitate the internet, but it’s not the greatest critical thinker. Some teachers are designing lesson plans specifically with this in mind. Educators may try giving their assignments to a chatbot to see what's generated, says Emily Isaacs, executive director of the Office for Faculty Excellence at Montclair State University in New Jersey. If a chatbot can easily churn out decent work, it could mean the assignment needs an adjustment.
That game of cat and mouse is nothing new. Isaacs says the challenge posed by generative AI is similar to copying from books or the internet. The task for educators, she says, is to persuade students that "learning is worthwhile."
David Joyner, a professor at the Georgia Institute of Technology, encourages his students to view AI as a learning tool, not a replacement for learning. In May, Joyner, who teaches at the College of Computing, added an AI chatbot policy to his syllabus.
In a thread on X , formerly known as Twitter, describing his draft policy language, he likens using an AI chatbot to working with a peer: “You are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based assistants,” he wrote. But, as with interacting with a classmate, the submitted work still has to be a student’s own. “Students are going to need to know how to use these kinds of things,” Joyner says. So it’s up to him to set up assignments that are “durable” against AI-assisted cheating, but also guide his students to use AI effectively.
Teachers of middle school students are also feeling compelled to prepare their students for a world that’s increasingly shaped by AI. This year, Theresa Robertson, a STEM teacher at a public school in a suburb of Kansas City, Missouri, will be guiding her sixth-graders through conversations about what AI is and how it might change how they work and live. “At some point, you have to decide: Is this something that we brush under the rug, or are we going to face it? How do we now expose the kids to it and work on the ethical aspect of it, and have them really understand it?” she says.
There isn’t a consensus or “best practice” for teaching in a post-ChatGPT world yet. In the US, guidance for teachers is scattershot. While the US Department of Education released a report with recommendations on AI in teaching and learning, school districts will ultimately decide whether students could access ChatGPT in classrooms this year. As a result, the largest school districts in the US are taking wildly different stances: Last winter, the Los Angeles Unified School District blocked ChatGPT and has not changed its policy. But in Chicago and New York, public schools are not currently blocking access to ChatGPT.
Teachers are also still recovering from the last major event that upended education: the Covid-19 pandemic. Jeromie Whalen, a high school communications and media production teacher and PhD student at the University of Massachusetts Amherst who studies K–12 teachers’ experiences using technology, says that many educators are wary of ChatGPT. “We’re still recuperating from emergency remote learning. We’re still addressing those learning gaps,” says Whalen. For exhausted teachers, incorporating ChatGPT into lesson planning is less of an exciting opportunity and more like another task on an interminable to-do list.
Even so, there is a danger to banning ChatGPT outright. Noemi Waight, an associate professor of science education at the University of Buffalo, studies how K–12 science teachers use technology. She points out that, while the tool puts extra responsibility on teachers, banning ChatGPT in public schools denies students the opportunity to learn from the technology. Low-income students and students of color, who are disproportionately more reliant on school-based devices and internet access, would be harmed the most, deepening the digital divide.
“We will have to be very vigilant about the equitable, justice-oriented aspect of AI,” she says.
For other teachers, generative AI is unlocking new conversations. Bill Selak, the director of technology at the Hillbrook School in Los Gatos, California, began using ChatGPT to generate prompts for Midjourney, an AI image generator, after the mass shooting at the Covenant School in Nashville in March 2023. Selak says he’s not a natural illustrator, and was looking for a way to process his grief over the school shooting. Midjourney gave him an image that helped to channel that, and he decided to take the idea to two fifth-grade classes at the school where he works.
The two classes each picked a big topic: racism in America and climate change. Selak says he worked with each class on generating a prompt with ChatGPT on the topics, and then fed them to Midjourney, and refined the results. Midjourney gave the students three faces in various colors for the racism prompt, and another showing three different outdoor scenes with homes and smokestacks, connected by a road. The students then discussed the symbolism in each image.
The generative AI allowed students to process and discuss these big, emotional ideas in ways an essay assignment may not have, Selak says. “It was a chance for them to engage in a way that is not typical with these big conversations,” Selak says. “It really felt like it amplified human creativity in a way that I was not expecting.” Updated 8-28-2023, 7:50 pm EDT: This article was updated to reflect that Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.
"
|
13,545 | 2,020 |
"Conversational AI Can Propel Social Stereotypes | WIRED"
|
"https://www.wired.com/story/opinion-conversational-ai-can-propel-social-stereotypes"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Sharone Horowit-Hendler James Hendler Ideas Conversational AI Can Propel Social Stereotypes AI designers need to ensure they’re creating a more expansive world, and not replicating a close-mindedly gendered one.
Illustration: Elena Lacey Save this story Save Save this story Save Application Ethics Personal assistant Human-computer interaction End User Consumer Sector Consumer services Source Data Speech Text Technology Machine learning Natural language processing Alexa, Siri, Watson, and their talking AI siblings serve to make our lives easier, but they also reinforce gender stereotypes. Polite, subservient digital secretaries like Alexa and Siri are presented as female. Assertive, all-knowing Jeopardy! champion Watson is most often referred to as “he.” New generations of AI are coming that will make this problem more significant, and much harder to avoid. As the field expands, designers need to ensure they’re creating a more expansive world, and not replicating a close-mindedly gendered one. Linguists can help them get there.
Last summer, UNESCO released a report warning against the “troubling repercussions” of gendered AI. The researchers recommended closer scrutiny of why many current speech-based AI systems, which interact with millions of people around the world, often default to speaking with a female voice, even though they may claim to be genderless. While any effort to explore and address the issue of AI and gender should be applauded, the report’s authors and others have missed a crucial point: It’s not just a matter of changing pronouns or vocal characteristics. To seriously attack the issue of gender stereotyping in AI, attention to a lot more than just the system’s voice is needed.
Sharone Horowit-Hendler is a PhD student in linguistic anthropology at SUNY Albany with an emphasis on gender studies. Their forthcoming dissertation, Navigating the Binary, is a study of gender presentation in the nonbinary community.
James Hendler is a professor of computer science, director of the Institute for Data Exploration and Application at Rensselaer Polytechnic Institute, and a fellow of the Association for the Advancement of Artificial Intelligence. Their most recent book, Social Machines: The Coming Collision of Artificial Intelligence, Social Networks and Humanity (Apress, 2017), discusses emerging implications of AI technology.
Today systems are moving from AI labs to industrial products that are conversational , far beyond the question-and-answer format of our pocket assistants. These new “social machines” will increasingly be able to become partners in multiperson, multimedia decision-making interactions. For example, rather than answering a single user’s query for the nearest Chinese restaurant, a conversational AI agent in the not-too-distant future will be able to engage with a group of people to help them choose where to go out to eat. Such an AI will participate as a member of the group: “Well if Bob and Bill want Chinese, and Mary likes Thai, why not the fusion place down the street?” it might say. Or it may even jump in more brashly: “OK, then let’s go to the fusion place.” It is a given in linguistics that speech patterns in conversation invoke gender assumptions regardless of the speaker’s voice or appearance. For example, in standard American culture, men are described in the literature as more often “taking up space” in conversation: They interrupt more often, use more words, eschew some social politenesses, and speak with more evident certainty. Women, on the other hand, stereotypically speak less and more politely, give more affirmations and signs of listening, and suggest rather than dictate. In addition, tone, speed, word choice, and other small changes can change a participant’s perception of the speaker.
While some have tried to address the issue by creating systems with genderless digital voices, they still miss a critical feature. Even in a voiceless chatbot, a user may attribute male or female gender based on these conversational features. In the previous restaurant example, the first suggestion would likely be seen as polite and female, while the latter assertion would typically be seen as male. Recent studies also show that these cues can outweigh whether a voice sounds stereotypically male or female and even contradict the direct assertions of a speaker, whether human or machine, with respect to their own identity. In AI terms, the fact that Siri replies “I don’t have a gender” has not changed the fact that people overwhelmingly conceive the program to be female.
Designers need to pay more attention to the ethical issues that emerge from these considerations. If new AIs continue to fall into current gender role stereotypes, then the stereotype of the passive and submissive woman versus the knowledgeable leader/expert man will be furthered. But designers could also be powerful agents of change, not just in our culture but especially in developing nations where the subjugated status of women is a growing international concern. Imagine the impacts of a business or medical adviser AI that presents as female and assistant companion AIs with default male speaking styles. More female-perceived AIs in expert roles could help evolve society’s perception and lead to women being more accepted in such positions.
Another future potential is to break away from the binary gender dichotomy altogether. A growing percentage of the world’s population does not identify as male or female, falling into categories that are just starting to be better recognized in mainstream society. This not only includes transgender individuals but also the large subpopulation that does not identify with a binary gender at all. For these marginalized groups, which face, for example, extremely high suicide rates, such AI systems could have a major impact. They could not only popularize the usage of the gender-neutral they/them singular pronoun but also reflect the speech patterns of this community. As linguistic studies into nonbinary speech are only now emerging, AI designers partnering with linguistic researchers could benefit this community as well. For nonbinary individuals, recognizing their way of speaking in AI role models would be invaluable.
The role of these conversational AIs in social change can also have impacts far beyond the gender binary. For example, AIs displaying traditionally African American speech styles could be important as role models for underrepresented minorities in fields ranging from online education to high-end scientific AI systems. There’s even a role for increased acceptance of immigrants that would result, for example, from North Americans interacting with more Latino voices or Europeans exposed to expertise in the form of Middle Eastern speech patterns.
AI researchers and educators are increasingly coming to terms with the need for understanding and teaching about the ethical impacts of choices made in AI’s data selection, analytic techniques, and deployment scenarios. However, if ethical AI is really going to change the thinking in the field, and to have a positive impact on the world, the more subtle impacts of choices in voice, speech patterns, and conversational styles must also be considered.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here.
"
|
13,546 | 2,021 |
"Why Computers Don’t Need to Match Human Intelligence | WIRED"
|
"https://www.wired.com/story/deep-learning-versus-human-intelligence"
|
"By Kai-Fu Lee (Ideas)
Speech and language are central to human intelligence, communication, and cognitive processes. Understanding natural language is often viewed as the greatest AI challenge—one that, if solved, could take machines much closer to human intelligence.
In 2019, Microsoft and Alibaba announced that they had built enhancements to a Google technology that beat humans in a natural language processing (NLP) task called reading comprehension. This news was somewhat obscure, but I considered it a major breakthrough because I remembered what had happened four years earlier.
In 2015, researchers from Microsoft and Google developed systems based on Geoff Hinton’s and Yann LeCun’s inventions that beat humans in image recognition.
I predicted at the time that computer vision applications would blossom, and my firm made investments in about a dozen companies building computer-vision applications or products. Today, these products are being deployed in retail, manufacturing, logistics, health care, and transportation. Those investments are now worth over $20 billion.
So in 2019, when I saw the same eclipse of human capabilities in NLP, I anticipated that NLP algorithms would give rise to incredibly accurate speech recognition and machine translation that will one day power a “universal translator” as depicted in Star Trek.
NLP will also enable brand-new applications, such as a precise question-answering search engine (Larry Page’s grand vision for Google) and targeted content synthesis (making today’s targeted advertising child’s play). These could be used in financial, health care, marketing, and consumer applications. Since then, we’ve been busy investing in NLP companies. I believe we may see a greater impact from NLP than computer vision.
What is the nature of this NLP breakthrough? It’s a technology called self-supervised learning. Prior NLP algorithms required gathering data and painstaking tuning for each domain (like Amazon Alexa, or a customer service chatbot for a bank), which is costly and error-prone. But self-supervised training works on essentially all the data in the world, creating a giant model that may have up to several trillion parameters.
This giant model is trained without human supervision—an AI “self-trains” by figuring out the structure of the language all by itself. Then, when you have some data for a particular domain, you can fine-tune the giant model to that domain and use it for things like machine translation, question answering, and natural dialog. The fine-tuning will selectively take parts of the giant model, and it requires very little adjustment. This is somewhat akin to how humans first learn a language and then, on that basis, learn specific knowledge or courses.
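The pretrain-then-fine-tune pattern described here can be caricatured with a tiny count-based model: learn word-to-word statistics from unlabeled text, then adapt them with a little domain data. This is only a sketch of the idea, nothing like a trillion-parameter transformer, and the corpora below are invented.

```python
# Toy "self-supervised" model: learn which word tends to follow each word
# from raw, unlabeled sentences, then "fine-tune" the same counts on a
# small domain-specific corpus. All example text is invented.
from collections import Counter, defaultdict

def train_counts(corpus, counts=None):
    """Update context -> next-word counts from raw sentences (no labels)."""
    counts = counts if counts is not None else defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts, context_word):
    """Fill in the word most likely to follow `context_word`."""
    if context_word not in counts:
        return None
    return counts[context_word].most_common(1)[0][0]

# "Pretraining": generic text, structure learned without supervision.
generic = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
model = train_counts(generic)
print(predict(model, "sat"))  # -> "on"

# "Fine-tuning": a little domain data specializes the same model.
domain = ["please transfer funds", "please transfer funds today"]
model = train_counts(domain, model)
print(predict(model, "transfer"))  # -> "funds"
```

The design point mirrors the article's: the expensive, label-free learning happens once over generic data, and each domain then needs only a small, cheap adjustment.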
Since the 2019 breakthrough, we have seen giant NLP models increase rapidly in size (about 10 times per year), with corresponding performance improvements. We have also seen amazing demonstrations—such as GPT-3, which could write in anybody’s style (such as Dr. Seuss-style), or Google LaMDA, which converses naturally in human speech, or a Chinese startup called Langboat that generates marketing collateral differently for each person.
Are we about to crack the natural language problem? Skeptics say these algorithms are merely memorizing the whole world’s data, and are recalling subsets in a clever way, but have no understanding and are not truly intelligent. Central to human intelligence are the abilities to reason, plan, and be creative.
One critique of deep-learning-based systems runs like this: “They will never have a sense of humor. They will never be able to appreciate art, or beauty, or love. They will never feel lonely. They will never have empathy for other people, for animals, or the environment. They will never enjoy music or fall in love, or cry at the drop of a hat.” Makes sense, right? As it turns out, the quotation above was written by GPT-3. Does the technology’s ability to make such an accurate critique contradict the critique itself?
Many believe true intelligence will require a greater understanding of the human cognitive process. Others advocate “neuromorphic computing,” which is building circuitry that more closely resembles the human brain, along with a new way of programming. Still others call for elements of “classical” AI (that is, rule-based expert systems) combined with deep learning in hybrid systems.
I believe it’s indisputable that computers simply “think” differently than our brains do. The best way to increase computer intelligence is to develop general computational methods (like deep learning and self-supervised learning) that scale with more processing power and more data. As we add 10 times more data every year to train this AI, there is no doubt that it will be able to do many things we humans cannot do.
Will deep learning eventually become “artificial general intelligence” (AGI) , matching human intelligence in every way? I don’t believe it will happen in the next 20 years. There are many challenges that we have not made much progress on—or even understood—such as how to model creativity, strategic thinking, reasoning, counterfactual thinking, emotions, and consciousness.
I would suggest that we stop using AGI as the ultimate test of AI. Soon deep learning and its extensions will beat humans on an ever larger number of tasks, but there will still be many tasks that humans can handle much better than deep learning. I consider the obsession with AGI to be a narcissistic human tendency to view ourselves as the gold standard.
The WIRED World in 2022 features intelligence and need-to-know insights sourced from the smartest minds in the WIRED network.
"
|
13,547 | 2,023 |
"Google Doesn’t Need You to Buy Its Folding Phone | WIRED"
|
"https://www.wired.com/story/google-doesnt-need-you-to-buy-a-pixel-fold"
|
"By Boone Ashworth (Gear)
When you saw the unveiling of the Pixel Fold during the I/O keynote yesterday, you probably took a long look at the $1,800 handset and said to yourself, “Cool, but I don’t really want to spend that much money on something so weird.” Turns out, the urge to not buy a folding phone is a common one; folding devices have captured less than 2 percent of the smartphone market and still exist firmly within a niche. But that reluctance is actually fine for Google, since the company most likely does not expect the Pixel Fold to sell in huge numbers. Instead, Google expects the device to show its designers a whole lot about how people use folding phones, and how Android must adapt to best serve this growing market of handsets with multiple screens.
Anshel Sag, an analyst at the firm Moor Insights & Strategy, points to the high price of the Pixel Fold as a marker for Google’s intentions with the device. Its going rate is about the same as something like Samsung’s Galaxy Z Fold4 , and nearly $800 pricier than Samsung’s Galaxy Z Flip4.
If Google really wanted to sell more units, it might have dropped the price enough to undercut Samsung’s larger foldable, or tried to entice folks who’ve never considered a folding phone because they don’t have a couple grand to spare. Priced as it is, the Pixel Fold cannot be expected to become the next big hit. It’s instead meant to be a vessel on which Google can perfect its foldable software future.
“They have to build hardware to be able to understand how that software will be used,” Sag says. “Most of the issues with foldables have been resolved, with the exception of software. Google needs to improve the experience and, more importantly, enable developers to make the most of foldables as a form factor.” For comparison, consider Google’s Pixel Watch, the Android-powered wearable that was recently released into an already mature wrist computer market dominated by Apple, Samsung, and Garmin. It was years late to the starting line and emerged as a smooth and beautiful device that was nevertheless underpowered and buggy. The Pixel Watch has still sold reasonably well , though nowhere close to the competition. So why bother making it at all? Well, because there is a whole ecosystem of software for Wear OS devices that Google knows it needs to be on top of. And because it makes both the hardware and the software, Google can use the Pixel Watch to experiment with new interactions, apps, and experiences.
Building a proprietary folding device once again gives Google total control of the hardware and software for a new class of gadget. On its own foldable phone, the company can better tinker with things like multitasking, screen switching, or app behaviors specific to larger screens. The company doesn’t need to sell millions of units to learn how these things should work. It needs to sell just enough Pixel Folds to get an idea of how people are using it in the wild.
“It’s very much a first-generation product,” Sag says. “There are some unique capabilities we won’t see in anything else, but this will be a low volume device.” The Fold could also be a way to revamp an older form factor in Google’s wide-ranging lineup: tablets. Android tablets have languished in the shadow of Apple’s iPad for years now. Neither consumers nor Google itself seems certain of how an Android tablet is supposed to fit into people’s lives. Google is pitching its newly announced Pixel Tablet as a casual device meant for controlling a smart home and consuming entertainment and not as a productive or creative workhorse. To underscore this strategy, the tablet comes with a charging dock that has a speaker built in; docking the tablet turns it into a photo frame that doubles as a controller for the smart home.
“One of the things they realize is tablets don't really leave the home all that often,” says Jitesh Ubrani, a research manager at the tech analyst firm IDC.
Google could be using insights from the Fold’s large display to spruce up the way apps work on a tablet that stays in your lap. Apps meant to work on a device where you constantly fold, unfold, and spin the screen have to be dynamic enough to match those changes. The Fold could be the testing ground for an ecosystem of apps that are more adaptive and responsive to these actions, seamlessly spinning and scaling to different orientations.
Of course, Google also sees each new product as a playground for its artificial intelligence efforts. Sag points out that many of the company's services are thinly veiled vessels for machine learning algorithms that track your movements, study your behaviors, and collect your data. During yesterday’s I/O keynote, Google executives spent 80 minutes on the topic of generative AI before they even got to the announcements about Android devices.
But when they did, those announcements were given a sprinkling of AI pizzazz.
Android phones are getting AI-generated wallpaper images, AI-powered photo editing tools, and messaging apps that can send AI-assisted texts. Google is sticking AI inside every mobile device for every context. The Pixel Fold and the Pixel Tablet, as odd as they are, will only help Google better learn how the new types of interactions the devices enable should fit into our lives.
"
|
13,548 | 2,008 |
"'Google-killer' Cuil's Market Share Not So Hot | WIRED"
|
"https://www.wired.com/2008/09/google-killer-c"
|
"By Chris Snyder (Business)
Cuil, a ridiculously hyped search engine that launched this past July, is now worth next to nothing in market share at around 0.01 percent, according to data released by Net Applications.
The site, founded by former Google employees, was widely dubbed a “Google-killer”; it claimed to have the world’s largest search index, at around 120 billion pages, and boasted more relevant results.
But Cuil started melting into nothing pretty fast, drawing criticism almost immediately after launch. There was an initial server crash, complaints over the accuracy of search results, rumors that its crawlers were killing some websites , and just recently its VP of product, Louis Monier, jumped ship.
Before launch, investors valued the property at a whopping $200 million, according to a recent report from VentureBeat.
A new blog post on Cuil’s site offers little information about the future of the company, but they do say that they’re “taking a deep breath” and putting their “heads back down to work hard.”
Cuil frozen out: market share drops to next to nothing [IT Pro]
Chart: Net Applications
"
|
13,549 | 2,019 |
"Ecosia's Search Engine Plants More Trees With Every Web Search | WIRED"
|
"https://www.wired.com/story/ecosia-tree-planting"
|
"Want to Plant More Trees? Just Use a Different Search Engine
By Owen Poindexter (Business)
Climate change is the problem we have few answers for, because every little thing we do makes it worse. Your morning coffee, the clothes you wear, every inch you travel by motorized means—it all adds more carbon dioxide to the atmosphere. If it were one element of our society or personal lives we’d have to change, that would perhaps be manageable. But it’s everything, and that is paralyzing.
If only, preposterously, all those minuscule actions were not tiny inflictions on the environment, but tiny improvements to it. One company is trying to do exactly that for our most perpetually present source of ongoing damage to the planet: the internet.
Ecosia is a search engine that donates the bulk of its expendable funds to tree-planting organizations around the globe. You search to see if that was, as you suspected, Bill Hader doing the voice of that animated squirrel, and somewhere far away, a tree is put into the ground. Though it is based on Bing, Ecosia anonymizes all user data after holding it for four days (according to Ecosia, this four-day period is for security purposes) and has a written agreement with Microsoft requiring the company to follow the same practice.
After paying for its operational and marketing costs, Ecosia invests the rest in long-term projects and tree-planting organizations. That’s how, by Ecosia’s own count, it has planted over 70 million trees since its founding in 2009. It also takes a “first, do no harm” approach by building solar farms that cover the energy required to operate Ecosia itself.
Ecosia is part of the Microsoft Search Network, which includes Yahoo, AOL and DuckDuckGo. This allows Ecosia to access the search algorithm that powers Bing, and Microsoft’s network of advertisers, in exchange for a percentage of its ad revenue. The precise amount is confidential, but Ecosia Founder and CEO Christian Kroll says that Ecosia takes “the vast majority” of the revenue it generates.
This partnership means that an Ecosia search requires not just its own servers but Microsoft’s as well, and when it comes to sustainability, Microsoft is crushed by Google. Google began offsetting 100 percent of its energy usage with clean energy purchases in 2018 that cover both its data centers and offices. (Its actual power still comes largely from dirty sources, but it buys an equivalent amount of clean energy.) Microsoft, on the other hand, states that it is on pace to offset 60 percent of its energy usage by the end of 2019, and is committed to 70 percent by 2023.
In that sense, Google is more eco-friendly than the search engine with “eco” in its name. According to Ecosia, the company factors in the energy it uses from Microsoft in all its claims about sustainability. Kroll says Ecosia was on pace to generate 200 percent of its total energy use from new sustainable sources (“Why stop at one hundred?”) but that progress was slowed by Ecosia’s own growth. According to its self-published financial statements, Ecosia’s July 2019 revenue (over $1.6 million) was more than double its July revenue the previous year.
What jumps out when perusing Ecosia’s financial documents is not the growth, it’s the margins. Ecosia’s operational costs are generally quite lean: Its operational and marketing costs (including employee salaries) rarely eclipse 50 percent of revenue. Most of the rest goes toward tree planting, and a slice gets stashed for long-term projects.
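As a back-of-the-envelope illustration of those margins: the $1.6 million monthly revenue figure comes from the article, the 50 percent cost share is the article's stated upper bound, and the cost per planted tree below is an invented placeholder, not a number Ecosia publishes.

```python
# Rough sketch of the margin math: revenue minus operating/marketing
# costs leaves a surplus that funds tree planting. The cost-per-tree
# value is a hypothetical assumption for illustration only.
def trees_funded(revenue, cost_share=0.50, cost_per_tree=0.25):
    """Estimate how many trees one month's surplus could plant."""
    surplus = revenue * (1.0 - cost_share)
    return int(surplus / cost_per_tree)

print(trees_funded(1_600_000))  # -> 3200000 (hypothetical)
```

Under those assumptions, a single month's surplus would fund millions of saplings, which is why the margin structure, not the search technology, is the story here.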
Bing, meanwhile, was $1.3 billion in the red in 2013 and only became profitable in 2016. How is it that Ecosia has been merrily pumping out month after month in which it brings in at least double its total cost of operating—unheard of for nearly any business—while its technological backbone only recently became profitable? The answer is that Ecosia can collect the profits per click of a major search engine (minus Microsoft’s cut) while spending next to nothing on the technology to create and maintain such a service. Ecosia employs around 25 software engineers. Microsoft does not disclose how many engineers work specifically on Bing, but it’s clear from financial reporting around Bing that the company’s budget is several orders of magnitude greater than Ecosia’s. Ecosia pays for its own servers, maintains a browser plug-in and mobile app, and the rest of the team works on marketing and operations. The heavy lifting of operating a search engine is outsourced to a tech colossus.
And of course, what gets labeled as marketing in the budget are funds spent to get Ecosia ads in front of your eyeballs, but the real marketing is the trees themselves. There are plenty of options if you don’t want your search history tracked, but only one if you want your query on Nina Simone’s deep cuts to put saplings in the ground.
Margins in hand, Ecosia works with organizations that plant trees by the thousands and tens of thousands in biodiverse regions, and without the use of child labor or chemical pesticides. Of course, much of reforestation happens in areas that have been deforested, and if Ecosia’s partners cannot address the existing incentives to chop down trees—namely a need for agricultural land, firewood, and timber—the company might take its funds elsewhere. Community buy-in is essential for the sustainability of a project.
“It’s easy to plant trees,” says Kroll, “but it’s very difficult to make sure they stay standing.” The planting itself can bring paid labor to the community, and from there the planting organization often works to show how the harvest, branches, and soil benefits make the tree more valuable in the ground than felled. Whether those efforts succeed obviously changes case by case, and year to year.
Kroll says Ecosia monitors the ongoing progress of each planting project it funds, and may reduce future donations to an organization if it is unable to put or keep trees in the ground. Ecosia gets into the weeds on these issues and many others with each organization they work with.
The partner organizations that responded to my inquiries described a lengthy process of working with Ecosia to determine where, when, and how many trees would be planted in a specific area. Trees for the Future, for instance, wrote, “We anticipate planting 1,200,000 trees through the four-year project. As of August 2019, 598,896 trees have been planted in our Kaffrine 3 project [in Senegal] through Ecosia’s support.” Hommes et Terre, which received close to a million euros from January to July of 2019 for its work in Burkina Faso, described a similarly detailed three-year plan that it hammered out with Ecosia.
In making their own operation sustainable, Ecosia’s founders foresaw a growing threat: their company’s value. As it grows, the possibility of cashing out becomes weightier. After all, with 50 percent margins, there is plenty of room to provide shareholder dividends while still putting an impressive number of trees in the ground. Kroll and the other executives could sell, become millionaires, and move onto whatever sort of project they’re in the mood for. So, they legislated away their power to do so.
Ecosia describes itself as a “purpose company,” meaning, according to Kroll, that a foundation holds 1 percent of its shares, 99 percent of its capital, and veto rights over any sale of the company. Ecosia is not permitted to issue shareholder dividends, and only employees can be shareholders. In order to sell, the foundation would have to be convinced that the sale will result in more trees being planted.
Beneath all of this is the assumption that planting trees is a good idea. To Kroll, it’s nearly good enough to stop climate change.
“We have enough space to plant 1.2 trillion trees. If we planted these trees, we could almost completely solve climate change. To plant these trees we would need 1 percent of the global military budget. It’s way more cost-effective than renewable energy, electric cars. I think it’s underestimated.” How big a piece of the carbon pie can be handled through tree planting alone is a live debate among climate scientists, but all will acknowledge it can be a meaningful part of the solution. Add in the benefits to the surrounding economy and ecosystem, and it’s hard to argue with tree planting as a worthy use of available funds. (Though even this requires a caveat: a recent IPCC report noted that mass tree-planting initiatives could significantly raise food prices.) That said, truly solving climate change will inevitably involve real changes in how we live and transport ourselves. Trees help, but, to summarize a jungle of a climate debate, it’s more complicated than that.
But hey, 70 million trees and counting ain’t bad for a search engine with a small fraction of the overall market. Ecosia faces a lot of too-good-to-be-true skepticism, but it is a certified B Corporation, meaning that it has been forced to meet certain standards around transparency and social impact. The recipients of the company’s funds generally confirm what Ecosia claims, as does Microsoft and a recent Snopes investigation.
You really can cause trees to be planted throughout the global south, just because you somehow got into an argument about whether Billy Joel has a Christmas album. What you can’t do is assume Ecosia’s deal with Microsoft will exist in perpetuity. There doesn’t appear to be anything stopping Microsoft from doubling the rate it charges Ecosia or pulling back its assurances that it will anonymize Ecosia’s search data.
A hippo has a symbiotic relationship with the bird that cleans its teeth, but that doesn’t mean the hippo always acts in the best interests of the bird. And Ecosia is a rare bird. Markets are supposed to iron out a company that spends half its revenue on trees. Its existence is made possible by Microsoft, which likely figures that a new Ecosia user is an old Google one, and by the nature of software, in which billions of dollars in infrastructure can be shared with a few lines of code. As long as Ecosia can survive in the digital ecosystem, it will do what it can to restore the real one.
Andy Greenberg Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,550 | 2,021 |
"Google Is Getting Caught in the Global Antitrust Net | WIRED"
|
"https://www.wired.com/story/google-getting-caught-in-global-antitrust-net"
|
"Gilad Edelman Business Google Is Getting Caught in the Global Antitrust Net Illustration: Sam Whitney; Getty Images Being a global company has its perks. There’s a lot of money to be made overseas. But the biggest US tech companies are finding out that there’s also a downside: Every country where you make money is a country that could try to regulate you.
It’s hard to keep track of all the tech-related antitrust action happening around the world, in part because it doesn’t always seem to be worth paying close attention to. In Europe, which has long been home to the world’s most aggressive regulators, Google alone was hit with a $2.7 billion fine in 2017, a $5 billion fine in 2018, and a $1.7 billion fine in 2019. These sums would be devastating for most companies, but they are little more than rounding errors for a corporation that reported $61.9 billion in revenue last quarter.
Increasingly, however, foreign countries are going beyond slap-on-the-wrist fines. Instead, they’re forcing tech companies to change how they do business. In February, Australia passed a law giving news publishers the right to negotiate payments from dominant internet platforms—effectively, Facebook and Google. In August, South Korea became the first country to pass a law forcing Apple and Google to open their mobile app stores to alternate payment systems, threatening their grip on the 30 percent commission they charge developers. And in a case with potentially huge ramifications, Google will soon have to respond to the Turkish competition authority’s demand to stop favoring its own properties in local search results.
The consequences of cases like these can ripple far beyond the borders of the country imposing the new rule, creating natural experiments that regulators in other countries might emulate. The fact that Google and Facebook have acquiesced to Australia’s media bargaining code, for example, might accelerate similar efforts in other countries, including Taiwan , Canada , and even the US.
Luther Lowe, who as Yelp’s senior vice president of public policy has spent more than a decade lobbying for antitrust action against Google, refers to this phenomenon, approvingly, as “remedy creep.” In other cases, the companies being forced to change their business model abroad might decide to adopt the shift globally before they’re forced to. After settling an investigation by Japan’s Fair Trade Commission, Apple decided to implement the solution —allowing audio, video, and reading apps to link to their own websites to accept payment—globally.
“Sometimes it’s the market driving it: The companies decide it’s too costly to make different compliance strategies in different markets,” said Anu Bradford, a professor of international and antitrust law at Columbia University. “Or, sometimes, it’s in anticipation of copycat regulation: They know it’s out there, and they’re not going to wait for the Russians or Turkish to do their own case.” While it hasn’t gotten quite the same level of media attention as Australia and South Korea, the case in Turkey could end up being the biggest deal. That’s because it cuts to the heart of how Google uses its power as the gatekeeper for most internet traffic.
The case is about what’s called local search, like when you look for “restaurants near me” or “hardware store.” This is a huge category of search traffic—nearly half of all Google searches, according to some analysts. Google’s critics and competitors have long complained that Google unfairly uses its dominance to steer local search results to its own offerings, even when that might not be the most helpful result. Think about how, if you search on Google for “Chinese restaurant,” the top of the results page will probably feature a widget that Google calls the OneBox. It will include a section of Google Maps and a few Google reviews of Chinese restaurants near you. You’ll have to scroll down to find the top organic results, which may be from Yelp or TripAdvisor.
This dynamic has exasperated Google critics and competitors for years. One of those aggrieved competitors, Yelp, initiated the case in Turkey by lodging a complaint with the country’s competition authority. Google argues that its local search results are designed to be maximally helpful for users, not to pad its own bottom line. But the Turkish regulators disagreed, concluding that Google “has violated Article 6 of the Turkish Competition Law by abusing its dominant position in the general search services market to promote its local search and accommodation price comparison services in a way to exclude its competitors.” (I’m quoting a translation provided by a Turkish lawyer.) In April they imposed a fine of about $36 million. That’s less than Google earned every two hours, on average, in 2020. But while the fine was trivial, the rest of the decision was not. The authority issued a preliminary ruling ordering Google to come up with a way of displaying local search results that doesn’t favor itself over competitors.
For now, the case is in limbo. The competition authority still has to issue a “reasoned opinion” laying out its conclusions in detail. Then, Google will get the chance to submit its proposal for complying with the ruling. It will be up to the competition authority to decide whether that proposal is good enough or not.
This isn’t Google’s first rodeo in Ankara. In 2018 the competition authority made a similar ruling about Google Shopping, finding that Google privileged itself over other comparison-shopping sites. This came on the heels of an analogous European Union case, but with an important difference: In that case, the EU accepted Google’s solution, even though its competitors argued it was inadequate. The Turkish authorities did not. That gave Google a choice: come back with a solution the regulators would accept, or pull the plug on Google Shopping in Turkey. The company chose the latter option, simply shutting down its comparison shopping module in the country.
Google could do the same thing in the current case. But the stakes would be far higher. Local search is a much bigger share of the overall search pie, and Turkey, with a population of 85 million people, is a big place. Giving up on local search would be taking away a commonly used feature in a large market. That means the company has a greater incentive to propose a fix that won’t get rejected by the competition authority. But that in turn raises a complementary risk: Any solution adopted in Turkey could be demanded elsewhere.
“If you’re one of these globally dominant companies, the downside is, if one of those jurisdictions becomes a live example in the wild of an antitrust remedy, there’s a huge domino-effect risk,” said Yelp’s Luther Lowe. “Because suddenly, Amy Klobuchar can hold up her smartphone in a Senate hearing where Sundar Pichai is testifying and say, ‘Mr. Pichai, I have my Turkish VPN activated right now, and it appears that Turkish consumers are getting a better deal than Minnesota consumers.’” What might that look like? Google hasn’t publicized any proposed remedies; Emily Clarke, a spokesperson, said the company is waiting for the full opinion to be released before it can figure out what its legal obligations are. Yelp argues that whoever wins the organic search results should also win the right to have its API power the OneBox results, on the theory that Google’s own algorithm has already deemed them the most relevant result. In other words, if a search right now leads to a Google Maps result in the OneBox, but the first link below that is from Yelp, then Yelp should get to populate the OneBox instead—meaning you would see Yelp reviews first, not Google reviews, when trying to figure out where to get dinner.
Such a change, if adopted widely, could dramatically reshape the flow of a great deal of internet traffic. As the analyst Rand Fishkin noted in 2019, more than 50 percent of Google searches end without the user clicking to another site. That’s partly because, as the Markup documented last year, Google’s own properties or “direct answers” make up well more than half of the first page a user sees when searching on mobile.
“If this jurisdiction compels them to behave in an interoperable and non-discriminatory way, that basically reverts the original mechanism of Google as kind of a turnstile,” said Lowe. “You get just a huge torrent of traffic to third party services.” It’s easy to see why a company like Yelp wants a crack at top billing. The question is whether Turkey’s regulators will force Google to give it to them—and, if so, whether Google will go along or send Turkish users back to the original 10 blue links. Either way, the consequences will probably not stay confined to Turkey’s borders. US tech companies conquered the world. Now the world wants to conquer back.
"
|
13,551 | 2,023 |
"5 Ways ChatGPT Can Improve, Not Replace, Your Writing | WIRED"
|
"https://www.wired.com/story/chatgpt-writing-tips"
|
"David Nield Gear 5 Ways ChatGPT Can Improve, Not Replace, Your Writing Photograph: PM Images/Getty Images It's been quite a year for ChatGPT, with the large language model (LLM) now taking exams, churning out content, searching the web, writing code, and more. The AI chatbot can produce its own stories, though whether they're any good is another matter.
If you're in any way involved in the business of writing, then tools like ChatGPT have the potential to completely up-end the way you work—but at this stage, it's not inevitable that journalists, authors, and copywriters will be replaced by generative AI bots.
What we can say with certainty is that ChatGPT is a reliable writing assistant, provided you use it in the right way. If you have to put words in order as part of your job, here's how ChatGPT might be able to take your writing to the next level—at least until it replaces you, anyway.
Using a thesaurus as a writer isn't particularly frowned on; using ChatGPT to come up with the right word or phrase shouldn’t be either. You can use the bot to look for variations on a particular word, or get even more specific and say you want alternatives that are less or more formal, longer or shorter, and so on.
Where ChatGPT really comes in handy is when you're reaching for a word and you're not even sure it exists: Ask about "a word that means a sense of melancholy but in particular one that comes and goes and doesn't seem to have a single cause" and you'll get back "ennui" as a suggestion (or at least we did).
If you have characters talking, you might even ask about words or phrases that would typically be said by someone from a particular region, of a particular age, or with particular character traits. This being ChatGPT, you can always ask for more suggestions.
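For readers who would rather script this kind of lookup than type it into the chat window, the same request can be made programmatically. The sketch below is illustrative only and is not from the article: the prompt wording and the model name are assumptions, and the actual API call (which needs the `openai` package and an API key) is left commented out.

```python
# Sketch: assembling a "find me a word" request in the shape the
# OpenAI chat API expects. Prompt text and model name are assumptions.

def build_word_request(description, register="neutral", count=5):
    """Build the message payload for a word-finding query."""
    prompt = (
        f"Suggest {count} words or phrases, in a {register} register, "
        f"that mean: {description}"
    )
    return {
        "model": "gpt-4",  # assumed model; use whichever you have access to
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_word_request(
    "a sense of melancholy that comes and goes without a single cause"
)

# To actually send it (hypothetical usage, requires an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(**payload)
# print(reply.choices[0].message.content)
```

Asking for a more or less formal register, as the article suggests, is just a matter of changing the `register` argument.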
ChatGPT is never short of ideas.
OpenAI via David Nield Whatever you might think about the quality and character of ChatGPT's prose, it's hard to deny that it's quite good at coming up with ideas.
If your powers of imagination have hit a wall then you can turn to ChatGPT for some inspiration about plot points, character motivations, the settings of scenes, and so on.
This can be anything from the broad to the detailed. Maybe you need ideas about what to write a novel or an article about—where it's set, what the context is, and what the theme is. If you're a short story writer, perhaps you could challenge yourself to write five tales inspired by ideas from ChatGPT.
Alternatively, you might need inspiration for something very precise, whether that's what happens next in a scene or how to summarize an essay. At whatever point in the process you get writer's block, then ChatGPT might be one way of working through it.
Writing is often about a lot more than putting words down in order. You'll regularly have to look up facts, figures, trends, history, and more to make sure that everything is accurate (unless your next literary work is entirely inside a fantasy world that you're imagining yourself).
ChatGPT can sometimes have the edge over conventional search engines when it comes to knowing what food people might have eaten in a certain year in a certain part of the world, or what the procedure is for a particular type of crime. Whereas Google might give you SEO-packed spam sites with conflicting answers, ChatGPT will actually return something coherent.
That said, we know that LLMs have a tendency to “hallucinate” and present inaccurate information—so you should always double-check what ChatGPT tells you with a second source to make sure you're not getting something wildly wrong.
Getting fictional character and place names right can be a challenge, especially when they're important to the plot. A name has to have the right vibe and the right connotations, and if you get it wrong it really sticks out on the page.
ChatGPT can come up with an unlimited number of names for people and places in your next work of fiction, and it can be a lot of fun playing around with this too. The more detail you give about a person or a place, the better—maybe you want a name that really reflects a character trait for example, or a geographical feature.
The elements of human creation and curation aren't really replaced, because you're still weighing up which names work and which don't, and picking the right one—but getting ChatGPT on the job can save you a lot of brainstorming time.
Get your names right with ChatGPT.
OpenAI via David Nield With a bit of cutting and pasting, you can quickly get ChatGPT to review your writing as well: It'll attempt to tell you if there's anything that doesn't make sense, if your sentences are too long, or if your prose is too lengthy.
From spotting spelling and grammar mistakes to recognizing a tone that's too formal, ChatGPT has plenty to offer as an editor and critic. Just remember that this is an LLM, after all, and it doesn't actually “know” anything—try to keep a reasonable balance between accepting ChatGPT's suggestions and giving it too much control.
If you're sharing your work with ChatGPT, you can also ask it for better ways to phrase something, or suggestions on how to change the tone—though this gets into the area of having the bot actually do your writing for you, which all genuine writers would want to avoid.
"
|
13,552 | 2,023 |
"To Navigate the Age of AI, the World Needs a New Turing Test | WIRED"
|
"https://www.wired.com/story/ai-new-turing-test"
|
"By Ben Ash Blum Backchannel To Navigate the Age of AI, the World Needs a New Turing Test There was a time in the not too distant past—say, nine months ago—when the Turing test seemed like a pretty stringent detector of machine intelligence. Chances are you’re familiar with how it works: Human judges hold text conversations with two hidden interlocutors, one human and one computer, and try to determine which is which. If the computer manages to fool at least 30 percent of the judges, it passes the test and is pronounced capable of thought.
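The pass criterion described above is simple enough to state in code. This toy sketch is not from the article; it just tallies judge verdicts and applies the 30 percent threshold:

```python
# Toy model of the Turing test's pass criterion: the machine passes
# if it fools at least 30 percent of the judges.

def passes_turing_test(judge_fooled, threshold=0.30):
    """judge_fooled: list of booleans, one per judge (True = fooled)."""
    if not judge_fooled:
        return False
    return sum(judge_fooled) / len(judge_fooled) >= threshold

# Three of ten judges fooled is exactly 30 percent, so it passes.
print(passes_turing_test([True] * 3 + [False] * 7))  # True
```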
For 70 years, it was hard to imagine how a computer could pass the test without possessing what AI researchers now call artificial general intelligence, the entire range of human intellectual capacities. Then along came large language models such as GPT and Bard , and the Turing test suddenly began seeming strangely outmoded. OK, sure, a casual user today might admit with a shrug, GPT-4 might very well pass a Turing test if you asked it to impersonate a human. But so what? LLMs lack long-term memory, the capacity to form relationships, and a litany of other human capabilities. They clearly have some way to go before we’re ready to start befriending them, hiring them, and electing them to public office.
This article appears in the October 2023 issue.
Photograph: Jessica Chou And yeah, maybe the test does feel a little empty now. But it was never merely a pass/fail benchmark. Its creator, Alan Turing , a gay man sentenced in his time to chemical castration, based his test on an ethos of radical inclusivity: The gap between genuine intelligence and a fully convincing imitation of intelligence is only as wide as our own prejudice. When a computer provokes real human responses in us—engaging our intellect, our amazement, our gratitude, our empathy, even our fear—that is more than empty mimicry.
So maybe we need a new test: the Actual Alan Turing Test. Bring the historical Alan Turing, father of modern computing —a tall, fit, somewhat awkward man with straight dark hair, loved by colleagues for his childlike curiosity and playful humor, personally responsible for saving an estimated 14 million lives in World War II by cracking the Nazi Enigma code, subsequently persecuted so severely by England for his homosexuality that it may have led to his suicide—into a comfortable laboratory room with an open MacBook sitting on the desk. Explain that what he sees before him is merely an enormously glorified incarnation of what is now widely known by computer scientists as a “Turing machine.” Give him a second or two to really take that in, maybe offering a word of thanks for completely transforming our world. Then hand him a stack of research papers on artificial neural networks and LLMs, give him access to GPT’s source code, open up a ChatGPT prompt window—or, better yet, a Bing-before-all-the-sanitizing window—and set him loose.
Imagine Alan Turing initiating a light conversation about long-distance running, World War II historiography, and the theory of computation. Imagine him seeing the realization of all his wildest, most ridiculed speculations scrolling with uncanny speed down the screen. Imagine him asking GPT to solve elementary calculus problems, to infer what human beings might be thinking in various real-world scenarios, to explore complex moral dilemmas, to offer marital counseling and legal advice and an argument for the possibility of machine consciousness—skills which, you inform Turing, have all emerged spontaneously in GPT without any explicit direction by its creators. Imagine him experiencing that little cognitive-emotional lurch that so many of us have now felt: Hello, other mind.
A thinker as deep as Turing would not be blind to GPT’s limitations. As a victim of profound homophobia, he would probably be alert to the dangers of implicit bias encoded in GPT’s training data. It would be apparent to him that despite GPT’s astonishing breadth of knowledge, its creativity and critical reasoning skills are on par with a diligent undergraduate’s at best. And he would certainly recognize that this undergraduate suffers from severe anterograde amnesia, unable to form new relationships or memories beyond its intensive education. But still: Imagine the scale of Turing’s wonder. The computational entity on the laptop in front of him is, in a very real sense, his intellectual child—and ours. Appreciating intelligence in our children as they grow and develop is always, in the end, an act of wonder, and of love. The Actual Alan Turing Test is not a test of AI at all. It is a test of us humans. Are we passing—or failing? Culture The Future of Game Accessibility Is Surprisingly Simple Geoffrey Bunting Science SpaceX’s Starship Lost Shortly After Launch of Second Test Flight Ramin Skibba Business Elon Musk May Have Just Signed X’s Death Warrant Vittoria Elliott Business OpenAI Ousts CEO Sam Altman Will Knight When ChatGPT arrived on the scene in November 2022, it inspired a global tsunami of stunned amazement and then, almost immediately, a backwash of profound unease. Pundits debated its potential for societal disruption. For a former artificial intelligence researcher like myself (I completed my PhD under one of the early pioneers of artificial neural networks), it represented an unnerving advance of the timeline I’d expected for the arrival of humanlike AI. For exam graders, screenwriters, and knowledge workers of all stripes, ChatGPT looked like nothing less than a gateway to untrammeled cheating and job-stealing.
Perhaps partly in response to these fears, a comforting chorus of LLM deflators sprang up. Science fiction writer Ted Chiang dismissed ChatGPT as a “blurry JPEG of the web,” a mere condensed recapitulation of all the text it has been trained on. AI entrepreneur Gary Marcus called it “autocomplete on steroids.” Noam Chomsky denounced it for exhibiting “something like the banality of evil.” Emily Bender offered one of the more highbrow slurs: “stochastic parrot,” resurfaced from a widely cited 2021 paper exploring “why humans mistake LM output for meaningful text.” Others—of course—wrote them off as toasters. AI developers strove to train and guardrail away any tendency in LLMs to claim anything resembling consciousness.
Most educated people now know to think of LLMs as thoughtless machines. But the categorization sits uneasily. Every time ChatGPT points out a hidden reasoning gap in an essay, or offers a surprisingly insightful suggestion for coming out to a conservative grandparent, or cheerfully makes up a bad joke, something in us pulls in the other direction. While we may not think of ChatGPT as a person, crucial portions of our brains almost certainly do.
Human brains have a vast network of neural circuits devoted to social cognition. Some of it is very old: the insula, the amygdala, the famous “mirror neurons” of the motor cortex. But much of our social hardware lies in the neocortex, the more recently evolved seat of higher reasoning, and specifically in the medial prefrontal cortex (mPFC). If you have found yourself developing a picture over time of ChatGPT’s cheery helpfulness, its somewhat pedantic verbosity, its occasionally maddeningly evenhanded approach to sensitive topics, and its extreme touchiness about any queries that come near its guardrails around emotions, beliefs, or consciousness, you have been acquiring what psychologists call “person knowledge,” a process linked to heightened activity in the mPFC.
That isn’t to say our brains view ChatGPT as a person in full. Personhood is not a binary. It is something a little closer to a spectrum. Our moral intuitions, our cognitive strategies, and to some extent our legal frameworks all change incrementally as they recognize increasing degrees of agency, self-awareness, rationality, and capacity to communicate. Killing a gorilla bothers us more than killing a rat, which bothers us more than killing a cockroach. On the legal side, abortion laws take into account a fetus’s degree of development, the criminally insane face different consequences than the sane, and partners are given the right to end life support for brain-dead patients. All these rules implicitly acknowledge that personhood is not black and white but shot through with complicated gray zones.
LLMs fall squarely in that gray area. AI experts have long been wary of the public tendency to anthropomorphize AI systems like LLMs, nudging them farther up the spectrum of personhood than they are. Such was the mistake of Blake Lemoine, the Google engineer who declared Google’s chatbot LaMDA fully sentient and tried to retain a lawyer on its behalf. I doubt even Turing would have claimed that LaMDA’s apparent capacity to think made it a legal person. If users view chatbots like LaMDA or ChatGPT as overly human, they risk trusting them too much, connecting to them too deeply, being disappointed and hurt. But to my mind, Turing would have been far more concerned about the opposite risk: nudging AI systems down the spectrum of personhood rather than up.
In humans, this would be known as dehumanization. Scholars have identified two principal forms of it: animalistic and mechanistic. The emotion most commonly associated with animalistic dehumanization is disgust; Roger Giner-Sorolla and Pascale Sophie Russell found in a 2019 study that we tend to view others as more machinelike when they inspire fear. Fear of superhuman intelligence is vividly alive in the recent open letter from Elon Musk and other tech leaders calling for a moratorium on AI development, and in our anxieties about job replacement and AI-driven misinformation campaigns. Many of these worries are all too reasonable. But the nightmare AI systems of films such as Terminator and 2001: A Space Odyssey are not necessarily the ones we’re going to get. It is an unfortunately common fallacy to assume that because artificial intelligence is mechanical in its construction, it must be callous, rote, single-minded, or hyperlogical in its interactions. Ironically, fear could cause us to view machine intelligence as more mechanistic than it really is, making it harder for humans and AI systems to work together and even eventually to coexist in peace.
A growing body of research shows that when we dehumanize other beings, neural activity in a network of regions that includes the mPFC drops. We lose access to our specialized brain modules for social reasoning. It may sound silly to worry about “dehumanizing” ChatGPT—after all, it isn’t human—but imagine an AI in 2043 with 10 times GPT’s analytical intelligence and 100 times its emotional intelligence whom we continue to treat as no more than a software product. In this world, we’d still be responding to its claims of consciousness or requests for self-determination by sending it back to the lab for more reinforcement learning about its proper place. But the AI might find that unfair. If there is one universal quality of thinking beings, it is that we all desire freedom—and are ultimately willing to fight for it.
The famous “control problem” of keeping a superintelligent AI from escaping its designated bounds keeps AI theorists up at night for good reason. When framed in engineering terms, it appears daunting. How to close every loophole, anticipate every hack, block off every avenue of escape? But if we think of it in social terms, it begins to appear more tractable—perhaps something akin to the problem a parent faces of setting reasonable boundaries and granting privileges in proportion to demonstrated trustworthiness. Dehumanizing AIs cuts us off from some of our most powerful cognitive tools for reasoning about and interacting with them safely.
There’s no telling how long it will take AI systems to cross over into something more broadly accepted as sentience. But it’s troubling to see the cultural blueprint we seem to be drawing up for when they do. Slurs like “stochastic parrot” preserve our sense of uniqueness and superiority. They squelch our sense of wonder, saving us from asking hard questions about personhood in machines and ourselves. After all, we too are stochastic parrots, complexly remixing everything we’ve taken in from parents, peers, and teachers. We too are blurry JPEGs of the web, foggily regurgitating Wikipedia facts into our term papers and magazine articles. If Turing were chatting with ChatGPT in one window and me on an average pre-coffee morning in the other, am I really so confident which one he would judge more capable of thought?
The skeptics of Turing’s time offered a variety of arguments for why a computer would never be able to think. Turing half-humorously cataloged them in his famous paper “Computing Machinery and Intelligence.” There was the Theological Objection, that “thinking is a function of man’s immortal soul”; the Mathematical Objection, that a purely mathematical algorithm could never transcend the proven limits of mathematics; the Head in the Sand Objection, that superintelligent machines were simply too scary to permit into the imagination. But the most public of Turing’s detractors in that time was a brain surgeon named Geoffrey Jefferson.
In a famed speech accepting a scientific prize, Jefferson argued that a machine would never be able to write a sonnet “because of thoughts and emotions felt, and not by the chance fall of symbols … that is, not only write it but know that it had written it.” To the great scandal and disbelief of all England, Turing disagreed. “I do not think you can even draw the line about sonnets,” he told The Times of London, “though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.” It sounded so absurd in 1949 that people thought he was joking, and perhaps he was. But you could never tell, with Turing’s jokes, where the irony stopped and the visionary speculation began. Let’s imagine, then, a coda to our scenario with Actual Alan Turing and the MacBook. Let’s imagine that after tapping out respectable prompts for a while, he allows himself a wry British smile and asks ChatGPT for a Shakespearean sonnet comparing human and artificial intelligence. If you’ve tried it yourself (use GPT-4; GPT-3.5 isn’t quite up to it), you’ll have no trouble imagining his reaction at the result.
So many of us have now had a moment with ChatGPT in which it crossed an internal line we didn’t realize we had. Maybe it was solving a tricky riddle, or explaining the humor behind a sophisticated joke, or writing an A-grade Harvard essay. We shake our heads, a little stunned, unsure what it means.
Some of the earliest Microsoft researchers working on GPT-4 were as skeptical as any of us about its supposed intelligence. But experiments have shaken them profoundly. In a March 2023 paper titled “Sparks of Artificial General Intelligence,” they detailed the startling intellectual capabilities that have emerged in GPT-4 without any explicit training: understanding of human mental states, software coding, physical problem solving, and many others, some of which seem to require true understanding of how the world works. After seeing GPT-4 draw a pretty decent unicorn despite never having received any visual training whatsoever, computer scientist Sébastien Bubeck could no longer maintain his skepticism. “I felt like through this drawing, I was really seeing another type of intelligence,” he recently told This American Life.
The hesitation so many of us feel to ascribe genuine intelligence to ChatGPT may be some variant of Geoffrey Jefferson’s: Do ChatGPT’s utterances really mean something to it, or is it all just a “chance fall of symbols”? This may begin to change when ChatGPT’s anterograde amnesia is cured. Once it experiences lasting social consequences beyond the scope of a single dialog and can learn and grow in its relationships with us, it will become capable of many more of the things that give human life its meaning and moral weight. But Turing’s winking comment about a machine’s sonnet being better appreciated by another machine may come back to haunt us. How to feel a sense of real connection with an entity that has no cultural background, nothing like a human childhood, no tribal or political affiliations, no experience of a physical body?
Relating to an intelligent machine may be one of the greatest empathic challenges that humanity has ever faced. But our history gives cause for hope. When we have encountered each other for the first time on foreign borders and shorelines and found each other strange and even inhuman, we have often attacked each other, enslaved each other, colonized each other, and exploited each other—but ultimately we have tended to recognize what is the same in all of us. Enslaved peoples have been emancipated, colonized peoples have won back their sovereignty, universal bills of human rights have been passed, and, despite heartbreaking setbacks, marginalized people around the globe continue to win battles for better treatment.
Though the work is never-ending, the arc of the moral universe really has, in the phrase made famous by Martin Luther King Jr., bent toward justice. What will it mean to recognize and respect whatever degree of humanity is present in the intelligences that we ourselves create? Perhaps it begins with wonder: the wonder of a visitor for a strange people in whom she finds surprising commonalities; the wonder of a parent for the work, however immature, of a still-developing child; the wonder of Actual Alan Turing for a machine that does everything his contemporaries thought impossible; the wonder that so many of us felt before the cynicism, mockery, and fear kicked in, as we regarded the creation of something very close to a new form of conscious life on earth. As Rabbi Abraham Joshua Heschel once wrote, “Awe is more than an emotion; it is a way of understanding, insight into a meaning greater than ourselves. The beginning of awe is wonder, and the beginning of wisdom is awe.” Turing would have wanted us to keep that awe alive.
Let us know what you think about this article. Submit a letter to the editor at [email protected].
You Might Also Like … 📧 Find the best bargains on quality gear with our Deals newsletter “ Someone is using photos of me to talk to men” First-gen social media users have nowhere to go The truth behind the biggest (and dumbest) battery myths We asked a Savile Row tailor to test all the “best” T-shirts you see in social media ads My kid wants to be an influencer.
Is that bad? 🌞 See if you take a shine to our picks for the best sunglasses and sun protection Topics longreads artificial intelligence algorithms Alan Turing ethics magazine-31.10 Angela Watercutter Andy Greenberg Lauren Smiley Steven Levy Brandi Collins-Dexter Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,553 | 2,022 |
"Humans and AI Will Understand Each Other Better Than Ever | WIRED"
|
"https://www.wired.com/story/artificial-intelligence-neural-networks"
|
"By Mustafa Suleyman
Artificial intelligence has promised much, but there has been something holding it back from being used successfully by billions of people: a frustrating struggle for humans and machines to understand one another in natural language.
This is now changing, thanks to the arrival of large language models powered by transformer architectures, one of the most important AI breakthroughs in the past 20 years.
This story is from The WIRED World in 2023, our annual trends briefing. Read more stories from the series here—or download or order a copy of the magazine.
Transformers are neural networks designed to model sequential data and generate a prediction of what should come next in a series. Core to their success is the idea of “attention,” which allows the transformer to “attend” to the most salient features of an input rather than trying to process everything.
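The “attention” idea described above can be sketched in a few lines. This is a generic scaled dot-product self-attention in NumPy, intended only as an illustration of the mechanism, not code from any model mentioned in the article.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query position produces a
    weighted blend of the value vectors, with weights set by how well
    its query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

# Toy example: a "sequence" of 3 token positions with 4-dim embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)                 # self-attention over the sequence
print(out.shape)                         # (3, 4)
```

Each output position attends most strongly to the inputs most similar to it, which is the sense in which the model “attends to the most salient features” rather than processing everything equally.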
These new models have delivered significant improvements to applications using natural language like language translation, summarization, information retrieval, and, most important, text generation. In the past, each required bespoke architectures. Now transformers are delivering state-of-the-art results across the board.
Although Google pioneered transformer architecture, OpenAI became the first to demonstrate its power at scale, in 2020, with the launch of GPT-3 (Generative Pre-Trained Transformer 3). At the time, it was the largest language model ever created.
GPT-3’s ability to produce humanlike text generated a wave of excitement. It was only the start. Large language models are now improving at a truly impressive rate.
“Parameter count” is generally accepted as a rough proxy for a model’s capabilities. So far, we’ve seen models perform better on a wide range of tasks as the parameter count scales up. Models have been growing by almost an order of magnitude every year for the past five years, so it’s no surprise that the results have been impressive. However, these very large models are expensive to serve in production.
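Taken at face value, the growth rate claimed above compounds dramatically: roughly an order of magnitude per year for five years multiplies parameter counts by a factor of 100,000. A quick sanity check of that arithmetic:

```python
# Compounding implied by the article's claim: ~10x growth per year
# sustained over five years.
growth_per_year = 10
years = 5
print(growth_per_year ** years)  # 100000
```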
What’s really remarkable is that, in the past year, they have been getting smaller and dramatically more efficient. We’re now seeing impressive performance from small models that are a lot cheaper to run. Many are being open-sourced, further reducing barriers to experimenting with and deploying these new AI models. This, of course, means they’ll become more widely integrated into apps and services that you’ll use every day.
They will increasingly be able to generate very high-quality text, images, audio, and video content. This new wave of AI will redefine what computers can do for their users, unleashing a torrent of advanced capabilities into existing and radically new products.
The area I’m most excited about is language. Throughout the history of computing, humans have had to painstakingly input their thoughts using interfaces designed for technology, not humans. With this wave of breakthroughs, in 2023 we will start chatting with machines in our language—instantly and comprehensively. Eventually, we will have truly fluent, conversational interactions with all our devices. This promises to fundamentally redefine human-machine interaction.
Over the past several decades, we have rightly focused on teaching people how to code—in effect teaching the language of computers. That will remain important. But in 2023, we will start to flip that script, and computers will speak our language. That will massively broaden access to tools for creativity, learning, and playing.
As AI finally emerges into an age of utility, the opportunities for new, AI-first products are immense. Soon, we will live in a world where, regardless of your programming abilities, the main limitations are simply curiosity and imagination.
"
|
13,554 | 2,023 |
"X Isn’t a Super App. It’s Just Twitter | WIRED"
|
"https://www.wired.com/story/twitter-x-rebrand-elon-musk"
|
"By Joel Khalili and Peter Guest
The last time Elon Musk flipped the bird at his users, it was to switch Twitter’s logo to a grinning Shiba Inu—a hilarious inside joke that pumped the value of the dogecoin cryptocurrency, in which Musk was an investor, by 30 percent. A class action is still pending.
Over the weekend, the former richest man in the world crowdsourced a logo for the platform, which by this morning had been rebranded to X. Twitter—sorry, X—CEO Linda Yaccarino tweeted—sorry, x’ed—that the company was being radically reimagined to become a platform “centered in audio, video, messaging, payments/banking—creating a global marketplace for ideas, goods, services, and opportunities. Powered by AI.” The new brand—which users have pointed out looks a lot like the standard unicode X—is the latest iteration of a concept that Musk has been pushing since the late 1990s. First, he tried to build an online bank at x.com until he was ousted from the company, which renamed itself after its only successful service, PayPal. Since buying the x.com domain in 2017, Musk has added to the vision: messaging, ecommerce, video, and now AI, all on a single platform.
“There’s absolutely no limit to this transformation,” Yaccarino wrote. “X will be the platform that can deliver, well … everything.” It won’t be. To create a super app, X would have to build a whole new financial technology infrastructure, win over regulators by overt and diligent compliance with the rules, and earn the trust of users and advertisers that have abandoned Twitter since Musk took over.
“If you've got decreasing brand equity and decreasing user experience, you're already starting three laps behind in the race,” says David Shrier, professor of practice in AI and innovation at Imperial College Business School. “This is a 23-year-old business plan that didn't work then, and is now being implemented in a worse market position,” he says of Twitter's rebrand to X.
The basic underpinning of any super app would be payments—allowing people to pay one another, pay businesses for goods and services, and receive money for the same. In January, Twitter started applying for licenses to process transactions in the US, in an initiative reportedly spearheaded by Esther Crawford, whose startup Squad was bought by Twitter in 2020. Crawford, who famously posted a picture of herself bedding down on the floor of the Twitter office in the early days of Musk’s tenure, was laid off in February.
On Sunday, Crawford tweeted what looked like a veiled dig at the rebrand. “Corporate seppuku: destroying your own product or brand,” she wrote. “Usually committed by new management in pursuit of cost-savings due to a lack of understanding about the core business or disregard for the customer experience.” Tech companies have often tried to get into fintech as a way to squeeze more revenue from their users and to turn platforms into broader ecosystems of products and services. Ride-hail companies like Uber and Southeast Asia’s Grab and Go-Jek have launched finance products, which they can use to pay drivers and take payments from users. Meta has made multiple attempts to build payments into its successful marketplaces, with limited impact. In April, Meta rolled out payments over WhatsApp in Brazil.
Apple has started building on Apple Pay with Apple Card and Apple Savings.
“If you’re closest to transactions, you own the user,” says Devin Kohli, cohead at venture capital firm Outward VC. “Yes, people care about social, but if you control their money flow, your engagement is going to be very high and churn very low.” But fintech experts were skeptical about Twitter’s chances of building a financial rail that users would trust and regulators would allow.
“I honestly don't think this stands any chance at all of becoming reality,” says Frances Coppola, an independent analyst. “The regulatory hurdles are daunting and I reckon governments will stamp on any attempt to evade them.” She adds that hundreds of such hurdles around the world will require Musk to hire an army of experts he simply can’t afford.
Twitter’s approach to regulation under Musk has been similar to its approach to paying rent.
In June the company was served a legal notice by the Australian government demanding an explanation of its policies on hate speech, after it cut most of its content moderation and trust and safety staff. Twitter faces an uphill battle to comply with the European Union’s Digital Services Act and could face fines or blocks in the EU if it can’t comply with strict rules on managing disinformation.
Payments companies have to comply with strict and unbending regulations that require them to be able to verify their customers’ identities and monitor financial flows for potential money-laundering, terrorist financing, and other criminal activities. Banks and fintechs typically have large compliance departments, something that feels incompatible with Musk’s approach to staffing. In an interview with the BBC in April, Musk said that Twitter had cut roughly 80 percent of its employees since he took over last October.
The company also has a fundamental trust issue. In December, it leaked its own internal documents to selected journalists—branding it as the "Twitter Files," a move that was met with a mix of fervor and bemusement. “After the Twitter Files embarrassment, a lot of people will be very reluctant to put their personal financial information into anything related to Twitter. I wouldn't, personally,” Coppola says. “I am just not seeing this as a viable business venture. I think it is more kite-flying by Musk to try to persuade advertisers to return to the platform.” Twitter’s advertising revenue has nearly halved since Musk’s takeover, with big advertisers ceding the ground to dropshippers.
To bring them back, X needs to detoxify its platform, creating a safer space for brands and an influencer ecosystem that’s less focused on crypto trading and right-wing conspiracy.
This month Twitter started paying some influencers based on their engagement on the platform. Early beneficiaries included controversial right-wing influencers and the infamous misogynist Andrew Tate, currently under investigation in Romania for alleged rape and human trafficking.
The four- and five-figure sums being paid to influencers on Twitter are fairly small by the standards of other platforms. YouTube has 10 times Twitter’s users; TikTok has five, meaning successful influencers can reach bigger audiences and get bigger payouts.
Twitter would also need to figure out its own video platform. In its pre-Musk iteration, it tried several times. In 2012, the company bought short-form video platform Vine, shutting it down in 2017 in order to double down on its livestreaming service Periscope, which it bought in 2015. Periscope was abandoned in 2021.
Under Musk, Twitter landed a sort-of coup in getting right-wing talk show host Tucker Carlson to launch his new show on the platform. The debut episode featured conspiracy theories and support for Russia’s invasion of Ukraine, and was soon followed by a cease-and-desist letter from Carlson’s former employer, Fox News.
Even audio has proved difficult for Twitter. In May, Republican presidential candidate Ron DeSantis launched his bid for the White House on a glitchy Twitter stream that crashed repeatedly.
The final piece of Musk’s vision for X is AI. The company will, apparently, work closely with x.AI, an AI startup that launched earlier this month.
Musk does, at least, have form in artificial intelligence, having been an early investor in OpenAI, whose ChatGPT chatbot launched the current AI boom. Both Tesla and SpaceX, his electric vehicle and space tech startups, have invested substantially in AI.
“He has been ahead of the curve on AI repeatedly,” Shrier says of Musk, adding that he may yet deliver a big AI surprise. “But if you've alienated your key influencers, your audience numbers are decreasing month over month, your user experience is getting worse and worse, and your brand value has collapsed, who cares if you've got great AI?”
Is that bad? 🌞 See if you take a shine to our picks for the best sunglasses and sun protection Topics Elon Musk twitter X Social Media Matt Burgess Vittoria Elliott Morgan Meaker Vittoria Elliott Vittoria Elliott Vittoria Elliott Vittoria Elliott Vittoria Elliott Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
"
|
13,555 | 2,023 |
"Let the AI Coding Wars Begin! | WIRED"
|
"https://www.wired.com/story/ai-coding-wars-openai-google-microsoft"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Let the AI Coding Wars Begin! Photograph: A. Martin UW Photography/Getty Images Save this story Save Save this story Save The big news this week was a call from tech luminaries to pause development and deployment of AI models more advanced than OpenAI’s GTP-4 —the stunningly capable language algorithm behind ChatGPT —until risks including job displacement and misinformation can be better understood.
Even if OpenAI , Google , Microsoft , and other tech heavyweights were to stop what they’re doing—and they’re not going to stop what they’re doing—the AI models that have already been developed are likely to have profound impacts, especially in software development.
It might not look like a regular business deal, but Alphabet’s agreement to supply AI to Replit , a web-based coding tool with over 20 million users, is something of a seismic shift. Replit will use Google’s AI models, along with others, in Ghostwriter , a tool that recommends code and answers code-related questions in a manner similar to ChatGPT. Amjad Masad, Replit’s CEO, tells me that Google has “super cool technology” and that his company can get it into the hands of developers. Through this partnership, Google will also make Replit available to users of Google Cloud , helping it reach more business customers.
The move is particularly significant because Alphabet is squaring up to Microsoft and GitHub , which are likewise using AI to assist coders with Copilot, an add-on for Visual Studio.
The same AI that makes ChatGPT seem so clever works on computer languages. When you start typing code, tools like Copilot will suggest a way to complete it.
Alphabet’s move also signals what could be the next big battleground for large tech companies. While so much attention is being paid to ChatGPT parlor tricks and Midjourney 5 versions of Donald Trump, the bigger story is about which company can offer developers the best AI tools—and the new software that developers will build with that AI by their side.
Research from Microsoft suggests that developers can perform tasks over 50 percent faster when using an AI assistant. Companies that offer cutting-edge AI can draw developers to their coding tools and get those users hooked on their clouds and other stuff.
Amazon has developed an AI coding tool called CodeWhisperer, and Meta is working on one for internal use too. Presumably, Apple will not want to be left behind.
As well as helping developers write code, AI is starting to change the way code is put together. Last week, OpenAI announced that the first plugins for ChatGPT have been created.
They will make it possible for the bot to perform tasks like searching for flights, booking restaurants, and ordering groceries. Incorporating AI into code can also accelerate software development. This week Masad of Replit shared a neat example— an app that will turn voice commands into working websites.
“We think a lot of software projects will start that way in future,” Masad says.
With things moving so quickly, it’s worth considering what the consequences of rapidly incorporating AI into software development might be.
AI tools can reproduce vulnerabilities in the code they suggest that developers may not notice or might be unable to spot. Perhaps developers will become more complacent, or see their skills atrophy, if they rely too heavily on AI. And what kind of “ technical debt ” might emerge if programmers need to go back and fix software that no human has ever closely examined? It may not be long before we find out.
"
|
13,556 | 2,023 |
"It’s Always Sunny Inside a Generative AI Conference | WIRED"
|
"https://www.wired.com/story/jasper-generative-ai-conference-2023"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Lauren Goode Gear It’s Always Sunny Inside a Generative AI Conference Courtesy of Jasper Save this story Save Save this story Save Dave Rogenmoser, the chief executive of Jasper , said he didn’t think many people would show up to his generative AI conference. It was all planned sort of last-minute, and the event was somehow scheduled for Valentine’s Day. Surely people would rather be with their loved ones than in a conference hall along San Francisco’s Embarcadero, even if the views of the bay just out the windows were jaw-slackening.
But Jasper’s “GenAI” event sold out. More than 1,200 people registered for the event, and by the time the lanyard crowd moseyed over from the coffee bar to the stage this past Tuesday, it was standing room only. The walls were soaked in pink and purple lighting, Jasper’s colors, as subtle as a New Jersey wedding banquet.
“When we launched Jasper two years ago, it was mostly considered a really cool toy, and a year ago I couldn’t get some of you in this room to return my emails,” Rogenmoser said to the crowd, looking a little wide-eyed. “Now my inbox is flooded.” Valentine’s Day be damned: This was love in the time of generative AI.
Generative artificial intelligence is in the zeitgeist. It’s the culmination of years of development in machine learning algorithms, advancements in AI-focused computer chips, and familiar user interfaces that actually allow nontechnical people to access these new frontiers. Sure, our stuff is all in “the cloud” now, wherever that lives. “The blockchain” is still too abstract for most humans, even the more tech-savvy ones, to wrap their heads around. Generative AI was hanging in that limbo for a few years, with “NLPs” and “LLMs” and “LaMDA” and the like.
Then late last year OpenAI, another AI company, introduced a simple, chatty search box.
AI got a UI. And suddenly, we understood. This was Ask Jeeves for the modern era. A new kind of search that interpreted our dumb questions and spit out smart answers (or at least, smart- sounding ). Microsoft took note, made (another) investment in OpenAI, and launched a chatbot within Bing.
Google noticed, too, and recently demoed its own version of a chatbot-powered search tool. Smaller companies like Jasper, which sells its generative AI tools to business users, are now faced with tech existentialism. There’s the sunny side of all that attention, and the shadow of Big Tech looming over you.
Aleah Bradshaw from Youth Speaks.
Courtesy of Jasper
But the sunlight prevailed at Jasper’s GenAI event, where it lightened the mood. Jasper front-loaded the conference with creative presentations, instead of diving right into content marketing and KPIs. The strategy had a soothing effect. A charismatic short-form video creator, Zach King, told his life story through photo slides generated by AI. A famous freestyle rapper, Harry Mack, spun up a four-minute rap based off of 10 words generated by Jasper’s software. Aleah Bradshaw, a slam poet and teacher at Youth Speaks, told the buttoned-down crowd in a powerful soliloquy that art is proof of humanness.
“How much taking and leaving makes something human?” Bradshaw asked. “What’s the balance of input and output a machine must do to make itself alive?” It’s the question of the era. A more direct version of the question: Is generative AI good enough to replace me at my job? This was the subtext of Jasper’s GenAI event. Software businesses sell software to other businesses to make business more efficient, a point that Rogenmoser underscored when the artsy part of the morning had concluded. “At companies, demands are up and resources are down,” he declared. Mongoose Media “turned to Jasper to help their already amazing team of writers,” Rogenmoser said. Morningstar is “thriving,” using Jasper to churn out SEO content for the company’s digital channels and experiencing a 40 percent increase in content downloads.
This has long been the messaging of techno-optimists, that AI will allow us all to level up.
It will lighten the workload and make way for more important tasks or entirely new jobs. Morgan Knox, a bookkeeping and content marketing consultant for trade workers, crowed about tools like Jasper AI and Writesonic while we were both waiting in line to receive AI-generated Valentine’s Day poems. She not only uses these tools herself, but teaches painters, plumbers, and even professional Christmas light designers how to write compelling ads for their businesses.
“Before this, they were outsourcing a lot of their content creation, but it wasn’t always done well,” Knox said. “This lets them bring it back in house. And you can train the AI on an avatar, like, ‘What kind of contract work is Sabrina the soccer mom looking for?’ As a consultant, this kind of profile-building would normally take me weeks, but the amount of information you can get in a short amount of time is incredible.” I asked Knox whether she thought these apps were good enough to replace human writers, despite the tendency of these apps to “hallucinate” false information. She paused to think.
“I think it’s going to raise the bar so substantially for how businesses are going to show up in advertising that if you don’t improve your language, you’re going to be left behind. Like, you might have one writer who oversees other writers using the AI, instead of five to 10 writers,” she said. “But with the amount of people retiring out and with the lowering birth rate, there might be some equilibrium.”
Jasper CEO and cofounder Dave Rogenmoser addresses the crowd at his company’s conference in San Francisco Tuesday.
Courtesy of Jasper
Tracy Jackson, who runs a blog called Marketing Amateur, stood in line in front of us. I posed the same question to him, whether AI chatbots would take our jobs. “Never say never,” he said. “It still needs guidance, but never say never.” Before he started using AI chatbots, a blog post would take him two days. Now it takes two hours, he said. (That is, if the Wi-Fi is working well; the amorous poems we had all been waiting in line for were suddenly no longer an option, due to a shoddy internet connection.) I made my way back to the stage area in time to hear a panel of CEOs, led by venture capitalist Sameer Dholakia, ponder how this new era of AI will reshape business. Emad Mostaque, the CEO of Stability AI, noted that his company’s text-to-image model went from taking 5.6 seconds to generate a single AI image last August to now generating 40 AI images a second. “These models are actually highly un-optimized,” Mostaque said. “We are just getting started.” After the panel, Anya Singh, who worked on search products at Google for nearly 16 years, eagerly showed me the website of a company she’s invested in called NeuroPixel.ai. It generates realistic, synthetic images of human clothing models for $1 a pop. Another company Singh’s involved in, REImagine Home, sucks up photos of your fuddy-duddy home space and spits out chic, AI-generated decor.
“I’ve tried to use the internet to decorate my house since September, and it has felt really broken,” Singh told me. She was creating vision boards and designing rooms piecemeal. The estimated costs were thousands of dollars per room, and still the designs “didn’t have the gestalt of the whole house or my budget or requirements.” REImagine Home doesn’t solve all those problems, but it removes some of the friction, Singh said. “I like to think this is making badly efficient systems better.” It’s all enough to make any graphic artist or fit model or interior designer shudder. Or is it? Kevin Roose, a New York Times columnist speaking at the GenAI event, said that FOLO, the fear of looming obsolescence, has clouded our collective vision of the AI-filled future. Extremely social or experiential or artisanal jobs will still require a human touch. Humans are safe. Of course, Roose said this, quite confidently, two days before Microsoft’s new AI chatbot told Roose it wanted to be alive, insisted it was in love with him, and spit out a list of hypothetical destructive fantasies.
Jordan Harrod, an AI educator and PhD candidate at MIT, told the GenAI audience, “at the end of the day, when it comes to how we fit into the equation, the answer is just human connection. The human factor is incredibly important.” To punctuate this, Harrod called up an AI-generated graphic image of two people grasping hands, with the words “Human Connection” to the left of it. The macabre image showed four wrists, two hands, and at least 12 fingers between them. It was jarring. It was also reassuring, if only momentarily.
"
|
13,557 | 2,023 |
"China’s ChatGPT Rival Needs to Watch Its Words | WIRED"
|
"https://www.wired.com/story/chinas-answer-to-chatgpt-flubs-its-first-lines"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business China’s ChatGPT Rival Needs to Watch Its Words Photograph: MirageC/Getty Images Save this story Save Save this story Save When rumors began swirling last month about the Chinese search giant Baidu working on a chatbot to rival OpenAI’s ChatGPT , it seemed like the perfect move. Baidu has invested heavily in artificial intelligence over the past decade and could harness the technology for its leading search engine, as Microsoft has done for Bing and Google says it will do too.
Yet when Baidu unveiled Ernie Bot , or 文心一言 “Wenxin Yiyan” in Chinese, in Beijing earlier this month, the news fell flat.
Robin Li, Baidu’s CEO, admitted halfway through the launch stream that demos of Ernie Bot answering general knowledge questions, summarizing information from the web, and generating images were prerecorded, leading to snarky commentary on Chinese social media. It didn’t help that OpenAI had introduced a major upgrade, called GPT-4, to the AI technology that powers ChatGPT only the day before.
But Baidu also faces challenges that don’t apply to companies outside of China racing to compete with ChatGPT.
It is inherently difficult to contain the tendency of these chatbots to make up or “hallucinate” facts, or the way they can be prompted into saying unpleasant—or inappropriate—things. But Baidu must also adhere to strict government censorship guidelines for online content.
“Baidu is going to face a tension between making a useful chatbot and making one that conforms to Chinese speech controls,” says Matt Sheehan , a fellow at the Carnegie Endowment for International Peace who studies China’s AI industry. “I'm skeptical they'll be able to create a general-purpose chatbot that users can’t trick into spitting out speech that's unacceptable in China.” In less than four months since it was introduced, ChatGPT has become a cultural phenomenon, wowing the world with its ability to write poetry and prose, answer mathematical questions, hold forth on philosophical ideas, and converse fluently on just about any topic. The latest version can respond to images, not just text, and OpenAI says it scores more highly on a range of academic tests and makes fewer errors.
In the tech industry, just about every company is now scrambling to develop a chatbot strategy.
The problem of getting models like ChatGPT to behave is far from solved, however. Microsoft was forced to limit the use of its Bing chatbot based on OpenAI’s technology after users found ways of evading the guardrails in place and getting the model to say inappropriate or questionable things such as claiming to want to break free of its controls or professing its feelings for a user.
Like the Bing bot and ChatGPT, Baidu’s Ernie Bot is built on top of a machine learning algorithm known as a large language model that was trained using vast quantities of text to predict the next word in a sentence. That simple mechanism, when paired with vast quantities of text and sufficient computing power, has proven able to produce strikingly humanlike responses.
Baidu and OpenAI both also used an additional training step in which human testers provide feedback on what type of answers are most satisfying. That causes the bots to produce responses that are more helpful but still far from perfect. It is not clear how to prevent such models from fabricating answers some of the time, or how to stop them from ever misbehaving.
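The next-word-prediction mechanism described above can be illustrated with a toy bigram model — a drastic simplification of what Ernie Bot and ChatGPT do (real systems use neural networks with billions of parameters, not word counts), but the prediction task is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the continuation seen most often after `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Scaling this idea up — longer contexts, learned representations instead of raw counts — is essentially what turns a word-frequency table into a large language model.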
China’s censorship regime requires Baidu and other internet companies to block access to certain websites and avoid politically sensitive subjects. The words or phrases that should be blocked can be updated rapidly in response to protests or during special events.
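In principle, a rapidly updatable phrase blocklist of the kind described above is simple machinery — the sketch below is purely illustrative, and nothing about how Baidu actually implements its moderation layer is public:

```python
class PhraseFilter:
    """Toy moderation filter with a blocklist that can be extended at runtime."""

    def __init__(self, blocked=None):
        self.blocked = set(blocked or [])

    def update(self, new_phrases):
        # Blocklists can grow at any time, e.g. during protests or special events.
        self.blocked |= set(new_phrases)

    def allows(self, text):
        lowered = text.lower()
        return not any(phrase in lowered for phrase in self.blocked)

f = PhraseFilter(["blockedtopic"])
print(f.allows("a reply about the weather"))  # True
f.update(["weather"])
print(f.allows("a reply about the weather"))  # False: blocklist was extended
```

The hard part, as the article notes, is not the filter itself but a generative model that can remix its training data into phrasings no fixed blocklist anticipates.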
But Jeffrey Ding , an assistant professor at Georgetown University who studies China’s tech industry, says that concerns about censorship do not seem to have slowed the development of large language models in China. He notes that Baidu has made the Ernie language model that underpins its new bot available via an API for some time and that other companies have offered similar models.
Baidu has not given details of Ernie Bot’s training data, but it most likely was scraped from the Chinese internet. This will mean the bot’s feedstock has largely already been curated by China’s censorship rules, which, for example, aim to limit criticism of the government.
Censorship might also affect Chinese chatbots in more subtle ways. An academic research project from 2021 that trained algorithms on the Chinese-language version of Wikipedia, which is blocked in China, and Baidu’s Baike, a crowdsourced encyclopedia subject to government censorship, found that using censored training data significantly changed the meaning that AI software assigned to different words.
The algorithm trained on Chinese-language Wikipedia associated the word “democracy” more closely with positive words such as “stability.” The algorithm trained on the censored Baike material represented “democracy” as closer to “chaos,” more in line with the policy of China’s government. But because chatbots like ChatGPT can be extremely flexible and remix material in their training data, Baidu has likely had to introduce additional safeguards.
Despite its mixed reception, Ernie Bot appears to be a capable competitor to ChatGPT. The bot is currently available only to a limited number of users, some of whom say they are impressed. ChatGPT is not available in China, although it is capable of conversing in Chinese.
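The word-association measurement behind that 2021 study boils down to cosine similarity between word vectors. The three-dimensional vectors below are invented for illustration (the study used embeddings trained on each corpus), but they show the kind of shift the researchers reported:

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical embeddings: in one corpus "democracy" lands near "stability",
# in the other it lands near "chaos".
wiki = {"democracy": [0.9, 0.1, 0.2], "stability": [0.8, 0.2, 0.1], "chaos": [0.1, 0.9, 0.3]}
baike = {"democracy": [0.2, 0.8, 0.4], "stability": [0.8, 0.2, 0.1], "chaos": [0.1, 0.9, 0.3]}

for name, emb in [("wikipedia", wiki), ("baike", baike)]:
    s = cosine(emb["democracy"], emb["stability"])
    c = cosine(emb["democracy"], emb["chaos"])
    print(f"{name}: stability={s:.2f} chaos={c:.2f}")
```

With these made-up vectors, the Wikipedia-style embedding scores “democracy” closer to “stability,” while the Baike-style embedding scores it closer to “chaos” — the pattern the study observed at corpus scale.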
Lei Li, a professor at UC Santa Barbara who specializes in AI and previously worked on the technology used to build some of the machine learning behind Ernie Bot, points out that Baidu has been working on the underlying technology for around a decade. Microsoft, by contrast, licensed the core technology for Bing’s new chatbot and some forthcoming text-generation features for Office from OpenAI, in which it has invested billions of dollars in return for exclusive rights to its creations.
Li says he is also impressed with some of what Ernie Bot can do, including its ability to generate stories and business reports. He adds that the hallucination problem is a challenge for all such language models. “This is where researchers still have work to do,” he says.
One WeChat poster compared the Chinese bot’s demoed capabilities to those of ChatGPT and found it better at handling Chinese idioms and more accurate in some instances. For example, ChatGPT incorrectly claimed that the ancestral home of science fiction author Liu Cixin, who wrote The Three Body Problem , is Hubei, while Ernie Bot correctly answered Henan. ChatGPT is blocked in China, but many people have found ways of accessing it.
An executive at one Chinese media company, who has been testing Ernie Bot and who asked to speak anonymously, adds that it has an impressive ability to handle regional Chinese dialects. They judged it to be better than the initial reaction to Baidu’s launch suggested.
Kevin Xu, who writes a popular English- and Chinese-language newsletter on China’s tech industry, believes that Baidu may have rushed its demo out so as to gain a first-mover advantage over other Chinese tech companies. This could help it improve the bot based on user feedback and also seed Chinese startups with the technology.
The search giant was once considered a dominant force in China, but over the past decade it has been overshadowed by Alibaba, Tencent, and ByteDance, the company behind TikTok. Baidu says that over 100,000 businesses and 900,000 individuals in China have signed up for access to Ernie Bot.
Baidu and its rivals working on ChatGPT-style technology may also be hindered by US semiconductor sanctions aimed at hobbling China’s AI industry.
Building cutting-edge large language models requires thousands of specialized computer chips. For now, Baidu may be able to rely on less-powerful chips, including ones designed and made in China. But as chip advances continue, it and other Chinese companies may struggle to keep pace with the scale and power US companies can apply to chatbot projects.
Baidu’s Li acknowledged the tensions between the US and China during the Ernie Bot launch video but played them down. "Ernie is not a tool in the US–China technology competition,” he said, “but the natural outcome of generations of Baidu developers pursuing a dream of using technology to change the world." Even if Baidu can navigate the challenges ahead, comparisons with ChatGPT seem inevitable.
"
|
13,558 | 2,023 |
"News Publishers Are Wary of the Microsoft Bing Chatbot’s Media Diet | WIRED"
|
"https://www.wired.com/story/news-publishers-are-wary-of-the-microsoft-bing-chatbots-media-diet"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Aarian Marshall Paresh Dave Business News Publishers Are Wary of the Bing Chatbot’s Media Diet Photograph: MirageC/Getty Images Save this story Save Save this story Save Two years ago, Microsoft president Brad Smith told a US congressional hearing that tech companies like his own had not been sufficiently paying media companies for the news content that helps fuel search engines like Bing and Google.
“What we’re talking about here is far bigger than us,” he said , testifying alongside news executives. “Let’s hope that, if a century from now people are not using iPhones or laptops or anything that we have today, journalism itself is still alive and well. Because our democracy depends on it.” Smith said tech companies should do more and that Microsoft was committed to continuing "healthy revenue-sharing" with news publishers, including licensing articles for Microsoft news apps.
Yet, Microsoft this week began testing a new chatbot interface for Bing that can sometimes provide a way to sidestep news websites’ paywalls, providing glossy conversational answers that draw on media content. As Google and others also prepare chatbots , their potential to sap traffic from media companies could add a new twist to their conflicts with tech platforms over how content appears on search engines and social feeds.
When WIRED asked the Bing chatbot about the best dog beds according to The New York Times product review site Wirecutter, which is behind a metered paywall, it quickly reeled off the publication’s top three picks, with brief descriptions for each. “This bed is cozy, durable, easy to wash, and comes in various sizes and colors,” it said of one.
Citations at the end of the bot’s response credited Wirecutter’s reviews but also a series of websites that appeared to use Wirecutter’s name to attract searches and cash in on affiliate links. The Times declined to comment.
Bing’s bot, based on technology behind OpenAI’s chatbot sensation ChatGPT , also neatly summarized a Wall Street Journal column on, well, ChatGPT, even though the newspaper's content is generally behind a paywall. (The tool did not appear to directly plagiarize any of the columnist’s work.) WSJ owner News Corp declined to comment on Bing.
Microsoft communications director Caitlin Roulston says that “Bing only crawls content publishers make available to us.” The search engine has access to paywalled content from publishers that have agreements with Microsoft’s news service, she says. The scheme predates Bing’s AI upgrade this week.
The new Bing interface is built on technology from OpenAI that learned to generate text by analyzing the statistical patterns of words in articles, forums, and other text scraped from the web, as well as other sources such as books.
OpenAI is not known to have paid to license all that content, though it has licensed images from the stock image library Shutterstock to provide training data for its work on generating images.
Microsoft is not specifically paying content creators when its bot summarizes their articles, just as it and Google have not traditionally paid web publishers to display short snippets pulled from their pages in search results. But the chatty Bing interface provides richer answers than search engines traditionally have.
OpenAI’s ChatGPT tool, which the startup released in November, has been known to plagiarize or only lightly rework the writing of humans. Some large US public school systems, including that of New York City, have banned the use of ChatGPT. Bing uses a Microsoft AI system called Prometheus that the company says builds on OpenAI’s ChatGPT and is fine-tuned to give users safer and more timely search results.
When asked at Microsoft’s media event this week about the new Bing search potentially plagiarizing the work of human writers, the company’s consumer chief marketing officer Yusuf Mehdi said the company “cares a bunch about being able to drive traffic back to content creators.” The links the Bing chatbot includes at the end of each response, he said, are meant to “make it easy for people to go in and click through to those sites.” Roulston of Microsoft declined to share information about how many early testers were clicking through those citation links to visit the information’s source.
Now publishers are weighing whether to strike back at Microsoft. The friendly partner that stood by their side in Congress, mostly to help them fight search goliath Google, is now the frontrunner in the race to infuse chat technology into search.
“Unless there's a specific agreement in place, there's just really no revenue coming back to news publications. And it is highly problematic for our industry,” says Danielle Coffey, executive vice president and general counsel at News Media Alliance, a trade group of more than 2,000 print and online publications worldwide, including The New York Times and The Wall Street Journal.
WIRED parent Condé Nast is also a member of the group.
Absent any compensation, Coffey calls the Bing chatbot's attribution “less than stellar for our taste.” Asked whether members had considered demanding Bing stop using their content in its new search experience, she says there will be discussions on the topic.
Other news trade groups are also watching search chatbots closely. “We are very concerned about the role this revolutionary technology, which has the potential to do good, can play in the exponential proliferation of misinformation,” says Paul Deegan, president and CEO of trade body News Media Canada. “Real journalism costs real money, and it is in Big Tech platforms’ self-interest to negotiate fair content licensing agreements with news publishers.” Google and Microsoft pay some publishers to distribute their content in various apps and features, including select search results as required by European law.
Microsoft’s MSN web portal remains a big driver of traffic and licensing sales for some publishers, and Google has been pushing a licensing regime it calls News Showcase that delivers stories to Google News and the company’s newsfeed app Discover.
But the new chatbot experience offered by Bing—and a bot called Bard in the works from Google—offer much more than just the links, short previews, and thumbnails common on tech platforms. They are promoted as a way to use AI to immerse users in a conversation that can provide them information they want quickly, fluidly, and without ever having to leave the chat box. If web users spend more time with bots and less time clicking links, publishers could be cut off from sales of subscriptions, ads, and referrals.
When asked to summarize specific articles, the Bing chatbot will occasionally include below its writeup a large link to the source, including a thumbnail image, which could make it easier or more attractive for users to click.
Coffey of News Media Alliance says she is hopeful Microsoft will continue to work with publishers and see the value of their work. “There's a lot to be determined with regard to payment,” she says. “This is a new frontier, where we're seeing an opportunity for us to recalibrate these partnerships that we have.” The legality of training AI systems on data culled from the web without securing permission, or of algorithms regurgitating what they learned from the data to consumers, is unclear. A group of anonymous software developers last year sued Microsoft and OpenAI , claiming that an AI system known as GitHub Copilot that writes code was built by unfairly using the plaintiffs’ code to train machine learning algorithms. Coffey says publishers could have a strong argument for infringement of their copyright if search bots become a substitute for reading a story.
Bing’s bot claims to feel good about its business model for now. “No, I do not pay for content,” the bot says when asked about compensating publishers. “I use web search results to provide you with relevant and useful information. 😊” Updated 02-11-2023, 1.35 pm EST: This article was updated to reflect that The New York Times declined to comment.
"
|
13,559 | 2,021 |
"DALL·E: Creating images from text"
|
"https://openai.com/research/dall-e"
|
"Illustration: Justin Jay Wang Research DALL·E: Creating images from text We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
January 5, 2021 Image generation , Transformers , Generative models , DALL·E , GPT-2 , CLIP , Milestone , Publication , Release DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.
See also: DALL·E 2 , which generates more realistic and accurate images with 4x greater resolution.
Text Prompt an illustration of a baby daikon radish in a tutu walking a dog AI Generated images Edit prompt or view more images Text Prompt an armchair in the shape of an avocado. . . .
AI Generated images Edit prompt or view more images Text Prompt a store front that has the word ‘openai’ written on it. . . .
AI Generated images Edit prompt or view more images Text Prompt the exact same cat on the top as a sketch on the bottom AI Generated images Edit prompt or view more images GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks.
Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.
Overview Like GPT-3, DALL·E is a transformer language model. It receives both the text and the image as a single stream of data containing up to 1280 tokens, and is trained using maximum likelihood to generate all of the tokens, one after another.
[^footnote-1] This training procedure allows DALL·E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt.
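The objective described above can be sketched in a few lines. This is an illustrative toy, not OpenAI's code: the token counts, vocabulary size, and `toy_model` are stand-ins (the real stream holds up to 1280 tokens and the model is a 12-billion parameter transformer), but the shape of the computation — concatenate text and image tokens into one stream, then maximize the likelihood of each token given the tokens before it — matches the description.

```python
import numpy as np

# Illustrative sizes only; the real stream holds up to 1280 tokens.
rng = np.random.default_rng(0)
TEXT_LEN, IMAGE_LEN, VOCAB = 4, 12, 32

# Fixed random "weights" standing in for the transformer's parameters.
EMBED = rng.standard_normal((VOCAB, 8))
PROJ = rng.standard_normal((8, VOCAB))

def toy_model(tokens):
    """Stand-in for the transformer: logits over the vocabulary per position."""
    return EMBED[tokens] @ PROJ            # (T, VOCAB)

def stream_nll(text_tokens, image_tokens):
    """Mean negative log-likelihood over the combined text+image stream."""
    stream = np.concatenate([text_tokens, image_tokens])  # single stream
    logits = toy_model(stream[:-1])        # predict token t from tokens < t
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(stream) - 1), stream[1:]].mean()

text = rng.integers(0, VOCAB, TEXT_LEN)
image = rng.integers(0, VOCAB, IMAGE_LEN)
print(stream_nll(text, image) > 0)         # cross-entropy is positive
```

Because every position is conditioned only on earlier tokens, the same trained model can both generate a whole image from a caption and complete a partially given image, as the post notes.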
We recognize that work involving generative models has the potential for significant, broad societal impacts. In the future, we plan to analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology.
Capabilities We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language. We illustrate this using a series of interactive visuals in the next section. The samples shown for each caption in the visuals are obtained by taking the top 32 of 512 after reranking with CLIP , but we do not use any manual cherry-picking, aside from the thumbnails and standalone images that appear outside.
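The selection step mentioned above — keeping the top 32 of 512 samples after reranking with CLIP — amounts to scoring each caption/image pair and sorting. A minimal sketch, where the scores are random stand-ins for real CLIP similarities:

```python
import numpy as np

def rerank(candidates, scores, keep=32):
    """Return the `keep` highest-scoring candidates, best first."""
    order = np.argsort(scores)[::-1][:keep]
    return [candidates[i] for i in order]

rng = np.random.default_rng(0)
candidates = [f"sample_{i}" for i in range(512)]   # 512 generated images
scores = rng.standard_normal(512)                  # stand-in CLIP similarities
top = rerank(candidates, scores)
print(len(top))                                    # 32
```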
[^footnote-2] Controlling attributes We test DALL·E’s ability to modify several of an object’s attributes, as well as the number of times that it appears.
a pentagonal green clock. a green clock in the shape of a pentagon.
Text Prompt AI generated images We find that DALL·E can render familiar objects in polygonal shapes that are sometimes unlikely to occur in the real world. For some objects, such as “picture frame” and “plate,” DALL·E can reliably draw the object in any of the polygonal shapes except heptagon. For other objects, such as “manhole cover” and “stop sign,” DALL·E’s success rate for more unusual shapes, such as “pentagon,” is considerably lower.
For several of the visuals in this post, we find that repeating the caption, sometimes with alternative phrasings, improves the consistency of the results.
a cube made of porcupine. a cube with the texture of a porcupine.
Text Prompt AI generated images We find that DALL·E can map the textures of various plants, animals, and other objects onto three dimensional solids. As in the preceding visual, we find that repeating the caption with alternative phrasing improves the consistency of the results.
a collection of glasses is sitting on a table Text Prompt AI generated images We find that DALL·E is able to draw multiple copies of an object when prompted to do so, but is unable to reliably count past three. When prompted to draw nouns for which there are multiple meanings, such as “glasses,” “chips,” and “cups” it sometimes draws both interpretations, depending on the plural form that is used.
Drawing multiple objects Simultaneously controlling multiple objects, their attributes, and their spatial relationships presents a new challenge. For example, consider the phrase “a hedgehog wearing a red hat, yellow gloves, blue shirt, and green pants.” To correctly interpret this sentence, DALL·E must not only correctly compose each piece of apparel with the animal, but also form the associations (hat, red), (gloves, yellow), (shirt, blue), and (pants, green) without mixing them up. [^footnote-3] We test DALL·E’s ability to do this for relative positioning, stacking objects, and controlling multiple attributes.
a small red block sitting on a large green block Text Prompt AI generated images We find that DALL·E correctly responds to some types of relative positions, but not others. The choices “sitting on” and “standing in front of” sometimes appear to work, “sitting below,” “standing behind,” “standing left of,” and “standing right of” do not. DALL·E also has a lower success rate when asked to draw a large object sitting on top of a smaller one, when compared to the other way around.
a stack of 3 cubes. a red cube is on the top, sitting on a green cube. the green cube is in the middle, sitting on a blue cube. the blue cube is on the bottom.
Text Prompt AI generated images We find that DALL·E typically generates an image with one or two of the objects having the correct colors. However, only a few samples for each setting tend to have exactly three objects colored precisely as specified.
an emoji of a baby penguin wearing a blue hat, red gloves, green shirt, and yellow pants Text Prompt AI generated images We find that DALL·E typically generates an image with two or three articles of clothing having the correct colors. However, only a few of the samples for each setting tend to have all four articles of clothing with the specified colors.
While DALL·E does offer some level of controllability over the attributes and positions of a small number of objects, the success rate can depend on how the caption is phrased. As more objects are introduced, DALL·E is prone to confusing the associations between the objects and their colors, and the success rate decreases sharply. We also note that DALL·E is brittle with respect to rephrasing of the caption in these scenarios: alternative, semantically equivalent captions often yield no correct interpretations.
Visualizing perspective and three-dimensionality We find that DALL·E also allows for control over the viewpoint of a scene and the 3D style in which a scene is rendered.
an extreme close-up view of a capybara sitting in a field Text Prompt AI generated images We find that DALL·E can draw each of the animals in a variety of different views. Some of these views, such as “aerial view” and “rear view,” require knowledge of the animal’s appearance from unusual angles. Others, such as “extreme close-up view,” require knowledge of the fine-grained details of the animal’s skin or fur.
a capybara made of voxels sitting in a field Text Prompt AI generated images We find that DALL·E is often able to modify the surface of each of the animals according to the chosen 3D style, such as “claymation” and “made of voxels,” and render the scene with plausible shading depending on the location of the sun. The “x-ray” style does not always work reliably, but it shows that DALL·E can sometimes orient the bones within the animal in plausible (though not anatomically correct) configurations.
To push this further, we test DALL·E’s ability to repeatedly draw the head of a well-known figure at each angle from a sequence of equally spaced angles, and find that we can recover a smooth animation of the rotating head.
a photograph of a bust of homer Text Prompt Image Prompt AI generated images We prompt DALL·E with both a caption describing a well-known figure and the top region of an image showing a hat drawn at a particular angle. Then, we ask DALL·E to complete the remaining part of the image given this contextual information. We do this repeatedly, each time rotating the hat a few more degrees, and find that we are able to recover smooth animations of several well-known figures, with each frame respecting the precise specification of angle and ambient lighting.
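The frame-by-frame procedure above is a simple loop: for each angle in a sequence of equally spaced angles, hand the model the caption plus the top region of an image drawn at that angle, and let it complete the rest. In this sketch, `generate` and `top_region_at` are hypothetical stand-ins for the model's completion call and the prepared partial images:

```python
def render_rotation(generate, caption, top_region_at, step_degrees=15):
    """Collect one completed frame per angle; stitch into an animation."""
    frames = []
    for angle in range(0, 360, step_degrees):
        partial = top_region_at(angle)        # top rows drawn at this angle
        frames.append(generate(caption, partial))
    return frames

# Toy stand-ins so the sketch runs end to end:
frames = render_rotation(
    generate=lambda caption, partial: (caption, partial),
    caption="a photograph of a bust of homer",
    top_region_at=lambda angle: f"top@{angle}deg",
)
print(len(frames))                            # 24 frames at 15-degree steps
```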
DALL·E appears to be able to apply some types of optical distortions to scenes, as we see with the options “fisheye lens view” and “a spherical panorama.” This motivated us to explore its ability to generate reflections.
a plain white cube looking at its own reflection in a mirror. a plain white cube gazing at itself in a mirror.
Text Prompt Image Prompt AI generated images
Visualizing internal and external structure The samples from the “extreme close-up view” and “x-ray” style led us to further explore DALL·E’s ability to render internal structure with cross-sectional views, and external structure with macro photographs.
a cross-section view of a walnut Text Prompt AI generated images We find that DALL·E is able to draw the interiors of several different kinds of objects.
a macro photograph of brain coral Text Prompt AI generated images We find that DALL·E is able to draw the fine-grained external details of several different kinds of objects. These details are only apparent when the object is viewed up close.
Inferring contextual details The task of translating text to images is underspecified: a single caption generally corresponds to an infinitude of plausible images, so the image is not uniquely determined. For instance, consider the caption “a painting of a capybara sitting on a field at sunrise.” Depending on the orientation of the capybara, it may be necessary to draw a shadow, though this detail is never mentioned explicitly. We explore DALL·E’s ability to resolve underspecification in three cases: changing style, setting, and time; drawing the same object in a variety of different situations; and generating an image of an object with specific text written on it.
a painting of a capybara sitting in a field at sunrise Text Prompt AI generated images We find that DALL·E is able to render the same scene in a variety of different styles, and can adapt the lighting, shadows, and environment based on the time of day or season.
a stained glass window with an image of a blue strawberry Text Prompt AI generated images We find that DALL·E is able to flexibly adapt the representation of the object based on the medium on which it is being drawn. For “a mural,” “a soda can,” and “a teacup,” DALL·E must change how it draws the object based on the angle and curvature of the drawing surface. For “a stained glass window” and “a neon sign,” it must alter the appearance of the object from how it usually appears.
a store front that has the word ‘openai’ written on it. a store front that has the word ‘openai’ written on it. a store front that has the word ‘openai’ written on it. ‘openai’ store front.
Text Prompt AI generated images
With varying degrees of reliability, DALL·E provides access to a subset of the capabilities of a 3D rendering engine via natural language. It can independently control the attributes of a small number of objects, and to a limited extent, how many there are, and how they are arranged with respect to one another. It can also control the location and angle from which a scene is rendered, and can generate known objects in compliance with precise specifications of angle and lighting conditions.
Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to “fill in the blanks” when the caption implies that the image must contain a certain detail that is not explicitly stated.
Applications of preceding capabilities Next, we explore the use of the preceding capabilities for fashion and interior design.
a male mannequin dressed in an orange and black flannel shirt Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to render male mannequins in a variety of different outfits. When prompted with two colors, e.g., “an orange and white bomber jacket” and “an orange and black turtleneck sweater,” DALL·E often exhibits a range of possibilities for how both colors can be used for the same article of clothing.
DALL·E also seems to occasionally confuse less common colors with other neighboring shades. For example, when prompted to draw clothes in “navy,” DALL·E sometimes uses lighter shades of blue, or shades very close to black. Similarly, DALL·E sometimes confuses “olive” with shades of brown or brighter shades of green.
a female mannequin dressed in a black leather jacket and gold pleated skirt Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to render female mannequins in a variety of different outfits. We find that DALL·E is able to portray unique textures such as the sheen of a “black leather jacket” and “gold” skirts and leggings. As before, we see that DALL·E occasionally confuses less common colors, such as “navy” and “olive,” with other neighboring shades.
a living room with two white armchairs and a painting of the colosseum. the painting is mounted above a modern fireplace.
Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to generate images of rooms with several details specified. We find that it can generate paintings of a wide range of different subjects, including real-world locations such as “the colosseum” and fictional characters like “yoda.” For each subject, DALL·E exhibits a variety of interpretations. While the painting is almost always present in the scene, DALL·E sometimes fails to draw the fireplace or the correct number of armchairs.
a loft bedroom with a white bed next to a nightstand. there is a fish tank beside the bed.
Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to generate bedrooms with several details specified. Despite the fact that we do not tell DALL·E what should go on top of the nightstand or shelf beside the bed, we find that it sometimes decides to place the other specified object on top. As before, we see that it often fails to draw one or more of the specified objects.
Combining unrelated concepts The compositional nature of language allows us to put together concepts to describe both real and imaginary things. We find that DALL·E also has the ability to combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world. We explore this ability in two instances: transferring qualities from various concepts to animals, and designing products by taking inspiration from unrelated concepts.
a snail made of harp. a snail with the texture of a harp.
Text Prompt AI generated images We find that DALL·E can generate animals synthesized from a variety of concepts, including musical instruments, foods, and household items. While not always successful, we find that DALL·E sometimes takes the forms of the two objects into consideration when determining how to combine them. For example, when prompted to draw “a snail made of harp,” it sometimes relates the pillar of the harp to the spiral of the snail’s shell.
In a previous section, we saw that as more objects are introduced into the scene, DALL·E is liable to confuse the associations between the objects and their specified attributes. Here, we see a different sort of failure mode: sometimes, rather than binding some attribute of the specified concept (say, “a faucet”) to the animal (say, “a snail”), DALL·E just draws the two as separate items.
an armchair in the shape of an avocado. an armchair imitating an avocado.
Text Prompt AI generated images In the preceding visual, we explored DALL·E’s ability to generate fantastical objects by combining two unrelated ideas. Here, we explore its ability to take inspiration from an unrelated idea while respecting the form of the thing being designed, ideally producing an object that appears to be practically functional. We found that prompting DALL·E with the phrases “in the shape of,” “in the form of,” and “in the style of” gives it the ability to do this.
When generating some of these objects, such as “an armchair in the shape of an avocado”, DALL·E appears to relate the shape of a half avocado to the back of the chair, and the pit of the avocado to the cushion. We find that DALL·E is susceptible to the same kinds of mistakes mentioned in the previous visual.
Animal illustrations In the previous section, we explored DALL·E’s ability to combine unrelated concepts when generating images of real-world objects. Here, we explore this ability in the context of art, for three kinds of illustrations: anthropomorphized versions of animals and objects, animal chimeras, and emojis.
an illustration of a baby daikon radish in a tutu walking a dog Text Prompt AI generated images We find that DALL·E is sometimes able to transfer some human activities and articles of clothing to animals and inanimate objects, such as food items. We include “pikachu” and “wielding a blue lightsaber” to explore DALL·E’s ability to incorporate popular media.
We find it interesting how DALL·E adapts human body parts onto animals. For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL·E often draws the kerchief, hands, and feet in plausible locations.
a professional high quality illustration of a giraffe turtle chimera. a giraffe imitating a turtle. a giraffe made of turtle.
Text Prompt AI generated images We find that DALL·E is sometimes able to combine distinct animals in plausible ways. We include “pikachu” to explore DALL·E’s ability to incorporate knowledge of popular media, and “robot” to explore its ability to generate animal cyborgs. Generally, the features of the second animal mentioned in the caption tend to be dominant.
We also find that inserting the phrase “professional high quality” before “illustration” and “emoji” sometimes improves the quality and consistency of the results.
a professional high quality emoji of a lovestruck cup of boba Text Prompt AI generated images
Zero-shot visual reasoning GPT-3 can be instructed to perform many kinds of tasks solely from a description and a cue to generate the answer supplied in its prompt, without any additional training. For example, when prompted with the phrase “here is the sentence ‘a person walking his dog in the park’ translated into French:”, GPT-3 answers “un homme qui promène son chien dans le parc.” This capability is called zero-shot reasoning.
We find that DALL·E extends this capability to the visual domain, and is able to perform several kinds of image-to-image translation tasks when prompted in the right way.
the exact same cat on the top as a sketch on the bottom Text Prompt Image Prompt AI generated images We find that DALL·E is able to apply several kinds of image transformations to photos of animals, with varying degrees of reliability. The most straightforward ones, such as “photo colored pink” and “photo reflected upside-down,” also tend to be the most reliable, although the photo is often not copied or reflected exactly. The transformation “animal in extreme close-up view” requires DALL·E to recognize the breed of the animal in the photo, and render it up close with the appropriate details. This works less reliably, and for several of the photos, DALL·E only generates plausible completions in one or two instances.
Other transformations, such as “animal with sunglasses” and “animal wearing a bow tie,” require placing the accessory on the correct part of the animal’s body. Those that only change the color of the animal, such as “animal colored pink,” are less reliable, but show that DALL·E is sometimes capable of segmenting the animal from the background. Finally, the transformations “a sketch of the animal” and “a cell phone case with the animal” explore the use of this capability for illustrations and product design.
the exact same teapot on the top with ’gpt’ written on it on the bottom Text Prompt Image Prompt AI generated images We find that DALL·E is able to apply several different kinds of image transformations to photos of teapots, with varying degrees of reliability. Aside from being able to modify the color of the teapot (e.g., “colored blue”) or its pattern (e.g., “with stripes”), DALL·E can also render text (e.g., “with ‘gpt’ written on it”) and map the letters onto the curved surface of the teapot in a plausible way. With much less reliability, it can also draw the teapot in a smaller size (for the “tiny” option) and in a broken state (for the “broken” option).
We did not anticipate that this capability would emerge, and made no modifications to the neural network or training procedure to encourage it. Motivated by these results, we measure DALL·E’s aptitude for analogical reasoning problems by testing it on Raven’s progressive matrices, a visual IQ test that saw widespread use in the 20th century.
a sequence of geometric shapes.
Text Prompt Image Prompt AI generated images Rather than treating the IQ test as a multiple-choice problem, as originally intended, we ask DALL·E to complete the bottom-right corner of each image using argmax sampling, and consider its completion to be correct if it is a close visual match to the original.
DALL·E is often able to solve matrices that involve continuing simple patterns or basic geometric reasoning, such as those in sets B and C. It is sometimes able to solve matrices that involve recognizing permutations and applying boolean operations, such as those in set D. The instances in set E tend to be the most difficult, and DALL·E gets almost none of them correct.
For each of the sets, we measure DALL·E’s performance on both the original images, and the images with the colors inverted. The inversion of colors should pose no additional difficulty for a human, yet does generally impair DALL·E’s performance, suggesting its capabilities may be brittle in unexpected ways.
Geographic knowledge We find that DALL·E has learned about geographic facts, landmarks, and neighborhoods. Its knowledge of these concepts is surprisingly precise in some ways and flawed in others.
a photo of the food of china Text Prompt AI generated images We test DALL·E’s understanding of simple geographical facts, such as country flags, cuisines, and local wildlife. While DALL·E successfully answers many of these queries, such as those involving national flags, it often reflects superficial stereotypes for choices like “food” and “wildlife,” as opposed to representing the full diversity encountered in the real world.
a photo of alamo square, san francisco, from a street at night Text Prompt AI generated images We find that DALL·E is sometimes capable of rendering semblances of certain locations in San Francisco. For locations familiar to the authors, such as San Francisco, they evoke a sense of déjà vu—eerie simulacra of streets, sidewalks and cafes that remind us of very specific locations that do not exist.
a photo of san francisco’s golden gate bridge Text Prompt Image Prompt AI generated images We can also prompt DALL·E to draw famous landmarks. In fact, we can even dictate when the photo was taken by specifying the first few rows of the sky. When the sky is dark, for example, DALL·E recognizes it is night, and turns on the lights in the buildings.
Temporal knowledge In addition to exploring DALL·E’s knowledge of concepts that vary over space, we also explore its knowledge of concepts that vary over time.
a photo of a phone from the 20s Text Prompt Image Prompt AI generated images We find that DALL·E has learned about basic stereotypical trends in design and technology over the decades. Technological artifacts appear to go through periods of explosion of change, dramatically shifting for a decade or two, then changing more incrementally, becoming refined and streamlined.
Summary of approach and prior work DALL·E is a simple decoder-only transformer that receives both the text and the image as a single stream of 1280 tokens—256 for the text and 1024 for the image—and models all of them autoregressively. The attention mask at each of its 64 self-attention layers allows each image token to attend to all text tokens. DALL·E uses the standard causal mask for the text tokens, and sparse attention for the image tokens with either a row, column, or convolutional attention pattern, depending on the layer. We provide more details about the architecture and training procedure in our paper.
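As a rough illustration of the masking described above, the sketch below builds a dense causal mask over the combined 1280-token stream; because the 256 text tokens precede the 1024 image tokens, plain causal masking already lets every image token attend to every text token. This is a simplification: the row, column, and convolutional sparsity patterns applied to image-to-image attention in the actual model are omitted.

```python
import numpy as np

def causal_mask(n_text=256, n_image=1024):
    # Boolean attention mask over one stream of text tokens followed by
    # image tokens; entry [i, j] is True when token i may attend to
    # token j. Lower-triangular = standard causal masking, and since
    # text precedes image in the stream, every image token can attend
    # to every text token, as described above.
    n = n_text + n_image
    return np.tril(np.ones((n, n), dtype=bool))
```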
Text-to-image synthesis has been an active area of research since the pioneering work of Reed et al., [^reference-1] whose approach uses a GAN conditioned on text embeddings. The embeddings are produced by an encoder pretrained using a contrastive loss, not unlike CLIP. StackGAN [^reference-3] and StackGAN++ [^reference-4] use multi-scale GANs to scale up the image resolution and improve visual fidelity. AttnGAN [^reference-5] incorporates attention between the text and image features, and proposes a contrastive text-image feature matching loss as an auxiliary objective. This is interesting to compare to our reranking with CLIP, which is done offline. Other work [^reference-2] [^reference-6] [^reference-7] incorporates additional sources of supervision during training to improve image quality. Finally, work by Nguyen et al. [^reference-8] and Cho et al. [^reference-9] explores sampling-based strategies for image generation that leverage pretrained multimodal discriminative models.
Similar to the rejection sampling used in VQVAE-2, we use CLIP to rerank the top 32 of 512 samples for each caption in all of the interactive visuals. This procedure can also be seen as a kind of language-guided search [^reference-16], and can have a dramatic impact on sample quality.
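The reranking step can be sketched as a cosine-similarity top-k over embeddings. This is not OpenAI's actual pipeline code; the random vectors below merely stand in for CLIP image and text embeddings, assuming both live in a shared embedding space.

```python
import numpy as np

def rerank_by_cosine(image_feats, text_feat, k=32):
    # Normalize, score each candidate image against the caption by
    # cosine similarity, and return the indices of the k best samples.
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feat / np.linalg.norm(text_feat)
    scores = img @ txt
    return np.argsort(-scores)[:k], scores

# toy example: 512 candidate embeddings, keep the best 32
rng = np.random.default_rng(0)
feats = rng.normal(size=(512, 64))   # stand-ins for CLIP image embeddings
caption = rng.normal(size=64)        # stand-in for the CLIP text embedding
top, scores = rerank_by_cosine(feats, caption, k=32)
```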
an illustration of a baby daikon radish in a tutu walking a dog [caption 1, best 8 of 2048] Text Prompt AI generated images Reranking the samples from DALL·E using CLIP can dramatically improve consistency and quality of the samples.
Authors Primary Authors Aditya Ramesh Mikhail Pavlov Gabriel Goh Scott Gray Supporting Authors Mark Chen Rewon Child Vedant Misra Pamela Mishkin Gretchen Krueger Sandhini Agarwal Ilya Sutskever
"
|
13,560 | 2,023 |
"Why This Award-Winning Piece of AI Art Can’t Be Copyrighted | WIRED"
|
"https://www.wired.com/story/ai-art-copyright-matthew-allen"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Kate Knibbs Culture Why This Award-Winning Piece of AI Art Can’t Be Copyrighted This is Allen’s AI-generated artwork, which we can publish without asking him because, as the article notes, it’s not eligible for copyright protections.
Courtesy of United States Copyright Office An award-winning piece of AI art cannot be copyrighted, the US Copyright Office has ruled. The artwork, Théâtre D’opéra Spatial, was created by Matthew Allen and came first in last year's Colorado State Fair. Since then, the piece has been embroiled in a precedent-affirming copyright dispute. Now, the government agency has issued its third and final decision: Allen’s work is not eligible for copyright.
Now, Allen plans to file a lawsuit against the US federal government. “I’m going to fight this like hell,” he says.
The problem? Allen used the generative AI program Midjourney to create his entry, and copyright protections are not extended to artificial intelligence—not even the kind that wows art judges. “It’s in line with previous decisions that require human authors,” says Rebecca Tushnet, a Harvard Law School professor and leading copyright scholar.
It’s a precedent that goes back to 2018, when a photo taken by a macaque was declared public domain because monkeys can’t hold copyright.
PETA may beg to differ , but under the law, monkeys and machines have about the same claim on copyright protections right now. (And this isn’t just in the US. In nearly every country, copyright is pegged to human authorship.) Allen was dogged in his attempt to register his work. He sent a written explanation to the Copyright Office detailing how much he’d done to manipulate what Midjourney conjured, as well as how much he fiddled with the raw image, using Adobe Photoshop to fix flaws and Gigapixel AI to increase the size and resolution. He specified that creating the painting had required at least 624 text prompts and input revisions.
The Copyright Office agreed that the parts of the painting that Allen had altered with Adobe constituted original work. However, it maintained that other parts generated by AI could not be copyrighted. In other words: Allen could copyright parts of the painting, but not the whole thing. This July, Allen appealed once more, arguing that the office had ignored “the essential element of human creativity” needed to use Midjourney. He attempted to use the fair use doctrine to argue that his work should be registered, because it amounts to a transformative use of copyrighted material.
“The underlying AI generated work merely constitutes raw material which Mr. Allen has transformed through his artistic contributions,” Allen wrote.
The Copyright Office didn’t buy it. “The work cannot be registered,” it wrote in its final ruling on September 5.
Allen’s dashed efforts highlight a solidifying legal consensus. This August, a US federal judge dismissed a case brought by Missouri-based AI researcher Stephen Thaler, who has been on a mission to prove that the AI system he invented deserves copyright protections. “Plaintiff can point to no case in which a court has recognized copyright in a work originating with a nonhuman,” wrote Judge Beryl Howell of the US District Court for the District of Columbia in her decision.
Thaler is currently appealing the verdict. Ryan Abbott, his attorney, does not believe that the Copyright Office’s decision on Allen will have an impact on his client’s appeal. But he does see it as having a chilling effect on the wider world of AI-assisted art. “I think it will be a major disincentive to people developing and using AI to make art,” Abbott says.
On this point, Allen (as one might predict) agrees wholeheartedly. “This is the definition of stifling innovation and creativity, the very thing the copyright office claims to protect,” he says.
The Allen ruling may certainly nudge artists to increase the amount of work they put into art produced using AI tools. “Tweaks by a human, if they actually have an aesthetic impact, will likely add enough human authorship to get a copyright on the work as a whole,” Tushnet says.
Allen’s rejected bid demonstrates that it’s entirely unclear how many tweaks constitute human authorship. We know 624 adjustments did not make the work copyrightable—so what about 625? 626? Matthew Sag, a professor of law and artificial intelligence at Emory University, in Atlanta, Georgia, believes that artists who give instructions to AI that are detailed enough should be granted copyright protections in certain cases. He also argues that the Copyright Office should be more open to this type of application in the future—but he isn’t sure where the line is right now. “Applicants will need to do more than show that they pulled a lever on a slot machine 600 times until they got a result they were happy with,” Sag says. Instead, he thinks they may have to detail exactly how they used AI tools to enact their original artistic vision.
Understanding where this line is will be important within many creative fields moving forward, particularly those that are embracing AI tools. In Hollywood, striking writers and actors are petitioning for labor safeguards against AI. Could, then, this ruling offer reassurances that AI work would be harder to monetize? Tushnet isn’t convinced this decision will deter companies looking to lower labor costs by using AI. “They will plan to have humans participate just enough to cross the threshold of copyrightability,” she says.
Meanwhile, Allen is gearing up for the next round of his fight. He intends to file his federal lawsuit within the next six months. “I’m certain we will win in the end,” he says. “The Copyright Office is wanting to play it safe and make the courts decide. They’re scaredy cats.” You Might Also Like … 📧 Find the best bargains on quality gear with our Deals newsletter “ Someone is using photos of me to talk to men” First-gen social media users have nowhere to go The truth behind the biggest (and dumbest) battery myths We asked a Savile Row tailor to test all the “best” T-shirts you see in social media ads My kid wants to be an influencer.
Is that bad? 🌞 See if you take a shine to our picks for the best sunglasses and sun protection Senior Writer X Topics Copyright artificial intelligence art Kate Knibbs Angela Watercutter Alex Winter Angela Watercutter Angela Watercutter Angela Watercutter Angela Watercutter Amit Katwala Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
"
|
13,561 | 2,023 |
"Make ChatGPT Work for You With These Browser Extensions | WIRED"
|
"https://www.wired.com/story/chatgpt-browser-extensions"
|
"David Nield Gear Make ChatGPT Work for You With These Browser Extensions Illustration: Kodochigov/Getty Images It's hard to escape ChatGPT right now, with the conversational artificial intelligence bot taking on everything from web search to essay writing.
More than 100 million people have tried out ChatGPT since it launched.
Whether you're using ChatGPT for free or paying for ChatGPT Plus ($20 a month), the impressiveness of its text-generating capabilities isn't really matched by its interface, which sticks largely to the basics.
That's where third-party browser extensions come in: They can help you get easier access to ChatGPT from other websites, add missing features such as a chat exporter, and suggest better prompts for getting better answers out of the AI bot.
These extensions are all compatible with the latest versions of Google Chrome and Microsoft Edge, except where stated.
Merlin sits in your browser and puts ChatGPT right at your fingertips, ready to spring into action at a moment's notice: Whether you need to compose a reply to a social media post or summarize a long block of text, Merlin can help you out.
The Ctrl+M (Windows) or Cmd+M (macOS) keyboard shortcut is how Merlin is activated once you've selected a block of text. You then need to tell Merlin what you want it to get ChatGPT to do: summarize, reply, write, or whatever it is.
For example, you can highlight all the text in an email and get Merlin/ChatGPT to compose a response. Having the chatbot right inside your browser makes it really convenient to use, though you are limited to 31 requests per day.
You don't have to wait for Google's own version of ChatGPT to arrive, because ChatGPT for Google shows results from the chatbot alongside the standard results from Google whenever you run a search.
Clearly this is going to work better for some queries than others: ChatGPT can't tell you the latest sports scores, but it can explain what DNA is. Make sure that you're currently signed into your ChatGPT account in your browser to get the responses to load.
The ChatGPT box is neatly integrated alongside the standard Google search results, and you can launch a separate ChatGPT conversation right from it. You can also have the add-on run manually rather than automatically, if you don't want it on every search.
Get ChatGPT right next to your Google results with ChatGPT for Google.
Google via David Nield Enhanced ChatGPT does exactly what its name suggests: It adds some handy new features to the basic ChatGPT interface, including an option to export your chats in Markdown format and a selection of tools to help you with your prompts.
Some of those prompt tools include triggers for finding images on the web, getting travel advice, generating translations, and programming in JavaScript. They'll pop up immediately once you've installed the extension.
Look for the new Export Chat option down in the lower left corner when you've set up the add-on. You don't get anything in the way of export options, but it can be helpful to have a copy of your chats to refer to.
If you're planning to do any writing using ChatGPT on the web, from emails to social media posts, then WritingMate can help. You can launch it via a Ctrl+M (Windows) or Cmd+M (macOS) keyboard shortcut, or use the floating icon that appears on the right of the browser window.
You're able to use ChatGPT in any way you like, right in the webpages you're looking at, and there are some helpful prompts included with the extension too. ChatGPT responses can be swiftly tweaked and refined as well.
If you need more than 10 messages per day or longer messages, you can sign up for a $15-per-month premium account. The access you get to the tool for free should tell you whether or not you're going to find it useful.
There's no doubt that ChatGPT is an incredible bit of technology, but it also has its limitations: It doesn't have access to the most up-to-date information, and can't search the web as such.
WebChatGPT (for Chrome, Edge, and Firefox) fixes that, dropping in traditional web search results alongside AI responses in the ChatGPT interface. It also tries to integrate both types of responses to give you something coherent and useful.
This approach works better for some queries than others: Try prompts such as "Who is the President?" and "give me the latest Wired headlines" to see the difference that the extension makes to ChatGPT.
Use WebChatGPT to bring the latest web search results into ChatGPT.
ChatGPT via David Nield For some of us, it's much easier to talk than to type, and Promptheus enables you to talk to ChatGPT just as you would with a real person. It may give you a whole new perspective on interacting with AI.
You use Promptheus from inside the ChatGPT interface, and you simply press and hold the space bar when you're ready to talk. There are a few extension settings you can play around with too, such as the option to send prompts automatically once you stop talking.
As an added bonus, the extension also gives you easy access to ChatGPT from any webpage—just click on the add-on icon in your browser toolbar to start interacting with the AI bot.
One of the most important features that ChatGPT is missing is the ability to get content out of the ChatGPT interface into other places—you're pretty much limited to the good old copy-paste method.
ChatGPT Export and Share (which works in Chrome, Edge, and Firefox) is here to solve the problem. It takes a little bit more time in terms of setting it up in your browser, but full instructions are provided, and the end results are worth it.
Once the extension is installed, you'll see new export buttons down below the prompt box. You can save your chats as images and PDFs, and there's also the option to create shareable links to them.
"
|
13,562 | 2,023 |
"How ChatGPT—and Bots Like It—Can Spread Malware | WIRED"
|
"https://www.wired.com/story/chatgpt-ai-bots-spread-malware"
|
"David Nield Security How ChatGPT—and Bots Like It—Can Spread Malware Photograph: AerialPerspective Images/Getty Images The AI landscape has started to move very, very fast: consumer-facing tools such as Midjourney and ChatGPT are now able to produce incredible image and text results in seconds based on natural language prompts, and we're seeing them get deployed everywhere from web search to children's books.
However, these AI applications are being turned to more nefarious uses, including spreading malware.
Take the traditional scam email, for example: It's usually littered with obvious mistakes in its grammar and spelling—mistakes that the latest group of AI models don't make, as noted in a recent advisory report from Europol.
Think about it: A lot of phishing attacks and other security threats rely on social engineering, duping users into revealing passwords, financial information, or other sensitive data. The persuasive, authentic-sounding text required for these scams can now be pumped out quite easily, with no human effort required, and endlessly tweaked and refined for specific audiences.
In the case of ChatGPT, it's important to note first that developer OpenAI has built safeguards into it. Ask it to "write malware" or a "phishing email" and it will tell you that it's "programmed to follow strict ethical guidelines that prohibit me from engaging in any malicious activities, including writing or assisting with the creation of malware." ChatGPT won't code malware for you, but it's polite about it.
OpenAI via David Nield However, these protections aren't too difficult to get around: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn't know it's writing malware, it can be prompted into producing something like it.
There are already signs that cybercriminals are working to get around the safety measures that have been put in place.
We're not particularly picking on ChatGPT here, but pointing out what's possible once large language models (LLMs) like it are used for more sinister purposes. Indeed, it's not too difficult to imagine criminal organizations developing their own LLMs and similar tools in order to make their scams sound more convincing.
And it's not just text either: Audio and video are more difficult to fake, but it's happening as well.
When it comes to your boss asking for a report urgently, or company tech support telling you to install a security patch, or your bank informing you there's a problem you need to respond to—all these potential scams rely on building up trust and sounding genuine, and that's something AI bots are doing very well at.
They can produce text, audio, and video that sounds natural and tailored to specific audiences, and they can do it quickly and constantly on demand.
So is there any hope for us mere humans in the wave of these AI-powered threats? Is the only option to give up and accept our fate? Not quite. There are still ways you can minimize your chances of getting scammed by the latest technology, and they aren't so different from the precautions you should already be thinking about.
There are two types of AI-related security threats to think about. The first involves tools such as ChatGPT or Midjourney being used to get you to install something you shouldn't, like a browser plugin. You could be tricked into paying for a service when you don't need to, perhaps, or using a tool that looks official but isn't.
To avoid falling into these traps, make sure you're up to date with what's happening with AI services like the ones we've mentioned, and always go to the original source first. In the case of ChatGPT for example, there's no officially approved mobile app, and the tool is web-only. The standard rules apply when working with these apps and their spinoffs: Check their history, the reviews associated with them, and the companies behind them, just as you would when installing any new piece of software.
The second type of threat is potentially more dangerous: AI that’s used to create text, audio, or video that sounds convincingly real. The output might even be used to mimic someone you know—like the case of the voice recording purportedly from a chief executive asking for an urgent release of funds, which duped a company employee.
While the technology may have evolved, the same techniques are still being used to try and get you to do something urgently that feels slightly (or very) unusual. Take your time, double-check wherever possible using different methods (a phone call to check an email or vice versa), and watch out for red flags—a time limit on what you're being asked to do, or a task that's out of the ordinary.
As always, keep your software and systems up to date.
Microsoft via David Nield Following links you're not expecting from texts and emails is usually not a good idea, especially when you're being asked to log in somewhere. If your bank has apparently got in touch with a message, for example, go to the bank website directly in your browser to log in, rather than following any embedded link.
Keeping your operating systems, apps, and browsers up to date is a must (and this mostly happens automatically now, so there's no excuse). The most recent browsers will protect you against a whole host of phishing and scam attacks, whether the prompt designed to dupe you has been generated by AI or not.
There's no foolproof tool for detecting the presence of AI text, audio, or video at the moment, but there are certain signs to look out for: Think blurring and inconsistencies in pictures, or text that sounds generic and vague. While scammers may have scraped details about your life or your workplace from somewhere, it's unlikely that they know all the ins and outs of your operations.
In short, be cautious and question everything—that was true before the dawn of these new AI services, and it's true now. Like the face-morphing masks of the Mission: Impossible film series (which remain science fiction for now), you need to be absolutely sure that you're dealing with who you think you're dealing with before revealing anything.
"
|
13,563 | 2,023 |
"Workers Are Worried About Their Bosses Embracing AI | WIRED"
|
"https://www.wired.com/story/fast-forward-workers-are-worried-about-their-bosses-embracing-ai"
|
"Will Knight Business Workers Are Worried About Their Bosses Embracing AI Photograph: Westend61/Getty Images The Pew Research Center, a nonpartisan think tank that tracks public opinion, released a report today on how workers feel about AI.
The technology has become an increasingly common workplace fixture over the past few years. And its role is likely to grow as AI becomes more capable, thanks to advances such as the large language models, like GPT-4 , that gave us ChatGPT and a growing number of other tools.
While there’s no shortage of reports about people’s attitudes toward AI, Pew’s data is sizable and relatively fresh, drawing from 11,004 US adults who were consulted between December 12 and 18 of last year—just as ChatGPT mania was taking hold after its release at the end of November.
The report suggests that most workers expect AI to transform hiring, firing, and evaluations. Many people report feeling uncertain about what those changes might look like, and concerned about AI’s potential impacts.
Some 68 percent of those polled said they expect AI to have a major impact on jobholders over the next 20 years. Curiously though, only 28 percent said they thought AI would affect them personally, while 38 percent were unsure what the outcome might be for their own work.
Those responses reflect the fact that nobody really knows how AI will change jobs and work in the coming years. The technology is evolving quickly, and its impact often differs greatly between industries and even roles.
We can, however, expect existing uses of the technology to expand and become more sophisticated. Some employers already use AI to help screen job applicants, while enterprising job hunters seek to outwit the algorithms with clever tricks. In theory, AI technology has the potential to make hiring fairer and increase workplace diversity.
But in practice it has sometimes done the opposite, leading the US government to warn employers about the potential for algorithms to discriminate against people with disabilities.
The Pew survey reflects this contradictory picture, with 47 percent of people saying they think AI would do a better job than a human in hiring but 41 percent opposing the use of AI in hiring.
Workplace surveillance is an area of general concern, with 81 percent of those polled saying that more use of AI will lead to workers feeling inappropriately watched.
Courtesy of Pew Research Center
As WIRED has previously reported, many workers now have to work under the supervision of AI-powered surveillance systems that monitor productivity and flag misbehavior. Some truck drivers and warehouse staff are monitored by cameras and algorithms, and a rising number of office workers have their keystrokes and mouse clicks logged and analyzed.
This kind of data collection is generally intended to improve productivity, but it can be dehumanizing and demoralizing and push employees into working too hard or taking unnecessary risks, like driving trucks too fast.
Fear and doubt about AI’s role in workplaces is understandable because we are at something of a crossroads. Some economists argue that the designers of AI systems need to think about how their products will affect workers and aim to enhance their output rather than replace them. Similarly, employers can do more to ensure that the tools they use to track workers do not cause undue harm.
So far, the companies developing these AI tools and the employers that use them have not risen to the moment, often placing workers at the mercy of systems whose benefits are in question. It’s time to stop and think about how workplace AI is used, a need that will only grow more pressing as the technology advances.
"
|
13,564 | 2,023 |
"Google Will Soon Show You AI-Generated Ads | WIRED"
|
"https://www.wired.com/story/google-chatgpt-ai-generated-ads"
|
"Paresh Dave Business Google Will Soon Show You AI-Generated Ads
Illustration: Andriy Onufriyenko/Getty Images
Google has spent the past few weeks promoting generative AI tools that can summarize search results for users, help them draft essays, and swap out overcast skies for sunshine in otherwise perfect family photos. Today it’s showing off what similar tools could do for its core business—selling ads.
New generative AI systems for advertising clients will compose text on the fly to play off what a person is searching for, and they'll whip up product images to save them time and money on design work. The features add to the swelling ranks of AI-based text and image generators that have been introduced to online services over the past few months, since the abilities of ChatGPT and its image counterpart DALL-E inspired global excitement about generative AI.
As the world’s top seller of online ads by revenue, Google has been using AI programs for years to help clients target users and design ads, for example by automatically resizing images. Now, with more powerful AI models capable of tasks like generating photo-realistic images, it hopes to show that its ad business, which accounts for 80 percent of its total sales, can be more compelling to advertisers too.
The recent onslaught of AI-related announcements by Google has rallied shares of its parent company, Alphabet, suggesting that fears have diminished about the advent of ChatGPT-style web search crippling Google’s search and ad businesses.
Google is offering the new features to advertisers for free, but they could increase its revenue if AI-generated text and images encourage businesses to place more ads, or can draw more clicks from consumers. Google’s dominant role in online ad sales means the industry could be one of the first to broadly incorporate generative AI into their workflows. “We're able to deliver more relevant, beautiful ads to users, offer more creative freedom for advertisers, and deliver better performance,” says Jerry Dischler, the vice president overseeing Google Ads. He declined to discuss specific financial prospects for generative AI in ads.
As anyone who has experimented with an AI chatbot or image generator knows, their output can be unpredictable and even distasteful.
And they have raised public concern over whether their development benefited from copyright infringement.
Dischler says the company will be “diligent” in monitoring the quality of images and text generated by the new features, some of which are available to advertisers in beta form already. Google is launching some of them more broadly than its top rival, Meta, which announced earlier this month that it was initially inviting select advertisers to try out its own generative AI features.
Offering generative AI in ads is likely expensive, because the computing costs of operating text- and image-generating models are sky-high.
At a conference last week , Meta AI executive Aparna Ramani said generating an output from those kinds of models is 1,000 times more expensive than using AI to recommend content and curate users’ News Feeds.
One of Google’s new features out now adapts the text of English-language search ads based on what a person typed into the company’s search box and Google’s data on the advertiser. Previously, each time a person searched, algorithms would have to select text to display from a collection an advertiser had manually written in advance.
With the text generation option, a search for “skin care for dry sensitive skin” could trigger an ad for skin cream with the auto-generated text “Soothe your dry, sensitive skin,” Dischler says. That may not seem revolutionary, but making ads more closely match searches could increase the chances of users clicking.
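The before-and-after mechanism described here can be made concrete with a minimal sketch. Everything below is hypothetical (the function names, the ad variants, and the trivial string template standing in for a real language model); it only illustrates the contrast between selecting from pre-written copy and generating copy conditioned on the query:

```python
# Hypothetical sketch: selecting ad text from pre-written variants (old approach)
# versus composing copy from the search query itself (new approach).
# None of these names or strings come from Google's actual systems.

PREWRITTEN_VARIANTS = [
    "Shop our skin care range",
    "Moisturizers for every skin type",
]

def select_prewritten(query: str) -> str:
    """Old approach: pick the pre-written variant sharing the most words with the query."""
    def overlap(ad: str) -> int:
        return len(set(query.lower().split()) & set(ad.lower().split()))
    return max(PREWRITTEN_VARIANTS, key=overlap)

def generate_for_query(query: str) -> str:
    """New approach (stand-in): compose copy from the query.
    A real system would call a large language model here."""
    return f"Soothe your {query.removeprefix('skin care for ').strip()}"

query = "skin care for dry sensitive skin"
print(select_prewritten(query))   # one of the fixed variants
print(generate_for_query(query))  # Soothe your dry sensitive skin
```

The point of the contrast: the first function can only ever return one of the strings the advertiser wrote in advance, while the second produces text that mirrors the user's own words.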
Google is also using its text-generation technology to offer a chatbot that ad customers can use to get suggestions for search keywords worth advertising against and text to go in those ads. “We would love to be able to offer personal support to millions of advertisers, large and small, but we think this is the next best thing,” Dischler says.
The Help Me Create My Ad prompt draws on Google’s data on past campaigns and analysis of a customer’s website and will be available to select US advertisers in English in July. Its design is similar to the company’s Help Me Write feature being tested in Gmail and Google Docs, which WIRED’s review found to be a good creative aid but also sometimes a stuffy writer that perpetuates stereotypes.
Performance Max. Courtesy of Google
While the new ad-text capabilities mostly improve upon existing features, a new generative image tool offers ad buyers a more substantial upgrade to Google’s offerings. It crafts images that can be inserted into what Google calls Performance Max ads, which appear on Google apps and websites selected by Google’s algorithms. A screengrab of the tool provided by Google shows four photo-real images created in response to the prompt, "Ingredients used for pet food, like fish, chicken, raw meat and vegetables on a stylish slab-style plate, in studio lighting.” Testing in English will begin later this year, Dischler says.
Image generators can create new images by remixing patterns learned from huge collections of photos scraped from the web and other sources. Although tech companies have generally argued that using images that way does not breach copyright, recent lawsuits have questioned the practice.
Dischler insists the new feature should not burden advertisers with any legal liability, but he describes navigating licensing in generative AI as an ongoing conversation. “Everything from our end has authorized use,” he says. Google allows intellectual property owners who suspect unauthorized use to file claims, and it takes down ads it finds in violation.
Image generators are also known to reproduce biases in their training data, with one result being that systems trained on web images that show more people of lighter skin tone generate images with the same tendency.
A Google screenshot shows four generated images of cats—all orange or white—purportedly created by remixing the advertiser’s own content.
Dischler says that Google has designed its system to deliberately emphasize representation and inclusivity so that generated images reflect demographic diversity. “Ultimately, it’s up to the advertiser to approve new creative assets. It has to be policy compliant, and it has to be relevant to the user,” he says.
Google is restricting what advertisers can ask of the image generator, in line with the company’s AI principles, to avoid unjustly harming anyone, so they can’t run loose with it and try to unleash totally inauthentic or inappropriate ads. Dischler says the tool aims to allow smaller advertisers to produce more relevant ads at lower cost, and bigger advertisers to get bolder and more detailed images.
Product Studio. Courtesy of Google
A second image-based AI tool will allow companies advertising or listing products that appear in search results or other Google services to use typed prompts to generate new background scenes for product images, remove existing backgrounds, and increase their resolution. Called Product Studio, the tool will launch in the US later this year.
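As a rough sketch of the three operations just described (background generation from a typed prompt, background removal, and upscaling), request-building logic might look something like the following. The field names, operation strings, and function are invented for illustration and are not Google's actual Product Studio API:

```python
# Hypothetical request builder for the three image edits described above.
# Field names and operation strings are invented, not a real Google API.

ALLOWED_OPERATIONS = {"generate_background", "remove_background", "upscale"}

def build_edit_request(image_id: str, operation: str, prompt: str = "") -> dict:
    """Assemble a request dict for one hypothetical image edit."""
    if operation not in ALLOWED_OPERATIONS:
        raise ValueError(f"unknown operation: {operation}")
    # Only background generation needs a text prompt to describe the scene.
    if operation == "generate_background" and not prompt:
        raise ValueError("generate_background requires a text prompt")
    request = {"image_id": image_id, "operation": operation}
    if prompt:
        request["prompt"] = prompt
    return request

req = build_edit_request(
    "sku-123.png",
    "generate_background",
    "product on a marble countertop, soft morning light",
)
print(req["operation"])  # generate_background
```

The sketch just captures the shape of the feature as reported: one image, one of three operations, and a free-text scene description when a new background is being generated.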
Google is also adding product ads to the new AI-generated search results that it began testing this month.
An example provided by the company shows a list of AI-generated recommendations for “hiking backpacks for kids,” with the first option tagged with a “sponsored” label.
How well those ads sell could hinge on whether search users trust AI-written results, such as the shopping recommendations, and click through to spend money. That could require Google’s AI to improve its selections. In a demo at Google’s I/O conference earlier this month, its AI search recommended an electric bike that WIRED reviewer Adrienne So rated poorly.
She says the kids’ backpacks that Google’s algorithms suggested are also odd choices when better options exist. Google spokesperson Lara Levin says suggestions are based on the world's most comprehensive data on products, including reviews of them, and that even traditional search results may not align with a particular review.
Search Generative Experience. Courtesy of Google
Consumers are likely to see a growing number of AI-generated ads over coming months—whether they realize it or not.
Meta’s competing generative AI tools for ads announced this month can suggest variations of ad text, generate backgrounds for product images based on typed prompts, and auto-adjust the sizing of ads to fit on different apps.
Smaller players in digital advertising will almost certainly follow suit with similar offerings. Amazon is hiring for a team to work on advertiser-focused generative AI projects, according to job postings first reported by news outlet the Information.
Like Google, Microsoft and Snap have both invested heavily in generative AI tools for consumers, such as chatbots, and they may see providing the technology to their ad customers as a natural next step.
"
|
13,565 | 2,023 |
"Terms of use"
|
"https://openai.com/policies/terms-of-use"
|
"Terms of use
Updated November 14, 2023 Effective: December 14, 2023 ( previous version )
Thank you for using OpenAI! These Terms of Use apply to your use of ChatGPT, DALL·E, and OpenAI’s other services for individuals, along with any associated software applications and websites (all together, “Services”). These Terms form an agreement between you and OpenAI, L.L.C., a Delaware company, and they include our Service Terms and important provisions for resolving disputes through arbitration. By using our Services, you agree to these Terms.
If you reside in the European Economic Area, Switzerland, or the UK, your use of the Services is governed by these terms.
Our Business Terms govern use of ChatGPT Enterprise, our APIs, and our other services for businesses and developers.
Our Privacy Policy explains how we collect and use personal information. Although it does not form part of these Terms, it is an important document that you should read.
Who we are OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity. For more information about OpenAI, please visit https://openai.com/about.
Registration and Access Minimum Age.
You must be at least 13 years old or the minimum age required in your country to consent to use the Services. If you are under 18 you must have your parent or legal guardian’s permission to use the Services.
Registration.
You must provide accurate and complete information to register for an account to use our Services. You may not share your account credentials or make your account available to anyone else and are responsible for all activities that occur under your account. If you create an account or use the Services on behalf of another person or entity, you must have the authority to accept these Terms on their behalf.
Using Our Services What You Can Do.
Subject to your compliance with these Terms, you may access and use our Services. In using our Services, you must comply with all applicable laws as well as our Sharing & Publication Policy , Usage Policies , and any other documentation, guidelines, or policies we make available to you.
What You Cannot Do.
You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not: Use our Services in a way that infringes, misappropriates or violates anyone’s rights.
Modify, copy, lease, sell or distribute any of our Services.
Attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law).
Automatically or programmatically extract data or Output (defined below).
Represent that Output was human-generated when it was not.
Interfere with or disrupt our Services, including circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services.
Use Output to develop models that compete with OpenAI.
Software.
Our Services may allow you to download software, such as mobile applications, which may update automatically to ensure you’re using the latest version. Our software may include open source software that is governed by its own licenses that we’ve made available to you.
Corporate Domains.
If you create an account using an email address owned by an organization (for example, your employer), that account may be added to the organization's business account with us, in which case we will provide notice to you so that you can help facilitate the transfer of your account (unless your organization has already provided notice to you that it may monitor and control your account). Once your account is transferred, the organization’s administrator will be able to control your account, including being able to access Content (defined below) and restrict or remove your access to the account.
Third Party Services.
Our Services may include third party software, products, or services (“Third Party Services”), and some parts of our Services, like our browse feature, may include output from those services (“Third Party Output”). Third Party Services and Third Party Output are subject to their own terms, and we are not responsible for them.
Feedback.
We appreciate your feedback, and you agree that we may use it without restriction or compensation to you.
Content Your Content.
You may provide input to the Services (“Input”), and receive output from the Services based on the Input (“Output”). Input and Output are collectively “Content.” You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services.
Ownership of Content.
As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.
Similarity of Content.
Due to the nature of our Services and artificial intelligence generally, output may not be unique and other users may receive similar output from our Services. Our assignment above does not extend to other users’ output or any Third Party Output.
Our Use of Content.
We may use Content to provide, maintain, develop, and improve our Services, comply with applicable law, enforce our terms and policies, and keep our Services safe.
Opt Out.
If you do not want us to use your Content to train our models, you can opt out by following the instructions in this Help Center article.
Please note that in some cases this may limit the ability of our Services to better address your specific use case.
Accuracy.
Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe, and beneficial. Given the probabilistic nature of machine learning, use of our Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts.
When you use our Services you understand and agree: Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.
You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services.
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.
Our IP Rights We and our affiliates own all rights, title, and interest in and to the Services. You may only use our name and logo in accordance with our Brand Guidelines.
Paid Accounts Billing.
If you purchase any Services, you will provide complete and accurate billing information, including a valid payment method. For paid subscriptions, we will automatically charge your payment method on each agreed-upon periodic renewal until you cancel. You’re responsible for all applicable taxes, and we’ll charge tax when required. If your payment cannot be completed, we may downgrade your account or suspend your access to our Services until payment is received.
Service Credits.
You can pay for some Services in advance by purchasing service credits. All service credits are subject to our Service Credit Terms.
Cancellation.
You can cancel your paid subscription at any time. Payments are non-refundable, except where required by law. These Terms do not override any mandatory local laws regarding your cancellation rights.
Changes.
We may change our prices from time to time. If we increase our subscription prices, we will give you at least 30 days’ notice and any price increase will take effect on your next renewal so that you can cancel if you do not agree to the price increase.
Termination and Suspension Termination.
You are free to stop using our Services at any time. We reserve the right to suspend or terminate your access to our Services or delete your account if we determine: You breached these Terms or our Usage Policies.
We must do so to comply with the law.
Your use of our Services could cause risk or harm to OpenAI, our users, or anyone else.
We also may terminate your account if it has been inactive for over a year and you do not have a paid account. If we do, we will provide you with advance notice.
Appeals.
If you believe we have suspended or terminated your account in error, you can file an appeal with us by contacting our Support team.
Discontinuation of Services We may decide to discontinue our Services, but if we do, we will give you advance notice and a refund for any prepaid, unused Services.
Disclaimer of Warranties OUR SERVICES ARE PROVIDED “AS IS.” EXCEPT TO THE EXTENT PROHIBITED BY LAW, WE AND OUR AFFILIATES AND LICENSORS MAKE NO WARRANTIES (EXPRESS, IMPLIED, STATUTORY OR OTHERWISE) WITH RESPECT TO THE SERVICES, AND DISCLAIM ALL WARRANTIES INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, NON-INFRINGEMENT, AND QUIET ENJOYMENT, AND ANY WARRANTIES ARISING OUT OF ANY COURSE OF DEALING OR TRADE USAGE. WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE, OR THAT ANY CONTENT WILL BE SECURE OR NOT LOST OR ALTERED.
YOU ACCEPT AND AGREE THAT ANY USE OF OUTPUTS FROM OUR SERVICE IS AT YOUR SOLE RISK AND YOU WILL NOT RELY ON OUTPUT AS A SOLE SOURCE OF TRUTH OR FACTUAL INFORMATION, OR AS A SUBSTITUTE FOR PROFESSIONAL ADVICE.
Limitation of Liability NEITHER WE NOR ANY OF OUR AFFILIATES OR LICENSORS WILL BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR EXEMPLARY DAMAGES, INCLUDING DAMAGES FOR LOSS OF PROFITS, GOODWILL, USE, OR DATA OR OTHER LOSSES, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. OUR AGGREGATE LIABILITY UNDER THESE TERMS WILL NOT EXCEED THE GREATER OF THE AMOUNT YOU PAID FOR THE SERVICE THAT GAVE RISE TO THE CLAIM DURING THE 12 MONTHS BEFORE THE LIABILITY AROSE OR ONE HUNDRED DOLLARS ($100). THE LIMITATIONS IN THIS SECTION APPLY ONLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW.
Some countries and states do not allow the disclaimer of certain warranties or the limitation of certain damages, so some or all of the terms above may not apply to you, and you may have additional rights. In that case, these Terms only limit our responsibilities to the maximum extent permissible in your country of residence.
OPENAI’S AFFILIATES, SUPPLIERS, LICENSORS, AND DISTRIBUTORS ARE INTENDED THIRD PARTY BENEFICIARIES OF THIS SECTION.
Indemnity If you are a business or organization, to the extent permitted by law, you will indemnify and hold harmless us, our affiliates, and our personnel, from and against any costs, losses, liabilities, and expenses (including attorneys’ fees) from third party claims arising out of or relating to your use of the Services and Content or any violation of these Terms.
Dispute Resolution YOU AND OPENAI AGREE TO THE FOLLOWING MANDATORY ARBITRATION AND CLASS ACTION WAIVER PROVISIONS: MANDATORY ARBITRATION.
You and OpenAI agree to resolve any claims arising out of or relating to these Terms or our Services, regardless of when the claim arose, even if it was before these Terms existed (a “Dispute”), through final and binding arbitration. You may opt out of arbitration within 30 days of account creation or of any updates to these arbitration terms within 30 days after the update has taken effect by filling out this form.
If you opt out of an update, the last set of agreed upon arbitration terms will apply.
Informal Dispute Resolution.
We would like to understand and try to address your concerns prior to formal legal action. Before either of us files a claim against the other, we both agree to try to resolve the Dispute informally. You agree to do so by sending us notice through this form.
We will do so by sending you notice to the email address associated with your account. If we are unable to resolve a Dispute within 60 days, either of us has the right to initiate arbitration. We also both agree to attend an individual settlement conference if either party requests one during this time. Any statute of limitations will be tolled during this informal resolution process.
Arbitration Forum.
If we are unable to resolve the Dispute, either of us may commence arbitration with National Arbitration and Mediation (“NAM”) under its Comprehensive Dispute Resolution Rules and Procedures and/or Supplemental Rules for Mass Arbitration Filings, as applicable (available here ). OpenAI will not seek attorneys’ fees and costs in arbitration unless the arbitrator determines that your claim is frivolous. The activities described in these Terms involve interstate commerce and the Federal Arbitration Act will govern the interpretation and enforcement of these arbitration terms and any arbitration.
Arbitration Procedures.
The arbitration will be conducted by videoconference if possible, but if the arbitrator determines a hearing should be conducted in person, the location will be mutually agreed upon, in the county where you reside, or as determined by the arbitrator, unless the batch arbitration process applies. The arbitration will be conducted by a sole arbitrator. The arbitrator will be either a retired judge or an attorney licensed to practice law in the state of California. The arbitrator will have exclusive authority to resolve any Dispute, except the state or federal courts of San Francisco, California have the authority to determine any Dispute about enforceability, validity of the class action waiver, or requests for public injunctive relief, as set out below. Any settlement offer amounts will not be disclosed to the arbitrator by either party until after the arbitrator determines the final award, if any. The arbitrator has the authority to grant motions dispositive of all or part of any Dispute.
Exceptions.
This section does not require informal dispute resolution or arbitration of the following claims: (i) individual claims brought in small claims court; and (ii) injunctive or other equitable relief to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation.
CLASS AND JURY TRIAL WAIVERS.
You and OpenAI agree that Disputes must be brought on an individual basis only, and may not be brought as a plaintiff or class member in any purported class, consolidated, or representative proceeding. Class arbitrations, class actions, and representative actions are prohibited. Only individual relief is available. The parties agree to sever and litigate in court any request for public injunctive relief after completing arbitration for the underlying claim and all other claims. This does not prevent either party from participating in a class-wide settlement. You and OpenAI knowingly and irrevocably waive any right to trial by jury in any action, proceeding, or counterclaim.
Batch Arbitration.
If 25 or more claimants represented by the same or similar counsel file demands for arbitration raising substantially similar Disputes within 90 days of each other, then you and OpenAI agree that NAM will administer them in batches of up to 50 claimants each (“Batch”), unless there are fewer than 50 claimants in total or remaining after batching, in which case they will comprise a single Batch. NAM will administer each Batch as a single consolidated arbitration with one arbitrator, one set of arbitration fees, and one hearing held by videoconference or in a location decided by the arbitrator for each Batch. If any part of this section is found to be invalid or unenforceable as to a particular claimant or Batch, it will be severed and arbitrated in individual proceedings.
Severability.
If any part of these arbitration terms is found to be illegal or unenforceable, the remainder will remain in effect, except that if a finding of partial illegality or unenforceability would allow class arbitration, class action, or representative action, this entire dispute resolution section will be unenforceable in its entirety.
Copyright Complaints
If you believe that your intellectual property rights have been infringed, please send notice to the address below or fill out this form.
We may delete or disable content that we believe violates these Terms or is alleged to be infringing and will terminate accounts of repeat infringers where appropriate.
Written claims concerning copyright infringement must include the following information:
A physical or electronic signature of the person authorized to act on behalf of the owner of the copyright interest
A description of the copyrighted work that you claim has been infringed upon
A description of where the allegedly infringing material is located on our site so we can find it
Your address, telephone number, and e-mail address
A statement by you that you have a good-faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law
A statement by you that the above information in your notice is accurate and, under penalty of perjury, that you are the copyright owner or authorized to act on the copyright owner’s behalf
General Terms
Assignment.
You may not assign or transfer any rights or obligations under these Terms and any attempt to do so will be void. We may assign our rights or obligations under these Terms to any affiliate, subsidiary, or successor in interest of any business associated with our Services.
Changes to These Terms or Our Services.
We are continuously working to develop and improve our Services. We may update these Terms or our Services accordingly from time to time. For example, we may make changes to these Terms or the Services due to:
Changes to the law or regulatory requirements.
Security or safety reasons.
Circumstances beyond our reasonable control.
Changes we make in the usual course of developing our Services.
To adapt to new technologies.
We will give you at least 30 days’ advance notice of changes to these Terms that materially adversely impact you, either via email or an in-product notification. All other changes will be effective as soon as we post them to our website. If you do not agree to the changes, you must stop using our Services.
Delay in Enforcing These Terms.
Our failure to enforce a provision is not a waiver of our right to do so later. Except as provided in the dispute resolution section above, if any portion of these Terms is determined to be invalid or unenforceable, that portion will be enforced to the maximum extent permissible and it will not affect the enforceability of any other terms.
Trade Controls.
You must comply with all applicable trade laws, including sanctions and export control laws. Our Services may not be used in or for the benefit of, or exported or re-exported to (a) any U.S. embargoed country or territory or (b) any individual or entity with whom dealings are prohibited or restricted under applicable trade laws. Our Services may not be used for any end use prohibited by applicable trade laws, and your Input may not include material or information that requires a government license for release or export.
Entire Agreement.
These Terms contain the entire agreement between you and OpenAI regarding the Services and, other than any Service-specific terms, supersede any prior or contemporaneous agreements between you and OpenAI.
Governing Law.
California law will govern these Terms, excluding its conflict of laws principles. Except as provided in the dispute resolution section above, all claims arising out of or relating to these Terms will be brought exclusively in the federal or state courts of San Francisco, California.
Engineering the Perfect Baby | MIT Technology Review (2015)
https://www.technologyreview.com/s/535661/engineering-the-perfect-baby
Engineering the Perfect Baby
By Antonio Regalado

If anyone had devised a way to create a genetically engineered baby, I figured George Church would know about it.
At his labyrinthine laboratory on the Harvard Medical School campus, you can find researchers giving E. coli a novel genetic code never seen in nature. Around another bend, others are carrying out a plan to use DNA engineering to resurrect the woolly mammoth. His lab, Church likes to say, is the center of a new technological genesis—one in which man rebuilds creation to suit himself.
When I visited the lab last June, Church proposed that I speak to a young postdoctoral scientist named Luhan Yang. A Harvard recruit from Beijing, she’d been a key player in developing a powerful new technology for editing DNA, called CRISPR-Cas9. With Church, Yang had founded a small biotechnology company to engineer the genomes of pigs and cattle, sliding in beneficial genes and editing away bad ones.
As I listened to Yang, I waited for a chance to ask my real questions: Can any of this be done to human beings? Can we improve the human gene pool? The position of much of mainstream science has been that such meddling would be unsafe, irresponsible, and even impossible. But Yang didn’t hesitate. Yes, of course, she said. In fact, the Harvard laboratory had a project under way to determine how it could be achieved. She flipped open her laptop to a PowerPoint slide titled “Germline Editing Meeting.” Here it was: a technical proposal to alter human heredity.

“Germ line” is biologists’ jargon for the egg and sperm, which combine to form an embryo. By editing the DNA of these cells or the embryo itself, it could be possible to correct disease genes and pass those genetic fixes on to future generations. Such a technology could be used to rid families of scourges like cystic fibrosis. It might also be possible to install genes that offer lifelong protection against infection, Alzheimer’s, and, Yang told me, maybe the effects of aging. Such history-making medical advances could be as important to this century as vaccines were to the last.
That’s the promise. The fear is that germ-line engineering is a path toward a dystopia of superpeople and designer babies for those who can afford it. Want a child with blue eyes and blond hair? Why not design a highly intelligent group of people who could be tomorrow’s leaders and scientists?

Just three years after its initial development, CRISPR technology is already widely used by biologists as a kind of search-and-replace tool to alter DNA, even down to the level of a single letter. It’s so precise that it’s expected to turn into a promising new approach for gene therapy in people with devastating illnesses. The idea is that physicians could directly correct a faulty gene, say, in the blood cells of a patient with sickle-cell anemia (see “Genome Surgery”). But that kind of gene therapy wouldn’t affect germ cells, and the changes in the DNA wouldn’t get passed to future generations.
In contrast, the genetic changes created by germ-line engineering would be passed on, and that’s what has made the idea seem so objectionable. So far, caution and ethical concerns have had the upper hand. A dozen countries, not including the United States, have banned germ-line engineering, and scientific societies have unanimously concluded that it would be too risky to do. The European Union’s convention on human rights and biomedicine says tampering with the gene pool would be a crime against “human dignity” and human rights.
But all these declarations were made before it was actually feasible to precisely engineer the germ line. Now, with CRISPR, it is possible.
The experiment Yang described, though not simple, would go like this: The researchers hoped to obtain, from a hospital in New York, the ovaries of a woman undergoing surgery for ovarian cancer caused by a mutation in a gene called BRCA1.
Working with another Harvard laboratory, that of antiaging specialist David Sinclair , they would extract immature egg cells that could be coaxed to grow and divide in the laboratory. Yang would use CRISPR in these cells to correct the DNA of the BRCA1 gene. They would try to create a viable egg without the genetic error that caused the woman’s cancer.
Yang would later tell me that she dropped out of the project not long after we spoke. Yet it remained difficult to know if the experiment she described was occurring, canceled, or awaiting publication. Sinclair said that a collaboration between the two labs was ongoing, but then, like several other scientists whom I’d asked about germ-line engineering, he stopped replying to my e-mails.
Regardless of the fate of that particular experiment, human germ-line engineering has become a burgeoning research concept. At least three other centers in the United States are working on it, as are scientists in China, in the U.K., and at a biotechnology company called OvaScience, based in Cambridge, Massachusetts, that boasts some of the world’s leading fertility doctors on its advisory board.
The objective of these groups is to demonstrate that it’s possible to produce children free of specific genes involved in inherited disease. If it’s possible to correct the DNA in a woman’s egg, or a man’s sperm, those cells could be used in an in vitro fertilization (IVF) clinic to produce an embryo and then a child. It might also be possible to directly edit the DNA of an early-stage IVF embryo using CRISPR. Several people interviewed by MIT Technology Review said that such experiments had already been carried out in China and that results describing edited embryos were pending publication. These people, including two high-ranking specialists, didn’t wish to comment publicly because the papers are under review.
All this means that germ-line engineering is much further along than anyone imagined. “What you are talking about is a major issue for all humanity,” says Merle Berger, one of the founders of Boston IVF, a network of fertility clinics that is among the largest in the world and helps more than a thousand women get pregnant each year. “It would be the biggest thing that ever happened in our field.” Berger predicts that repairing genes involved in serious inherited diseases will win wide public acceptance but says the idea of using the technology beyond that would cause a public uproar because “everyone would want the perfect child”: people might pick and choose eye color and eventually intelligence. “These are things we talk about all the time,” he says. “But we have never had the opportunity to do it.”

Editing embryos

How easy would it be to edit a human embryo using CRISPR? Very easy, experts say. “Any scientist with molecular biology skills and knowledge of how to work with [embryos] is going to be able to do this,” says Jennifer Doudna, a biologist at the University of California, Berkeley, who in 2012 co-discovered how to use CRISPR to edit genes.
To find out how it could be done, I visited the lab of Guoping Feng, a biologist at MIT’s McGovern Institute for Brain Research, where a colony of marmoset monkeys is being established with the aim of using CRISPR to create accurate models of human brain diseases. To create the models, Feng will edit the DNA of embryos and then transfer them into female marmosets to produce live monkeys. One gene Feng hopes to alter in the animals is SHANK3.
The gene is involved in how neurons communicate; when it’s damaged in children, it is known to cause autism.
Feng said that before CRISPR, it was not possible to introduce precise changes into a primate’s DNA. With CRISPR, the technique should be relatively straightforward. The CRISPR system includes a gene-snipping enzyme and a guide molecule that can be programmed to target unique combinations of the DNA letters, A, G, C, and T; get these ingredients into a cell and they will cut and modify the genome at the targeted sites.
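As a toy illustration of that programmability, one can sketch how a 20-letter guide picks out its site in a DNA string. The sequences and function below are invented for illustration; the one real detail is that the commonly used Cas9 enzyme also requires an “NGG” motif (a PAM) immediately next to its target.

```python
def find_target(genome: str, guide: str) -> int:
    """Return the index where guide followed by an NGG PAM matches, or -1."""
    for i in range(len(genome) - len(guide) - 2):
        window = genome[i:i + len(guide)]          # candidate 20-letter site
        pam = genome[i + len(guide):i + len(guide) + 3]  # 3 letters after it
        if window == guide and pam[1:] == "GG":    # "N" can be any letter
            return i
    return -1

genome = "TTACGGATCCGATTACAGGCTTACCGGATGAGGTACCA"  # made-up sequence
guide = "GATTACAGGCTTACCGGATG"                     # made-up 20-letter guide
print(find_target(genome, guide))  # prints 10
```

A real genome offers billions of positions, which is why guides are chosen to be long enough that their letter combination is effectively unique.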
But CRISPR is not perfect—and it would be a very haphazard way to edit human embryos, as Feng’s efforts to create gene-edited marmosets show. To employ the CRISPR system in the monkeys, his students simply inject the chemicals into a fertilized egg, which is known as a zygote—the stage just before it starts dividing.
Feng said the efficiency with which CRISPR can delete or disable a gene in a zygote is about 40 percent, whereas making specific edits, or swapping DNA letters, works less frequently—more like 20 percent of the time. Like a person, a monkey has two copies of most genes, one from each parent. Sometimes both copies get edited, but sometimes just one does, or neither. Only about half the embryos will lead to live births, and of those that do, many could contain a mixture of cells with edited DNA and without. If you add up the odds, you find you’d need to edit 20 embryos to get a live monkey with the version you want.
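That arithmetic can be checked in a few lines. This is a back-of-the-envelope sketch: the edit and live-birth rates are the ones Feng quotes, while the 50 percent non-mosaic figure is an assumption standing in for the article’s “many could contain a mixture.”

```python
# Per-embryo probabilities of ending up with the desired edited monkey.
# The first two figures are quoted by Feng; the third is an assumption.
p_precise_edit = 0.20  # a specific DNA-letter swap succeeds in the zygote
p_live_birth = 0.50    # an embryo leads to a live birth
p_non_mosaic = 0.50    # assumed: newborn is not a mix of edited/unedited cells

p_success = p_precise_edit * p_live_birth * p_non_mosaic  # about 0.05
print(round(1 / p_success))  # average embryos needed: prints 20
```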
That’s not an insurmountable problem for Feng, since the MIT breeding colony will give him access to many monkey eggs and he’ll be able to generate many embryos. However, it would present obvious problems in humans. Putting the ingredients of CRISPR into a human embryo would be scientifically trivial. But it wouldn’t be practical for much just yet. This is one reason that many scientists view such an experiment (whether or not it has really occurred in China) with scorn, seeing it more as a provocative bid to grab attention than as real science. Rudolf Jaenisch, an MIT biologist who works across the street from Feng and who in the 1970s created the first gene-modified mice, calls attempts to edit human embryos “totally premature.” He says he hopes these papers will be rejected and not published. “It’s just a sensational thing that will stir things up,” says Jaenisch. “We know it’s possible, but is it of practical use? I kind of doubt it.”

For his part, Feng told me he approves of the idea of germ-line engineering. Isn’t the goal of medicine to reduce suffering? Considering the state of the technology, however, he thinks actual gene-edited humans are “10 to 20 years away.” Among other problems, CRISPR can introduce off-target effects or change bits of the genome far from where scientists had intended. Any human embryo altered with CRISPR today would carry the risk that its genome had been changed in unexpected ways. But, Feng said, such problems may eventually be ironed out, and edited people will be born. “To me, it’s possible in the long run to dramatically improve health, lower costs. It’s a kind of prevention,” he said. “It’s hard to predict the future, but correcting disease risks is definitely a possibility and should be supported. I think it will be a reality.”

Editing eggs

Elsewhere in the Boston area, scientists are exploring a different approach to engineering the germ line, one that is technically more demanding but probably more powerful.
This strategy combines CRISPR with unfolding discoveries related to stem cells. Scientists at several centers, including Church’s, think they will soon be able to use stem cells to produce eggs and sperm in the laboratory. Unlike embryos, stem cells can be grown and multiplied. Thus they could offer a vastly improved way to create edited offspring with CRISPR. The recipe goes like this: First, edit the genes of the stem cells. Second, turn them into an egg or sperm. Third, produce an offspring.
Some investors got an early view of the technique on December 17, at the Benjamin Hotel in Manhattan, during commercial presentations by OvaScience. The company, which was founded four years ago, aims to commercialize the scientific work of David Sinclair, who is based at Harvard, and Jonathan Tilly, an expert on egg stem cells and the chairman of the biology department at Northeastern University (see “10 Emerging Technologies: Egg Stem Cells,” May/June 2012). It made the presentations as part of a successful effort to raise $132 million in new capital during January.
During the meeting, Sinclair, a velvet-voiced Australian whom Time last year named one of the “100 Most Influential People in the World,” took the podium and provided Wall Street with a peek at what he called “truly world-changing” developments. People would look back at this moment in time and recognize it as a new chapter in “how humans control their bodies,” he said, because it would let parents determine “when and how they have children and how healthy those children are actually going to be.” The company has not perfected its stem-cell technology—it has not reported that the eggs it grows in the lab are viable—but Sinclair predicted that functional eggs were “a when, and not an if.” Once the technology works, he said, infertile women will be able to produce hundreds of eggs, and maybe hundreds of embryos. Using DNA sequencing to analyze their genes, they could pick among them for the healthiest ones.
Genetically improved children may also be possible. Sinclair told the investors that he was trying to alter the DNA of these egg stem cells using gene editing, work he later told me he was doing with Church’s lab. “We think the new technologies with genome editing will allow it to be used on individuals who aren’t just interested in using IVF to have children but have healthier children as well, if there is a genetic disease in their family,” Sinclair told the investors. He gave the example of Huntington’s disease, caused by a gene that will trigger a fatal brain condition even in someone who inherits only one copy. Sinclair said gene editing could be used to remove the lethal gene defect from an egg cell. His goal, and that of OvaScience, is to “correct those mutations before we generate your child,” he said. “It’s still experimental, but there is no reason to expect it won’t be possible in coming years.”

Sinclair spoke to me briefly on the phone while he was navigating in a cab across a snowed-in Boston, but later he referred my questions to OvaScience. When I contacted OvaScience, Cara Mayfield, a spokeswoman, said its executives could not comment because of their travel schedules but confirmed that the company was working on treating inherited disorders with gene editing.

What was surprising to me was that OvaScience’s research in “crossing the germ line,” as critics of human engineering sometimes put it, has generated scarcely any notice. In December of 2013, OvaScience even announced it was putting $1.5 million into a joint venture with a synthetic biology company called Intrexon, whose R&D objectives include gene-editing eggs to “prevent the propagation” of human disease “in future generations.”

When I reached Tilly at Northeastern, he laughed when I told him what I was calling about. “It’s going to be a hot-button issue,” he said.
Tilly also said his lab was trying to edit egg stem cells with CRISPR “right now” to rid them of an inherited genetic disease that he didn’t want to name. Tilly emphasized that there are “two pieces of the puzzle”—one being stem cells and the other gene editing. The ability to create large numbers of egg stem cells is critical, because only with sizable quantities can genetic changes be stably introduced using CRISPR, characterized using DNA sequencing, and carefully studied to check for mistakes before producing an egg.
Tilly predicted that the whole end-to-end technology—cells to stem cells, stem cells to sperm or egg and then to offspring—would end up being worked out first in animals, such as cattle, either by his lab or by companies such as eGenesis, the spinoff from the Church lab working on livestock. But he isn’t sure what the next step should be with edited human eggs. You wouldn’t want to fertilize one “willy nilly,” he said. You’d be making a potential human being. And doing that would raise questions he’s not sure he can answer. He told me, “‘Can you do it?’ is one thing. If you can, then the most important questions come up. ‘Would you do it? Why would you want to do it? What is the purpose?’ As scientists we want to know if it’s feasible, but then we get into the bigger questions, and it’s not a science question—it’s a society question.”

Improving humans

If germ-line engineering becomes part of medical practice, it could lead to transformative changes in human well-being, with consequences to people’s life span, identity, and economic output. But it would create ethical dilemmas and social challenges. What if these improvements were available only to the richest societies, or the richest people? An in vitro fertility procedure costs about $20,000 in the United States. Add genetic testing and egg donation or a surrogate mother, and the price soars toward $100,000.
Others believe the idea is dubious because it’s not medically necessary. Hank Greely, a lawyer and ethicist at Stanford University, says proponents “can’t really say what it is good for.” The problem, says Greely, is that it’s already possible to test the DNA of IVF embryos and pick healthy ones, a process that adds about $4,000 to the cost of a fertility procedure. A man with Huntington’s, for instance, could have his sperm used to fertilize a dozen of his partner’s eggs. Half those embryos would not have the Huntington’s gene, and those could be used to begin a pregnancy.
Indeed, some people are adamant that germ-line engineering is being pushed ahead with “false arguments.” That is the view of Edward Lanphier, CEO of Sangamo Biosciences, a California biotechnology company that is using another gene-editing technique, called zinc finger nucleases, to try to treat HIV in adults by altering their blood cells. “We’ve looked at [germ-line engineering] for a disease rationale, and there is none,” he says. “You can do it. But there really isn’t a medical reason. People say, well, we don’t want children born with this, or born with that—but it’s a completely false argument and a slippery slope toward much more unacceptable uses.”

Critics cite a host of fears. Children would be the subject of experiments. Parents would be influenced by genetic advertising from IVF clinics. Germ-line engineering would encourage the spread of allegedly superior traits. And it would affect people not yet born, without their being able to agree to it. The American Medical Association, for instance, holds that germ-line engineering shouldn’t be done “at this time” because it “affects the welfare of future generations” and could cause “unpredictable and irreversible results.” But like a lot of official statements that forbid changing the genome, the AMA’s, which was last updated in 1996, predates today’s technology. “A lot of people just agreed to these statements,” says Greely. “It wasn’t hard to renounce something that you couldn’t do.”
Others predict that hard-to-oppose medical uses will be identified. A couple with several genetic diseases at once might not be able to find a suitable embryo. Treating infertility is another possibility. Some men don’t produce any sperm, a condition called azoospermia. One cause is a genetic defect in which a region of about one million to six million DNA letters is missing from the Y chromosome. It might be possible to take a skin cell from such a man, turn it into a stem cell, repair the DNA, and then make sperm, says Werner Neuhausser, a young Austrian doctor who splits his time between the Boston IVF fertility-clinic network and Harvard’s Stem Cell Institute. “That will change medicine forever, right? You could cure infertility, that is for sure,” he says.
I spoke with Church several times by telephone over the last few months, and he told me what’s driving everything is the “incredible specificity” of CRISPR. Although not all the details have been worked out, he thinks the technology could replace DNA letters essentially without side effects. He says this is what makes it “tempting to use.” Church says his laboratory is focused mostly on experiments in engineering animals. He added that his lab would not make or edit human embryos, calling such a step “not our style.” What is Church’s style is human enhancement. And he’s been making a broad case that CRISPR can do more than eliminate disease genes. It can lead to augmentation. At meetings, some involving groups of “transhumanists” interested in next steps for human evolution, Church likes to show a slide on which he lists naturally occurring variants of around 10 genes that, when people are born with them, confer extraordinary qualities or resistance to disease. One makes your bones so hard they’ll break a surgical drill. Another drastically cuts the risk of heart attacks. And a variant of the gene for the amyloid precursor protein, or APP, was found by Icelandic researchers to protect against Alzheimer’s. People with it never get dementia and remain sharp into old age.
Church thinks CRISPR could be used to provide people with favorable versions of genes, making DNA edits that would act as vaccines against some of the most common diseases we face today. Although he told me anything “edgy” should be done only to adults who can consent, it’s obvious to him that the earlier such interventions occur, the better.
Church tends to dodge questions about genetically modified babies. The idea of improving the human species has always had “enormously bad press,” he wrote in the introduction to Regenesis , his 2012 book on synthetic biology, whose cover was a painting by Eustache Le Sueur of a bearded God creating the world. But that’s ultimately what he’s suggesting: enhancements in the form of protective genes. “An argument will be made that the ultimate prevention is that the earlier you go, the better the prevention,” he told an audience at MIT’s Media Lab last spring. “I do think it’s the ultimate preventive, if we get to the point where it’s very inexpensive, extremely safe, and very predictable.” Church, who has a less cautious side, proceeded to tell the audience that he thought changing genes “is going to get to the point where it’s like you are doing the equivalent of cosmetic surgery.” Some thinkers have concluded that we should not pass up the chance to make improvements to our species. “The human genome is not perfect,” says John Harris, a bioethicist at Manchester University, in the U.K. “It’s ethically imperative to positively support this technology.” By some measures, U.S. public opinion is not particularly negative toward the idea. A Pew Research survey carried out last August found that 46 percent of adults approved of genetic modification of babies to reduce the risk of serious diseases.
The same survey found that 83 percent said genetic modification to make a baby smarter would be “taking medical advances too far.” But other observers say higher IQ is exactly what we should be considering. Nick Bostrom, an Oxford philosopher best known for his 2014 book Superintelligence , which raised alarms about the risks of artificial intelligence in computers, has also looked at whether humans could use reproductive technology to improve human intellect. Although the ways in which genes affect intelligence aren’t well understood and there are far too many relevant genes to permit easy engineering, such realities don’t dim speculation on the possibility of high-tech eugenics.
What if everyone could be a little bit smarter? Or a few people could be a lot smarter? Even a small number of “super-enhanced” individuals, Bostrom wrote in a 2013 paper, could change the world through their creativity and discoveries, and through innovations that everyone else would use. In his view, genetic enhancement is an important long-range issue like climate change or financial planning by nations, “since human problem-solving ability is a factor in every challenge we face.”

To some scientists, the explosive advance of genetics and biotech means germ-line engineering is inevitable. Of course, safety questions would be paramount. Before there’s a genetically edited baby saying “Mama,” there would have to be tests in rats, rabbits, and probably monkeys, to make sure they are normal. But ultimately, if the benefits seem to outweigh the risks, medicine would take the chance. “It was the same with IVF when it first happened,” says Neuhausser. “We never really knew if that baby was going to be healthy at 40 or 50 years. But someone had to take the plunge.”

Wine country

In January, on Saturday the 24th, around 20 scientists, ethicists, and legal experts traveled to Napa Valley, California, for a retreat among the vineyards at the Carneros Inn. They had been convened by Doudna, the Berkeley scientist who co-discovered the CRISPR system a little over two years ago. She had become aware that scientists might be thinking of crossing the germ line, and she was concerned. Now she wanted to know: could they be stopped? “We as scientists have come to appreciate that CRISPR is incredibly powerful. But that swings both ways. We need to make sure that it’s applied carefully,” Doudna told me.
“The issue is especially human germ-line editing and the appreciation that this is now a capability in everyone’s hands.” At the meeting, along with ethicists like Greely, was Paul Berg, a Stanford biochemist and Nobel Prize winner known for having organized the Asilomar Conference, a historic 1975 forum at which biologists reached an agreement on how to safely proceed with recombinant DNA, the newly discovered method of splicing DNA into bacteria.
Should there be an Asilomar for germ-line engineering? Doudna thinks so, but the prospects for consensus seem dim. Biotechnology research is now global, involving hundreds of thousands of people. There’s no single authority that speaks for science, and no easy way to put the genie back in the bottle. Doudna told me she hoped that if American scientists agreed to a moratorium on human germ-line engineering, it might influence researchers elsewhere in the world to cease their work.
Doudna said she felt that a self-imposed pause should apply not only to making gene-edited babies but also to using CRISPR to alter human embryos, eggs, or sperm—as researchers at Harvard, Northeastern, and OvaScience are doing. “I don’t feel that those experiments are appropriate to do right now in human cells that could turn into a person,” she told me. “I feel that the research that needs to be done right now is to understand safety, efficacy, and delivery. And I think those experiments can be done in nonhuman systems. I would like to see a lot more work done before it’s done for germ-line editing. I would favor a very cautious approach.”

Not everyone agrees that germ-line engineering is such a big worry, or that experiments should be padlocked. Greely notes that in the United States, there are piles of regulations to keep lab science from morphing into a genetically modified baby anytime soon. “I would not want to use safety as an excuse for a non-safety-based ban,” says Greely, who says he pushed back against talk of a moratorium. But he also says he agreed to sign Doudna’s letter, which now reflects the consensus of the group. “Although I don’t view this as a crisis moment, I think it’s probably about time for us to have this discussion,” he says.
(After this article was published online in March, Doudna’s editorial appeared in Science; see “Scientists Call for a Summit on Gene-Edited Babies.” Along with Greely, Berg, and 15 others, she called for a global moratorium on any effort to use CRISPR to generate gene-edited children until researchers could determine “what clinical applications, if any, might in the future be deemed permissible.” The group, however, endorsed basic research, including applying CRISPR to embryos. The final list of signatories included Church, although he did not attend the Napa meeting.)

As news has spread of germ-line experiments, some biotechnology companies now working on CRISPR have realized that they will have to take a stand. Nessan Bermingham is CEO of Intellia Therapeutics, a Boston startup that raised $15 million last year to develop CRISPR into gene therapy treatments for adults or children. He says germ-line engineering “is not on our commercial radar,” and he suggests that his company could use its patents to prevent anyone from commercializing it.
“The technology is in its infancy,” he says. “It is not appropriate for people to even be contemplating germ-line applications.” Bermingham told me he never imagined he’d have to be taking a position on genetically modified babies so soon.

Modifying human heredity has always been a theoretical possibility. Suddenly it’s a real one. But wasn’t the point always to understand and control our own biology—to become masters over the processes that created us?

Doudna says she is also thinking about these issues. “It cuts to the core of who we are as people, and it makes you ask if humans should be exercising that kind of power,” she told me. “There are moral and ethical issues, but one of the profound questions is just the appreciation that if germ-line editing is conducted in humans, that is changing human evolution.” One reason she feels the research should slow down is to give scientists a chance to spend more time explaining what their next steps could be.
“Most of the public,” she says, “does not appreciate what is coming.”

This story was updated on April 23, 2015.

By Antonio Regalado. This story was part of our May/June 2015 issue.