Columns:
id: int64 (values 0 to 17.2k)
year: int64 (values 2k to 2.02k)
title: string (length 7 to 208 characters)
url: string (length 20 to 263 characters)
text: string (length 852 to 324k characters)
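For readers who want to work with records in this shape, a minimal loading sketch using the Hugging Face datasets library is below. The dataset identifier is hypothetical and only stands in for wherever this corpus is actually hosted; the column names follow the schema above.

# Minimal sketch, assuming the corpus is hosted on the Hugging Face Hub under a
# hypothetical id ("example/venturebeat-ai-articles") with the columns listed above.
from datasets import load_dataset

ds = load_dataset("example/venturebeat-ai-articles", split="train")  # hypothetical id

# Keep only year-2020 rows and peek at one record's metadata.
ds_2020 = ds.filter(lambda row: row["year"] == 2020)
print(ds_2020[0]["id"], ds_2020[0]["title"])
print(ds_2020[0]["url"])
print(len(ds_2020[0]["text"]), "characters of article text")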
id: 1113
year: 2020
"Facebook AI Research applies Transformer architecture to streamline object detection models | VentureBeat"
"https://venturebeat.com/2020/05/28/facebook-ai-research-applies-transformer-architecture-to-streamline-object-detection-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook AI Research applies Transformer architecture to streamline object detection models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Six members of Facebook AI Research (FAIR) tapped the popular Transformer neural network architecture to create end-to-end object detection AI, an approach they claim streamlines the creation of object detection models and reduces the need for handcrafted components. Named Detection Transformer (DETR), the model can recognize objects in an image in a single pass all at once. DETR is the first object detection framework to successfully integrate the Transformer architecture as a central building block in the detection pipeline, FAIR said in a blog post. The authors added that Transformers could revolutionize computer vision as they did natural language processing in recent years, or bridge gaps between NLP and computer vision. “DETR directly predicts (in parallel) the final set of detections by combining a common CNN with a Transformer architecture,” reads a FAIR paper published Wednesday alongside the open source release of DETR. “The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors.” Created by Google researchers in 2017, the Transformer network architecture was initially intended as a way to improve machine translation, but has grown to become a cornerstone of machine learning for making some of the most popular pretrained state-of-the-art language models, such as Google’s BERT, Facebook’s RoBERTa, and many others. In conversation with VentureBeat, Google AI chief Jeff Dean and other AI luminaries declared Transformer-based language models a major trend in 2019 they expect to continue in 2020. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Transformers use attention functions instead of a recurrent neural network to predict what comes next in a sequence. When applied to object detection, a Transformer is able to cut out steps to building a model, such as the need to create spatial anchors and customized layers. DETR achieves results comparable to Faster R-CNN , an object detection model created primarily by Microsoft Research that’s earned nearly 10,000 citations since it was introduced in 2015, according to arXiv. 
The DETR researchers ran experiments using the COCO object detection data set as well as others related to panoptic segmentation, the kind of object detection that paints regions of an image instead of with a bounding box. One major issue the authors say they encountered: DETR works better on large objects than small objects. “Current detectors required several years of improvements to cope with similar issues, and we expect future work to successfully address them for DETR,” the authors wrote. DETR is the latest Facebook AI initiative that looks to a language model solution to solve a computer vision challenge. Earlier this month, Facebook introduced the Hateful Meme data set and challenge to champion the creation of multimodal AI capable of recognizing when an image and accompanying text in a meme violates Facebook policy. In related news, earlier this week, the Wall Street Journal reported that an internal investigation concluded in 2018 that Facebook’s recommendation algorithms “exploit the human brain’s attraction to divisiveness,” but executives largely ignored the analysis. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
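As a brief, hedged illustration of the DETR setup the article above describes in prose (a CNN backbone plus a Transformer predicting all detections in parallel), the sketch below runs the pretrained model that FAIR open-sourced through torch.hub. The entry-point name, preprocessing values, and output keys follow the facebookresearch/detr repository as I understand it and should be treated as assumptions rather than details taken from the article.

import torch
import torchvision.transforms as T
from PIL import Image

# Load the pretrained DETR model (ResNet-50 backbone + Transformer); assumed hub entry point.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

# Standard ImageNet-style preprocessing assumed by the released checkpoints.
transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("street_scene.jpg").convert("RGB")   # any RGB test image
batch = transform(img).unsqueeze(0)                   # shape [1, 3, H, W]

with torch.no_grad():
    out = model(batch)   # one forward pass predicts every detection at once

# "pred_logits" holds class scores and "pred_boxes" normalized box coordinates for a
# fixed set of object queries; keep only confident, non-background predictions.
probs = out["pred_logits"].softmax(-1)[0, :, :-1]   # drop the "no object" class
keep = probs.max(-1).values > 0.9                   # confidence threshold
print(out["pred_boxes"][0, keep])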
id: 1114
year: 2020
"Congress introduces bill that bans facial recognition use by federal government | VentureBeat"
"https://venturebeat.com/2020/06/25/congress-introduces-bill-that-bans-facial-recognition-use-by-federal-government"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Congress introduces bill that bans facial recognition use by federal government Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Members of the United States Congress introduced a bill today, The Facial Recognition and Biometric Technology Moratorium Act of 2020 , that would prohibit the use of U.S. federal funds to acquire facial recognition systems or “any biometric surveillance system” use by federal government officials. It would also withhold federal funding through the Byrne grant program for state and local governments that use the technology. The bill is sponsored by Senators Ed Markey (D-MA) and Jeff Merkley (D-OR) as well as Representatives Ayanna Pressley (D-MA) and Pramila Jayapal (D-WA). Pressley previously introduced a bill prohibiting use of facial recognition in public housing, while Merkley introduced a facial recognition moratorium bill in February with Senator Cory Booker (D-NJ). The news comes a day after the Boston City Council in Pressley’s congressional district unanimously passed a facial recognition ban , one of the largest cities in the United States to do so. News also emerged this week about Robert Williams , who’s thought to be the first person falsely accused of a crime and arrested due to misidentification by facial recognition. People in favor of a facial recognition ban argue that even if racial bias and misidentification issues are resolved, facial recognition will be used to target communities of color. The Perpetual Lineup at Georgetown University’s Center for Democracy and Technology, which tracks local and federal law enforcement use of facial recognition , found that at least one in four local and state police officers in the U.S. have access to facial recognition tech today. The group also warns that facial recognition will disproportionately impact African Americans. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Facial recognition is a uniquely dangerous form of surveillance. This is not just some Orwellian technology of the future — it’s being used by law enforcement agencies across the country right now, and doing harm to communities right now,” Fight for the Future deputy director Evan Greer said in a statement shared with VentureBeat and posted online. “Facial recognition is the perfect technology for tyranny. It automates discriminatory policing and exacerbates existing injustices in our deeply racist criminal justice system. 
This legislation effectively bans law enforcement use of facial recognition in the United States. That’s exactly what we need right now. We give this bill our full endorsement.” Greer continued by saying that Republican lawmakers who call themselves privacy or civil liberties supporters who vote against the bill would “expose themselves as hypocrites.” Criticism of facial recognition has grown in recent weeks as citizens call for police reform and even defunding police in order to reallocate funds and solve more problems without armed law enforcement. Amazon , IBM , and Microsoft agreed to halt or end sale of facial recognition technology for police earlier this month. In doing so, officials from all three companies called for federal regulation of the tech. Both Amazon and Microsoft declined to respond to questions from VentureBeat about whether their moratoriums apply to federal law enforcement. The European Commission also considered a five-year moratorium of facial recognition use in public places earlier this year, but those plans were scrapped. A lack of Republican sponsors means that while the bill introduced today could move forward in the Democrat-led House of Representatives, Republicans currently hold the majority of seats in the U.S. Senate and thus could block passage. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 1115
year: 2020
"Facebook CTO says hiring matters for mitigating AI bias, but the company lacks AI research diversity stats | VentureBeat"
"https://venturebeat.com/2020/06/15/facebook-cto-says-hiring-matters-for-mitigating-ai-bias-but-the-company-lacks-ai-research-diversity-stats"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook CTO says hiring matters for mitigating AI bias, but the company lacks AI research diversity stats Share on Facebook Share on X Share on LinkedIn Facebook CTO Mike Schroepfer delivers a speech during the the Web Summit at Parque das Nacoes in Lisbon on November 8, 2016 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Facebook CTO Mike Schroepfer endorses the idea that hiring is an important part of diversity in AI and preventing bias for teams building products for users, but he can’t tell you the number of Black people who work at Facebook AI Research. Created with Yann LeCun in 2013, Facebook AI Research has locations in Silicon Valley, New York, and Paris. With more than 100 employees, FAIR has become one of the largest and most influential AI research organizations in the world. A Facebook AI spokesperson subsequently said Facebook has reported employee diversity numbers for six years but does not tally diversity statistics by individual teams. VentureBeat asked Facebook in July 2019 about the number of Black employees at Facebook AI Research and received no response. In November 2019, VentureBeat asked Google and Facebook (again) about diversity stats and was told by both companies that they do not supply AI research division diversity numbers. On a separate but potentially related note, Snap CEO Evan Spiegel last Friday said in an internal meeting that he does not intend to release diversity statistics because they will only reinforce the perception that Silicon Valley is a less than diverse place. After this story was published, a Snap spokesperson contacted VentureBeat to say the company plans to release its own version of a diversity report in the “near term” but declined to share a specific time frame. Schroepfer spoke publicly with AI journalists last week for the first time since protests against White supremacy set off by the police killing of George Floyd erupted in more than 2,000 cities across the U.S. and in major cities around the world. The subject of bias models came up after a reporter asked whether Facebook assessed winning models from the Deepfake Detection challenge for algorithmic bias based on skin tone. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Look, I think that representation is really important … I’d say we care a lot about those [issues], which is why we’ve focused a lot on improving diversity across the board in the company,” Schroepfer said. 
“I also think that the real solution to these problems for things like making sure you have a diverse data set is actually the process, understanding of formalizing this across the company, so there are statistical methods to determine whether this data set is representative in the ways you care about.” A 2019 analysis by Algorithmic Accountability Act coauthor Mutale Nkonde found that Facebook AI Research had no Black employees. Proposed in April 2019, the Algorithmic Accountability Act would require corporations to assess AI for safety, security, and bias. Above: Facebook 2019 Diversity Report statistics on technical staff by ethnicity In response, Schroepfer said bias is generally found in AI because training data fails to be representative of users. However, policymakers in Washington, AI ethics researchers, and many others stress that hiring diverse, pluralistic teams is essential for making AI with more people in mind. Prior to the killing of George Floyd, diversity efforts at companies like Facebook and Google had made only incremental progress. In the wake of Floyd’s death, government and business leaders are being challenged to fight White supremacy and institutional racism. But many critics and members of the Black tech community are frustrated by the tech industry’s lack of progress despite years of public diversity reports at companies like Facebook and Google, and historic underfunding of startups with Black founders. Human Utility founder Tiffani Ashley Bell coined the phrase “Make the hire. Send the wire.” to succinctly answer the question of how venture capitalists and tech giants can make a real difference. In recent weeks, Facebook CEO Mark Zuckerberg defended President Trump’s right to post the phrase “when the looting starts, the shooting starts,” which is associated with bigotry and suppression of civil rights protests dating back to the 1960s. Twitter censured a tweet with the same message for violating company policy and labeled it as “glorifying violence.” Several senior Facebook staff members have threatened to resign over Zuckerberg’s stance, and employees staged a virtual walkout on June 1. In recent days, multiple news outlets reported that Facebook fired an engineer who participated in the protest. Current and former Facebook employees had negative things to say about the company before the death of George Floyd. In November 2018, former Facebook employee Mark Luckie asserted that “Facebook is failing its Black employees and its Black users.” A year later, a Medium post by an anonymous group of a dozen current and former Black employees of Facebook went viral with accounts of repeated failures by company executives to address racial issues in the workplace. But Schroepfer defended Facebook’s record on diversity, emphasizing the company’s efforts to build the Society in AI and Responsible Innovation labs over the past two years. In a 2019 F8 keynote address , he highlighted Facebook’s increased use of AI for content moderation and the work of Responsible Innovation teams working with product teams on subjects like election integrity, security, and algorithmic fairness. Among highlights of the algorithmic fairness team’s work, Schroepfer said last year one Black employee “took it upon herself” to ensure pose estimation AI used for Portal’s camera works as well for people with light skin as it does for people with dark skin. “I think that the important thing is to build tools that work, that people use by default, so it’s representative by default. 
But obviously, you know, the more representative the team is, the better we can make sure that all the perspectives of our users are incorporated in the products we build,” he said. VentureBeat asked a Facebook spokesperson for more details about the activities of the algorithmic fairness team and whether there were specific initiatives within the company to audit algorithms Facebook uses in production, but did not receive a response at the time this story was published. An AI Now Institute report on diversity at major tech companies like Facebook and Google released last year found that more than 80% of computer science academics teaching AI are men, with women making up only 15% of AI researchers at Facebook and 10% of AI researchers at Google. The report insists that companies that do not make concerted efforts to create diverse teams can perpetuate structural inequality that exists today. Updated 1:12 p.m. June 16 to include comment on diversity statistics from a Snap spokesperson. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 1116
year: 2020
"Microsoft won't sell police facial recognition until there's 'a national law in place' | VentureBeat"
"https://venturebeat.com/2020/06/11/microsoft-wont-sell-police-facial-recognition-until-theres-a-national-law-in-place"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft won’t sell police facial recognition until there’s ‘a national law in place’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft president Brad Smith today said the company will not sell facial recognition to police in the United States until there’s a “national law in place grounded in human rights that will govern this technology,” he told Washington Post Live. A Microsoft spokesperson confirmed Smith’s statement and offered no additional details. With IBM’s exit from the facial recognition business at the start of the week, Amazon and Microsoft were two of the biggest companies known to still make facial recognition tech available to police departments and government agencies. Then Amazon announced on Wednesday that it will pause sales to police for one year. Smith repeated a call made by Amazon, IBM, and a number of privacy and racial justice advocates in recent years for Congress to pass national facial recognition regulation. Smith first called for facial recognition regulation from Congress nearly two years ago. Microsoft has played a role in crafting privacy and facial recognition regulation in places like California and Microsoft’s home state of Washington. “[I]f all of the responsible companies in this country cede this market to those that are not prepared to take a stand, we won’t necessarily serve the national interest or the lives of the black and African people of this nation as well. We need Congress to act, not just tech companies alone. That’s the only way we will guarantee that we will protect the lives of people,” Smith said today. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Amazon and Microsoft may have temporarily halted sales to police, but several smaller or lesser known companies continue to sell facial recognition to police. Members of Congress have put forward a range of bills addressing the regulation of AI , but no facial recognition regulation like the kind that received bipartisan support in the House of Representatives earlier this year and in hearings dating back to May 2019. No such bill has emerged to become a law. Former House Oversight and Reform Committee chair Elijah Cummings (D-MD) passed away last year, but he summed up some basic tenets of facial recognition regulation in a statement provided to VentureBeat before his death by congressional staff. 
“I believe there should be front-end accountability for law enforcement’s use of facial recognition technology. I also believe that people should be informed of their participation in a facial recognition technology system and should be able to ‘opt-in’ when possible,” Cummings said. “This technology is evolving extremely rapidly, without any [real] safeguards, whether we are talking about commercial use or government use. There are real concerns about the risks that this technology poses to our civil rights and liberties, and our right to privacy.” Cummings’ concern with facial recognition stemmed in part from use of facial recognition during protests in Baltimore after the killing of Freddie Gray by police in 2015. Other committee members and experts testifying before Congress expressed concern with facial recognition use at political rallies, tracking people’s movement with live facial recognition, and the possibility of a chilling effect on protests and freedom of speech. An assessment of police use of facial recognition by the Georgetown University’s Center on Privacy and Technology found that roughly half of all U.S. adults are included in a facial recognition network used by law enforcement and that one in four police officers have access to the technology. However, there are few or no guidelines, and the proliferation of facial recognition adversely impacts African-Americans. In a June 2019 testimony before the committee , a Government Accountability Office (GAO) official testified that the FBI had not yet complied with a number of recommendations made in 2015 for the audit and assessment of facial recognition and image data provided by about 20 states. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 1117
year: 2020
"IBM walked away from facial recognition. What about Amazon and Microsoft? | VentureBeat"
"https://venturebeat.com/2020/06/10/ibm-walked-away-from-facial-recognition-what-about-amazon-and-microsoft"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM walked away from facial recognition. What about Amazon and Microsoft? Share on Facebook Share on X Share on LinkedIn Algorithmic Justice League founder Joy Buolamwini discusses the Gender Shades project in a presentation at Stanford University in October 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. IBM’s choice to end commercial sales of facial recognition may increase pressure on Amazon and Microsoft to halt contracts with law enforcement agencies. Algorithmic Justice League (AJL) founder Joy Buolamwini said she already sees signs pointing in that direction. On Tuesday, for example, the same day as George Floyd’s funereal, more than 250 Microsoft employees urged CEO Satya Nadella to end contracts with police. “Google has stepped back from facial recognition, as has IBM, and we already see Microsoft workers mobilizing to demand change,” Buolamwini told VentureBeat Tuesday shortly after testifying in front of the Boston City Council in favor of a proposed facial recognition ban. Hours after this article was published, Amazon imposed a one-year moratorium on facial recognition use by police. Amazon and Microsoft remain the only two major tech companies actively attempting to sell facial recognition software to governments and law enforcement agencies. Teaming up with Google AI researcher Timnit Gebru in 2018 and AI Now Institute researcher Deborah Raji in 2019, Buolamwini published the Gender Shades project , audits of facial recognition from companies like Amazon, IBM, Face++, and Microsoft that found that they perform best for white men and worst for women with darker skin tones. Arguing that even perfect facial recognition can be misused, Gebru made the case for a facial recognition ban in an interview published this morning by the New York Times. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We have reached a moment of reckoning where more people are re-evaluating law enforcement and the capacity for abuse that can be accelerated with high-tech tools,” Buolamwini said. “The moment we are in is a wake-up call for all tech companies working with police. 
At AJL, we support halting face surveillance and urge Microsoft, Amazon, and others to follow IBM’s lead in not equipping police with facial recognition technology for surveillance, racial profiling, and other harmful abuses.” Thanks to the Gender Shapes project these three black women AI researchers coauthored, knowledge of race and gender bias is far more common today among lawmakers considering regulation in state legislatures and Congress. Earlier this year and in multiple hearings in 2019, facial recognition regulation received support from a bipartisan group of lawmakers in Congress. On Tuesday, Buolamwini and the AJL called for major tech companies to commit $1 million to racial justice tech organizations like Data for Black Lives and Black in AI. AJL also urged facial recognition companies to sign the Safe Face pledge to not sell facial recognition to law enforcement agencies or other actors with power to use lethal force. Earlier this week, IBM CEO Arvind Krishna said his company will no longer sell or research facial recognition and joined the calls for regulation other major tech companies, including Amazon, Google, and Microsoft, are making. Google previously stated it has no plans to sell facial recognition without additional regulation and earlier this year supported calls for moratorium. IBM halted face detection for a publicly available API last year , and independent analysis found that IBM has no publicly known facial recognition contracts in the U.S. However, IBM’s decision carries some symbolic significance, given the company’s research with NYPD and history of working with Nazi Germany during World War II. By contrast, Microsoft advocates for facial recognition in California and Washington legislatures , and AWS CEO Andy Jassy has previously stated that his company will sell facial recognition to any government so long as it’s legal. In May 2019, Amazon shareholders formally rejected a halt of facial recognition sale to governments. Research published last year by the National Academy of Sciences found that police violence is a leading cause of death for black men in the United States. George Floyd, whose killing led to some of the longest and largest protests in recorded history, was buried in his hometown of Houston on Tuesday. In December 2019, the Department of Commerce’s National Institute of Standards and Technology (NIST) conducted its first facial recognition vendor test based on race. In an analysis of algorithms from almost 100 companies, the test found systems misidentified the faces of people of Asian or African descent 10 to 100 times more often than white faces. NIST Information Technology Laboratory director Dr. Charles Romine testified in January before the House Oversight and Reform Committee that Amazon was in talks with NIST for Rekognition to participate in its facial recognition vendor test. A NIST spokesperson today told VentureBeat that Amazon has not submitted any algorithm for evaluation under its Facial Recognition Vendor Test program. Though CEO Jeff Bezos told an All Lives Matter customer they aren’t wanted this week, Amazon has a flawed past. Similar to criticisms made of an Amazon fulfillment center worker who executives attempted to describe as “not smart or articulate,” in January 2019 AWS VP of AI Matt Wood took the unusual step of criticizing the Gender Shades analysis in a blog post , an apparent attempt to discredit the accuracy of the work. 
Gender Shades was subsequently defended by a group of nearly 80 prominent AI researchers , including Yoshua Bengio, the deep learning pioneer and supporter of equality who is among the most-cited scholars in the world today. For their work to reveal facial recognition performance disparities based on race and gender disparities in commercially available facial recognition systems, in July 2019, VentureBeat presented Buolamwini, Gebru, and Raji with its inaugural AI Innovation award in the AI for Good category. The trio are among a range of black women being mentioned in social media this week with the #CiteBlackWomen hashtag. To call more attention to the lack of citation and media coverage of women scholars as well as algorithmic bias, earlier this year the AJL, together with seven prominent women in tech and AI, launched the Voicing Erasure project. Updated at 12:56 pm Pacific to include NIST spokesperson comment on Amazon participation in U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) Facial Recognition Vendor Test program. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 1118
year: 2020
"Amazon imposes one-year moratorium on police use of its facial recognition technology | VentureBeat"
"https://venturebeat.com/2020/06/10/amazon-imposes-one-year-moratorium-on-police-use-of-its-facial-recognition-technology"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon imposes one-year moratorium on police use of its facial recognition technology Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today declared a halt of the sale of facial recognition to police departments for one year. The news comes one day after George Floyd, a man killed by the Minneapolis Police Department, was laid to rest in Houston, and shortly after IBM pledged to end the sale of or research into facial recognition technology. Amazon and Microsoft are under increasing pressure to cancel police contracts following the killing of George Floyd and subsequent rage over institutional racism and white supremacy. For example, OneZero learned Tuesday that more than 250 Microsoft employees urged CEO Satya Nadella to cancel the company’s police contracts. VentureBeat reached out to Microsoft to ask if it also plans to put a moratorium in place to reconsider the sale of facial recognition technology. “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” Amazon said in a brief statement shared in a Day One blog this afternoon. “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” In the past, AWS CEO Andy Jassy said Amazon will sell facial recognition to any government so long as it’s legal , and Amazon shareholders last year rejected a vote to halt sale of facial recognition to government customers. Amazon reportedly attempted to sell its facial recognition tech to U.S. Immigration and Customs Enforcement (ICE) in 2018 and it’s been used in trials by police in cities like Orlando, but the extent to which Amazon’s Rekognition is used by police today is unknown. It is not yet known if the moratorium includes contracts with federal law enforcement agencies like ICE or local police departments only. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Dr. Charles Romine from the U.S. Department of Commerce’s National Institute for Standards and Technology testified before Congress in January that NIST was in talks with Amazon to evaluate its Rekognition software. However, a NIST spokesperson today told VentureBeat that Amazon has not submitted any algorithm for analysis under the Facial Recognition Vendor Test (FRVT) program. 
NIST results finding racial bias in facial recognition systems follows the work of the Gender Shades project. Dating back to 2018, AI researchers Joy Buolamwini, Timnit Gebru, and Deborah Raji found that facial recognition software from companies like Amazon work best for white men and worst for women with dark skin. This week, Buolamwini and Gebru urged tech giants like Amazon to ban facial recognition. Buolamwini said she was surprised by the Amazon moratorium news given Amazon’s public dismissal of Gender Shades project research. While reiterating a call for a facial recognition ban, Buolamwini said “Racial justice requires algorithmic justice.” “With IBM’s decision and Amazon’s recent announcement, the efforts of so many civil liberties organizations, activists, shareholders, employees and researchers to end harmful use of facial recognition are gaining even more momentum,” she said. “Microsoft also needs to take a stand. More importantly our lawmakers need to step up. We cannot rely on self-regulation or hope companies will choose to reign in harmful deployments of the technologies they develop.” The American Civil Liberties Union (ACLU) is a supporter of facial recognition ban legislation passed in places like San Francisco and frequently called attention to Rekognition classifying lawmakers and NFL athletes as criminals. The ACLU also filed a lawsuit targeting Amazon and Microsoft government contracts last fall. “It took two years for Amazon to get to this point, but we’re glad the company is finally recognizing the dangers face recognition poses to Black and Brown communities and civil rights more broadly,” said ACLU Northern California tech director Nicole Ozer said in a statement shared with VentureBeat. “We urge Microsoft and other companies to join IBM, Google, and Amazon in moving towards the right side of history.” European Union Commission officials considered a five-year moratorium of facial recognition earlier this yea r but backed away from the idea in February. Democratic U.S. Senators Cory Booker (D–NJ) and Jeff Merkley (D–OR) also backed a moratorium earlier this year with the introduction of the Ethical Use of Artificial Intelligence Act. Alphabet and Google CEO Sundar Pichai supported the idea of a facial recognition moratorium earlier this year. Updated 4:29 pm to include comment from Joy Buolamwini and Nicole Ozer. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 1119
year: 2020
"A fight for the soul of machine learning | VentureBeat"
"https://venturebeat.com/2020/05/20/a-fight-for-the-soul-of-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion A fight for the soul of machine learning Share on Facebook Share on X Share on LinkedIn Adam Campbell resigned in protest from his job at Google. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Last Tuesday, Google shared a blog post highlighting the perspectives of three women of color employees on fairness and machine learning. I suppose the comms team saw trouble coming: The next day NBC News broke the news that diversity initiatives at Google are being scrapped over concern about conservative backlash , according to eight current and former employees speaking on condition of anonymity. The news led members of the House Tech Accountability Caucus to send a letter to CEO Sundar Pichai on Monday. Citing Google’s role as a leader in the U.S. tech community, the group of 10 Democrats questioned why, despite corporate commitments over years, Google diversity still lags behind the diversity of the population of the United States. The 10-member caucus specifically questioned whether Google employees working with AI receive additional bias training. When asked by VentureBeat, a Google spokesperson did not respond to questions raised by members of Congress but said any suggestion that the company scaled back diversity initiatives is “categorically false.” Pichai called diversity a “foundational value” for the company. For her part, Google AI ethical research scientist Timnit Gebru, one of the three women featured in the Google blog post, spelled out her feelings about the matter on Twitter. "The House members specifically asked in their letter if employees working in artificial intelligence undergo additional bias training." Followup question: whats the demographic makeup of the directors VPs and such making decisions about "AI ethics" https://t.co/4AMFwsbSzh — Timnit Gebru (@timnitGebru) May 19, 2020 VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hiring AI practitioners from diverse backgrounds is seen as a way to catch bias embedded in AI systems. Many AI companies pay lip service to the importance of diversity. As one of the biggest and most influential AI companies on the planet, what Google does or doesn’t do stands out and may be a bellwether of sorts for the AI industry. And right now, the company is cutting back on diversity initiatives at a time when clear ties are being drawn between surveillance AI startups and alt-right or white supremacy groups. 
Companies with documented algorithmic bias like Google, as well as those associated with alt-right groups, seem to really like government contracts. That’s a big problem in an increasingly diverse America. Stakeholders in this world of AI can ignore these problems, but they’ll only fester and risk not just a public trust crisis, but practical harms in people’s lives. Reported diversity program cutbacks at Google matter more than at virtually any other company in the world. Google began much of the modern trend of divulging corporate diversity reports that spell out the number of women and people of color within its ranks. According to Google’s 2020 diversity report , roughly 1 in 3 Google employees are women, whereas 3.7% are African American, 5.9% are Latinx, and 0.8% are Native American. Stagnant, slow progress on diversity in tech matters a lot more today than it did in the past now that virtually all tech companies — especially companies like Amazon, Google, and Microsoft — call themselves AI companies. Tech, and AI more specifically, suffers from what’s referred to as AI’s “white guy problem.” Analysis and audits of a vast swath of AI models have found evidence of bias based on race, gender, and a range of other characteristics. Somehow, AI produced by white guys often seems to work best on white guys. Intertwined with news about Google’s diversity and inclusion programs is recent revelatory reporting about surveillance AI startups Banjo and Clearview. Banjo founder and CEO Damien Patton stepped down earlier this month after OneZero reported that he had been a member of a white supremacist group who participated in shooting up a synagogue. A $21 million contract with Utah first responders is under review, according to Deseret News. And in an article titled “The Far-Right Helped Create The World’s Most Powerful Facial Recognition Technology,” Huffington Post reported on Clearview’s extensive connections with white supremacists, including a collaborator whose interest in facial recognition stems from a desire to track down people in the United States illegally. Clearview AI scraped billions of images from the web to train its facial recognition system and recently committed to working exclusively with government and law enforcement agencies. That some AI roads lead back to President Trump should come as little surprise. The Trump campaign’s largest individual donor in 2016 was early AI researcher Robert Mercer. Palantir founder Peter Thiel voiced his support for President Trump onstage at the Republican National Convention in 2016, and his company is getting hundreds of millions of dollars in government contracts. There’s also Cambridge Analytica , a company that maintained close ties with Trump campaign officials like Mercer and Steve Bannon. And, when OpenAI cofounder Elon Musk was taking a break from bickering with Facebook’s head of AI theories on Twitter a few days ago, he pushed people to “take the red pill,” a famous phrase from The Matrix that’s been appropriated by people with racist or sexist beliefs. Also this week: Machine learning researcher Abeba Birhane, winner of Best Paper award at the Black in AI workshop at NeurIPS 2019 for her work on relational ethics to address bias, had this to say: The last couple of days of being targeted by racist eugenicists has made me realize that there are waaaay more racist cranks than you think in academia spouting long discredited pseudoscience. Coincidentally ML is reviving this horrid history. 
-> — Abeba Birhane (@Abebab) May 18, 2020 Looking back at the Banjo and Clearview episodes, AI Now Institute research Sarah Myers West argued that racist and sexist elements have existed within the machine learning community since its beginning. “We need to take a long, hard look at a fascination with the far right among some members of the tech industry, putting the politics and networks of those creating and profiting from AI systems at the heart of our analysis. And we should brace ourselves: We won’t like what we find,” she said in a Medium post. That’s one side of AI right now. On the other side, while Google takes steps backward in diversity, and startups with ties to white supremacists seek government contracts, others in the AI ethics community are working to turn the vague principles that have been established in recent years into actual actions and company policy. In January, researchers from Google, including Gebru, released a framework for internal company audits of AI models that is designed to close AI accountability gaps within organizations. Forward momentum Members of the machine learning community pointed to signs of more maturity at conferences like NeurIPS , and the recent ICLR featured a diverse panel of keynote speakers and Africa’s machine learning community. At TWIMLcon in October 2019, a panel of machine learning practitioners shared thoughts on how to operationalize AI ethics. And in recent weeks, AI researchers have proposed a number of constructive ways organizations can convert ethics principles into practice. Last month, AI practitioners from more than 30 organizations created a list of 10 recommendations for turning ethics principles into practice, including bias bounties, which are akin to bug bounties for security software. The group also suggested creating a third-party auditing marketplace as a way to encourage reproducibility and verify company claims about AI system performance. The group’s work is part of a larger effort to make AI more trustworthy, verify results, and ensure “beneficial societal outcomes from AI.” The report asserts that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.” In a keynote address at the all-digital ICLR, sociologist and Race After Technology author Ruha Benjamin asserted that deep learning without historical or social context is “superficial learning.” Considering the notion of anti-blackness in AI systems and the new Jim Code , Benjamin encouraged building AI that empowers people, and she stressed that AI companies should view diverse hiring as an opportunity to build more robust models. “An ahistoric and asocial approach to deep learning can capture and contain, can harm people. A historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition,” she said. Analysis published in Proceedings of the National Academy of Sciences last month indeed found that women and people of color in academia produce scientific novelty at higher rates than white men, but those contributions are often “devalued and discounted” in the context of hiring and promotion. A fight over AI’s soul is raging as algorithmic governance or AI used by government grows in interest and real-world applications. 
Use of algorithmic tools may increase as many governments around the world, such as state governments in the U.S., face budgetary shortfalls due to COVID-19. A joint Stanford-NYU study released in February found that only 15% of algorithms used by the United States government are considered highly sophisticated. The report concluded that government agencies need more in-house talent to create custom models and assess AI from third-party vendors, and warned of a trust crisis if people doubt AI used by government agencies. “If citizens come to believe that AI systems are rigged, political support for a more effective and tech-savvy government will evaporate quickly,” the report reads. A case study about how Microsoft, OpenAI, and the world’s democratic nations in the OECD are turning ethics principles into action also warns that governments and businesses could face increasing pressure to put their promises into practice. “There is growing pressure on AI companies and organizations to adopt implementation efforts, and those actors perceived to verge from their stated intentions may face backlash from employees, users, and the general public. Decisions made today about how to operationalize AI principles at scale will have major implications for decades to come, and AI stakeholders have an opportunity to learn from existing efforts and to take concrete steps to ensure that AI helps us build a better future,” the report reads. Bias and better angels When Google, one of the biggest and influential AI companies today, cuts back diversity initiatives after public retaliation against LGBT employees last fall, it sends a clear message. Will AI companies, like their tech counterparts, choose to bend to political winds? Racial bias has been found in the automatic speech recognition performance from Apple, Amazon, Google, and Microsoft. Research published last month found popular pretrained machine learning algorithms like Google’s BERT contain bias ranging from race and gender to religious or professional discrimination. Bias has also been documented in object detection and facial recognition, and in some instances has negatively impacted hiring , health care , and financial lending. The risk assessment algorithm the U.S. Department of Justice uses assigns higher recidivism scores to black people in prisons — known COVID-19 hotspots — which affects early release. People who care about the future of AI and its use in bettering human lives should be outspoken and horrified about a blurring line between whether biased AI is the product of genuine racial extremists or indifferent (mostly) white men. For the person of color on the receiving end of that bias, whether the racism was generated actively or passively doesn’t really matter that much. The AI community should resolve that it cannot move at the same slow pace of progress on diversity as the wider tech industry, and it should consider the danger of the “white default” spreading in an increasingly diverse world. One development to watch in this context in the months ahead is how Utah considers its $21 million contract with Banjo, which state officials are currently reviewing. They also have to decide if they’re OK employing surveillance technology built by a racist. Another, of course, is Google. Will Google make meaningful progress on diversity hiring and retention or just ignore the legacy of its scrapped implicit bias training program Sojourn and let the wound fester? What’s also worth watching is Google’s thirst for government contracts. 
The company recently hired Josh Marcuse to act as its head of strategy and innovation for the global public sector, including military contracts. Marcuse was director of the Defense Innovation Board (DIB), a group formed in 2016 that last fall created AI ethics principles for the U.S. Department of Defense. Former Google chair Eric Schmidt was the DIB chair who led the process of developing the principles. Schmidt’s close ties with Silicon Valley and the Pentagon on machine learning initiatives were documented in a recent New York Times article. Keep an eye on Congress as well, where data privacy laws proposed in recent months call for additional study of algorithmic bias. The Consumer Online Privacy Rights Act (COPRA) supported by Senate Democrats would make algorithmic discrimination illegal in housing, employment, lending, and education, and would allow people to file lawsuit for data misuse. And then there’s the question of how the AI community itself will respond to Google’s alleged reversal and slow or superficial progress on diversity. Will people insist on more diversity in AI or chalk this up, like example after example of algorithmic bias that leaches trust from the industry, as sad and unfortunate and wrong, but do nothing? The question of whether to speak up or do nothing was raised recently by Soul of America author and historian Jon Meacham. In a conversation with Kara Swisher, Meacham, who’s host of a new podcast called “Hope, Through History,” said the story of the United States is not a “nostalgic fairy tale” and never was. We’re a nation of perennial struggles with a history that includes struggle against apartheid-like systems of power. He says the change wrought by events like the civil rights movement came not from when the powerful decided to do something, but when the powerless convinced the powerful to do the right thing. In other words, the arc of the moral universe “doesn’t bend toward justice if there aren’t people insisting that it swerve towards justice,” he said. The future The United States is a diverse country that U.S. Census estimates say will have no racial majority in the coming decades , and that’s already true in many cities. United Nations estimates say Africa will be the youngest continent on Earth for decades to come and will account for most global population growth until 2050. Building for the future quite literally means building and investing with diversity in mind. We should all want to avoid finding out what happens when systems known to work best for white men are implemented in a world where the majority of people are not white men. Tech is not alone. Education is also experiencing diversity challenges, and in journalism, newsrooms often fail to reflect the diversity of its audience. Basic back-of-the-envelope math says businesses that fail to recognize the value of diversity may suffer as the world continues to grow more diverse. AI that makes previously impossible things possible for people with disabilities, or that tackles borderless challenges like climate change and COVID-19, appeal to our humanity. Tools like the AI technology for sustainable global development that dozens of AI researchers released earlier this week appeal to our better angels. If sources speaking with NBC News under condition of anonymity are accurate, Google now has to decide whether to revisit diversity initiatives that bear results or carry on with business as usual. 
But even if it’s not today or in the immediate future, a failure to act could bring demands from an increasingly diverse base of consumers, or even give rise to a social movement. The notion of building a larger movement to demand progress on tech’s lack of diversity has come up before. In a talk at the Afrotech conference about the black tech ecosystem in the United States, Dr. Fallon Wilson spoke of the need for a black tech movement to confront the lack of progress toward diversity in tech. Wilson said such a movement could involve groups like the Algorithmic Justice League and draw inspiration from previous social movements in the United States, like the civil rights movement. If such a movement ever mounted boycotts like those of the civil rights movement in the 1960s, it could draw on a constituency of women and people of color that is projected to make up a majority of the population. Algorithmic discrimination today is pervasive, and to some it appears to be not just an acceptable outcome but the desired result. AI should be built with the next generation in mind. Government contracts sit at the intersection of all these issues, and making tools that work for everyone should be an incontrovertible matter of law. Policy that requires system audits and demands routine government surveillance reports should form the cornerstone of government applications of AI that interact with citizens or make decisions about people’s lives. To do otherwise risks a trust crisis. There’s a saying popular among political journalists that “All governments lie.” Just as governments are held accountable, unspeakably wealthy tech companies that seek to do business with governments should also have to show some receipts. Because whether it’s tomorrow or months or years from now, people are going to continue to demand progress. "
1,120
2,020
"Ruha Benjamin on deep learning: Computational depth without sociological depth is 'superficial learning’ | VentureBeat"
"https://venturebeat.com/2020/04/29/ruha-benjamin-on-deep-learning-computational-depth-without-sociological-depth-is-superficial-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ruha Benjamin on deep learning: Computational depth without sociological depth is ‘superficial learning’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Princeton University associate professor of African American Studies and Just Data Lab director Dr. Ruha Benjamin said engineers creating AI models should consider more than data sets when deploying systems. She further asserted that “computational depth without historic or sociological depth is superficial learning.” “An ahistoric and asocial approach to deep learning can capture and contain, can harm people. A historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition,” she said. In a talk that examined the tools needed to build just and humane AI systems, she warns that without such guiding principles, people in the machine learning community can become like IBM workers who participated in the Holocaust during World War II — technologists involved in automated human destruction hidden within bureaucratic technical operations. Alongside deep learning pioneer Yoshua Bengio , Benjamin was a keynote speaker this week at the all-digital International Conference on Learning Representations (ICLR), an annual machine learning conference. ICLR was originally scheduled to take place in Addis Ababa, Ethiopia this year to engage the African ML community. But due to the pandemic, ICLR became a digital conference with keynote speakers, poster sessions, and even social events happening entirely online. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Harmful algorithmic bias has proven to be fairly pervasive in AI. Recent examples include ongoing racial disparity in facial recognition performance identified by federal tech standards maker NIST late last year, but researchers have also found bias in top-performing pretrained language models , object detection , automatic voice AI , and home lending. Benjamin also referenced instances of bias in health care , personal lending, and job hiring processes but said AI makers’ recognition of historical and sociological contexts can lead to more just and humane AI systems. 
“If it is the case that inequity and injustice [are] woven into the very fabric of our societies, then that means each twist, coil, and code is a chance for us to weave new patterns, practices, and politics. The vastness of the problem will be its undoing once we accept that we are pattern makers,” she said. Benjamin explored themes from her book Race After Technology , which urges people to consider imagining a tool for counteracting power imbalances and examines issues like algorithmic colonialism and anti-blackness embedded in AI systems, as well as the overall role of power in AI. Benjamin also returned to her assertion that imagination is a powerful resource for people who feel disempowered by the status quo and for AI makers whose systems will either empower or oppress. “We should acknowledge that most people are forced to live inside someone else’s imagination, and one of the things we have to come to grips with is how the nightmares that many people are forced to endure are really the underside of elite fantasies about efficiency, profit, safety, and social control,” she said. “Racism, among other axes of domination, helps to produce this fragmented imagination, so we have misery for some and monopoly for others.” Answering questions Tuesday in a live conversation with members of the machine learning community, Benjamin said her next book and work at the Just Data Lab will focus on matters related to race and tech during the COVID-19 global pandemic. Among recent examples at the intersection of these issues, Benjamin points to the Department of Justice’s use of a PATTERN algorithm to reduce prison populations during the pandemic. An analysis found that the algorithm is more than 4 times as likely to label white inmates low risk as black inmates. Benjamin’s keynote comes as companies’ attempts to address algorithmic bias have drawn accusations of ethics washing , similar to criticism leveled at the lack of progress on diversity in tech over the better part of the last decade. When asked about opportunities ahead, Benjamin said it’s important that organizations maintain ongoing conversations around diversity and pay more than lip service to these issues. “One area that I think is really crucial to understand[ing] the importance of diversity is in the very problems that we set out to solve as tech practitioners,” she said. “I would encourage us not to think about it as cosmetic or downstream — where things have already been decided and then you want to bring in a few social scientists or you want to bring in a few people from marginalized communities. Thinking about it much earlier in the process is vital.” Recent efforts to put ethical principles into practice within the machine learning ethics community include a framework from Google AI ethics leaders for internal auditing and an approach to ethics checklists from principal researchers at Microsoft. Earlier this month, researchers from 30 major AI organizations — including Google and OpenAI — suggested creating a third-party AI auditing marketplace and “bias bounties ” to help put principles into practice. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
1,121
2,020
"Algorithmic Justice League protests bias in voice AI and media coverage | VentureBeat"
"https://venturebeat.com/2020/03/31/algorithmic-justice-league-protests-bias-voice-ai-and-media-coverage"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Algorithmic Justice League protests bias in voice AI and media coverage Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A group of seven influential women studying algorithmic bias, AI, and technology have released a spoken word piece called “ Voicing Erasure. ” The project highlights racial bias in the speech recognition systems made by tech giants and recognizes the overlooked contributions of female scholars and researchers in the field. A report titled “Racial disparities in automated speech recognition” was also published roughly a week ago. The authors found that automatic speech recognition systems for Apple, Amazon, Google, IBM, and Microsoft collectively achieve word error rates of 35% for African-American voices versus 19% for white voices. Automatic speech recognition systems from these tech giants can do things like transcribe speech-to-text and power AI assistants like Alexa, Cortana, and Siri. The Voicing Erasure project is a product of the Algorithmic Justice League , a group created by Joy Buolamwini. Participants in the computer science art piece include former White House CTO Megan Smith; Race After Technology author Ruha Benjamin; Design Justice author Sasha Costanza-Chock; and Kimberlé Crenshaw, a professor of law at Columbia Law School and UCLA. “We cannot let the promise of AI overshadow real and present harms,” Benjamin said in the piece. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In 2018 and 2019, Buolamwini and collaborators carried out audits of facial recognition bias that are frequently cited by lawmakers and activists. The team’s findings are recognized as central to understanding race and gender disparities in the performance of facial recognition systems from tech giants like Amazon and Microsoft. Buolamwini was also part of the Coded Bias documentary, which premiered at the Sundance Film Festival earlier this year, and “ AI, Ain’t I A Woman?,” a play on an 1851 Sojourner Truth speech with a similar name. Additional audits are in the works, Buolamwini told VentureBeat, but the performance piece was made to underscore racial disparities we already know exist in automated speech recognition. The Voicing Erasure project also highlights the ways voice assistants often reinforce gender stereotypes. 
In an effort to roll back some of that gendered bias, most major assistants today offer both masculine and feminine voice options, with the exception of Amazon’s Alexa. The poetic protest also recognizes the sexism female researchers can encounter in the field, pointing to a New York Times article about the bias report that cites multiple male authors but fails to recognize lead author Allison Koenecke, who appears in Voicing Erasure. Algorithms of Oppression author Dr. Safiya Noble, who has also been critical of tech journalists, participated in the spoken word project. “Racial disparities in automated speech recognition” was published in the Proceedings of the National Academy of Sciences by a team of 10 researchers from Stanford University and Georgetown University. They found that Microsoft’s automatic speech assistant tech performed best, while Apple and Google ranked worst. Above: Stanford Computational Policy Lab Each conversational AI system transcribed a total of 42 white speakers and 73 African-American speakers from data sets with nearly 20 hours of voice recordings. Researchers focused on voice data from Humboldt County and Sacramento, California, drawing on data sets with African-American Vernacular English (AAVE), like Voices of California and the Corpus of Regional African American Language (CORAAL). The authors said these discrepancies likely derive from speech recognition systems using insufficient audio data from African-American speakers during training. They said the error rates also highlight the need for speech recognition system makers, academics, and governments sponsoring research to invest in inclusivity. “Such an effort, we believe, should entail not only better collection of data on AAVE speech but also better collection of data on other nonstandard varieties of English, whose speakers may similarly be burdened by poor ASR performance — including those with regional and nonnative-English accents,” the report reads. “We also believe developers of speech recognition tools in industry and academia should regularly assess and publicly report their progress along this dimension.” In statements following the release of the study , Google and IBM Watson pledged to do more to correct this type of bias. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,122
2,019
"DeepMind’s AI has now outcompeted nearly all human players at StarCraft II | MIT Technology Review"
"https://www.technologyreview.com/s/614650/ai-deepmind-outcompeted-most-players-at-starcraft-ii"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts DeepMind’s AI has now outcompeted nearly all human players at StarCraft II By Karen Hao archive page AlphaStar (Zerg, in red) defending an early aggression where the opponent built part of the base near AlphaStar's base. courtesy of DeepMind In January of this year, DeepMind announced it had hit a milestone in its quest for artificial general intelligence. It had designed an AI system, called AlphaStar, that beat two professional players at StarCraft II, a popular video game about galactic warfare. This was quite a feat. StarCaft II is highly complex, with 10 26 choices for every move. It’s also a game of imperfect information—and there are no definitive strategies for winning. The achievement marked a new level of machine intelligence. Now DeepMind, an Alphabet subsidiary, is releasing an update. AlphaStar now outranks the vast majority of active StarCraft players, demonstrating a much more robust and repeatable ability to strategize on the fly than before. The results, published in Nature today, could have important implications for applications ranging from machine translation to digital assistants or even military planning. StarCraft II is a real-time strategy game, most often played one on one. A player must choose one of three human or alien races—Protoss, Terran, or Zerg—and alternate between gathering resources, building infrastructure and weapons, and attacking the opponent to win the game. Every race has unique skill sets and limitations that affect the winning strategy, so players commonly pick and master playing with one. AlphaStar used reinforcement learning , where an algorithm learns through trial and error, to master playing with all the races. “This is really important because it means that the same type of methods can in principle be applied to other domains,” said David Silver, DeepMind’s principal research scientist, on a press call. The AI also reached a rank above 99.8% of the active players in the official online league. In order to attain such flexibility, the DeepMind team modified a commonly used technique known as self-play, in which a reinforcement-learning algorithm plays against itself to learn faster. DeepMind famously used this technique to train AlphaGo Zero , the program that taught itself without any human input to beat the best players in the ancient game of Go. The lab also used it in the preliminary version of AlphaStar. Conventionally in self-play, both versions of the algorithm are programmed to maximize their chances of winning. But the researchers discovered that that didn’t necessarily result in the most robust algorithms. For such an open-ended game, it risked pigeon-holing the algorithm into specific strategies that would only work under certain conditions. Taking inspiration from the way pro StarCraft II players train with one another, the researchers instead programmed one of the algorithms to expose the flaws of the other rather than maximize its own chance of winning. “That’s kind of [like] asking a friend to play against you,” said Oriol Vinyals, the lead researcher on the project, on the call. “These friends should show you what your weaknesses are, so then eventually you can become stronger.” The method produced much more generalizable algorithms that could adapt to a broader range of game scenarios. The researchers believe AlphaStar’s strategy development and coordination skills could be applied to many other problems. “We chose StarCraft [...] 
because we felt it mirrored a lot of challenges that actually come up in real-world applications,” said Silver. These applications could include digital assistants, self-driving cars, or other machines that have to interact with humans, he said. “The complexity [of StarCraft] is much more reminiscent of the scales that we’re seeing in the real world,” said Silver. But AlphaStar demonstrates AI’s significant limitations, too. For example, it still needs orders of magnitude more training data than a human player to attain the same level of skill. Such learning software is also still a long way off from being translated into sophisticated robotics or real-world applications. To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It's free. hide by Karen Hao Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Artificial intelligence This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. By Melissa Heikkilä archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products. By Zeyi Yang archive page Driving companywide efficiencies with AI Advanced AI and ML capabilities revolutionize how administrative and operations tasks are done. By MIT Technology Review Insights archive page Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work. By Will Douglas Heaven archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more. Enter your email Thank you for submitting your email! It looks like something went wrong. We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive. The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window "
1,123
2,022
"The wrong data privacy strategy could cost you billions | VentureBeat"
"https://venturebeat.com/2022/02/02/the-wrong-data-privacy-strategy-could-cost-you-billions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The wrong data privacy strategy could cost you billions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Tianhui Michael Li, founder of The Data Incubator , and Maxime Agostini, cofounder and CEO of Sarus. Why differential privacy overcomes many of the fatal flaws of de-identification and data masking If data is the new oil , then privacy is the new environmentalism. With the growing use of data comes the need for strong privacy protections. Indeed, robust data privacy protects against rogue employees spying on users (like these at Uber or Google ) or even well-meaning employees who have been hacked , as well as when data is shared between departments or companies. Unfortunately, the conventional approaches to protecting the privacy of shared data are fundamentally flawed and have encountered several high-profile failures. With enhanced regulatory scrutiny from Europe’s GDPR, California’s CCPA, and China ’s PIPL , failures can cost companies millions in fines. In response, companies have focused on the bandaid solution — such as risk assessments and data masking — performed by already overtapped compliance teams. These solutions are slow, burdensome, and often inaccurate. We make the case that companies should use differential privacy , which is fast becoming the gold standard for protecting personal data and has been implemented in privacy-sensitive applications by industry leaders like Google, Apple, and Microsoft. Differential privacy is now emerging as not only the more secure solution but one that is lighter-weight and can enable safe corporate data collaboration. Companies are embracing differential privacy as they look to capture the $3 trillion of value Mckinsey estimates will be generated by data collaboration. Data masking is vulnerable to attackers with side information The common industry solution, data masking, sometimes called de-identification, leaves companies vulnerable to privacy breaches and regulatory fines. At its simplest form, it aims to make data records anonymous by removing all personally identifiable information (PII) , or anything that is sufficient to identify a single individual. Such identifiers can be obvious (name, email, phone number, social security number) or less so (IP, internal ID, date of birth, or any unique combinations of the above). For example, in medical data, HIPAA compliance proposes a list of 18 identifiers that need to be removed to qualify for safe harbor compliance. 
There is no shortage of masking techniques, such as deletion, substitution, perturbation, hashing, shuffling, redaction, etc. All come with their specific parameterization to make it harder to re-identify an individual. But while data masking is a first attempt at anonymization, it does not make data sets anonymous. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In 1996, Massachusetts released the hospital records of its state employees in a noble attempt to foster research on improving healthcare and controlling costs. The governor at the time, William Weld, assured the public that their medical records would be safe and the state had taken pains to de-identify their dataset by removing critical PII. Little did he know that MIT graduate student Latanya Sweeney took on the challenge of re-identifying the data. By purchasing voter roll data, she was able to learn the governor’s birth date and zip code, which, when combined with his sex, uniquely identified his hospital visit in the dataset. In a final theatrical flourish, she even mailed Governor Weld’s health care records to his office. This famous case is a reminder that, as long as there is something potentially unique left in the de-identified record, someone with the right “side information” may use that as a way to carry out a re-identification attack. Indeed, even just sharing simple aggregates — like sums and averages — can be enough to re-identify users given the right side information. Data masking is slow, manual, and burdens already-overtapped compliance teams Regulators have long understood that de-identification is not a silver bullet due to re-identification with side information. When regulators defined anonymous or de-identified information, they refrained from giving a precise definition and deliberately opted for a practical one based on the reasonable risks of someone being re-identified. GDPR mentions “ all the means reasonably likely to be used ” whereas CCPA defines de-identified to be “ information that cannot reasonably identify ” an individual. The ambiguity of both definitions leaves places the burden of privacy risk assessment onto the compliance team. For each supposedly de-identified dataset, they need to prove that the re-identification risk is not reasonable. To meet those standards and keep up with proliferating data sharing, organizations have had to beef up their compliance teams. This appears to have been the process that Netflix followed when they launched a million-dollar prize to improve its movie recommendation engine in 2006. They publicly released a stripped-down version of their dataset with 500,000 movie reviews, enabling anyone in the world to develop and test prediction engines that could beat theirs. The company appears to have deemed the risk of re-identification based on user film ratings negligible. Nonetheless, researchers from UT Austin were able to leverage user ratings of movies as a “fingerprint” to tie a user’s private Netflix reviews to their public IMDB reviews. The IMDB accounts sometimes had real user names while the corresponding Netflix accounts often had extra movie reviews not in the public IMDB accounts. Some of these extra reviews revealed apparent political affiliations, religious beliefs, sexual preferences, and other potentially sensitive information. As a result, Netflix ended up settling a privacy lawsuit for an undisclosed amount. 
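The Weld and Netflix episodes follow the same pattern: join a "de-identified" release to public side information on whatever quasi-identifiers survive masking. The toy sketch below makes that pattern explicit; the records, names, and field choices are invented for illustration and are not the actual datasets involved.

```python
# Toy linkage attack: re-identify masked rows by joining on quasi-identifiers
# that also appear in a public dataset. All records below are invented.
QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

masked_hospital_rows = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "M", "diagnosis": "record-123"},
    {"zip": "02139", "birth_date": "1980-01-02", "sex": "F", "diagnosis": "record-456"},
]

public_voter_roll = [
    {"name": "A. Voter", "zip": "02138", "birth_date": "1945-07-31", "sex": "M"},
]

def reidentify(masked_rows, side_information):
    hits = []
    for person in side_information:
        key = tuple(person[q] for q in QUASI_IDENTIFIERS)
        candidates = [row for row in masked_rows
                      if tuple(row[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique combination is enough to re-identify
            hits.append((person["name"], candidates[0]["diagnosis"]))
    return hits

print(reidentify(masked_hospital_rows, public_voter_roll))
# [('A. Voter', 'record-123')]
```

The attacker never needs the masked identifiers at all; a combination of ordinary-looking attributes does the work.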
Data masking strategies can always be adjusted in an attempt to meet the growing pressure to protect privacy but their intrinsic limitations mean they will never fully meet expectations. While Governor Weld’s re-identification may seem obvious in retrospect, the Netflix re-identification case highlights how side information can be difficult to anticipate, especially as users are increasingly prone to share previously private yet seemingly innocuous information on social media. Accurate risk assessments for privacy attacks are an unrealistic ask for compliance teams; they are perilous at best and futile at worst. Nonetheless, organizations have responded with lengthier reviews and more stringent data masking requirements that sometimes amputated the business value of the resulting data. This manual approach to protecting privacy has led to a significant slowdown in data projects, high cost of compliance, significant data engineering load, and missed opportunities. Differential privacy to the rescue By studying the risk of re-identification more thoroughly, researchers were able to better articulate the fundamental requirements for information to be anonymous. They realized that a robust definition of anonymous should not rely on what side information may be available to an attacker. This led to the definition of Differential Privacy in 2006 by Cynthia Dwork , then a researcher at Microsoft. It quickly became the gold standard for privacy and has been used in global technology products like Chrome , the iPhone , and Linkedin. Even the US Census used it for the 2020 census. Differential privacy solves the problem of side information by looking at the most powerful attacker possible: an attacker who knows everything about everyone in a population except for a single individual. Let’s call her Alice. When releasing information to such an attacker, how can you protect Alice’s privacy? If you release exact aggregate information for the whole population (e.g., the average age of the population), the attacker can compute the difference between what you shared and the expected value of the aggregate with everyone but Alice. You just revealed something personal about Alice. The only way out is to not share the exact aggregate information but add a bit of random noise to it and only share the slightly noisy aggregate information. Even for the most well-informed of attackers, differential privacy makes it impossible to deduce what value Alice contributed. Also, note that we have talked about simple insights like aggregations and averages but the same possibilities for re-identification apply to more sophisticated insights like machine learning or AI models, and the same differential privacy techniques can be used to protect privacy by adding noise when training models. Now, we have the right tools to find the optimal tradeoff: adding more noise makes it harder for a would-be attacker to re-identify Alice’s information, but at a greater loss of data fidelity for the data analyst. Fortunately, in practice, there is a natural alignment between differential privacy and statistical significance. After all, an insight that is not differentially private means it depends too much on just one individual, but in that case, it is not statistically significant either. Used properly, differential privacy should not get in the way of statistically significant insights, and neither differential privacy nor statistical significance are typically of concern at “big data” scales. 
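A minimal sketch of the mechanism described above, releasing a noisy average rather than the exact one, is shown below. It uses the standard Laplace mechanism with noise scaled to the query's sensitivity; the epsilon value, clipping bounds, and function name are illustrative choices, not recommendations or any production differential privacy library.

```python
# Minimal Laplace-mechanism sketch: release a differentially private mean.
# Clipping bounds each person's contribution; noise scale = sensitivity / epsilon.
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(clipped)
    sensitivity = (upper - lower) / n        # max shift from changing one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.random.randint(18, 90, size=10_000)
print("exact mean:", ages.mean())
print("noisy mean:", dp_mean(ages, lower=18, upper=90, epsilon=0.5))
# With 10,000 records the added noise is tiny; with 10 records it would dominate,
# which mirrors the alignment with statistical significance noted above.
```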
Differential privacy provides guarantees around the worst-case effectiveness of even the most powerful attacker. With differential privacy, producing privacy-preserving analytics or machine learning models calls for a new way of interacting with personal data. The traditional approach was to run data through a data-masking pipeline before providing the altered data to the data analyst. With differential privacy, no data (whether masked or not) is sent to an analyst. Instead, an analyst submits queries and a system runs those on the data and adds appropriate noise. This paradigm works for both business analytics and machine learning use cases. It also fits very well with the modern data infrastructures where the data is often stored and processed on distributed systems with data practitioners working remotely. Differential privacy doesn’t just better protect user privacy, but it can do so automatically for new datasets without lengthy, burdensome privacy risk assessments. This is critical for companies looking to stay nimble as they capture part of what McKinsey estimates is $3 trillion dollars of value generated by data collaboration. Traditional data compliance team committees are costly, might take months to deliberate on a single case, and make fallible pronouncements about privacy. Additionally, each dataset and data project calls for a bespoke data masking strategy and ad-hoc anonymization pipeline, adding yet another burden to stretched data engineering resources. In some cases, compliance may even forbid sharing of data if no viable masking technique is known. With differential privacy, we can let the math and computers algorithmically determine how much noise needs to be added to meet the protection standards, cheaply, quickly, and reliably. Much as new distributed computing frameworks like Hadoop and Spark made it easy to scale data and computation, differential privacy is making it easier to scale privacy protection and data governance. To achieve anonymization, organizations have long relied on applying various data masking techniques to de-identify data. As the anecdotes about Massachusetts Governor Weld and Netflix have shown, and privacy research has proven, as long as there is exact information left in the data, one may use it to carry out re-identification attacks. Differential privacy is the modern, secure, mathematically rigorous, and practical way to protect user privacy at scale. Maxime Agostini is the cofounder and CEO of Sarus , a privacy company that lets organizations leverage confidential data for analytics and machine learning. Prior to Sarus, he was cofounder and CEO of AlephD, a marketing tech company that he led until a successful acquisition by Verizon Media. Tianhui Michael Li is the founder of The Data Incubator , an eight-week fellowship to help Ph.D.s and postdocs transition from academia into industry. It was acquired by Pragmatic Institute. Previously, he headed monetization data science at Foursquare and has worked at Google, Andreessen Horowitz, J.P. Morgan, and D.E. Shaw. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
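Tying back to the interaction model described earlier in this piece, where the analyst submits queries and a system answers them with calibrated noise rather than handing over data, here is a hypothetical sketch of what such a gatekeeper might look like, with an explicit privacy budget. The class and method names are invented and do not correspond to Sarus's product or any specific library.

```python
# Hypothetical gatekeeper: analysts query it; it adds Laplace noise and tracks
# a total privacy budget so repeated queries cannot erode the protection.
import numpy as np

class PrivateDataset:
    def __init__(self, values, lower, upper, total_epsilon):
        self._values = np.clip(np.asarray(values, dtype=float), lower, upper)
        self._bounds = (lower, upper)
        self._budget = total_epsilon   # spent down as queries are answered

    def noisy_mean(self, epsilon):
        if epsilon > self._budget:
            raise RuntimeError("privacy budget exhausted")
        self._budget -= epsilon
        lower, upper = self._bounds
        sensitivity = (upper - lower) / len(self._values)
        return float(self._values.mean() + np.random.laplace(0.0, sensitivity / epsilon))

ds = PrivateDataset([23, 35, 47, 52, 61], lower=18, upper=90, total_epsilon=1.0)
print(ds.noisy_mean(epsilon=0.5))
# A further query asking for epsilon=0.6 would be refused: only 0.5 of the budget remains.
```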
"
1,124
2,022
"AI Weekly: AI supercomputers and facial recognition to verify taxpayers' identities | VentureBeat"
"https://venturebeat.com/2022/01/28/ai-weekly-ai-supercomputers-and-facial-recognition-to-verify-taxpayers-identities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: AI supercomputers and facial recognition to verify taxpayers’ identities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Supercomputers and facial recognition dominated the headlines this week in AI — but not necessarily in equal measure. Meta, the company formerly known as Facebook, announced it’s building a server cluster for AI research that it claims will be among the fastest of its kind. Meanwhile, the IRS quietly implemented a new program with a vendor, ID.me, that controversially uses facial recognition technology to verify the identity of taxpayers. Meta’s new “AI supercomputer” — called AI Research SuperCluster (RSC) — is impressive, to be sure. Work on it began a year and a half ago, with phase one reaching the operational stage within the past few weeks. Currently, RSC features 760 Nvidia GGX A100 systems containing 6,080 connected GPUs as well as custom cooling, power, networking, and cabling systems. Phase two will be completed by 2022, bringing RSC up to 16,000 total GPUs and the capacity to train AI systems “on datasets as large as an exabyte.” Meta says RSC will be applied to training a range of systems across Meta’s businesses, including content moderation algorithms, augmented reality features, and experiences for the metaverse. But the company hasn’t announced plans to make RSC’s capabilities public, which many experts say highlight the resource inequalities in the AI industry. “I think it’s important to remember that Meta spends money on big expensive spectacles because money is their strength — they can outspend people and get the big results, the big headlines they want that way,” Mike Cook, an AI researcher at Queen Mary University in London, told VentureBeat via email. “I absolutely hope Meta manages to do something interesting with this and we all get to benefit, but it’s really important that we put this in context — private labs like [Meta’s] redefine progress along these narrow lines that they excel at, so that they can position themselves as leaders.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Large corporations dominate the list of “AI supercomputers,” unsurprisingly, given the costs involved in building such systems. Microsoft two years ago announced that it created a 10,000-GPU AI supercomputer running on its Azure platform with research lab OpenAI. 
Nvidia has its own in-house supercomputer, Selene, that it uses for AI research including training natural language and computer vision models. Os Keyes, an AI ethicist at the University of Washington, characterized the trend as “worrying.” Keyes says that the direction of larger and more expensive AI compute infrastructure wrongly rewards “scale and hegemony,” while locking in “monolithic organizational forms” as the logical or efficient way of doing things. “It says some interesting things about Meta — about where it’s choosing to focus efforts,” Keyes said. “That Meta’s direction of investment is in algorithmic systems demonstrates exactly how hard they’ve pinned themselves to ‘technosolutionism’ … It’s change driven by what impresses shareholders and what impresses the ‘California ideology,’ and that isn’t change at all.” Aiden Gomez, the CEO of Cohere, a startup developing large language models for a range of use cases, called RSC a “major accomplishment.” But he stressed that it’s “another piece of evidence that only the largest organizations are able to develop upon and benefit from this technology.” While language models in particular have become more accessible in recent years, thanks to efforts like Hugging Face’s BigScience and EleutherAI , cutting-edge AI systems remain expensive to train and deploy. For example, training language models like Nvidia’s and Microsoft’s Megatron 530B can cost up to millions of dollars — not accounting for storage expenses. Inference — actually running the trained model — is another barrier. One estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year. “The big push for us at Cohere is changing this and broadening access to the outputs of powerful supercomputer advances – large language models – through an affordable platform,” Gomez said. “Ultimately, we want to avoid the extremely resource-intensive situation where everyone needs to build their own supercomputer in order to get access to high quality AI.” Facial recognition for taxes In other news, the IRS this year announced that it’s contracting with ID.me, a Virgnia-based facial recognition company, to verify taxpayers’ identities online. As reported by Gizmodo, users with an IRS.gov account will need to provide a government ID, a selfie, and copies of their bills starting this summer perform certain tasks, like getting a transcript online (but not to e-file taxes). The IRS pitches the new measures as a way to “protect the security of taxpayers.” But ID.me has a problematic history, as evidenced by complaints from residents in the roughly 30 states that contracted with the company for unemployment benefit verification. In New York, News10NBC detailed accounts of residents struggling to navigate through ID.me’s system, including one woman who claimed she’d waited 19 weeks for her benefits. Some have suggested that people of color are more likely to be misidentified by the system — which wouldn’t be surprising or unprecedented. Gender and racial prejudices are a well – documented phenomenon in facial analysis algorithms , attributable to imbalances in the datasets used to train the algorithms. In a 2020 study , researchers showed that algorithms could even become biased toward facial expressions, like smiling, or different outfits — which might reduce their recognition accuracy. Worryingly, ID.me hasn’t been fully honest about its technology’s capabilities. 
Contrary to some of ID.me’s public statements, the company matches faces against a large database — a practice that privacy advocates fear poses a security risk and could lead to “mission creep” from government agencies. “This dramatically expands the risk of racial and gender bias on the platform,” Surveillance Technology Oversight Project executive director Albert Fox Cahn told Gizmodo. “More fundamentally, we have to ask why Americans should trust this company with our data if they are not honest about how our data is used. The IRS shouldn’t be giving any company this much power to decide how our biometric data is stored.” For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,125
2,022
"Meta is developing a record-breaking supercomputer to power the metaverse | VentureBeat"
"https://venturebeat.com/2022/01/24/meta-is-developing-a-record-breaking-supercomputer-to-power-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta is developing a record-breaking supercomputer to power the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Following Meta’s (formerly Facebook) October announcement that it’s pushing to stake its claim on the metaverse , the company today announced that it has developed the AI Research SuperCluster (RSC), which it claims is among the fastest AI supercomputers running today. Once it is fully built, Meta says it will be the fastest operating supercomputer — the company is aiming to complete it by the middle of this year. CEO Mark Zuckerberg noted that the experiences the company is building for the metaverse require enormous compute power — reaching into quintillions of operations per second. The RSC will enable new AI models to learn from trillions of examples, understand hundreds of languages, and more. Data storage company Pure Storage and chip-maker Nvidia are part of the supercluster that Facebook has built. Particularly, Nvidia has been a key player supporting the metaverse, with its omniverse product billed as “ metaverse for engineers. ” After full deployment, Meta’s RSC will be the largest customer installation of Nvidia DGX A100 systems, said Nvidia in its press release today. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Rob Lee, CTO at Pure Storage, told VentureBeat via email that the RSC is significant to other companies outside Meta because the technologies (such as AI and AR/VR) powering the metaverse are more broadly applicable and in-demand in industries across the board. According to Lee, technical decision makers are always looking to learn from bleeding-edge practitioners, and the RSC provides great validation of the core components that are powering the world’s largest AI supercomputer. “Meta’s world-class team saw the value of pairing the performance, density and simplicity of Pure Storage products to power Nvidia GPUs created for this groundbreaking work pushing the boundaries of performance and scale,” said Lee. He added that enterprises of all sizes will be able to benefit from Meta’s work, expertise, and learnings in advancing how they pursue their data, analytics, and AI strategies. Scale is becoming a big deal In a blog released today, Meta claims that AI supercomputing is needed at scale. 
According to Meta, realizing the benefits of self-supervised learning and transformer-based models requires various domains — whether vision, speech, language, or for critical applications like identifying harmful content. AI at Meta’s scale will require massively powerful computing solutions capable of instantly analyzing ever-increasing amounts of data. Meta’s RSC is a breakthrough in supercomputing that will lead to new technologies and customer experiences enabled by AI, said Lee. “Scale is important here in multiple ways,” said Lee. He noted that firstly, Meta processes a tremendous amount of information on a continual basis, and so there’s a certain amount of scale in data processing performance and capacity that requires. “Secondly, AI projects depend on large volumes of data — with more varied and complete data sets providing better results. Thirdly, all of this infrastructure has to be managed at the end of the day, and so space and power efficiency and simplicity of management at scale is critical as well. Each of these elements is equally important, whether in a more traditional enterprise project or operating at Meta’s scale,” Lee said. Tackling the security and privacy issues that come with supercomputing Over the past few years, Meta has received several backlashes on its privacy and data policies, with the Federal Trade Commission (FTC) announcing it was investigating substantial concerns on Facebook’s privacy practices in 2018. Meta wants to tackle security and privacy issues from the get-go, stating that the company safeguards data in RSC by designing RSC from the ground up with privacy and security in mind. Meta claims this will enable its researchers to safely train models using encrypted user-generated data that is not decrypted until right before training. “For example, RSC is isolated from the larger internet, with no direct inbound or outbound connections, and traffic can flow only from Meta’s production data centers. To meet our privacy and security requirements, the entire data path from our storage systems to the GPUs is end-to-end encrypted and has the necessary tools and processes to verify that these requirements are met at all times.” said the company blog. Meta explains that data must go through a privacy review process to confirm it has been correctly anonymized before it is then imported into the RSC. The company also claims that the data is also encrypted before it can be used to train AI models, and decryption keys are deleted regularly to ensure old data is no longer accessible. To build this supercomputer, Nvidia provided the compute layer — including the Nvidia DGX A100 systems as its compute nodes. The GPUs communicate via an Nvidia Quantum 200 Gbps InfiniBand two-level Clos fabric. Lee noted that contributions from Penguin Computing hardware and software are “the glue” that unite Penguin, Nvidia, and Pure Storage. Together, these three partners were crucial to providing Meta with a massive supercomputing solution. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
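The data-handling flow Meta describes (anonymization review, encryption before import, decryption only immediately before training, and regular deletion of decryption keys) resembles a standard envelope-encryption pattern. Below is a generic, hedged sketch of that pattern using the open source `cryptography` package; it illustrates the general approach only and is not Meta's actual RSC pipeline, tooling, or key-management system.

```python
# Generic sketch of "encrypted until just before training" with disposable keys.
# Illustrative only; not Meta's implementation.
from cryptography.fernet import Fernet

def import_dataset(anonymized_bytes: bytes):
    """Encrypt an already-reviewed, anonymized dataset for storage in the cluster."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(anonymized_bytes)
    return ciphertext, key          # in practice the key would live in a key-management service

def train_on(ciphertext: bytes, key: bytes) -> int:
    plaintext = Fernet(key).decrypt(ciphertext)   # decrypted only at training time
    # ... hand `plaintext` to the training job here ...
    return len(plaintext)

ciphertext, key = import_dataset(b"anonymized, privacy-reviewed training records")
train_on(ciphertext, key)
del key   # deleting (or rotating) keys on a schedule makes old ciphertext unreadable
```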
"
1,126
2,021
"Mark Zuckerberg’s metaverse: What it means for the enterprise | VentureBeat"
"https://venturebeat.com/2021/08/07/mark-zuckerbergs-metaverse-what-it-means-for-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Mark Zuckerberg’s metaverse: What it means for the enterprise Share on Facebook Share on X Share on LinkedIn The idea of the metaverse has been percolating in tech circles since Neal Stephenson coined the term in his 1992 novel Snow Crash. This was the year after the World Wide Web was open to the public, and before websites for general use became widely available. The idea is that many virtual spaces will converge with the internet to form one large virtual world where we can all conduct our lives digitally. Tim Sweeney of Epic has spoken about how gaming may evolve into the metaverse. And Microsoft CEO characterised Azure digital twins and IoT as a metaverse in his earnings call last week. Facebook’s Mark Zuckerberg seems to be the most passionate proponent of the metaverse, acknowledging in a recent interview in The Verge that “I’ve been thinking about some of this stuff since I was in middle school and just starting to code.” He stirred a lot of interest in late June when he announced to his staff than an overarching goal was to connect many of the company’s initiatives to help bring the metaverse to life. And he has outlined a number of factors that are paving the way: The concept of “presence” and being engaged with friends and colleagues more naturally with spatial references Virtual reality and the investment Facebook has made to enable broad VR adoption with the Quest 2 The promise of augmented reality and the ability to seamlessly overlay digital data and imagery on the physical world using comfortable headsets The importance of being able to access content across any hardware including PC, mobile, gaming consoles, and XR headsets, and portability across software platforms The limitations of interacting with one another through the ”little glowing rectangle” of a phone screen (and by implication the limitations imposed by Google and Apple who control the operating systems on mobile devices) The limitations of interacting with people by means of an endless sea of little rectangles on Zoom The value of visually sharing multiple images or data sets concurrently in a “digital whiteboard room” rather than one page of one document at a time on a Zoom call. How will Facebook’s metaverse initiatives impact the enterprise? Facebook’s business model will evolve toward the metaverse concept, and some of the stepping stone technologies that Facebook is investing in will change how businesses interact with their customers and how individuals on business teams work with each other. 
At some point, we may see a metaverse evolve that creates a new community with enhanced presence, a forum and marketplace for content creators of various stripes, and new forms of digital commerce. But it is unlikely that this metaverse will suddenly appear next year, causing a new “land grab” and disrupting the existing social network advertising model. Instead, enterprises have time to prepare for this scenario. However, some of the stepping stone elements of Facebook’s metaverse journey are here today and can be used to enhance existing enterprise product and service offerings and gain experience. A number of the factors Zuckerberg has highlighted hint at future metaverse scenarios that may be valuable to begin planning for. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Adopting existing metaverse building blocks The Quest 2 is a self-contained unit with very good visual performance, spatial sound, and excellent hand tracking at a remarkable $299 price point. The available content today is largely for gaming, but the headset is capable of supporting compelling enterprise applications as well. It provides an excellent viewing device for team-based VR scenario or procedure training. 360-degree video viewed through the Quest 2 provides the experience of presence for product demonstrations or museum displays. A Quest 2 headset can be sent to a client for less than the cost of a salesperson’s visit and used to demonstrate a product in a much more engaging way than a typical video or slide show. Immersive VR advertising will become important in the metaverse, but there is no reason not to experiment with it now. Zuckerberg is right that we were not meant to interact with each other via little rectangles on a mobile device or a Zoom screen. Facebook Horizon is an early effort to create “presence,” providing a VR space where you can interact with cartoon avatars of friends. Presumably it will evolve into a social space where small groups of professional colleagues or client teams can interact more naturally. In the meantime, prepare for the metaverse and adopt development platforms that allow content to be easily made available on PC, mobile, and XR headsets. New dev platforms make the process of creating of multiplatform 3D content more akin to creating a PowerPoint than to creating a AAA game or feature film. Some also provide the ability to develop applications collaboratively, with colleagues from multiple sites working simultaneously on the same editor screen. This recreates the experience of working together in a meeting room. It can be valuable to gain organizational experience in building digital avatars, and 3D content, and new tools for collaboration. Future metaverse building blocks Augmented reality glasses are a key priority for Zuckerberg. Apple, Microsoft, and several smaller firms are also working on the challenge of creating comfortable headgear that seamlessly overlays the digital world onto the physical one. Launch dates are still uncertain, but it is likely there will be significant steps forward in the next 24 months. There is a lot more opportunity with AR than “presence” during a conversation with a colleague. Spatial and workflow applications on future AR headsets will be important productivity tools for front-line workers. Hands-free gesture or voice control provides much more natural engagement. 
Using visual, acoustic, and other sensors coupled to AI for identifying patterns can allow the parallel digital world to incorporate the current situation in the physical world and make that available to team members irrespective of location. Gain experience with these technologies now. They can run on mobile devices and easily transition to AR headsets as those headsets become cost effective and comfortable. Community, creators, and digital commerce in the future metaverse Roblox and YouTube already provide examples of marketplaces for user-generated content. The future metaverse is expected to expand the role of creators to share and monetize content. Emerging platforms such as ToneStone bring user-generated content to new markets such as music creation. There is value in gaining experience in channelling such content. Digital commerce will evolve to incorporate more immersive experiences, AR and VR tools, and perhaps less dependence on specific platforms. Organization can gain experience with these trends now and be prepared to take advantage of metaverse opportunities as they evolve. A number of companies build critical tools for the graphics technology, gaming, computer aided design, application development, and geospatial industries who will likely play important roles in providing critical elements for the emerging metaverse. Facebook has nearly 3 billion monthly active users, Fortnite has 350 million active users, and Roblox and Minecraft have commanded 200 million active users. The scale of a potential metaverse as a more interactive, richly detailed evolution of the world wide web, has huge implications for how we keep in touch, play, and work. While Facebook would like to control the metaverse, Zuckerberg acknowledges that it is unlikely any single company will do so. Still, the company has provided some useful new devices, capabilities, and observations that are likely to shape the future metaverse. There is much to learn here. David Brebner is founder and CEO of Umajin , which creates SaaS apps for mobile, AR/VR, IoT, and AI. He previously founded apps company Fingertapps and user experience research company Unlimited Realities. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,127
2,020
"Facebook's improved AI isn't preventing harmful content from spreading | VentureBeat"
"https://venturebeat.com/2020/11/19/facebooks-improved-ai-isnt-preventing-harmful-content-from-spreading"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook’s improved AI isn’t preventing harmful content from spreading Share on Facebook Share on X Share on LinkedIn A woman looks at the Facebook logo on an iPad in this photo illustration. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook claims it’s becoming better at detecting — and removing — objectionable content from its platform, despite the fact that misleading, untrue, and otherwise harmful posts continue to make their way into millions of users’ feeds. During a briefing with reporters ahead of Facebook’s latest Community Standards Enforcement Report, which outlines the actions Facebook took between June and August to remove posts that violate its rules, the company said that it’s deployed new AI systems optimized to identify hate speech and misinformation uploaded to Instagram and Facebook before it’s reported by members of the community. Facebook’s continued investment in AI content-filtering technologies comes as reports suggest the company is failing to stem the spread of problematic photos, videos, and posts. Buzzfeed News this week reported that according to internal Facebook documents, labels being attached to misleading or false posts around the 2020 U.S. presidential election have had little to no impact on how the posts are being shared. Reuters recently found over three dozen pages and groups that featured discriminatory language about Rohingya refugees and undocumented migrants. In January, Seattle University associate professor Caitlin Carlson published results from an experiment in which she and a colleague collected more than 300 posts that appeared to violate Facebook’s hate speech rules and reported them via the service’s tools. According to the report, only about half of the posts were ultimately removed. In its defense, Facebook says that it now proactively detects 94.7% of hate speech it ultimately removes, the same percentage as Q2 2020 and up from 80.5% in all of 2019. It claims 22.1 million hate speech posts were taken down from Facebook and Instagram in Q3, of which 232,400 were appealed and 4,700 were restored. Facebook says it couldn’t always offer users the option to appeal decisions due to pandemic-related staffing shortages — Facebook’s moderators, roughly 15,000 of whom are contract employees, have encountered roadblocks while working from home related to the handling of sensitive data. But the company says that it gave people the ability to indicate they disagreed with decisions, which in some cases led to the overturning of takedowns. 
Above: Rule-violating Facebook content taken down proactively. To achieve the incremental performance gains and automatically place labels on 150 million pieces of content viewed from the U.S., Facebook says it launched an AI model architecture called Linformer , which is now used to analyze billions of Facebook and Instagram posts. With Linformer, which was made available in open source earlier this year, Facebook says the model’s computations increase at a linear rate, making it possible to use larger pieces of training text and theoretically achieve better content detection performance. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Also new is SimSearchNet++, an improved version of Facebook’s existing SimSearchNet computer vision algorithm that’s trained to match variations of an image with a degree of precision. Deployed as part of a photo indexing system that runs on user-uploaded images, Facebook says it’s resilient to manipulations such as crops, blurs, and screenshots and predictive of matching, allowing it to identify more matches while grouping collages of misinformation. For images containing text, moreover, the company claims that SimSearchNet++ can spot matches with “high” accuracy using optical character recognition. Beyond SimSearchNet++, Facebook says it’s developed algorithms to determine when two pieces of content convey the same meaning and that detect variations of content independent fact-checkers have already debunked. (It should be noted that Facebook has reportedly pressured at least a portion of its over 70 third-party international fact-checkers to change their rulings, potentially rendering the new algorithms less useful than they might be otherwise.) The approaches build on technologies including Facebook’s ObjectDNA, which focuses on specific objects within an image while ignoring distracting clutter. This allows the algorithms to find reproductions of a claim that incorporates pieces from an image that’s been flagged, even if the pictures seem different from each other. Facebook’s LASER cross-language sentence-level embedding, meanwhile, represents 93 languages across text and images in ways that enable the algorithms to evaluate the semantic similarity of sentences. To tackle disinformation, Facebook claims to have begun using a deepfake detection model trained on over 100,000 videos from a unique dataset commissioned for the Deepfake Detection Challenge , an open, collaborative initiative organized by Facebook and other corporations and academic institutions. When a new deepfake video is detected, Facebook taps multiple generative adversarial networks to create new, similar deepfake examples to serve as large-scale training data for its deepfake detection model. Facebook declined to disclose the accuracy rate of its deepfake detection model, but the early results of the Deepfake Detection challenge imply that deepfakes are a moving target. The top-performing model of over 35,000 from more than 2,000 participants achieved only 82.56% accuracy against the public dataset created for the task. Facebook also says it built and deployed a framework called Reinforcement Integrity Optimizer (RIO), which uses reinforcement learning to optimize the hate speech classifiers that review content uploaded to Facebook and Instagram. 
RIO, whose impact wasn’t reflected in the newest enforcement report because it was deployed during Q3 2020, guides AI models to learn directly from millions of pieces of content and uses metrics as reward signals to optimize models throughout development. As opposed to Facebook’s old classification systems, which were trained on fixed datasets and then deployed to production, RIO continuously evaluates how well it’s doing and attempts to learn and adapt to new scenarios, according to Facebook. Facebook points out that hate speech varies widely from region to region and group to group, and that it can evolve rapidly, drawing on current events and topics like elections. Users often try to disguise hate speech with sarcasm and slang, intentional misspellings, and photo alterations. The conspiracy movement known as QAnon infamously uses codenames and innocuous-sounding hashtags to hide their activities on Facebook and other social media platforms. A data sampler within RIO estimates the value of rule-violating and rule-following Facebook posts as training examples, deciding which ones will produce the most effective hate speech classifier models. Facebook says it’s working to deploy additional RIO modules, including a model optimizer that will enable engineers to write a customized search space of parameters and features; a “deep reinforced controller” that will generate candidate data sampling policies, features, and architectures; and hyperparameters and an enforcement and ranking system simulator to provide the right signals for candidates from the controller. “In typical AI-powered integrity systems, prediction and enforcement are two separate steps. An AI model predicts whether something is hate speech or an incitement to violence, and then a separate system determines whether to take an action, such as deleting it, demoting it, or sending it for review by a human expert … This approach has several significant drawbacks, [because] a system might be good at catching hate speech that reaches only very few people but fails to catch other content that is more widely distributed,” Facebook explains in a blog post. “With RIO, we don’t just have a better sampling of training data. Our system can focus directly on the bottom-line goal of protecting people from seeing this content.” There’s a limit to what AI can accomplish, however, particularly with respect to content like memes. When Facebook launched the Hateful Memes dataset, a benchmark made to assess the performance of models for removing hate speech, the most accurate algorithm — Visual BERT COCO — achieved 64.7% accuracy, while humans demonstrated 85% accuracy on the dataset. A New York University study published in July estimated that Facebook’s AI systems make about 300,000 content moderation mistakes per day, and problematic posts continue to slip through Facebook’s filters. In one Facebook group that was created this month and rapidly grew to nearly 400,000 people, members calling for a nationwide recount of the 2020 U.S. presidential election swapped unfounded accusations about alleged election fraud and state vote counts every few seconds. Countering this last assertion, Facebook says that during the lead-up to the U.S. elections, it removed more than 265,000 pieces of content from Facebook proper and Instagram for violating its voter interference policies. 
Moreover, the company claims that the prevalence of hate speech on its platform between July and September was as little as 0.10% to 0.11% equating to “10 to 11 views of hate speech for every 10,000 views of content.” (It’s important to note that the prevalence metric is based on a random sample of posts, measures the reach of content rather than pure post count, and hasn’t been evaluated by external sources.) Potential bias and other shortcomings in Facebook’s AI models and datasets threaten to further complicate matters. A recent NBC investigation revealed that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than those whose activity indicated they were white. And when Facebook had to send content moderators home and rely more on AI during quarantine, CEO Mark Zuckerberg said mistakes were inevitable because the system often fails to understand context. Technological challenges aside, groups have blamed Facebook’s inconsistent, unclear, and in some cases controversial content moderation policies for stumbles in taking down abusive posts. According to the Wall Street Journal , Facebook often fails to handle user reports swiftly and enforce its own rules, allowing material — including depictions and praise of “grisly violence” — to stand, perhaps because many of its moderators are physically distant. In one instance, 100 Facebook groups affiliated with QAnon grew at a combined pace of over 13,600 new followers a week this summer, according to a New York Times database. In another, Facebook failed to enforce a year-old “call to arms” policy prohibiting pages from encouraging people to bring weapons to intimidate, allowing Facebook users to organize an event at which two protesters were killed in Kenosha, Wisconsin. Zuckerberg himself allegedly said that former White House advisor Steve Bannon’s suggestion that Dr. Anthony Fauci and FBI Director Christopher Wray be beheaded was not enough of a violation of Facebook’s rules to permanently suspend him from the platform — even in light of Twitter’s decision to permanently suspend Bannon’s account. Civil rights groups including the Anti-Defamation League, the National Association for the Advancement of Colored People, and Color of Change also claim that Facebook fails to enforce its hate speech policies both in the U.S. and in regions of the world like India and Myanmar, where Facebook has been used to promote violence against and interment of minorities. The groups organized an advertising boycott in which over 1,000 companies reduced spending on social media advertising for a month. Last week, Facebook revealed that it now combines content identified by users and models into a single collection before filtering, ranking, deduplicating, and handing it off to its thousands of moderators. By using AI to prioritize potentially fraught posts for moderators to review, the idea is to delegate the removal of low-priority content to automatic systems. But a reliance on human moderation isn’t necessarily better than leaning heavily on AI. Lawyers involved in a $52 million settlement with Facebook’s content moderators earlier this year determined that as many as half of all Facebook moderators may develop mental health issues on the job attributable to exposure to graphic videos, hate speech, and other disturbing material. 
Just this week, more than 200 Facebook contractors said in an open letter that the company is making content moderators return to the office during the pandemic because its attempt to rely more heavily on automated systems has “failed.” The workers called on Facebook and its outsourcing partners including Accenture and CPL to improve safety and working conditions and offer hazard pay. They also want Facebook to hire all of its moderators directly, let those who live with high-risk people work from home indefinitely, and offer better health care and mental health support. In response to pressure from lawmakers , the FCC , and others, Facebook implemented rules this summer and fall aimed at tamping down on viral content that violates standards. Members and administrators belonging to groups removed for running afoul of its policies are temporarily unable to create any new groups. Facebook no longer includes any health-related groups in its recommendations, and QAnon is banned across all of the company’s platforms. The Facebook Oversight Board, an external group that will make decisions and influence precedents about what kind of content should and shouldn’t be allowed on Facebook’s platform, began reviewing content moderation cases in October. And Facebook agreed to provide mental health coaching to moderators as it rolls out changes to its moderation tools designed to reduce the impact of viewing harmful content. But it’s becoming increasingly evident that preventing the spread of harmful content on Facebook is an intractable problem — a problem worsened by the company’s purported political favoritism and reluctance to act on research suggesting its algorithms stoke polarization. For all its imperfections, AI could be a part of the solution, but it’ll take more than novel algorithms to reverse Facebook’s worrisome trend toward divisiveness. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,128
2,021
"Google details new AI accelerator chips | VentureBeat"
"https://venturebeat.com/2021/05/18/google-details-new-ai-accelerator-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google details new AI accelerator chips Share on Facebook Share on X Share on LinkedIn Tensor processing units (TPUs) in one of Google's data centers. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At Google I/O 2021, Google today formally announced its fourth-generation tensor processing units (TPUs) , which the company claims can complete AI and machine learning training workloads in close-to-record wall clock time. Google says that clusters of TPUv4s can surpass the capabilities of previous-generation TPUs on workloads including object detection, image classification, natural language processing, machine translation, and recommendation benchmarks. TPUv4 chips offers more than double the matrix multiplication TFLOPs of a third-generation TPU (TPUv3), where a single TFLOP is equivalent to 1 trillion floating-point operations per second. (Matrices are often used to represent the data that feeds into AI models.) It also offers a “significant” boost in memory bandwidth while benefiting from unspecified advances in interconnect technology. Google says that overall, at an identical scale of 64 chips and not accounting for improvement attributable to software, the TPUv4 demonstrates an average improvement of 2.7 times over TPUv3 performance. Google’s TPUs are application-specific integrated circuits (ASICs) developed specifically to accelerate AI. They’re liquid-cooled and designed to slot into server racks; deliver up to 100 petaflops of compute; and power Google products like Google Search, Google Photos, Google Translate, Google Assistant, Gmail, and Google Cloud AI APIs. Google announced the third generation in 2018 at its annual I/O developer conference and this morning took the wraps off the successor, which is in the research stages. Cutting-edge performance TPUv4 clusters — or “pods” — total 4,096 chips interconnected with 10 times the bandwidth of most other networking technologies, according to Google. This enables a TPUv4 pod to deliver more than an exaflop of compute, which is equivalent to about 10 million average laptop processors at peak performance VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “This is a historic milestone for us — previously to get an exaflop, you needed to build a custom supercomputer,” Google CEO Sundar Pichai said during a keynote address. 
“But we already have many of these deployed today and will soon have dozens of TPUv4 pods in our datacenters, many of which will be operating at or near 90% carbon-free energy.” This year's MLPerf results suggest Google's fourth-generation TPUs are nothing to scoff at. When tasked with training a BERT model on a large Wikipedia corpus, training took 1.82 minutes with 256 fourth-gen TPUs — slower than the 0.39 minutes it took with 4,096 third-gen TPUs, but achieved with one-sixteenth as many chips. Meanwhile, achieving a 0.81-minute training time with Nvidia hardware required 2,048 A100 cards and 512 AMD Epyc 7742 CPU cores. Google says that TPUv4 pods will be available to cloud customers starting later this year. "
1,129
2,020
"Amazon debuts Trainium, a custom chip for machine learning training in the cloud | VentureBeat"
"https://venturebeat.com/2020/12/01/amazon-debuts-trainium-a-custom-chip-for-machine-learning-training-workloads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon debuts Trainium, a custom chip for machine learning training in the cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today debuted AWS Trainium, a chip custom-designed to deliver what the company describes as cost-effective machine learning model training in the cloud. It comes ahead of the availability of new Habana Gaudi-based Amazon Elastic Compute Cloud (EC2) instances built specifically for machine learning training, powered by Intel’s new Habana Gaudi processors. “We know that we want to keep pushing the price performance on machine learning training, so we’re going to have to invest in our own chips,” AWS CEO Andy Jassy said during a keynote address at Amazon’s re:Invent conference this morning. “You have an unmatched array of instances in AWS, coupled with innovation in chips.” Amazon claims that Trainium will offer the most teraflops of any machine learning instance in the cloud, where a teraflop translates to a chip being able to process 1 trillion calculations a second. (Amazon is quoting 30% higher throughput and 45% lower cost-per-inference compared with the standard AWS GPU instances.) When Trainium becomes available to customers in the second half of 2021 as EC2 instances and in SageMaker , Amazon’s fully managed machine learning development platform, it will support popular frameworks including Google’s TensorFlow, Facebook’s PyTorch, and MxNet. Moreover, Amazon says it will use the same Neuron SDK as Inferentia , the company’s cloud-hosted chip for machine learning inference. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “While Inferentia addressed the cost of inference, which constitutes up to 90% of ML infrastructure costs, many development teams are also limited by fixed ML training budgets,” AWS wrote in a blog post. “This puts a cap on the scope and frequency of training needed to improve their models and applications. AWS Trainium addresses this challenge by providing the highest performance and lowest cost for ML training in the cloud. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute from scaling training workloads to deploying accelerated inference.” Absent benchmark results, it’s unclear how Trainium’s performance might compare with Google’s tensor processing units (TPUs), the search giant’s chips for AI training workloads hosted in Google Cloud Platform. 
Google says its forthcoming fourth-generation TPU offers more than double the matrix multiplication teraflops of a third-generation TPU. (Matrices are often used to represent the data that feeds into AI models.) It also offers a “significant” boost in memory bandwidth while benefiting from unspecified advances in interconnect technology. Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington’s Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks. Amazon has increasingly leaned into AI and machine learning training and inferencing services as demand in the enterprise grows. According to one estimate , the global machine learning market was valued at $1.58 billion in 2017 and is expected to reach $20.83 billion in 2024. In November, Amazon announced that it shifted part of the computing for Alexa and Rekognition to Inferentia-powered instances, aiming to make the work faster and cheaper while moving it away from Nvidia chips. At the time, the company claimed the shift to Inferentia for some of its Alexa work resulted in 25% better latency at a 30% lower cost. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,130
2,019
"Intel acquires AI chip startup Habana Labs for $2 billion | VentureBeat"
"https://venturebeat.com/2019/12/16/intel-acquires-ai-chip-startup-habana-labs-for-2-billion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel acquires AI chip startup Habana Labs for $2 billion Share on Facebook Share on X Share on LinkedIn Intel AI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a clear signal of its ambitions in the AI chip market, Intel this morning announced that it has acquired Habana Labs , an Israel-based developer of programmable AI and machine learning accelerators for cloud datacenters. The deal is worth approximately $2 billion, and Intel says it will strengthen its AI strategy as Habana begins sampling its proprietary silicon to customers. Habana — which raised $75 million in venture capital last November — will remain an independent business unit and continue to be led by its current management team. It will report to Intel’s data platforms group. Board chair Avigdor Willenz will serve as senior adviser to the business unit, as well as to Intel. “This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need — from the intelligent edge to the datacenter,” said Navin Shenoy, executive vice president and general manager of the data platforms group at Intel. “More specifically, Habana turbocharges our AI offerings for the datacenter with a high-performance training processor family and a standards-based programming environment to address evolving AI [compute requirements].” Habana offers two silicon products targeting workloads in AI and machine learning: the Gaudi AI Training Processor and the Goya AI Inference Processor. The former, which is optimized for “hyperscale” environments, is anticipated to power datacenters that deliver up to 4 times the throughput versus systems built with the equivalent number of graphics chips at half the energy per chip (140 watts). As for the Goya processor, which was unveiled in June and which is now commercially available, it offers up to 3 times the AI inferencing performance as Nvidia chips, where throughput and latency are concerned. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gaudi is available as a standard PCI-Express card, as well as a mezzanine card that is compliant with the Open Compute Project accelerator module specs. It features one of the industry’s first on-die implementations of Remote Direct Memory Access over Ethernet (RDMA and RoCE) on an AI chip. This provides 10 100Gbps or 20 50Gbps communication links, enabling it to scale up to as many “thousands” of discrete accelerator cards. 
(A complete system with eight Gaudis, called the HLS-1, will ship in the coming months.) Goya will complement Intel's in-house Nervana NNP-I, codenamed Springhill, which is based on a 10-nanometer Ice Lake processor that will allow it to cope with high workloads using minimal amounts of energy. As for Gaudi, it'll slot alongside Intel's Nervana Neural Net L-1000 (codenamed Spring Crest), which is optimized for image recognition and whose architecture is distinct from other chips in that it lacks a standard cache hierarchy and its on-chip memory is managed directly by software. (Intel has previously claimed the NNP-T's 24 compute clusters, 32GB of HBM2 stacks, and local SRAM deliver up to 10 times the AI training performance of competing graphics cards.) On the software side of the equation, Habana offers a development and execution environment — SynapseAI — with libraries and a JIT compiler designed to help customers deploy solutions as AI workloads. Importantly, it supports all of the standard AI and machine learning frameworks (e.g., Google's TensorFlow and Facebook's PyTorch), as well as the Open Neural Network Exchange format championed by Microsoft, IBM, Huawei, Qualcomm, AMD, Arm, and others. "We have been fortunate to get to know and collaborate with Intel, given its investment in Habana, and we're thrilled to be officially joining the team," said Habana CEO David Dahan. "Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster." The future of Intel is AI. Its books imply as much — the Santa Clara company's AI chip segments notched $3.5 billion in revenue this year, and it expects the market opportunity to grow 30% annually from $2.5 billion in 2017 to $10 billion by 2022. Putting this into perspective, AI chip revenues were up from $1 billion a year in 2017. Intel's purchase of Habana comes after its acquisition of San Mateo-based Movidius, which designs specialized low-power processor chips for computer vision, in September 2016. Intel bought field-programmable gate array (FPGA) manufacturer Altera in 2015 and a year later acquired Nervana, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets. And in August, Intel snatched up Vertex.ai, a startup developing a platform-agnostic AI model suite. "
1,131
2,020
"Cerebras' wafer-size chip is 10,000 times faster than a GPU | VentureBeat"
"https://venturebeat.com/2020/11/17/cerebras-wafer-size-chip-is-10000-times-faster-than-a-gpu"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cerebras’ wafer-size chip is 10,000 times faster than a GPU Share on Facebook Share on X Share on LinkedIn The Cerebras wafer Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cerebras Systems and the federal Department of Energy’s National Energy Technology Laboratory today announced that the company’s CS-1 system is more than 10,000 times faster than a graphics processing unit (GPU). On a practical level, this means AI neural networks that previously took months to train can now train in minutes on the Cerebras system. Cerebras makes the world’s largest computer chip, the WSE. Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware. But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cerebras’s CS-1 system uses the WSE wafer-size chip, which has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel’s first 4004 processor in 1971 had 2,300 transistors, and the Nvidia A100 80GB chip , announced yesterday, has 54 billion transistors. Feldman said in an interview with VentureBeat that the CS-1 was also 200 times faster than the Joule Supercomputer, which is No. 82 on a list of the top 500 supercomputers in the world. “It shows record-shattering performance,” Feldman said. “It also shows that wafer scale technology has applications beyond AI.” Above: The Cerebras WSE has 1.2 trillion transistors compared to Nvidia’s largest GPU, the A100 at 54.2 billion transistors. These are fruits of the radical approach Los Altos, California-based Cerebras has taken, creating a silicon wafer with 400,000 AI cores on it instead of slicing that wafer into individual chips. The unusual design makes it a lot easier to accomplish tasks because the processor and memory are closer to each other and have lots of bandwidth to connect them, Feldman said. The question of how widely applicable the approach is to different computing tasks remains. 
A paper based on the results of Cerebras’ work with the federal lab said the CS-1 can deliver performance that is unattainable with any number of central processing units (CPUs) and GPUs, which are both commonly used in supercomputers. (Nvidia’s GPUs are used in 70% of the top supercomputers now ). Feldman added that this is true “no matter how large that supercomputer is.” Cerebras is presenting at the SC20 supercomputing online event this week. The CS-1 beat the Joule Supercomputer at a workload for computational fluid dynamics, which simulates the movement of fluids in places such as a carburetor. The Joule Supercomputer costs tens of millions of dollars to build, with 84,000 CPU cores spread over dozens of racks, and it consumes 450 kilowatts of power. Above: Cerebras has a half-dozen or so supercomputing customers. In this demo, the Joule Supercomputer used 16,384 cores, and the Cerebras computer was 200 times faster, according to energy lab director Brian Anderson. Cerebras costs several million dollars and uses 20 kilowatts of power. “For these workloads, the wafer-scale CS-1 is the fastest machine ever built,” Feldman said. “And it is faster than any other combination or cluster of other processors.” A single Cerebras CS-1 is 26 inches tall, fits in one-third of a rack, and is powered by the industry’s only wafer-scale processing engine, Cerebras’ WSE. It combines memory performance with massive bandwidth, low latency interprocessor communication, and an architecture optimized for high bandwidth computing. The research was led by Dirk Van Essendelft, machine learning and data science engineer at NETL, and Michael James, Cerebras cofounder and chief architect of advanced technologies. The results came after months of work. In September 2019, the Department of Energy announced its partnership with Cerebras, including deployments with Argonne National Laboratory and Lawrence Livermore National Laboratory. The Cerebras CS-1 was announced in November 2019. The CS-1 is built around the WSE, which is 56 times larger, has 54 times more cores, 450 times more on-chip memory, 5,788 times more memory bandwidth, and 20,833 times more fabric bandwidth than the leading GPU competitor, Cerebras said. Above: Cerebras at the Lawrence Livermore National Lab. Depending on workload, from AI to HPC, the CS-1 delivers hundreds or thousands of times more compute than legacy alternatives, and it does so at a fraction of the power draw and space. Feldman noted that the CS-1 can finish calculations faster than real time, meaning it can start the simulation of a power plant’s reaction core when the reaction starts and finish the simulation before the reaction ends. “These dynamic modeling problems have an interesting characteristic,” Feldman said. “They scale poorly across CPU and GPU cores. In the language of the computational scientist, they do not exhibit ‘strong scaling.’ This means that beyond a certain point, adding more processors to a supercomputer does not yield additional performance gains.” Cerebras has raised $450 million and has 275 employees. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
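Feldman's point about workloads that do not exhibit "strong scaling" can be illustrated with Amdahl's law. The serial fraction below is an arbitrary illustrative value, not a figure from Cerebras or NETL; the shape of the curve is what matters.

def amdahl_speedup(serial_fraction, n_processors):
    # Upper bound on speedup when part of a workload cannot be parallelized.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

serial_fraction = 0.02  # hypothetical: 2% of the work is inherently serial
for n in (16, 256, 4096, 65536):
    print(f"{n:>6} processors -> at most {amdahl_speedup(serial_fraction, n):.1f}x speedup")
# The curve flattens near 1 / serial_fraction (50x here), which is why adding more CPU or
# GPU cores eventually stops paying off on tightly coupled simulations of this kind.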
"
1,132
2,020
"Graphcore raises $222 million to scale up AI chip production | VentureBeat"
"https://venturebeat.com/2020/12/29/graphcore-raises-222-million-to-scale-up-ai-chip-production"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Graphcore raises $222 million to scale up AI chip production Share on Facebook Share on X Share on LinkedIn Graphcore cofounders Nigel Toons (L, CEO) and Simon Knowles (R, CTO). Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Graphcore , a Bristol, U.K.-based startup developing chips and systems to accelerate AI workloads, today announced it has raised $222 million in a series E funding round led by the Ontario Teachers’ Pension Plan Board. The investment, which values the company at $2.77 billion post-money and brings its total raised to date to $710 million, will be used to support continued global expansion and further accelerate future silicon, systems, and software development, a spokesperson told VentureBeat. The AI accelerators Graphcore is developing — which the company calls Intelligence Processing Units (IPUs) — are a type of specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and machine learning. They’re multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains. Graphcore, which was founded in 2016 by Simon Knowles and Nigel Toon, released its first commercial product in a 16-nanometer PCI Express card — C2 — that became available in 2018. It’s this package that launched on Microsoft Azure in November 2019 for customers “focused on pushing the boundaries of [natural language processing]” and “developing new breakthroughs in machine intelligence.” Microsoft is also using Graphcore’s products internally for various AI initiatives. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Earlier this year, Graphcore announced the availability of the DSS8440 IPU Server, in partnership with Dell, and launched Cirrascale IPU-Bare Metal Cloud, an IPU-based managed service offering from cloud provider Cirrascale. More recently, the company revealed some of its other early customers — among them Citadel Securities, Carmot Capital, the University of Oxford, J.P. Morgan, Lawrence Berkeley National Laboratory, and European search engine company Qwant — and open-sourced its libraries on GitHub for building and executing apps on IPUs. In July, Graphcore unveiled the second generation of its IPUs, which will soon be made available in the company’s M2000 IPU Machine. 
(Graphcore says its M2000 IPU products are now shipping in “production volume” to customers.) The company claims this new GC200 chip will enable the M2000 to achieve a petaflop of processing power in a 1U datacenter blade enclosure that measures the width and length of a pizza box. The M2000 is powered by four of the new 7-nanometer GC200 chips, each of which packs 1,472 processor cores (running 8,832 threads) and 59.4 billion transistors on a single die, and it delivers more than 8 times the processing performance of Graphcore’s existing IPU products. In benchmark tests, the company claims the four-GC200 M2000 ran an image classification model — Google’s EfficientNet B4 with 88 million parameters — more than 32 times faster than an Nvidia V100-based system and over 16 times faster than the latest 7-nanometer graphics card. A single GC200 can deliver up to 250 TFLOPS, or 1 trillion floating-point-operations per second. Beyond the M2000, Graphcore says customers will be able to connect as many as 64,000 GC200 chips for up to 16 exaflops of computing power and petabytes of memory, supporting AI models with theoretically trillions of parameters. That’s made possible by Graphcore’s IPU-POD and IP-Fabric interconnection technology, which supports low-latency data transfers up to rates of 2.8Tbps and directly connects with IPU-based systems (or via Ethernet switches). The GC200 and M2000 are designed to work with Graphcore’s bespoke Poplar, a graph toolchain optimized for AI and machine learning. It integrates with Google’s TensorFlow framework and the Open Neural Network Exchange (an ecosystem for interchangeable AI models), in the latter case providing a full training runtime. Preliminary compatibility with Facebook’s PyTorch arrived in Q4 2019, with full feature support following in early 2020. The newest version of Poplar introduced exchange memory management features intended to take advantage of the GC200’s unique hardware and architectural design with respect to memory and data access. Graphcore might have momentum on its side, but it has competition in a market that’s anticipated to reach $91.18 billion by 2025. In March, Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory architecture. Mountain View-based Flex Logix in April launched an inference coprocessor it claims delivers up to 10 times the throughput of existing silicon. And last November, Esperanto Technologies secured $58 million for its 7-nanometer AI chip technology. Beyond the Ontario Teachers’ Pension Plan Board, Graphcore’s series E saw participation from funds managed by Fidelity International and Schroders. They joined existing backers Baillie Gifford, Draper Esprit, and others. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,133
2,020
"SambaNova Systems raises $250 million for software-defined AI hardware | VentureBeat"
"https://venturebeat.com/2020/02/25/sambanova-systems-raises-250-million-for-software-defined-ai-hardware"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SambaNova Systems raises $250 million for software-defined AI hardware Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The infrastructure required to handle AI workloads is often as complex as it is sprawling, but a cottage industry of startups has emerged whose focus is developing solutions for end customers. SambaNova Systems is one such startup — the Palo Alto, California-based firm, which was founded in 2017 by Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré, provides systems that run AI and data-intensive apps from the datacenter to the edge. In a reflection of investors’ voracious appetite for the market, it today announced that it’s raised $250 million in series C funding. “Raising $250 million in this funding round with support from new and existing investors puts us in a unique category of capitalization,” said CEO Liang, a veteran of Sun Microsystems and Oracle. “This enables us to further extend our market leadership in enterprise computing.” SambaNova’s products — and its customers, for that matter — remain largely under lock and key, but the company previously revealed it’s developing “software-defined” devices inspired by DARPA-funded research in efficient AI processing. Leveraging a combination of algorithmic optimizations and custom board-based hardware, SambaNova claims it’s able to dramatically improve the performance and capability of most AI-imbued apps. According to Olukotun, SambaNova’s platform is designed to scale from tiny electronic devices to enormous remote datacenters. “SambaNova’s innovations in machine learning algorithms and software-defined hardware will dramatically improve the performance and capability of intelligent applications,” Olukotun added. “The flexibility of the SambaNova technology will enable us to build a unified platform providing tremendous benefits for business intelligence, machine learning, and data analytics.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! One thing’s for certain: SambaNova’s founders are a decorated bunch. Olukotun — who recently received the IEEE Computer Society’s Harry H. Goode Memorial Award — is the leader of the Stanford Hydra Chip Multiprocessor (CMP) research project, which produced a chip design that pairs four specialized processors and their caches with a shared secondary cache. 
Ré, an associate professor in the Department of Computer Science at Stanford University’s InfoLab, is a MacArthur genius award recipient who’s also affiliated with the Statistical Machine Learning Group, Pervasive Parallelism Lab, and Stanford AI Lab. The AI chip market is anticipated to be worth $91.18 billion by 2025, and dedicated AI chip startups raised $1.5 billion in 2017 alone, among them Kneron, Blaize, AIStorm, Graphcore, Quadric, and Esperanto Technologies. But SambaNova’s total raised — over $450 million to date, following a $56 million series A funding round in March 2018 and a $150 million series B funding round in April 2019 — is nothing to shake a stick at. BlackRock led the series C round with participation from existing investors including GV, Intel Capital, Walden International, WRVI Capital, and Redline Capital. SambaNova currently has over 150 employees. "
1,134
2,020
"OpenAI begins publicly tracking AI model efficiency | VentureBeat"
"https://venturebeat.com/2020/05/05/openai-begins-publicly-tracking-ai-model-efficiency"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI begins publicly tracking AI model efficiency Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI today announced it will begin tracking machine learning models that achieve state-of-the-art efficiency, an effort it believes will help identify candidates for scaling and achieving top overall performance. To kick-start things, the firm published an analysis suggesting that since 2012, the amount of compute needed to train an AI model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of 2 every 16 months. Beyond spotlighting top-performing AI models, OpenAI says that publicly measuring efficiency — which here refers to reducing the compute needed to train a model to perform a specific capability — will paint a quantitative picture of algorithmic progress. It’s OpenAI’s assertion that this in turn will inform policy making by renewing the focus on AI’s technical attributes and societal impact. “Algorithmic improvement is a key factor driving the advance of AI. It’s important to search for measures that shed light on overall algorithmic progress, even though it’s harder than measuring such trends in compute,” OpenAI wrote in a blog post. “Increases in algorithmic efficiency allow researchers to do more experiments of interest in a given amount of time and money. [Our] … analysis suggests policymakers should increase funding for compute resources for academia, so that academic research can replicate, reproduce, and extend industry research.” OpenAI says that in the course of its survey, it found that Google’s Transformer architecture surpassed a previous state-of-the-art model — seq2seq, which was also developed by Google — with 61 times less compute three years after seq2seq’s introduction. DeepMind’s AlphaZero , a system that taught itself from scratch how to master the games of chess, shogi, and Go, took 8 times less compute to match an improved version of the system’s predecessor — AlphaGoZero — one year later. And OpenAI’s own Dota 2-playing OpenAI Five Rerun required 5 times less training compute to surpass OpenAI Five — the model on which it’s based — just three months later. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: The results from OpenAI’s study of AI model efficiency. 
OpenAI speculates that algorithmic efficiency might outpace gains from Moore’s law, the observation that the number of transistors in an integrated circuit doubles about every two years. "New capabilities … typically require a significant amount of compute expenditure to obtain, then refined versions of those capabilities … become much more efficient to deploy due to process improvements," OpenAI wrote. "Our results suggest that for AI tasks with high levels of investment [in] researcher time and/or compute, algorithmic efficiency might outpace … hardware efficiency." As a part of its benchmarking effort, OpenAI says it will start with vision and translation efficiency benchmarks — specifically ImageNet and WMT14 — and that it will consider adding more benchmarks over time. (Original authors and collaborators will receive credit.) No human captioning, other images, or other data will be allowed, but there won’t be any restrictions on training data used for translation or data augmentation. "Industry leaders, policymakers, economists, and potential researchers are all trying to better understand AI progress and decide how much attention they should invest and where to direct it," OpenAI wrote. "Measurement efforts can help ground such decisions." OpenAI isn’t the first to propose publicly benchmarking the efficiency of AI models, it’s worth noting. Last year, scientists at the Allen Institute for AI, Carnegie Mellon University, and the University of Washington advocated for making efficiency a more common evaluation criterion for AI academic papers, alongside accuracy and related measures. Other proposals have called for an industry-level energy analysis and a compute-per-watt standard for machine learning projects. "
1,135
2,020
"Microsoft trains world's largest Transformer language model | VentureBeat"
"https://venturebeat.com/2020/02/10/microsoft-trains-worlds-largest-transformer-language-model"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft trains world’s largest Transformer language model Share on Facebook Share on X Share on LinkedIn Microsoft Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft AI & Research today shared what it calls the largest Transformer-based language generation model ever and open-sourced a deep learning library named DeepSpeed to make distributed training of large models easier. At 17 billion parameters, Turing NLG is twice the size of Nvidia’s Megatron , now the second biggest Transformer model, and includes 10 times as many parameters as OpenAI’s GPT-2. Turing NLG achieves state-of-the-art results on a range of NLP tasks. Like Google’s Meena and initially with GPT-2, at first Turing NLG may only be shared in private demos. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Language generation models with the Transformer architecture predict the word that comes next. They can be used to write stories, generate answers in complete sentences, and summarize text. Experts from across the AI field told VentureBeat 2019 was a seminal year for NLP models using the Transformer architecture, an approach that led to advances in language generation and GLUE benchmark leaders like Facebook’s RoBERTa , Google’s XLNet , and Microsoft’s MT-DNN. Also today: Microsoft open-sourced DeepSpeed, a deep learning library that’s optimized for developers to deliver low latency, high throughput inference. DeepSpeed contains the Zero Redundancy Optimizer (ZeRO) for training models with 100 million parameters or more at scale, which Microsoft used to train Turing NLG. “Beyond saving our users time by summarizing documents and emails, T-NLG can enhance experiences with the Microsoft Office suite by offering writing assistance to authors and answering questions that readers may ask about a document,” Microsoft AI Research applied scientist Corby Rosset wrote in a blog post today. Both DeepSpeed and ZeRO are being made available to developers and machine learning practitioners, because training large networks like those that utilize the Transformer architecture can be expensive and can encounter issues at scale. In other natural language AI news, Google’s DeepMind today released the Compressive Transformer long-range memory model and PG19, a benchmark for analyzing the performance of book-length language generation. 
"
1,136
2,018
"DeepMind's AlphaZero beats state-of-the-art chess and shogi game engines | VentureBeat"
"https://venturebeat.com/2018/12/06/google-deepmind-alphazero-chess-shogi-go"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind’s AlphaZero beats state-of-the-art chess and shogi game engines Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Almost a year ago exactly, DeepMind, the British artificial intelligence (AI) division owned by Google parent company Alphabet, made headlines with preprint research (“Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”) describing a system — AlphaZero — that could teach itself how to master the game of chess, a Japanese variant of chess called shogi, and the Chinese board game Go. In each case, it beat a world champion, demonstrating a state-of-the-art knack for learning two-person games with perfect information — that is to say, games where any decision is informed of all the events that have previously occurred. DeepMind’s claims were impressive to be sure, but they hadn’t undergone peer review. That’s changed. DeepMind today announced that, after months of back-and-forth revisions, its work on AlphaZero has been accepted in the journal Science, where it’s made the front page. “A couple of years ago, our program, AlphaGo, defeated the 18-time world champion Go champion, Lee Sedol, by four games to one. But for us, that was actually the beginning of the journey to build a general-purpose learning system that could learn for itself to play many different games to superhuman level,” David Silver, lead researcher on AlphaZero, told reporters assembled in a conference room at NeurIPS 2018 in Montreal. “AphaZero is the next step in that journey. It learned from scratch to defeat world champion programs in Gi, Chess, and Shogi, started from no knowledge except the game rules.” The games were chosen both for their complexity and the rich history of prior AI research that’s been conducted about them, Silver explained. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Chess … represents what can be achieved by traditional methods of AI when they’ve been pushed to the absolute limit, and so we wanted to see whether we could overturn the traditional approaches that we use a lot handcrafting using a completely principled self-learning approach,” he said. “The reason we chose Shogi is that, in terms of difficulty, it’s one of the few board games aside from Go [that’s] very, very challenging, for even specialized program and computer programs to play. 
It was only … in the last year or two that there have been any computer programs that have been able to compete with human world champions." Toward that end, the paper published this week describes how AlphaZero outperforms chess- and shogi-playing algorithms such as Stockfish, Elmo, and IBM’s Deep Blue by leveraging a deep neural network — layered mathematical functions that mimic the behavior of neurons in the human brain — rather than handcrafted rules. Its dynamic mode of play results in creative and unconventional strategies that inspired a forthcoming book by two-time British chess champion and Grandmaster Matthew Sadler and women’s international master Natasha Regan, who painstakingly reviewed AlphaZero’s nearly 1,000 chess games. "Traditional engines are exceptionally strong and make few obvious mistakes, but can drift when faced with positions with no concrete and calculable solution … Impressively, [AlphaZero] manages to impose its style of play across a very wide range of positions and openings," Sadler said. "It’s precisely in such positions where ‘feeling’, ‘insight’ or ‘intuition’ is required that AlphaZero comes into its own. AlphaZero plays like a human on fire. It’s a very beautiful style." For instance, in chess, AlphaZero discovered motifs such as openings (the initial moves of a chess game), king safety (ways in which to protect the king piece), and pawn structure (the configuration of pawn pieces on the chessboard). It tends to swarm around the opponent’s king and to maximize the mobility of its pieces while minimizing those of enemy pieces. And not unlike a human, it’s willing to sacrifice pieces in the pursuit of long-term goals. Teaching AlphaZero how to play each of the three games required simulating millions of matches against itself in a process known as reinforcement learning, in which a system of rewards and punishments drives an AI agent toward specific goals. AlphaZero played randomly at first, but eventually came to avoid losses by adjusting parameters to favor a certain playstyle. The total amount of time it took to train AlphaZero varied depending on the game. A minimum of 700,000 training steps (each step representing 4,096 board positions) on systems with 5,000 first-generation tensor processing units (TPUs) and 16 second-generation TPUs — Google-designed application-specific integrated circuits (ASICs) optimized for machine learning — took 9 hours to generate and play games of chess, and about 12 hours and 13 days for shogi and Go, respectively. The trained AlphaZero uses a Monte-Carlo Tree Search (MCTS) — a heuristic search algorithm for decision processes — to choose each move. It’s able to complete searches remarkably quickly, Demis Hassabis, CEO and cofounder of DeepMind, told reporters — about 60,000 positions per second in chess compared to Stockfish’s roughly 60 million. "That’s not as efficient as a human Grandmaster, who probably only looks at about 100 positions per decision," Hassabis said, "but we’re a thousand times more efficient in terms of the amount of brute force calculation than handcrafted engines." To test the fully trained AlphaZero, DeepMind researchers pitted it against the aforementioned Stockfish and Elmo game engines, in addition to its predecessor, AlphaGo Zero.
Running on a single machine with 44 processor cores and four of Google’s first-generation TPUs — hardware with roughly the same inference power as a workstation with several Nvidia Titan V graphics processing units (GPUs) — AlphaZero handily won a majority of games within the three-hour-per-match constraints imposed on it. In chess, out of 1,000 matches against Stockfish, AlphaZero won 155 and lost only 6. Additionally, it came out on top in games that started with common human chess-playing strategies; with games that began from a set of positions used in the 2016 Top Chess Engine Championship (TCEC) tournament; and with games using the latest version of Stockfish — Stockfish 9 — and Stockfish variants configured with World Championship settings, time controls, and openings. In shogi, meanwhile, AlphaZero defeated the 2017 CSA world champion version of Elmo 91.2 percent of the time. And in Go against AlphaGo Zero, it won 61 percent of games. Move sequences from several hundred of AlphaZero’s chess and shogi games have been released alongside the paper, Hassabis said, and already, the chess community is harnessing AlphaZero’s insights to fuel debate on the recent World Chess Championship match between Magnus Carlsen and Fabiano Caruana. "It was fascinating to see how AlphaZero’s analysis differed from that of top chess engines and even top Grandmaster play," Regan said. "Having spent many months exploring AlphaZero’s chess games, I feel that my conception and understanding of the game had been altered and enriched. AlphaZero has provided us with a check on everything we as humans have taught ourselves about the game of chess, and it could be a powerful teaching tool for the whole community." The endgame isn’t merely superhuman chess programs, of course. The goal is to use learnings from the AlphaZero project to develop systems capable of solving society’s toughest challenges, Hassabis said. DeepMind is currently involved in several health-related AI projects, including an ongoing trial at the U.S. Department of Veterans Affairs that seeks to predict when patients’ conditions will deteriorate during a hospital stay. Previously, it partnered with the U.K.’s National Health Service to develop an algorithm that could search for early signs of blindness. And in a paper presented at the Medical Image Computing & Computer Assisted Intervention conference earlier this year, DeepMind researchers said they’d developed an AI system capable of segmenting CT scans with "near-human performance." More recently, DeepMind’s AlphaFold — an AI system that can predict complicated protein structures — placed first out of 98 competitors in the CASP13 protein-folding competition. "AlphaZero is a stepping stone for us all the way to general AI," Hassabis said. "The reason we test ourselves on all these games is … that [they’re] a very convenient proving ground for us to develop our algorithms … Ultimately, [we’re developing algorithms that can be] translat[ed] into the real world to work on really challenging problems … and help experts in those areas."
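Two of the figures above lend themselves to quick arithmetic, and the move-selection idea can be sketched in a few lines. The sketch below uses the article’s numbers plus the textbook UCT formula; AlphaZero itself uses a PUCT variant weighted by its network’s move priors, so treat this as an illustration rather than DeepMind’s exact rule.

```python
import math

# Arithmetic on the training and search figures quoted above.
steps, positions_per_step = 700_000, 4_096
print(f"~{steps * positions_per_step / 1e9:.1f} billion board positions per training run")  # ~2.9 billion

alphazero_pps, stockfish_pps = 60_000, 60_000_000
print(f"Stockfish searches ~{stockfish_pps // alphazero_pps}x more positions per second")   # ~1000x

def uct_score(total_value: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
    """Textbook UCT selection score used inside an MCTS loop: average value
    (exploitation) plus an exploration bonus for rarely visited moves.
    AlphaZero's own selection rule is a PUCT variant guided by network priors."""
    if visits == 0:
        return float("inf")
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)
```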
"
1,137
2,020
"Hailo raises $60 million to accelerate the launch of its AI edge chip | VentureBeat"
"https://venturebeat.com/2020/03/05/hailo-raises-60-million-to-accelerate-the-launch-of-its-ai-edge-chip"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hailo raises $60 million to accelerate the launch of its AI edge chip Share on Facebook Share on X Share on LinkedIn Hailo's forthcoming Hailo-8 chip. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Hailo , a startup developing hardware designed to speed up AI inferencing at the edge, today announced that it’s raised $60 million in series B funding led by previous and new strategic investors. CEO Orr Danon says the tranche will be used to accelerate the rollout of Hailo’s Hailo-8 chip, which was officially detailed in May 2019 ahead of an early 2020 ship date — a chip that enables devices to run algorithms that previously would have required a datacenter’s worth of compute. Hailo-8 could give edge devices far more processing power than before, enabling them to perform AI tasks without the need for a cloud connection. “The new funding will help us [deploy to] … areas such as mobility, smart cities, industrial automation, smart retail and beyond,” said Danon in a statement, adding that Hailo is in the process of attaining certification for ASIL-B at the chip level (and ASIL-D at the system level) and that it is AEC-Q100 qualified. Hailo-8, which Hailo says it has been sampling over a year with “select partners,” features an architecture (“Structure-Defined Dataflow”) that ostensibly consumes less power than rival chips while incorporating memory, software control, and a heat-dissipating design that eliminates the need for active cooling. Under the hood of the Hailo-8, resources including memory, control, and compute blocks are distributed throughout the whole of the chip, and Hailo’s software — which supports Google’s TensorFlow machine learning framework and ONNX (an open format built to represent machine learning models) — analyzes the requirements of each AI algorithm and allocates the appropriate modules. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hailo-8 is capable of 26 tera-operations per second (TOPs), which works out to 2.8 TOPs per watt. Here’s how that compares with the competition: Nvidia Jetson Xavier NX: 21 TOPs (1.4 TOPs per watt) Google’s Edge TPU: 4 TOPs (2 TOPs per watt) AIStorm: 2.5 TOPs (10 TOPs per watt) Kneron KL520: 0.3 TOPs (1.5 TOP per watt) In a recent benchmark test conducted by Hailo, the Hailo-8 outperformed hardware like Nvidia’s Xavier AGX on several AI semantic segmentation and object detection benchmarks, including ResNet-50. 
At an image resolution of 224 x 224, it processed 672 frames per second compared with the Xavier AGX’s 656 frames and sucked down only 1.67 watts (equating to 2.8 TOPs per watt) versus the Nvidia chip’s 32 watts (0.14 TOPs per watt). Hailo says it’s working to build the Hailo-8 into products from OEMs and tier-1 automotive companies in fields such as advanced driver-assistance systems (ADAS) and industries like robotics, smart cities, and smart homes. In the future, Danon expects the chip will make its way into fully autonomous vehicles, smart cameras, smartphones, drones, AR/VR platforms, and perhaps even wearables. In addition to existing investors, NEC Corporation, Latitude Ventures, and the venture arm of industrial automation and robotics company ABB (ABB Technology Ventures) also participated in the series B. It brings three-year-old, Tel Aviv-based Hailo’s total venture capital raised to date to $88 million. It’s worth noting that Hailo has plenty in the way of competition. Startups AIStorm, Esperanto Technologies, Quadric, Graphcore, Xnor, and Flex Logix are developing chips customized for AI workloads — and they’re far from the only ones. The machine learning chip segment was valued at $6.6 billion in 2018, according to Allied Market Research, and it is projected to reach $91.1 billion by 2025. Mobileye, the Tel Aviv company Intel acquired for $15.3 billion in March 2017, offers a computer vision processing solution for AVs in its EyeQ product line. Baidu in July unveiled Kunlun, a chip for edge computing on devices and in the cloud via datacenters. Chinese retail giant Alibaba said it launched an AI inference chip for autonomous driving, smart cities, and logistics verticals in the second half of 2019. And looming on the horizon is Intel’s Nervana, a chip optimized for image recognition that can distribute neural network parameters across multiple chips, achieving very high parallelism. "
1,138
2,019
"Untether AI raises $20 million to develop machine learning inferencing hardware | VentureBeat"
"https://venturebeat.com/2019/11/05/untether-ai-raises-20-million-to-develop-machine-learning-inferencing-hardware"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Untether AI raises $20 million to develop machine learning inferencing hardware Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Untether AI , a Toronto-based startup that’s developing high-efficiency, high-performance chips for AI inferencing workloads, this morning announced that it has raised a $20 million series A round, following a small seed investment. Radical Ventures joined Intel Capital and other investors in the round, with Radical Ventures partner Tomi Poutanen joining as a board member. Newly appointed CEO Arun Iyengar, a former AMD and Xilinx executive, said the fresh capital will lay runway for Untether’s next stage of growth. “The history for computing and AI chips is being written right now, and Untether AI is emerging as a pioneer in the space,” he added. “We look forward to working with world-class investors Intel Capital and Radical Ventures as we prepare to introduce groundbreaking chip innovations that will enable new frontiers in AI applications.” Untether AI, which was founded in 2018 by CTO Martin Snelgrove, Darrick Wiebe, and Raymond Chik, says it continues to make “rapid progress” developing a chip that combines “power efficiency” with “digital processing … robustness.” Snelgrove and Wiebe claim to have eliminated major data transfer bottlenecks, such that data in their architecture moves a quoted 1,000 times faster than is typical. That would be a boon for machine learning computation, where data sets are frequently dozens or hundreds of gigabytes in size. The bulk of the performance gains arise from a near-memory compute technique that builds memory and logic into a single integrated circuit package. For instance, in a 2.5D near-memory compute architecture, processor dies are stacked atop an electrical signal conduit (an interposer ) that links the components and the board. Interposers often incorporate high-bandwidth memory, which stacks dynamic random access memory (DRAM) dies to bolster chip bandwidth. Samsung’s latest HBM2 technology consists of eight 8Gbit DRAM dies, which are connected with 5,000 through-silicon vias to deliver 307GBps of bandwidth total — over 3 times the bandwidth of non-stacked DDR4 memory. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We’re thrilled to welcome Arun as CEO to lead Untether AI into our next stage of growth,” said Snelgrove. 
“The investment momentum, as well as incredible technical strides we’ve been making this year, have propelled Untether AI’s product and go-to-market roadmaps to exciting new paces. With his extensive experience scaling businesses from tens of millions to hundreds of millions of dollars, Arun is a savvy technology veteran who will lead Untether AI to new heights.” Untether has a direct competitor in Redwood, California-based Mythic , which has raised $85.2 million to further develop its own in-memory architecture. And there’s no shortage of adjacent startup rivals in a chip segment market that’s anticipated to reach $91.18 billion by 2025. San Francisco-based startup AI Storm earlier this year raised $13.2 million for its family of AI edge computing chips, and Mountain View-based Flex Logix in April launched an inference coprocessor it claims delivers up to 10 times the throughput of existing chips. Yet another competitor — Xnor.ai — recently debuted an always-on solar-powered device capable of accelerating state-of-the-art machine learning algorithms. But Poutanen is unconcerned. “Arun’s appointment as CEO marks an exciting development for Untether AI,” he said. “With Arun’s deep end-market expertise and the founding team’s extensive experience bringing chips to market, Untether AI’s developing transformative chip architecture has the potential to disrupt a range of industries, from datacenters [to] autonomous vehicles, vision processing, and other embedded applications.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,139
2,019
"AIStorm raises $13.2 million for AI edge computing chips | VentureBeat"
"https://venturebeat.com/2019/02/11/aistorm-raises-13-2-million-for-ai-edge-computing-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AIStorm raises $13.2 million for AI edge computing chips Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Edge computing — that is, network architectures in which computation is relegated to smart devices, as opposed to servers in the cloud — is forecast to be a $6.72 billion market by 2022. Its growth will coincide with that of the deep learning chipset market, which some analysts predict will reach $66.3 billion by 2025. There is reason for that — edge computing is projected to make up roughly three-quarters of the total global AI chipset business in the next six years. David Schie, a former senior executive at Maxim, Micrel, and Semtech, thinks both markets are ripe for disruption. He — along with WSI, Toshiba, and Arm veterans Robert Barker, Andreas Sibrai, and Cesar Matias — in 2011 cofounded AIStorm , a San Jose-based artificial intelligence (AI) startup that develops chipsets that can directly process data from wearables, handsets, automotive devices, smart speakers, and other internet of things (IoT) devices. Today the startup emerged from stealth with $13.2 million in series A backing from biometrics supplier Egis Technology, imaging sensor company TowerJazz, Meyer Corporation, and Linear Dimensions Semiconductor — all four of which say they plan to integrate the company’s technology into upcoming products. Schie, who serves as CEO, said the fresh capital will fuel AIStorm’s engineering and go-to-market efforts. “AIStorm’s revolutionary … approach allows implementation of edge solutions in lower-cost analog technologies,” he added. AIStorm calls its tech “AI-in-Sensor” processing (AIS), and claims it has the potential to eliminate not only the power requirements and cost associated with traditional at-the-edge machine learning implementations, but also the latency. To that end, AIStorm’s patented chip design is capable of 2.5 theoretical operations per second and 10 theoretical operations per second per watt, which Schie contends is 5 to 10 times lower than the average GPU-based system’s power draw. Moreover, through use of a technique called switched charge processing, which allows the chip to control the movement of electrons between storage elements, he says the chip is able to further boost efficiency by ingesting and processing data without first digitizing it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why’s that last bit important? 
Consider a security camera pointed at a warehouse. Points of interest — the areas around doors where intruders might enter, for instance — comprise only a fraction of the total pixels, so a connected system has to poll the sensor’s image data to try to figure out where to focus. By contrast, AIStorm’s chip lets the sensor itself deal with events, make decisions, and perform analyses. "Edge applications must process huge amounts of data generated by sensors," Egis Technology COO Todd Lin explained. "Digitizing that data takes time, which means that these applications don’t have time to intelligently select data from the sensor data stream, and instead have to collect volumes of data and process it later." According to Schie, those advantages — along with the AIStorm chipset’s programmable architecture and compatibility with popular abstraction layers, like Google’s TensorFlow — could enable biometric authentication on devices like smartwatches and augmented reality glasses, or cameras with battery lives of years instead of weeks or months. "It makes a ton of sense to combine the sensor with the imager and skip the costly digitization process," said Dr. Avi Strum, senior vice president and general manager of TowerJazz’s sensors business. "For our customers, this will open up new possibilities in smart, event-driven operation and high-speed processing at the edge." AIStorm taped out its chip this month and plans to ship production orders next year. In addition to its Silicon Valley headquarters, the company has offices in Phoenix, Arizona, and Graz, Austria. "
1,140
2,018
"Esperanto Technologies raises $58 million for 7-nanometer AI chips | VentureBeat"
"https://venturebeat.com/2018/11/05/esperanto-technologies-raises-58-million-for-7-nanometer-ai-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Esperanto Technologies raises $58 million for 7-nanometer AI chips Share on Facebook Share on X Share on LinkedIn A RISC-V processor prototype. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The artificial intelligence (AI) chips business is hot — red hot, by most accounts. Intel , Google , AMD , Arm , and others are vying for a market some analysts forecast will be worth $91 billion by 2025, and they’re not the only ones. Four-year-old San Francisco startup Esperanto Technologies has a stake in the race, too, and it’s making small but meaningful steps toward challenging the sector’s incumbents. Esperanto this week announced a $58 million series B funding round led by “numerous” undisclosed venture and strategic capital investors, bringing its total haul to date to $63 million. CEO Dave Ditzel said the cash infusion will help to accelerate development of its first-generation chip lineup. “Despite still operating largely in stealth mode, we appreciate this strong show of support from strategic and VC investors who had confidential briefings about our plans and believe we have a compelling solution for accelerating ML applications,” he said. “Esperanto has assembled one of the most experienced VLSI product engineering teams in the ML industry, and we believe that will be a differentiating factor as we drive toward our … products.” The startup — which counts Western Digital as one of its previous investment partners — aims to develop energy efficient, high-performance compute solutions based on RISC-V, an open source and royalty-free instruction set architecture (ISA). RISC-V isn’t the first open computing architecture, but it’s designed to be useful in a wide range of devices and has a substantial body of supporting organizations that includes Google, Hewlett Packard Enterprise, IBM, Qualcomm, Oracle, Nvidia, and others. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With its forthcoming 64-bit 7-nanometer processor, Esperanto says it’ll leverage standards such as the Open Compute Platform (OCP), Facebook’s Pytorch framework and Glow compiler, and the Open Neural Network Exchange (ONNX) to accelerate AI and machine learning workflows. 
The aforementioned chip, the design of which will be licensable, packs more than a thousand of Esperanto’s ET-Minion RISC-V cores — cores designed to deliver the best teraflops-per-watt efficiency, according to the company — on a single die, with a distributed memory architecture Esperanto claims "improves processing utilization" and "relieves memory bandwidth bottlenecks." ET-Maxion — Esperanto’s other core product offering — implements features such as quad-issue out-of-order execution, branch prediction, and prefetching algorithms, and can run high-level functions such as managing on-chip data movement and scheduling. It works as either an accelerator or bootable core, serving in the latter configuration as an interface to system software layers. Esperanto isn’t a fly-by-night operation. Ditzel, who previously founded x86 chip designer Transmeta, is the former vice president and chief architect for Intel’s hybrid parallel computing division and chief technology officer for Sun Microsystems’ SPARC technology business. David Glasco, Esperanto’s vice president of engineering, was previously the architecture and design lead for Tesla’s Autopilot system-on-chip hardware. And chief architect Roger Espasa currently co-chairs the RISC-V Foundation vector extensions task group and is active on the RISC-V technical committee. Additionally, Esperanto has more than 100 employees on its payroll, including AI experts, processor architects, chip designers, software developers, and system engineers from Intel, DEC, MIPS, Sony Interactive Entertainment, and QED. "Next-generation applications in machine learning, AI, and real-time analytics require the highest levels of performance and optimization for these advanced workloads," Martin Fink, chief technology officer of Western Digital, said in a statement. "The RISC-V platform, and Esperanto solutions, free developers to innovate and optimize for special-purpose computing." "
1,141
2,018
"Mythic snags $40 million to advance AI chips | VentureBeat"
"https://venturebeat.com/2018/03/20/mythic-snags-40-million-to-advance-ai-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mythic snags $40 million to advance AI chips Share on Facebook Share on X Share on LinkedIn Members of Mythic's Redwood City team stand outside with the company's logo. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Mythic announced today that it has closed a $40 million round of investment led by SoftBank Ventures to help bring a high-speed, low-power AI chip to market. The company will use this massive infusion of cash to help launch its specialized silicon next year, roughly seven years after the company was first incubated at the Michigan Integrated Circuits Lab. Mythic’s technology performs machine learning inference calculations using analog electrical signals and flash memory, which the company says allows for higher performance and a lower-power draw than more conventional techniques, like using GPUs. Mythic is targeting one of the biggest bottlenecks in machine learning computation: moving data to and from memory for processing. Because all of the neural network inference computation takes place on the same silicon that stores its weights, it’s supposed to work faster. Above: Mythic’s proof of concept chip, center. The startup is part of a bumper crop of new companies that hope to capitalize on the massive interest in AI by offering new custom silicon that promises to outperform general-purpose architectures. However, Mythic is competing against not only fellow newcomers but also legacy chipmakers racing to add dedicated AI chips to their product portfolio. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Mythic cofounder and CEO Mike Henry told VentureBeat in an interview that he thinks the company’s technology and its focus on edge computing applications will give it a leg up over the competition. The startup’s first products will be boards that allow developers to slot Mythic’s chips into hardware they’re already working with. In the future, Henry wants to see his company’s chips designed into gadgets, rather than added in after the fact. That will make the technology more sticky, since it will be locked into a generation of hardware. Lockheed Martin Ventures, the defense contractor’s venture capital arm, made a strategic investment in Mythic as part of the round. Henry said that Lockheed Martin plans to be a customer when the silicon hits the market. This round included participation from Mythic’s existing investors Draper Fisher Jurvetson, Lux Capital, Data Collective, and AME Cloud Ventures. 
Sun Microsystems cofounder Andy Bechtolsheim (who was an early investor in Google) also joined the round. SoftBank Ventures named Arm executive vice president Rene Haas to Mythic’s board as part of the deal. He’s a natural fit, since he worked at Nvidia prior to joining Arm. (It’s worth noting that Haas’ appointment doesn’t connote any sort of partnership between Arm and Mythic, however.) Mythic plans to receive the first samples of its silicon by the end of this year, with full production ramping up in 2019. The company is also going to keep growing its headcount. This time last year, the firm consisted of 10 people in Austin, Texas, and it has since grown to 45 employees, with an additional office in Redwood City, California. Henry said he expects to reach 85 employees by the end of this year. The company still has several key proof points ahead of it. Right now, it’s easy to make claims about how its technology will perform at launch, but the proof will come when customers get their hands on the final product next year and can test those claims for themselves. "
1,142
2,014
"DarkHotel: A Sophisticated New Hacking Attack Targets High-Profile Hotel Guests | WIRED"
"https://www.wired.com/2014/11/darkhotel-malware"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Kim Zetter Security DarkHotel: A Sophisticated New Hacking Attack Targets High-Profile Hotel Guests Getty Images Save this story Save Save this story Save The hotel guest probably never knew what hit him. When he tried to get online using his five-star hotel's WiFi network, he got a pop-up alerting him to a new Adobe software update. When he clicked to accept the download, he got a malicious executable instead. What he didn't know was that the sophisticated attackers who targeted him had been lurking on the hotel's network for days waiting for him to check in. They uploaded their malware to the hotel's server days before his arrival, then deleted it from the hotel network days after he left. That's the conclusion reached by researchers at Kaspersky Lab and the third-party company that manages the WiFi network of the unidentified hotel where the guest stayed, located somewhere in Asia. __ Kaspersky says the attackers have been active for at least seven years, conducting surgical strikes against targeted guests at other luxury hotels in Asia as well as infecting victims via spear-phishing attacks and P2P networks.__ Kaspersky researchers named the group DarkHotel, but they're also known as Tapaoux by other security firms who have been separately tracking their spear-phishing and P2P attacks. The attackers have been active since at least 2007, using a combination of highly sophisticated methods and pedestrian techniques to ensnare victims, but the hotel hacks appear to be a new and daring development in a campaign aimed at high-value targets. "Every day this is getting bigger and bigger," says Costin Raiu, manager of Kaspersky's Global Research and Analysis Team. "They're doing more and more hotels." The majority of the hotels that are hit are in Asia but some are in the U.S. as well. Kaspersky will not name the hotels but says they've been uncooperative in assisting with the investigation. The attackers' methods include the use of zero-day exploits to target executives in spear-phishing attacks as well as a kernel-mode keystroke logger to siphon data from victim machines. They also managed to crack weak digital signing keys to generate certificates for signing their malware, in order to make malicious files appear to be legitimate software. "Obviously, we're not dealing with an average actor," says Raiu. "This is a top-class threat actor. Their ability to do the kernel-mode key logger is rare, the reverse engineering of the certificate, the leveraging of zero days---that puts them in a special category." >"Their targeting is nuclear themed, but they also target the defense industry base in the U.S." Targets in the spear-phishing attacks include high-profile executives---among them a media executive from Asia---as well as government agencies and NGOs and U.S. executives. The primary targets, however, appear to be in North Korea, Japan, and India. "All nuclear nations in Asia," Raiu notes. "Their targeting is nuclear themed, but they also target the defense industry base in the U.S. 
and important executives from around the world in all sectors having to do with economic development and investments." Recently there has been a spike in the attacks against the U.S. defense industry. The attackers seem to take a two-pronged approach---using the P2P campaign to infect as many victims as possible and then the spear-phishing and hotel attacks for surgically targeted attacks. In the P2P attacks thousands of victims are infected with botnet malware during the initial stage, but if the victim turns out to be interesting, the attackers go a step further to place a backdoor on the system to exfiltrate documents and data. Until recently, the attackers had about 200 command-and-control servers set up to manage the operation. Kaspersky managed to sinkhole 26 of the command server domains and even gained access to some of the servers, where they found unprotected logs identifying thousands of infected systems. A lot of the machines in the attackers' logs, however, turned out to be sandboxes set up by researchers to ensnare and study botnets, showing how indiscriminating the attackers were in their P2P campaign. The attackers shut down much of their command infrastructure in October, however, presumably after becoming aware that the Kaspersky researchers were tracking them. "As far as I can see there was an emergency shut down," Raiu says. "I think there is a lot of panic over this." That panic may be because the campaign shows signs of possibly emanating from an important U.S. ally: South Korea. Researchers point out that one variant of malware the attackers used was designed to shut down if it found itself on a machine whose codepage was set to Korean. The key logger the attackers used also has Korean characters inside and appears to have ties to a coder in South Korea. The sophisticated nature of the key logger as well as the attack on the RSA keys indicates that DarkHotel is likely a nation-state campaign---or at least a nation-state supported campaign. If true, this would make the attack against the U.S. defense industry awkward, to say the least. Raiu says the key logger, a kernel-mode logger, is the best written and most sophisticated logger he's seen in his years as a security researcher. Kernel-mode malware is rare and difficult to pull off. Operating at the core of the machine, rather than the user level where most software applications run, allows the malware to better bypass antivirus scanners and other detection systems. But kernel-mode malware requires a skillful touch since it can easily crash a system if not well-designed. "You have to be very skilled in kernel-level development and this is already quite a rare skillset," says Vitaly Kamluk, principal security researcher at Kaspersky Lab. "Then you have to make it very stable…. It must be very stable and very well tested." There's no logical reason to use a kernel-level keylogger, says Raiu, since it's so easy to write key loggers that hook the Windows API using about four lines of code. "But these guys prefer to do a kernel-level keylogger, which is about 300 kilobytes in size---the driver for the key logger---which is pretty crazy and very unusual.
So the guy who did it is super confident in his coding skills. He knows that his code is top-notch." The logger, which was created in 2007, appears to have been written by someone who goes by the name "Chpie"---a name that appears in source code for the logger. Chpie is the name used by a South Korean coder who is known to have created another kernel-level key logger that Raiu says appears to be an earlier version of this one. The key logger in the DarkHotel attack uses some of the same source code but is more sophisticated, as if it's an upgraded version of the earlier keylogger. Aside from the sophisticated key logger, the attackers' use of digital certificates to sign their malware also points to a nation-state or nation-state-supported actor. The attackers found that certificate authorities belonging to the Malaysian government and to Deutsche Telekom were using weak 512-bit signing keys. The small key size allowed the attackers, with a little supercomputing power, to factor the 512-bit RSA keys (essentially re-engineer them) to generate their own digital certificates to sign their malware; a short sketch below illustrates the arithmetic. "You very rarely, if ever, see such techniques used by APT (advanced persistent threat) groups," Raiu says. "Nobody else as far as we know has managed to do something similar, despite the fact that these certificates existed for some time.... This is [an] NSA-level infection mechanism." These sophisticated elements of the attack are important, but the most intriguing part of the DarkHotel campaign is the hotel operation. The Kaspersky researchers first became aware of the hotel attacks last January when they got reports through their automated system about a cluster of customer infections. They traced the infections to the networks of a couple of hotels in Asia. Kamluk traveled to the hotels to see if he could determine how guests were being infected, but nothing happened to his machine. The hotels proved to be of no help when Kamluk told them what was happening to guests. But during his stay, he noticed that both hotels used the same third-party firm to manage their guest WiFi. Some hotels own and operate their network infrastructure; others use a managed-services firm. The company managing the WiFi network of the two hotels Kamluk visited wishes to remain anonymous, but it was an unusually willing partner in getting to the bottom of the attacks. It acted quickly to provide Kaspersky with server images and logs to track down the attackers. Although the attackers left very few traces, "There were certain command lines which should not have been there in the hotel system," a senior executive with the managed-services company says. In one case, the researchers found a reference to a malicious Windows executable in the directory of a Unix server. The file itself was long gone, but a reference pointing to its former existence remained. "[T]here was a file-deletion record and a timestamp of when it happened," says Kamluk. Judging from traces left behind, the attackers had operated outside normal business hours to place their malware on the hotel system and infect guests. 
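A minimal sketch of why such short signing keys fail, using deliberately tiny primes and made-up values rather than anything recovered from the DarkHotel operation: once an RSA modulus is factored, the private exponent, and with it the ability to sign arbitrary files, follows from routine arithmetic. A genuine 512-bit modulus resists a laptop, but not a modest amount of rented compute. The sketch assumes Python with the sympy package installed.

from sympy import randprime, factorint, mod_inverse

# A "CA" builds a weak key; 32-bit primes stand in for the halves of a 512-bit modulus.
p = int(randprime(2**31, 2**32))
q = int(randprime(2**31, 2**32))
while q == p:                      # make sure the two primes are distinct
    q = int(randprime(2**31, 2**32))
n, e = p * q, 65537
d = int(mod_inverse(e, (p - 1) * (q - 1)))       # the CA's private signing exponent

digest = 0xC0FFEE % n              # hypothetical hash of a file to be signed
genuine_sig = pow(digest, d, n)    # a signature only the key holder should be able to produce

# An attacker who knows only the public key (n, e) factors the modulus.
p2, q2 = (int(f) for f in sorted(factorint(n)))  # instant here; infeasible for a strong key
d_recovered = int(mod_inverse(e, (p2 - 1) * (q2 - 1)))
forged_sig = pow(digest, d_recovered, n)

print("forged signature verifies:", pow(forged_sig, e, n) == digest)   # True
print("identical to the genuine one:", forged_sig == genuine_sig)      # True

Textbook RSA without padding is shown purely to keep the arithmetic visible; forging a usable code-signing certificate additionally requires reproducing the certificate fields that the signature covers.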
"They started early in the morning before the hotel staff would arrive to the office and then after they leave the office they were also distributing the malware then," says the senior executive. "This is not just something that happened yesterday. These are people who have been taking their time. They've been trying to access networks over the last years." It's unclear how many other hotels they've attacked, but it appears the hackers cherry-pick their targets, only hitting hotels where they know their victims will be staying. When victims attempt to connect to the WiFi network, they get a pop-up alert telling them their Adobe Flash player needs an update and offering them a file, digitally signed to make it look authentic, to download. If the victims accept the download, they get a Trojan delivered instead. Crucially, the alerts pop up before guests actually get onto the WiFi network, so even if they abandon their plan to get online, they are infected the moment they hit "accept." The malware doesn't then immediately go to work. Instead it sits quietly for six months before waking up and calling home to a command-and-control server. Raiu says this is likely meant to circumvent the watchful eyes of IT departments who would be on the lookout for suspicious behavior immediately after an executive returned from a trip to Asia. At some of the hotels, only a few victims appear to have been targeted. But on other systems, it appears the attackers targeted a delegation of visitors; in that instance, evidence shows they tried to hit every device attempting to get online during a specific period of time. "Seems like some event occurred or maybe some delegation visited the hotel and stayed there for a few days and they tried to hit as many members of the delegation as possible," Raiu says. He thinks the victims were ones the attackers couldn't reach through ordinary spear-phishing attacks---perhaps because their work networks were carefully protected. Kaspersky still doesn't know how the attackers get onto the hotel servers. They don't live on the servers the way criminal hackers do---that is, maintain backdoor access to the servers to gain re-entry over an extended period of time. The DarkHotel attackers come in, do their deed, then erase all evidence and leave. But in the logs, the researchers found no backdoors on the systems, so either the attackers never used them or successfully erased any evidence of them. Or they had an insider who helped them pull off the attacks. The researchers don't know exactly who the attackers were targeting in the identified hotel attacks. Guests logging onto WiFi often have to enter their last name and room number in the WiFi login page, but neither Kaspersky nor the company that maintained the WiFi network had access to the guest information. 
Reports that come into Kaspersky's automated reporting system from customers are anonymous, so Kaspersky is seldom able to identify a victim beyond an IP address. The number of hotels that have been hit is also unknown. So far the researchers have found fewer than a dozen hotels with infection indicators. "Maybe there are some hotels that … used to be infected and we just cannot learn about that because there are no traces," the network-management executive says. The company worked with Kaspersky to scour all of the hotel servers it manages for any traces of malware and is "fairly confident that the malware doesn't sit on any hotel server today." But that is just one network-management company. Presumably, the DarkHotel operation is still active on other networks. Safeguarding against such an attack can be difficult for hotel guests. The best defense is to double-check update alerts that pop up on your computer during a stay in a hotel. Go to the software vendor's site directly to see if an update has been posted and download it directly from there. Though, of course, this won't help if the attackers are able to redirect your machine to a malicious download site. "
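The closing advice lends itself to a simple out-of-band check. The sketch below is illustrative only and assumes the vendor publishes a SHA-256 digest for its installer on its own site; the file path and digest passed on the command line are placeholders, not artifacts from the DarkHotel case.

import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large installers never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python check_download.py <downloaded_installer> <digest_published_by_vendor>
    installer_path, published_digest = sys.argv[1], sys.argv[2].lower()
    actual_digest = sha256_of(installer_path)
    if actual_digest == published_digest:
        print("OK: digest matches the vendor-published value")
    else:
        print("MISMATCH: computed " + actual_digest + "; do not run this file")

A mismatch does not say who tampered with the file, but it is reason enough to discard the download and fetch the update again over a connection you trust.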
1,143
2,016
"America's Top Spy James Clapper and the Future of Cyberwar and Surveillance | WIRED"
"https://www.wired.com/2016/11/james-clapper-us-intelligence"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Garrett M. Graff Security America's Top Spy Talks Snowden Leaks and Our Ominous Future Save this story Save Save this story Save [ On Thursday morning *, November 17**,** James Clapper announced** that** he** had** submitted** his** letter** of** resignation**. He** will** serve** out** the** remaining** 64** days** of** his** term**.*] Public appearances don't come easily to James Clapper , the United States director of national intelligence. America’s top spy is a 75-year-old self-described geezer who speaks in a low, guttural growl; his physical appearance—muscular and bald—recalls an aging biker who has reluctantly accepted life in a suit. Clapper especially hates appearing on Capitol Hill, where members of Congress wait to ambush him and play what he calls “stump the chump.” As he says, “I rank testimony—particularly in the open—right up there with root canals and folding fitted sheets.” One of the things Clapper does profess to enjoy about his job is meeting with the men and women who make up his covert empire of 17 agencies, which range from brand names like the CIA, NSA, DEA, and FBI to lesser-known units like the Treasury Department’s Office of Intelligence and Analysis. As he has traveled the country and the world over his six years in office, he has hosted scores of town hall meetings with intelligence officers, analysts, and operatives. The events are typically low-key, focusing less on what’s in the news than on the byzantine and, to Clapper, almost soothing minutiae of the military-intelligence bureaucracy. And so it was that he found himself in late August in an auditorium at US Strategic Command near Omaha, Nebraska, headquarters of the nation’s nuclear forces, taking questions from a group of 180 civilian and military personnel. There were fairly routine queries about China, recruiting, and coordination between the intel services. Then an older man in a suit, a lifer like Clapper, reached for the microphone and asked him something no one ever had in his tenure as director of national intelligence. For a moment the question stopped Clapper in his tracks. “Is spying moral?” Clapper has found himself defending his agencies from the charge that they’re leading the nation into a dystopian future. Jared Soares Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Back in the early 1970s, James Clapper was a young military assistant to the director of the NSA when the entire US intelligence establishment was thrown into upheaval. A team of antiwar activists had broken into an FBI field office in Media, Pennsylvania, and made off with thousands of files. In them was evidence of multiple illegal domestic spying programs, conducted by J. Edgar Hoover’s FBI, aimed largely at neutralizing left-wing dissent in America. 
Public faith in US intelligence, already poisoned by the CIA’s cold war regime of dirty tricks, plummeted further. And Congress moved to rein in America’s spies, hardening laws and norms against domestic surveillance. Some 40 years later, Clapper now presides over a broader intelligence purview than any one of his bosses did back in the ’70s. And hanging over his tenure is a sense that our spies have once again overstepped the bounds of acceptable behavior. Many in the public today regard former NSA contractor Edward Snowden as a whistle-blower and a hero for exposing another era of domestic surveillance. Clapper has found himself defending his agencies from the charge that they’re leading the nation into a dystopian future in which an all-seeing government kills from the sky with no accountability, hoovers up vast troves of data from law-abiding people the world over, and undermines personal computer security through back doors, malware, and industry side deals. He argues, though, that today’s scandals pale by comparison to those of an earlier era. The programs exposed by Snowden, he says, “had all kinds of oversight by all three branches of government, very limited sets of data, and a very small cadre of people who had access to it. We had none of that in the ’70s.” Clapper says he has never doubted the morality of his profession. The job of the intelligence community is, in his view, honorably straightforward: to provide policymakers with objective analysis derived from intelligence gathered through legally authorized methods. It’s the battlefield that’s confusing and dystopian. From Clapper’s standpoint, the country is locked in a seemingly constant state of war against a protean and often faceless set of enemies, at a time when a single employee can walk out with a thumb drive containing decades’ worth of secrets. It’s enough to make him nostalgic for the comparatively uncomplicated era of nuclear détente. “Sometimes I long for the halcyon days of the cold war,” he tells me. “We had a single adversary and we understood it.” Rather than worry whether his spies have gone too far, Clapper worries that leaders in Washington are ill-equipped to tackle the multiplying, metastasizing set of threats that face America. His annual appearances on Capitol Hill—filled with discussions about ISIS, cyberwar, North Korea’s nuclear program, and new Russian and Chinese aggression—have been so routinely pessimistic that he refers to his yearly global threat assessment as the Litany of Doom. Unpredictable instability has been a constant for this administration and will be, he says, for the next one too. But in mere weeks, when a new presidential administration takes office, all those issues will be someone else’s problem. For Clapper, the transition can’t come soon enough. He has spent much of this year literally counting down the days he has left. 
Some mornings, when he briefs the commander in chief, known as Intelligence Customer Number One, President Barack Obama will ask him what the current tally is and then offer Clapper a fist bump. In his final months in the role, Clapper and more than a dozen of his top aides and advisers provided WIRED with an unprecedented series of interviews discussing the state of America’s intelligence apparatus and the threats they’ll be handing off to a new administration come January 20. Even six years in, such exchanges don’t come naturally. “In this job,” Clapper says, “I’ve found the less I talk, the better.” The nation's first director of national intelligence, John Negroponte, opened shop in 2005 with a staff of 11 crammed into a small office close to the White House—filling a new post created in the aftermath of 9/11 in recognition that the country needed a single figure to oversee its intelligence efforts. By the time Clapper arrived in the job five years later, the staff occupied a 51-acre complex in McLean, Virginia. Though discreetly identified only by a roadside sign, 1550 Tysons McLean Drive is actually easily visible to passengers landing at Reagan National Airport. From the air, its two buildings form an L and an X, a nod to its gratuitously patriotic post-9/11 moniker, Liberty Crossing, or “LX” in government-speak. The compound houses the 1,700 employees of the office of the director of national intelligence as well as the National Counterterrorism Center, another post-9/11 creation, whose multistory command post was built to mimic the fictional one in Kiefer Sutherland’s drama 24. It’s a city unto itself, with a police force, a Dunkin’ Donuts, and a Starbucks. Clapper’s office, on the sixth floor of the L building, is large but mostly barren except for standard-issue government-executive dark wood furniture. One notable exception: a poster by the door of a stern bald eagle, with the caption "I am smiling." Clapper is about as steeped in the intelligence business as any American ever has been. His father worked in signals intelligence during World War II. And when the young James met President John F. Kennedy in 1962 as a 21-year-old Air Force ROTC cadet, he told the commander in chief that he too intended to become an intelligence officer. It’s the only profession he ever really aspired to. Clapper met his wife at the NSA (her father also was an intelligence officer), and in Vietnam he shared a trailer with his father, who was the NSA’s deputy chief of operations there. By now Clapper has devoted more than a half century to the field. In 2007, then–secretary of defense Robert Gates installed him as the Pentagon’s undersecretary of defense for intelligence, overseeing all four of its defense-related intel offices. 
Then in 2010, angry over the intelligence community’s intransigence and failure to connect the dots to prevent the Christmas Day bombing attempt aboard a Northwest Airlines flight, Obama turned to Clapper and made him the nation’s fourth director of national intelligence in just five years. Clapper figured he’d spend his tenure working behind the scenes, coordinating the nation’s many-tentacled intelligence apparatus. Clapper’s life is a whirl of video teleconferences and nondescript spaces—subterranean briefing rooms, flatscreen-lined command centers, and eavesdropping-proof chambers called sensitive compartmented information facilities, or SCIFs (pronounced “skiffs” in spookspeak). His armored, antenna-topped black SUV—more tank than car—even has a satellite dish to keep Clapper in secure contact wherever he’s driving around DC. When he travels, a special team converts a hotel room into a secure communications suite. His digital hearing aids are regularly checked by security to ensure that no foreign adversary is listening, and his counterintelligence team dumbs down the iPads he uses to brief the president in the Oval Office so that they can’t transmit or eavesdrop. Clapper holds one of the broadest portfolios in government. The entire world is his domain: every election, economic upheaval, technological advance, terrorist plot, or foreign leader’s bad hair day. “I never get a pass in meetings,” he says. Thanks to the documents leaked by Snowden, the American public now knows that Clapper’s empire encompasses more than 107,000 employees, roughly equivalent to the population of Green Bay, Wisconsin. Their combined budget exceeds $52 billion, including $10 billion for the NSA and $14 billion for the CIA, $2.6 billion of which goes for covert action programs like drone strikes and sabotaging Iran’s nuclear program. It’s inside that workforce where Clapper has had his biggest successes, making headway in areas like procurement reform and IT upgrades or building partnerships with foreign governments and domestic agencies. Clapper has also tried hard to improve diversity, which he says still has a long way to go, and he became an unlikely champion for integrating LGBT employees into the intelligence community. “If I’d been able to work all the time on improving the institution and the community, that’d have been much more satisfying,” he says. But he knows that few outsiders will recall any of that. Instead he will most likely be remembered for something else that originated inside his workforce: one of the most significant intelligence breaches in US history. On Saturday, June 8, 2013, Clapper was at the office, giving a rare TV interview to NBC’s Andrea Mitchell in an attempt to quell the growing controversy over a series of leaks in The Guardian and The Washington Post about the nation’s post-9/11 surveillance programs. 
“It is literally—not figuratively, literally—gut-wrenching to see this happen, because of the huge, grave damage it does to our intelligence capabilities,” Clapper told Mitchell. Minutes later, a member of his security detail—plainclothes, Glock-carrying CIA guards who each wear generic badges identifying them as a US special agent—interrupted to say Clapper had to take an urgent telephone call. That’s when he first heard the name that would, more than any other person, define his tenure: Edward Snowden. In addition to the general shock waves that Snowden’s leaks sent, they caused a particular problem for Clapper personally. Upon discovering that the NSA had been vacuuming up global internet communications under a program codenamed Prism, the media quickly directed a spotlight on a seemingly innocuous Capitol Hill exchange that had occurred three months earlier between Clapper and US senator Ron Wyden. In a hearing on March 12, 2013, Wyden had asked Clapper, “Does the NSA collect any type of data at all on millions, or hundreds of millions, of Americans?” “No, sir,” Clapper replied. “It does not?” Wyden asked, somewhat dumbfounded, since as a high-ranking intelligence committee member he knew otherwise. “Not wittingly,” Clapper said. “There are cases where they could inadvertently, perhaps, collect, but not wittingly.” The hearing moved on with hardly a note of the exchange, but Wyden and his intelligence staffer were floored by what seemed to be an outright lie. Wyden, along with US senators Dianne Feinstein and Mark Udall, had spent years pushing back against the worst excesses of the post-9/11 surveillance state. Wyden had watched as intelligence leaders at the NSA, who reported to Clapper, issued a series of purposefully misleading statements about their programs. They had already spent years on a “deception spree,” Wyden tells me. “He presided for years over an intelligence community that was riddled with examples.” These included then–NSA director Keith Alexander’s 2012 comment at the DefCon hacker convention that the agency didn’t collect dossiers on millions of Americans, which Wyden calls “one of the most false statements ever made about US intelligence.” According to Snowden, it was Clapper’s response to Wyden that sent him over the edge. Though Snowden did not respond to an interview request for this story, he told WIRED in 2014 that he was horrified by how glaring and banal Clapper’s lie was: “He saw deceiving the American people as what he does, as his job, as something completely ordinary.” Clapper brusquely rejects the idea that his exchange with Wyden motivated Snowden. “He’s tried to sell that story, but it’s bullshit,” he says, pointing to the fact that Snowden’s document-gathering began months before Clapper entered that Senate committee room. “If for whatever reason Snowden felt compelled to expose what he felt were abuses related to so-called quote-unquote ‘domestic surveillance,’ I might be able to understand what he did. 
But he exposed so much else that had nothing to do with domestic surveillance that has been profoundly damaging,” Clapper says. “I think he’s a narcissist. I don’t buy the idealism that he professes. I don’t buy that a bit.” After a series of evolving explanations, Clapper tried to clean up his mess of a statement to Wyden by writing an apology of sorts to Intelligence Committee chair Feinstein, two weeks after the Snowden leaks started: “My response was clearly erroneous.” He resisted calls to resign, even as critics called for his indictment for perjury. Senator Rand Paul said Clapper should share a jail cell with Snowden himself. Over the past year, the explanation that Clapper has settled on is that he simply got confused answering Wyden’s question. Clapper says he was thinking about the programs that collected content, while Wyden was asking about programs that collected metadata. “The popular narrative is that I lied, but I just didn’t think of it. Yes, I made a mistake, but I didn’t lie. There’s a big difference.” Clapper knows the Wyden exchange and Snowden revelations will dominate his legacy. “I’m quite sure that will be in the first line of my Washington Post obituary,” he says. “But that’s life in the big city.” If anything, Clapper says, the public backlash over the Snowden leaks surprised him—and the intelligence community as a whole. “The shock was a shock,” he says. His agencies thought they were doing exactly what the American people wanted them to be doing—using every tool legally available to them. “I never met a collection capability I didn’t like, you know?” he jokingly told a group of intel leaders this fall. In his mind the adverse reaction stemmed in part from the fact that, in the era after 9/11, the Bush administration claimed too much power for its sprawling war on terror in secret. More should have been publicly debated and authorized by Congress, he says, including the sweeping domestic surveillance program that lay at the heart of Snowden’s explosive disclosures. Clapper believes that in the wake of the September 11 attacks, the public and Congress would have given the nation’s spies almost anything they requested. “We could’ve gotten legislation to drive a truck through,” Clapper says. “I’m convinced that if we’d explained the program and the need, Prism would have been no more controversial than the FBI storing millions of fingerprints.” In fact, he says, while the legislative changes after Snowden’s revelations made the process slower for the NSA, they greatly boosted the total amount of data the agency could legally access. “Instead of the NSA storing the data, we go to the companies and ask them for it,” he says. “It actually gave us broader access across a broader range of providers than the original programs. 
If people think their civil liberties and privacy are going to be better protected by the providers, OK.” The Coming Threats The new presidential administration will need to look ahead to a whole range of emerging technological threats, many of which are being studied inside a spy skunkworks called the Intelligence Advanced Research Projects Activity, led by director Jason Matheny. —G.M.G. Human genomic modification What if you could create a population of a million Einsteins? As gene science advances, countries will likely adopt human modifications at different rates—and might even select for different traits. “There are plausible scenarios where there are strong first-mover advantages,” Matheny says. Counterspace weapons Increased reliance on satellites for GPS, weather forecasting, communications, imaging, and mapping will likely make space one of the first battlefields of the next major war. Russia has been building new radar jammers and laser weapons that could blind US satellites, and China has tested an antisatellite missile. 3-D-printed weapons The rapid advance and miniaturization of 3-D printers will give individuals an ability to manufacture weapons that until recently belonged only to nations. “You could imagine a state 10 years from now where someone could use insect-sized drones that were built by 3-D printers, then weaponized with botulism toxin,” Matheny says. Artificial intelligence As more companies and governments invest in machine learning programs, Matheny is concerned about the unintended consequences of letting these systems out onto the Internet. “We worry about how those systems are embedded in critical infrastructure—financial systems, energy systems, weapon systems.” Bioweapons Advancing technology could allow scientists to create new superviruses—or even bring back extinct diseases. Scientists have been able to synthesize the poliovirus and make designer forms of deadly mousepox and cowpox. “There’s a line that nature is the best bioterrorist,” Matheny says. “We don’t actually know that’s true.” Since the Snowden breach, Clapper has tried to make more of an effort to talk publicly about the intelligence community’s work and release more of its records. This is partly just a concession to an unkind reality: Clapper doesn’t really think it’s possible to prevent another Snowden. Indeed, evidence suggests there is at least one other leaker still siphoning information about more recent classified NSA programs. He believes his workforce has to get out in front of a new era in which the government can hide far less. “At some point there will need to be a fairly fundamental change in the classification system,” he warned intelligence executives this fall. The current one, he said, “was born in a hard-copy paper era, and the rules we have today really aren’t compatible with technology and the way we conduct our business.” That’s similar to what Wyden says he’s been arguing for years. The past decade has shown that secrets don’t keep, he says, and when the American people discover they’re being misled, that undermines their trust in government and leads them to question its morality and ethics. 
“The whole history of America is that the truth eventually comes out,” Wyden says. “I continue to be concerned about how, in the intelligence community, too often what the American people are told isn’t in line with what I learn about privately. That’s not right.” Among other small steps toward openness, Clapper has overseen an effort to ease into public view more information about the drone program, which has faced increasing opposition, particularly after the September 2011 killing of Anwar al-Aulaqi, an American cleric who had embraced al Qaeda and become a top leader of its affiliate in Yemen. That strike, which also killed another American, Samir Khan, and a second strike weeks later, which accidentally killed al-Aulaqi’s 16-year-old son, brought new attention to the killing of US citizens abroad by US intelligence and military without judicial oversight. In July, Clapper disclosed for the first time the government’s tally of civilians killed by drones in areas outside of hostile activities. Released around 6 pm on the Friday of the Fourth of July holiday weekend, the tally was widely derided as laughably low—between 2009 and 2015, Clapper said, the US conducted 473 drone strikes, killing around 2,500 “combatants” and between 64 and 116 “noncombatants.” These are just a fraction of the numbers that have been compiled by nongovernmental groups, which estimate more like 450 civilian dead in Pakistan alone. But Clapper told me he stands by his figures. “We did expose the full truth,” he says. Then he adds a curious caveat: “I think that’s a fair and accurate representation to the extent that we could be public about it.” Wyden says he has indeed seen a recent shift toward transparency in Clapper’s empire. The new NSA director, Michael Rogers, has been much more open with Congress. “I’m quite encouraged by Mike Rogers’ approach,” Wyden says. “He’s been very different.” But mostly Clapper’s critics say that while the intelligence world might be offering more transparency at the margins, they haven’t seen evidence of any major philosophical shift. The ACLU’s principal technologist, Christopher Soghoian, says that while Clapper’s office has started a Tumblr and pushed to declassify some significant historical documents—including the drone casualty report and 28 long-hidden pages of a post-9/11 government investigation that dealt with Saudi Arabia’s role in financing and coordinating the attacks—it has yet to make public or confirm the existence of a single surveillance program or tool not exposed by Snowden. “To the casual observer it might seem like the DNI’s being more transparent,” Soghoian says. 
“What I think is that the DNI’s office has embraced transparency theater.” One of the biggest projects of Clapper’s tenure post-Snowden has been to declassify thousands of the top-secret intelligence dossiers, known today as the President’s Daily Brief, that have been delivered to the Oval Office every morning since the Kennedy administration. Over the past year, Clapper and CIA director John Brennan have disclosed the majority of them up through the Ford administration. In August the two men traveled to the Richard Nixon Presidential Library to mark the release of some 2,500 Nixon- and Ford-era briefings. Clapper spent the flight to California hunched over his laptop, reading the declassified documents. The experience was an odd one, he admitted, because the papers still had plenty of redactions—white boxes blocking out snippets and paragraphs of text. It had been years since Clapper had read documents in which anything was redacted from his eyes. “I do have to say that as I was reading, I was thinking, ‘I wonder why we redacted that? Could we have released more? What were we covering up right there?’” Before the event at the Nixon library, he and Brennan took a private tour of the museum, which was undergoing an extensive renovation. The guide explained that once construction was complete, the tour would begin not with Nixon’s birth but with the turbulent 1960s. “We’ll start people with the chaos of 1968. By the time they finish walking through, they’ll be wondering why anyone wanted to be president then,” the energetic young guide explained. As the two intel chiefs walked into the next gallery, Clapper muttered under his breath to Brennan, “Still a valid question.” One of the most alarming threats that has dogged Clapper’s tenure is a form of warfare that the United States itself pioneered. In 2008 a secret team of Israeli and American operatives unleashed the Stuxnet virus on Iran’s Natanz nuclear plant, using the worm to physically destroy the plant’s uranium centrifuges. It is widely considered the first major modern cyberweapon. The covert attack came to light in 2010, just as Clapper was taking office. In the years since, other nations have attacked the US, from Iran’s theft of customer data from the Las Vegas Sands casino in 2014 to North Korea’s hack of Sony’s email servers. Just weeks before Election Day 2016, Clapper accused Russian officials of meddling in US politics, hacking campaigns and political parties. Those assaults were minuscule compared to what the US will face in the years to come, Clapper says. He’s worried not just about data destruction and theft but about what he calls the “next push of the envelope”: data manipulation, whereby adversaries subtly edit and corrupt information inside US computer systems, undermining confidence in government or industry records. 
Government and private networks aren’t nearly as secure as they need to be, Clapper says. At the same time, he sees the offensive capability of the NSA and the Pentagon as key to keeping the peace online. Clapper has lamented the rapid spread of apps and services that offer end-to-end encryption; he argues that Snowden’s revelations have “sped up” the world’s adoption of advanced encryption by as much as seven years. He says that he and FBI director James Comey have never advocated for backdoor access to private data—a move that critics say is sure to make everyone more vulnerable to hacking by third parties who will inevitably discover and exploit the same back door. He believes the government needs to work with the tech industry to balance society’s desire for security with concerns over personal privacy. “I think with all the creativity and intellectual horsepower that’s in the industry, if they put their minds to it and some resources, they could come up with a solution.” He wonders if a type of escrow system in which encryption keys could be held by multiple parties would work. “There’s got to be a better way than this absolutist business, so that pornographers, rapists, criminals, terrorists, druggies, and human traffickers don’t get a pass.” Clapper has little faith in encryption as a bulwark against cyberattacks. Instead he thinks the answer lies in a strategy of deterrence. That’s why it doesn’t bother him that America inaugurated the era of cyberwarfare. “I’m glad, if we were in fact the first,” he says. He hopes that the use of weapons like Stuxnet—and their demonstrated power to wreak real-world havoc—will eventually help keep the peace between state adversaries and perhaps even engender a strategic analogue to the cold war’s mutually assured destruction doctrine. If nations recognize that any act of cyberaggression is certain to result in retaliatory strikes that will wipe out their own critical systems, then they won’t act. “Until we create the substance and psychology of deterrence, these attacks are going to continue,” he says. He has little idea what that strategic deterrence looks like, though. “People understood nuclear deterrence. Cyber’s much harder to grasp.” That’s one problem for which he’s happy to pass the buck to his successor: “I don’t want that homework assignment.” In other respects too, he says, the nation needs to look further ahead. America is too preoccupied with terrorism and not focused enough on the most troubling long-range threats—from war in space, as China and Russia build antisatellite capability and threaten America’s dominance of technologies like GPS, to the ways in which artificial intelligence and human genomic modification could endanger national security. I ask him if the American people should just get used to terrorism attacks like those in Paris or San Bernardino, California. “I do,” he replies, his words clipped. “Got used to the cold war—went on a long time. 
Decades.” While Clapper grudgingly accepts the damage the Snowden affair has done to his own reputation, he worries more deeply about the impact it’s had on the intelligence workforce. He hates the thought that America might turn on his employees. He fears that, in the same way the nation and Congress turned their backs on the CIA officers who ran the agency’s “black sites” and torture program in the wake of 9/11, the country will one day turn on the people who carry out drone attacks. “I worry that people will decide retroactively that killing people with drones was wrong, and that will lead us to criticize, indict, and try people who helped kill with drones,” he says. “I find it really bothersome to set a moral standard retrospectively,” he says. “People raise all sorts of good questions about things America has done. Everyone now agrees that interning Japanese [Americans] in World War II was egregious—but at the time it seemed like it was in the best interests of the country.” Clapper, who endured a $40 million Senate investigation and condemnation of the CIA’s torture program, says he is concerned that today’s spies are at risk of similar changes in the political winds—where legally authorized actions they undertook in good faith become the basis for political witch hunts. He argues that during the past 15 years, the intelligence community has made mistakes—but it’s never willfully violated the law. Just as discomfiting to Clapper is the idea that such witch hunts will in turn lead his employees to question the worth and honor of their work. That’s why the question at the Omaha town hall meeting bothered him: Is spying moral? As he stood before a sea of suits and military uniforms, formulating his answer, Clapper knew something the rest of the room didn’t. That very week the FBI was hot on the trail of yet another Booz Allen Hamilton contractor it thought might be responsible for yet another round of leaks about classified NSA surveillance programs. After a pause, Clapper answered unapologetically: “We can do our job with a clear conscience, but we have to be careful. The history of the intelligence community is replete with violations of the trust of the American people.” That doesn’t mean that the job is immoral—it just means the job has to be done correctly. “I have always accepted intelligence was an honorable profession. We are all mindful of the need to comply with our moral values and the law.” Clapper’s grandson—who is about the same age as Clapper was when he was commissioned in the Air Force as an intelligence officer—recently started a technology job at the CIA. 
The two men, 53 years apart in age, have had long conversations over the past year about technology, the future of US intelligence, and its workforce. Clapper says he believes the intelligence world is doing fine with recruiting new hires but struggles to retain staff, particularly technologists lured by private-sector salaries and fewer restrictions. “When I was commissioned in the Air Force, I was committed to the institution for a career. He and those of his generation don’t look at it that way. They’re not as wedded to institutions,” Clapper says. Although he’ll enjoy a single Bombay gin and tonic or martini some nights, Clapper doesn’t have much opportunity to really relax. “Have you had a day off in the last six years? Really off?” he asks me, a rhetorical question that turns uncomfortable as he waits for an answer. It’s past 10 pm aboard his Air Force Gulfstream as we travel back to DC from the Nixon library event, and we are still an hour from landing at Joint Base Andrews. “I haven’t,” he finally continues. “I’ve worked at least part of every day for the last six years. When we finish talking, I’m going to keep working. Then tonight, I’ll go to my SCIF and keep working. I’ve got to be in the Oval tomorrow morning.” Clapper says he’s looking forward to leaving it all behind, even if many of his colleagues are anxious about what will come after him. As he said in public appearances this fall, “It makes a lot of people nervous that, with an election cycle that’s been sportier than we’re used to, we’ll drop a new president with new national security leaders into this situation.” Those officials will confront a world that he says looks little like the sound-bite versions offered at rallies. “I’m always struck by the simplicity of the campaign trail—but when I’m in the White House Situation Room, all of a sudden it’s complicated and complex,” he says. When it’s his time to leave in a few weeks, he’ll be happy to say good-bye to the SCIFs, the briefing rooms, the armored motorcades, the ever-watchful security. He looks forward to cleaning out his basement and, most of all, being spontaneous again. “Being under surveillance seven-by-24,” he says, pausing. “It’s stressful.” Unlike most of the foreign and domestic targets of the agencies he oversees, though, he knows he’s being watched. Garrett M. Graff (@vermontgmg) wrote about Civis Analytics in issue 24.07. This article appears in the December issue. 
"
1,144
2,014
"An Unprecedented Look at Stuxnet, the World's First Digital Weapon | WIRED"
"https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Kim Zetter Security An Unprecedented Look at Stuxnet, the World's First Digital Weapon This recent undated satellite image provided by Space Imaging/Inta SpaceTurk shows the once-secret Natanz nuclear complex in Natanz, Iran, about 150 miles south of Tehran. AP Photo/Space Imaging/Inta SpaceTurk, HO Save this story Save Save this story Save In January 2010, inspectors with the International Atomic Energy Agency visiting the Natanz uranium enrichment plant in Iran noticed that centrifuges used to enrich uranium gas were failing at an unprecedented rate. The cause was a complete mystery—apparently as much to the Iranian technicians replacing the centrifuges as to the inspectors observing them. Five months later a seemingly unrelated event occurred. A computer security firm in Belarus was called in to troubleshoot a series of computers in Iran that were crashing and rebooting repeatedly. Again, the cause of the problem was a mystery. That is, until the researchers found a handful of malicious files on one of the systems and discovered the world's first digital weapon. Stuxnet, as it came to be known, was unlike any other virus or worm that came before. Rather than simply hijacking targeted computers or stealing information from them, it escaped the digital realm to wreak physical destruction on equipment the computers controlled. Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon , written by WIRED senior staff writer Kim Zetter, tells the story behind Stuxnet's planning, execution and discovery. In this excerpt from the book, which will be released November 11, Stuxnet has already been at work silently sabotaging centrifuges at the Natanz plant for about a year. An early version of the attack weapon manipulated valves on the centrifuges to increase the pressure inside them and damage the devices as well as the enrichment process. Centrifuges are large cylindrical tubes—connected by pipes in a configuration known as a "cascade"—that spin at supersonic speed to separate isotopes in uranium gas for use in nuclear power plants and weapons. At the time of the attacks, each cascade at Natanz held 164 centrifuges. Uranium gas flows through the pipes into the centrifuges in a series of stages, becoming further "enriched" at each stage of the cascade as isotopes needed for a nuclear reaction are separated from other isotopes and become concentrated in the gas. Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon As the excerpt begins, it's June 2009—a year or so since Stuxnet was first released, but still a year before the covert operation will be discovered and exposed. As Iran prepares for its presidential elections, the attackers behind Stuxnet are also preparing their next assault on the enrichment plant with a new version of the malware. They unleash it just as the enrichment plant is beginning to recover from the effects of the previous attack. Their weapon this time is designed to manipulate computer systems made by the German firm Siemens that control and monitor the speed of the centrifuges. 
Because the computers are air-gapped from the internet, however, they cannot be reached directly by the remote attackers. So the attackers have designed their weapon to spread via infected USB flash drives. To get Stuxnet to its target machines, the attackers first infect computers belonging to five outside companies that are believed to be connected in some way to the nuclear program. The aim is to make each "patient zero" an unwitting carrier who will help spread and transport the weapon on flash drives into the protected facility and the Siemens computers. Although the five companies have been referenced in previous news reports, they've never been identified. Four of them are identified in this excerpt. The two weeks leading up to the release of the next attack were tumultuous ones in Iran. On June 12, 2009, the presidential elections between incumbent Mahmoud Ahmadinejad and challenger Mir-Hossein Mousavi didn't turn out the way most expected. The race was supposed to be close, but when the results were announced—two hours after the polls closed—Ahmadinejad had won with 63 percent of the vote over Mousavi's 34 percent. The electorate cried foul, and the next day crowds of angry protesters poured into the streets of Tehran to register their outrage and disbelief. According to media reports, it was the largest civil protest the country had seen since the 1979 revolution ousted the shah and it wasn't long before it became violent. Protesters vandalized stores and set fire to trash bins, while police and Basijis, government-loyal militias in plainclothes, tried to disperse them with batons, electric prods, and bullets. That Sunday, Ahmadinejad gave a defiant victory speech, declaring a new era for Iran and dismissing the protesters as nothing more than soccer hooligans soured by the loss of their team. The protests continued throughout the week, though, and on June 19, in an attempt to calm the crowds, the Ayatollah Ali Khamenei sanctioned the election results, insisting that the margin of victory—11 million votes—was too large to have been achieved through fraud. The crowds, however, were not assuaged. The next day, a twenty-six-year-old woman named Neda Agha-Soltan got caught in a traffic jam caused by protesters and was shot in the chest by a sniper's bullet after she and her music teacher stepped out of their car to observe. Two days later on June 22, a Monday, the Guardian Council, which oversees elections in Iran, officially declared Ahmadinejad the winner, and after nearly two weeks of protests, Tehran became eerily quiet. Police had used tear gas and live ammunition to disperse the demonstrators, and most of them were now gone from the streets. That afternoon, at around 4:30 p.m. local time, as Iranians nursed their shock and grief over events of the previous days, a new version of Stuxnet was being compiled and unleashed. While the streets of Tehran had been in turmoil, technicians at Natanz had been experiencing a period of relative calm. 
Around the first of the year, they had begun installing new centrifuges again, and by the end of February they had about 5,400 of them in place, close to the 6,000 that Ahmadinejad had promised the previous year. Not all of the centrifuges were enriching uranium yet, but at least there was forward movement again, and by June the number had jumped to 7,052, with 4,920 of these enriching gas. In addition to the eighteen cascades enriching gas in unit A24, there were now twelve cascades in A26 enriching gas. An additional seven cascades had even been installed in A28 and were under vacuum, being prepared to receive gas. Iranian President Mahmoud Ahmadinejad during a tour of centrifuges at Natanz in 2008. Office of the Presidency of the Islamic Republic of Iran The performance of the centrifuges was improving too. Iran's daily production of low-enriched uranium was up 20 percent and would remain consistent throughout the summer of 2009. Despite the previous problems, Iran had crossed a technical milestone and had succeeded in producing 839 kilograms of low-enriched uranium—enough to achieve nuclear-weapons breakout capability. If it continued at this rate, Iran would have enough enriched uranium to make two nuclear weapons within a year. This estimate, however, was based on the capacity of the IR-1 centrifuges currently installed at Natanz. But Iran had already installed IR-2 centrifuges in a small cascade in the pilot plant, and once testing on these was complete and technicians began installing them in the underground hall, the estimate would have to be revised. The more advanced IR-2 centrifuges were more efficient. It took 3,000 IR-1s to produce enough uranium for a nuclear weapon in one year, but it would take just 1,200 IR-2 centrifuges to do the same. Cue Stuxnet 1.001, which showed up in late June. To get their weapon into the plant, the attackers launched an offensive against computers owned by four companies. All of the companies were involved in industrial control and processing of some sort, either manufacturing products and assembling components or installing industrial control systems. They were all likely chosen because they had some connection to Natanz as contractors and provided a gateway through which to pass Stuxnet to Natanz through infected employees. To ensure greater success at getting the code where it needed to go, this version of Stuxnet had two more ways to spread than the previous one. Stuxnet 0.5 could spread only by infecting Step 7 project files—the files used to program Siemens PLCs. This version, however, could spread via USB flash drives using the Windows Autorun feature or through a victim's local network using the print-spooler zero-day exploit that Kaspersky Lab, the antivirus firm based in Russia, and Symantec later found in the code. Based on the log files in Stuxnet, a company called Foolad Technic was the first victim. It was infected at 4:40 a.m. on June 23, a Tuesday. But then it was almost a week before the next company was hit. The following Monday, about five thousand marchers walked silently through the streets of Tehran to the Qoba Mosque to honor victims killed during the recent election protests. 
Late that evening, around 11:20 p.m., Stuxnet struck machines belonging to its second victim—a company called Behpajooh. It was easy to see why Behpajooh was a target. It was an engineering firm based in Esfahan—the site of Iran's new uranium conversion plant, built to turn milled uranium ore into gas for enriching at Natanz, and also the location of Iran's Nuclear Technology Center, which was believed to be the base for Iran's nuclear weapons development program. Behpajooh had also been named in US federal court documents in connection with Iran's illegal procurement activities. Behpajooh was in the business of installing and programming industrial control and automation systems, including Siemens systems. The company's website made no mention of Natanz, but it did mention that the company had installed Siemens S7-400 PLCs, as well as the Step 7 and WinCC software and Profibus communication modules at a steel plant in Esfahan. This was, of course, all of the same equipment Stuxnet targeted at Natanz. At 5:00 a.m. on July 7, nine days after Behpajooh was hit, Stuxnet struck computers at Neda Industrial Group, as well as a company identified in the logs only as CGJ, believed to be Control Gostar Jahed. Both companies designed or installed industrial control systems. Iranian President Mahmoud Ahmadinejad observes computer monitors at the Natanz uranium enrichment plant in central Iran, where Stuxnet was believed to have infected PCs and damaged centrifuges. Office of the Presidency of the Islamic Republic of Iran Neda designed and installed control systems, precision instrumentation, and electrical systems for the oil and gas industry in Iran, as well as for power plants and mining and process facilities. In 2000 and 2001 the company had installed Siemens S7 PLCs in several gas pipeline operations in Iran and had also installed Siemens S7 systems at the Esfahan Steel Complex. Like Behpajooh, Neda had been identified on a proliferation watch list for its alleged involvement in illicit procurement activity and was named in a US indictment for receiving smuggled microcontrollers and other components. About two weeks after it struck Neda, a control engineer who worked for the company popped up on a Siemens user forum on July 22 complaining about a problem that workers at his company were having with their machines. The engineer, who posted a note under the user name Behrooz, indicated that all PCs at his company were having an identical problem with a Siemens Step 7 .DLL file that kept producing an error message. He suspected the problem was a virus that spread via flash drives. When he used a DVD or CD to transfer files from an infected system to a clean one, everything was fine, he wrote. But when he used a flash drive to transfer files, the new PC started having the same problems the other machine had. 
A USB flash drive, of course, was Stuxnet's primary method of spreading. Although Behrooz and his colleagues scanned for viruses, they found no malware on their machines. There was no sign in the discussion thread that they ever resolved the problem at the time. It's not clear how long it took Stuxnet to reach its target after infecting machines at Neda and the other companies, but between June and August the number of centrifuges enriching uranium gas at Natanz began to drop. Whether this was the result solely of the new version of Stuxnet or the lingering effects of the previous version is unknown. But by August that year, only 4,592 centrifuges were enriching at the plant, a decrease of 328 centrifuges since June. By November, that number had dropped even further to 3,936, a difference of 984 in five months. What's more, although new machines were still being installed, none of them were being fed gas. Clearly there were problems with the cascades, and technicians had no idea what they were. The changes mapped precisely, however, to what Stuxnet was designed to do. Reprinted from Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon. Copyright © 2014 by Kim Zetter. Published by Crown Publishers, an imprint of Random House LLC. "
1,145
2,013
"NSA Snooping Was Only the Beginning. Meet the Spy Chief Leading Us Into Cyberwar | WIRED"
"https://www.wired.com/2013/06/general-keith-alexander-cyberwar"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons James Bamford Security NSA Snooping Was Only the Beginning. Meet the Spy Chief Leading Us Into Cyberwar The Secret War: Infiltration. Sabotage. Mayhem. For years, four-star general Keith Alexander has been building a secret army capable of laynching devastating cyberattacks. Now it's ready to unleash hell. Mark Weaver, Mike Theiler/Corbis, Enzo Signorelli/Getty Images, Nick Servian/Alamy Save this story Save Save this story Save Inside Fort Meade, Maryland, a top-secret city bustles. Tens of thousands of people move through more than 50 buildings—the city has its own post office, fire department, and police force. But as if designed by Kafka, it sits among a forest of trees, surrounded by electrified fences and heavily armed guards, protected by antitank barriers, monitored by sensitive motion detectors, and watched by rotating cameras. To block any telltale electromagnetic signals from escaping, the inner walls of the buildings are wrapped in protective copper shielding and the one-way windows are embedded with a fine copper mesh. This is the undisputed domain of General Keith Alexander, a man few even in Washington would likely recognize. Never before has anyone in America’s intelligence sphere come close to his degree of power, the number of people under his command, the expanse of his rule, the length of his reign, or the depth of his secrecy. A four-star Army general, his authority extends across three domains: He is director of the world’s largest intelligence service, the National Security Agency; chief of the Central Security Service; and commander of the US Cyber Command. As such, he has his own secret military, presiding over the Navy’s 10th Fleet, the 24th Air Force, and the Second Army. Alexander runs the nation’s cyberwar efforts, an empire he has built over the past eight years by insisting that the US’s inherent vulnerability to digital attacks requires him to amass more and more authority over the data zipping around the globe. In his telling, the threat is so mind-bogglingly huge that the nation has little option but to eventually put the entire civilian Internet under his protection, requiring tweets and emails to pass through his filters, and putting the kill switch under the government’s forefinger. “What we see is an increasing level of activity on the networks,” he said at a recent security conference in Canada. “I am concerned that this is going to break a threshold where the private sector can no longer handle it and the government is going to have to step in.” In its tightly controlled public relations, the NSA has focused attention on the threat of cyberattack against the US—the vulnerability of critical infrastructure like power plants and water systems, the susceptibility of the military’s command and control structure, the dependence of the economy on the Internet’s smooth functioning. Defense against these threats was the paramount mission trumpeted by NSA brass at congressional hearings and hashed over at security conferences. 
But there is a flip side to this equation that is rarely mentioned: The military has for years been developing offensive capabilities, giving it the power not just to defend the US but to assail its foes. Using so-called cyber-kinetic attacks, Alexander and his forces now have the capability to physically destroy an adversary's equipment and infrastructure, and potentially even to kill. Alexander—who declined to be interviewed for this article—has concluded that such cyberweapons are as crucial to 21st-century warfare as nuclear arms were in the 20th. And he and his cyberwarriors have already launched their first attack. The cyberweapon that came to be known as Stuxnet was created and built by the NSA in partnership with the CIA and Israeli intelligence in the mid-2000s. The first known piece of malware designed to destroy physical equipment, Stuxnet was aimed at Iran's nuclear facility in Natanz. By surreptitiously taking control of an industrial control link known as a Scada (Supervisory Control and Data Acquisition) system, the sophisticated worm was able to damage about a thousand centrifuges used to enrich nuclear material. The success of this sabotage came to light only in June 2010, when the malware spread to outside computers. It was spotted by independent security researchers, who identified telltale signs that the worm was the work of thousands of hours of professional development. Despite headlines around the globe, officials in Washington have never openly acknowledged that the US was behind the attack. It wasn't until 2012 that anonymous sources within the Obama administration took credit for it in interviews with The New York Times. But Stuxnet is only the beginning. Alexander's agency has recruited thousands of computer experts, hackers, and engineering PhDs to expand US offensive capabilities in the digital realm. The Pentagon has requested $4.7 billion for "cyberspace operations," even as the budget of the CIA and other intelligence agencies could fall by $4.4 billion. It is pouring millions into cyberdefense contractors. And more attacks may be planned. Inside the government, the general is regarded with a mixture of respect and fear, not unlike J. Edgar Hoover, another security figure whose tenure spanned multiple presidencies. "We jokingly referred to him as Emperor Alexander—with good cause, because whatever Keith wants, Keith gets," says one former senior CIA official who agreed to speak on condition of anonymity. "We would sit back literally in awe of what he was able to get from Congress, from the White House, and at the expense of everybody else." Now 61, Alexander has said he plans to retire in 2014; when he does step down he will leave behind an enduring legacy—a position of far-reaching authority and potentially Strangelovian powers at a time when the distinction between cyberwarfare and conventional warfare is beginning to blur. A recent Pentagon report made that point in dramatic terms. It recommended possible deterrents to a cyberattack on the US. Among the options: launching nuclear weapons. 
He may be a four-star Army general, but Alexander more closely resembles a head librarian than George Patton. His face is anemic, his lips a neutral horizontal line. Bald halfway back, he has hair the color of strong tea that turns gray on the sides, where it is cut close to the skin, more schoolboy than boot camp. For a time he wore large rimless glasses that seemed to swallow his eyes. Some combat types had a derisive nickname for him: Alexander the Geek. Born in 1951, the third of five children, Alexander was raised in the small upstate New York hamlet of Onondaga Hill, a suburb of Syracuse. He tossed papers for the Syracuse Post-Standard and ran track at Westhill High School while his father, a former Marine private, was involved in local Republican politics. It was 1970, Richard Nixon was president, and most of the country had by then begun to see the war in Vietnam as a disaster. But Alexander had been accepted at West Point, joining a class that included two other future four-star generals, David Petraeus and Martin Dempsey. Alexander would never get the chance to serve in Vietnam. Just as he stepped off the bus at West Point, the ground war finally began winding down. In April 1974, just before graduation, he married his high school classmate Deborah Lynn Douglas, who grew up two doors away in Onondaga Hill. The fighting in Vietnam was over, but the Cold War was still bubbling, and Alexander focused his career on the solitary, rarefied world of signals intelligence, bouncing from secret NSA base to secret NSA base, mostly in the US and Germany. He proved a competent administrator, carrying out assignments and adapting to the rapidly changing high tech environment. Along the way he picked up master's degrees in electronic warfare, physics, national security strategy, and business administration. As a result, he quickly rose up the military intelligence ranks, where expertise in advanced technology was at a premium. In 2001, Alexander was a one-star general in charge of the Army Intelligence and Security Command, the military's worldwide network of 10,700 spies and eavesdroppers. In March of that year he told his hometown Syracuse newspaper that his job was to discover threats to the country. "We have to stay out in front of our adversary," Alexander said. "It's a chess game, and you don't want to lose this one." But just six months later, Alexander and the rest of the American intelligence community suffered a devastating defeat when they were surprised by the attacks on 9/11. Following the assault, he ordered his Army intercept operators to begin illegally monitoring the phone calls and email of American citizens who had nothing to do with terrorism, including intimate calls between journalists and their spouses. Congress later gave retroactive immunity to the telecoms that assisted the government. In 2003, Alexander, a favorite of defense secretary Donald Rumsfeld, was named the Army's deputy chief of staff for intelligence, the service's most senior intelligence position. 
Among the units under his command were the military intelligence teams involved in the human rights abuses at Baghdad's Abu Ghraib prison. Two years later, Rumsfeld appointed Alexander—now a three-star general—director of the NSA, where he oversaw the illegal, warrantless wiretapping program while deceiving members of the House Intelligence Committee. In a publicly released letter to Alexander shortly after The New York Times exposed the program, US representative Rush Holt, a member of the committee, angrily took him to task for not being forthcoming about the wiretapping: "Your responses make a mockery of congressional oversight." Alexander also proved to be militant about secrecy. In 2006 a senior agency employee named Thomas Drake allegedly gave information to The Baltimore Sun showing that a publicly discussed program known as Trailblazer was millions of dollars over budget and behind schedule. 1 In response, federal prosecutors charged Drake with 10 felony counts, including retaining classified documents and making false statements. He faced up to 35 years in prison—despite the fact that all of the information Drake was alleged to have leaked was not only unclassified and already in the public domain but in fact had been placed there by NSA and Pentagon officials themselves. (As a longtime chronicler of the NSA, I served as a consultant for Drake's defense team. The investigation went on for four years, after which Drake received no jail time or fine. The judge, Richard D. Bennett, excoriated the prosecutor and NSA officials for dragging their feet. "I find that unconscionable. Unconscionable," he said during a hearing in 2011. "That's four years of hell that a citizen goes through. It was not proper. It doesn't pass the smell test.") But while the powers that be were pressing for Drake's imprisonment, a much more serious challenge was emerging. Stuxnet, the cyberweapon used to attack the Iranian facility in Natanz, was supposed to be untraceable, leaving no return address should the Iranians discover it. Citing anonymous Obama administration officials, The New York Times reported that the malware began replicating itself and migrating to computers in other countries. Cybersecurity detectives were thus able to detect and analyze it. By the summer of 2010 some were pointing fingers at the US. Natanz is a small, dusty town in central Iran known for its plump pears and the burial vault of the 13th-century Sufi sheikh Abd al-Samad. The Natanz nuclear enrichment plant is a vault of a different kind. Tucked in the shadows of the Karkas Mountains, most of it lies deep underground and surrounded by concrete walls 8 feet thick, with another layer of concrete for added security. Its bulbous concrete roof rests beneath more than 70 feet of packed earth. Contained within the bombproof structure are halls the size of soccer pitches, designed to hold thousands of tall, narrow centrifuges. The machines are linked in long cascades that look like tacky decorations from a '70s discotheque. 
To work properly, the centrifuges need strong, lightweight, well-balanced rotors and high-speed bearings. Spin these rotors too slowly and the critical U-235 molecules inside fail to separate; spin them too quickly and the machines self-destruct and may even explode. The operation is so delicate that the computers controlling the rotors' movement are isolated from the Internet by a so-called air gap that prevents exposure to viruses and other malware. In 2006, the Department of Defense gave the go-ahead to the NSA to begin work on targeting these centrifuges, according to The New York Times. One of the first steps was to build a map of the Iranian nuclear facility's computer networks. A group of hackers known as Tailored Access Operations—a highly secret organization within the NSA—took up the challenge. They set about remotely penetrating communications systems and networks, stealing passwords and data by the terabyte. Teams of "vulnerability analysts" searched hundreds of computers and servers for security holes, according to a former senior CIA official involved in the Stuxnet program. Armed with that intelligence, so-called network exploitation specialists then developed software implants known as beacons, which worked like surveillance drones, mapping out a blueprint of the network and then secretly communicating the data back to the NSA. (Flame, the complex piece of surveillance malware discovered by Russian cybersecurity experts last year, was likely one such beacon.) The surveillance drones worked brilliantly. The NSA was able to extract data about the Iranian networks, listen to and record conversations through computer microphones, even reach into the mobile phones of anyone within Bluetooth range of a compromised machine. The next step was to create a digital warhead, a task that fell to the CIA Clandestine Service's Counter-Proliferation Division. According to the senior CIA official, much of this work was outsourced to national labs, notably Sandia in Albuquerque, New Mexico. So by the mid-2000s, the government had developed all the fundamental technology it needed for an attack. But there was still a major problem: The secretive agencies had to find a way to access Iran's most sensitive and secure computers, the ones protected by the air gap. For that, Alexander and his fellow spies would need outside help. This is where things get murky. One possible bread crumb trail leads to an Iranian electronics and computer wholesaler named Ali Ashtari, who later confessed that he was recruited as a spy by the Mossad, Israel's intelligence service. (Israel denied the claim.) Ashtari's principal customers were the procurement officers for some of Iran's most sensitive organizations, including the intelligence service and the nuclear enrichment plants. If new computers were needed or routers or switches had to be replaced, Ashtari was the man to see, according to reports from semi-official Iranian news agencies and an account of Ashtari's trial published by the nonprofit Iran Human Rights Voice. 
General Alexander's Empire The four-star general presides over a trifecta of intelligence agencies headquartered in Fort Meade, Maryland. Here's a guide to the alphabet soup of agency and subagency acronyms. —Cameron Bird NSA (National Security Agency) The nation's largest employer of mathematicians. The Department of Defense created this agency in 1952 to intercept, collect, and decrypt foreign communications. In the past decade, the NSA poured hundreds of millions of dollars into offensive cyberwar R&D. CSS (Central Security Service) Originally envisioned as a fourth branch of the armed services, this organization is now described as a "combat support agency." It coordinates with the Army, Navy, Coast Guard, Marines, and Air Force to eavesdrop on foreign signals—like tapping into undersea cable or wireless communications. USCYBERCOM (US Cyber Command) Established by the Department of Defense in 2009 to deter cyberattacks—"proactively." In March, Alexander gave a hint of the command's mandate to the House Armed Services Committee: "I would like to be clear that this team, this defend-the-nation team, is not a defensive team." CAE (Centers for Academic Excellence) Launched in 1998, this NSA initiative seeks to increase the number of college students competent in "information assurance." Last year the agency accredited four universities to lead the way in training the next generation of cyber operators in "collection, exploitation, and response." SCS (Special Collection Service) A unit whose existence has never been officially acknowledged by the defense establishment. But according to the accounts of an anonymous CIA official, members of the ultra-top-secret group are involved in covert eavesdropping from US embassies around the world. JFCC-NW (Joint Functional Component Command for Network Warfare) Created in 2005 as part of US Strategic Command, which controls the nation's nuclear arsenal, it played a lead role in promoting the idea of thwarting Iran's own nuclear ambitions with a cyberattack. Folded into Cybercom in 2010. He not only had access to some of Iran's most sensitive locations, his company had become an electronics purchasing agent for the intelligence, defense, and nuclear development departments. This would have given Mossad enormous opportunities to place worms, back doors, and other malware into the equipment in a wide variety of facilities. Although the Iranians have never explicitly acknowledged it, it stands to reason that this could have been one of the ways Stuxnet got across the air gap. But by then, Iran had established a new counterintelligence agency dedicated to discovering nuclear spies. Ashtari was likely on their radar because of the increased frequency of his visits to various sensitive locations. He may have let down his guard. 
"The majority of people we lose as sources—who get wrapped up or executed or imprisoned—are usually those willing to accept more risk than they should," says the senior CIA official involved with Stuxnet. In 2006, according to Iran Human Rights Voice, Ashtari was quietly arrested at a travel agency after returning from another trip out of the country. In June 2008 he was brought to trial in Branch 15 of the Revolutionary Court, where he confessed, pleaded guilty to the charges, expressed remorse for his actions, and was sentenced to death. On the morning of November 17, in the courtyard of Tehran's Evin Prison, a noose was placed around Ashtari's neck, and a crane hauled his struggling body high into the air. Ashtari may well have been one of the human assets that allowed Stuxnet to cross the air gap. But he was not Israel's only alleged spy in Iran, and others may also have helped enable malware transfer. "Normally," says the anonymous CIA official, "what we do is look for multiple bridges, in case a guy gets wrapped up." Less than two weeks after Ashtari's execution, the Iranian government arrested three more men, charging them with spying for Israel. And on December 13, 2008, Ali-Akbar Siadat, another importer of electronic goods, was arrested as a spy for the Mossad, according to Iran's official Islamic Republic News Agency. Unlike Ashtari, who said he had operated alone, Siadat was accused of heading a nationwide spy network employing numerous Iranian agents. But despite their energetic counterintelligence work, the Iranians would not realize for another year and a half that a cyberweapon was targeting their nuclear centrifuges. Once they did, it was only a matter of time until they responded. Sure enough, in August 2012 a devastating virus was unleashed on Saudi Aramco, the giant Saudi state-owned energy company. The malware infected 30,000 computers, erasing three-quarters of the company's stored data, destroying everything from documents to email to spreadsheets and leaving in their place an image of a burning American flag, according to The New York Times. Just days later, another large cyberattack hit RasGas, the giant Qatari natural gas company. Then a series of denial-of-service attacks took America's largest financial institutions offline. Experts blamed all of this activity on Iran, which had created its own cyber command in the wake of the US-led attacks. James Clapper, US director of national intelligence, for the first time declared cyberthreats the greatest danger facing the nation, bumping terrorism down to second place. In May, the Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team issued a vague warning that US energy and infrastructure companies should be on the alert for cyberattacks. It was widely reported that this warning came in response to Iranian cyberprobes of industrial control systems. An Iranian diplomat denied any involvement. The cat-and-mouse game could escalate. "It's a trajectory," says James Lewis, a cybersecurity expert at the Center for Strategic and International Studies. 
"The general consensus is that a cyber response alone is pretty worthless. And nobody wants a real war." Under international law, Iran may have the right to self-defense when hit with destructive cyberattacks. William Lynn, deputy secretary of defense, laid claim to the prerogative of self-defense when he outlined the Pentagon's cyber operations strategy. "The United States reserves the right," he said, "under the laws of armed conflict, to respond to serious cyberattacks with a proportional and justified military response at the time and place of our choosing." Leon Panetta, the former CIA chief who had helped launch the Stuxnet offensive, would later point to Iran's retaliation as a troubling harbinger. "The collective result of these kinds of attacks could be a cyber Pearl Harbor," he warned in October 2012, toward the end of his tenure as defense secretary, "an attack that would cause physical destruction and the loss of life." If Stuxnet was the proof of concept, it also proved that one successful cyberattack begets another. For Alexander, this offered the perfect justification for expanding his empire. In May 2010, a little more than a year after President Obama took office and only weeks before Stuxnet became public, a new organization to exercise American rule over the increasingly militarized Internet became operational: the US Cyber Command. Keith Alexander, newly promoted to four-star general, was put in charge of it. The forces under his command were now truly formidable—his untold thousands of NSA spies, as well as 14,000 incoming Cyber Command personnel, including Navy, Army, and Air Force troops. Helping Alexander organize and dominate this new arena would be his fellow plebes from West Point's class of 1974: David Petraeus, the CIA director; and Martin Dempsey, chair of the Joint Chiefs of Staff. Indeed, dominance has long been their watchword. Alexander's Navy calls itself the Information Dominance Corps. In 2007, the then secretary of the Air Force pledged to "dominate cyberspace" just as "today, we dominate air and space." And Alexander's Army warned, "It is in cyberspace that we must use our strategic vision to dominate the information environment." The Army is reportedly treating digital weapons as another form of offensive capability, providing frontline troops with the option of requesting "cyber fire support" from Cyber Command in the same way they request air and artillery support. All these capabilities require a giant expansion of secret facilities. Thousands of hard-hatted construction workers will soon begin erecting cranes, driving backhoes, and emptying cement trucks as they expand the boundaries of NSA's secret city eastward, increasing its already enormous size by a third. "You could tell that some of the seniors at NSA were truly concerned that cyber was going to engulf them," says a former senior Cyber Command official, "and I think rightfully so." In May, work began on a $3.2 billion facility housed at Fort Meade in Maryland. Known as Site M, the 227-acre complex includes its own 150-megawatt power substation, 14 administrative buildings, 10 parking garages, and chiller and boiler plants. 
The server building will have 90,000 square feet of raised floor—handy for supercomputers—yet hold only 50 people. Meanwhile, the 531,000-square-foot operations center will house more than 1,300 people. In all, the buildings will have a footprint of 1.8 million square feet. Even more ambitious plans, known as Phase II and III, are on the drawing board. Stretching over the next 16 years, they would quadruple the footprint to 5.8 million square feet, enough for nearly 60 buildings and 40 parking garages, costing $5.2 billion and accommodating 11,000 more cyberwarriors. In short, despite the sequestration, layoffs, and furloughs in the federal government, it's a boom time for Alexander. In April, as part of its 2014 budget request, the Pentagon asked Congress for $4.7 billion for increased "cyberspace operations," nearly $1 billion more than the 2013 allocation. At the same time, budgets for the CIA and other intelligence agencies were cut by almost the same amount, $4.4 billion. A portion of the money going to Alexander will be used to create 13 cyberattack teams. What's good for Alexander is good for the fortunes of the cyber-industrial complex, a burgeoning sector made up of many of the same defense contractors who grew rich supplying the wars in Iraq and Afghanistan. With those conflicts now mostly in the rearview mirror, they are looking to Alexander as a kind of savior. After all, the US spends about $30 billion annually on cybersecurity goods and services. In the past few years, the contractors have embarked on their own cyber building binge parallel to the construction boom at Fort Meade: General Dynamics opened a 28,000-square-foot facility near the NSA; SAIC cut the ribbon on its new seven-story Cyber Innovation Center; the giant CSC unveiled its Virtual Cyber Security Center. And at consulting firm Booz Allen Hamilton, where former NSA director Mike McConnell was hired to lead the cyber effort, the company announced a "cyber-solutions network" that linked together nine cyber-focused facilities. Not to be outdone, Boeing built a new Cyber Engagement Center. Leaving nothing to chance, it also hired retired Army major general Barbara Fast, an old friend of Alexander's, to run the operation. (She has since moved on.) Defense contractors have been eager to prove that they understand Alexander's worldview. "Our Raytheon cyberwarriors play offense and defense," says one help-wanted site. Consulting and engineering firms such as Invertix and Parsons are among dozens posting online want ads for "computer network exploitation specialists." And many other companies, some unidentified, are seeking computer and network attackers. "Firm is seeking computer network attack specialists for long-term government contract in King George County, VA," one recent ad read. Another, from Sunera, a Tampa, Florida, company, said it was hunting for "attack and penetration consultants." One of the most secretive of these contractors is Endgame Systems, a startup backed by VCs including Kleiner Perkins Caufield & Byers, Bessemer Venture Partners, and Paladin Capital Group. 
Established in Atlanta in 2008, Endgame is transparently antitransparent. "We've been very careful not to have a public face on our company," former vice president John M. Farrell wrote to a business associate in an email that appeared in a WikiLeaks dump. "We don't ever want to see our name in a press release," added founder Christopher Rouland. True to form, the company declined Wired's interview requests. Perhaps for good reason: According to news reports, Endgame is developing ways to break into Internet-connected devices through chinks in their antivirus armor. Like safecrackers listening to the click of tumblers through a stethoscope, the "vulnerability researchers" use an extensive array of digital tools to search for hidden weaknesses in commonly used programs and systems, such as Windows and Internet Explorer. And since no one else has ever discovered these unseen cracks, the manufacturers have never developed patches for them. Thus, in the parlance of the trade, these vulnerabilities are known as "zero-day exploits," because it has been zero days since they have been uncovered and fixed. They are the Achilles' heel of the security business, says a former senior intelligence official involved with cyberwarfare. Those seeking to break into networks and computers are willing to pay millions of dollars to obtain them. According to Defense News' C4ISR Journal and Bloomberg Businessweek, Endgame also offers its intelligence clients—agencies like Cyber Command, the NSA, the CIA, and British intelligence—a unique map showing them exactly where their targets are located. Dubbed Bonesaw, the map displays the geolocation and digital address of basically every device connected to the Internet around the world, providing what's called network situational awareness. The client locates a region on the password-protected web-based map, then picks a country and city—say, Beijing, China. Next the client types in the name of the target organization, such as the Ministry of Public Security's No. 3 Research Institute, which is responsible for computer security—or simply enters its address, 6 Zhengyi Road. The map will then display what software is running on the computers inside the facility, what types of malware some may contain, and a menu of custom-designed exploits that can be used to secretly gain entry. It can also pinpoint those devices infected with malware, such as the Conficker worm, as well as networks turned into botnets and zombies—the equivalent of a back door left open. Bonesaw also contains targeting data on US allies, and it is soon to be upgraded with a new version codenamed Velocity, according to C4ISR Journal. It will allow Endgame's clients to observe in real time as hardware and software connected to the Internet around the world is added, removed, or changed. But such access doesn't come cheap. One leaked report indicated that annual subscriptions could run as high as $2.5 million for 25 zero-day exploits. The buying and using of such a subscription by nation-states could be seen as an act of war. 
"If you are engaged in reconnaissance on an adversary's systems, you are laying the electronic battlefield and preparing to use it," wrote Mike Jacobs, a former NSA director for information assurance, in a McAfee report on cyberwarfare. "In my opinion, these activities constitute acts of war, or at least a prelude to future acts of war." The question is, who else is on the secretive company's client list? Because there is as of yet no oversight or regulation of the cyberweapons trade, companies in the cyber-industrial complex are free to sell to whomever they wish. "It should be illegal," says the former senior intelligence official involved in cyberwarfare. "I knew about Endgame when I was in intelligence. The intelligence community didn't like it, but they're the largest consumer of that business." Thus, in their willingness to pay top dollar for more and better zero-day exploits, the spy agencies are helping drive a lucrative, dangerous, and unregulated cyber arms race, one that has developed its own gray and black markets. The companies trading in this arena can sell their wares to the highest bidder—be they frontmen for criminal hacking groups or terrorist organizations or countries that bankroll terrorists, such as Iran. Ironically, having helped create the market in zero-day exploits and then having launched the world into the era of cyberwar, Alexander now says the possibility of zero-day exploits falling into the wrong hands is his "greatest worry." He has reason to be concerned. In May, Alexander discovered that four months earlier someone, or some group or nation, had secretly hacked into a restricted US government database known as the National Inventory of Dams. Maintained by the Army Corps of Engineers, it lists the vulnerabilities for the nation's dams, including an estimate of the number of people who might be killed should one of them fail. Meanwhile, the 2013 "Report Card for America's Infrastructure" gave the US a D on its maintenance of dams. There are 13,991 dams in the US that are classified as high-hazard, the report said. A high-hazard dam is defined as one whose failure would cause loss of life. "That's our concern about what's coming in cyberspace—a destructive element. It is a question of time," Alexander said in a talk to a group involved in information operations and cyberwarfare, noting that estimates put the time frame of an attack within two to five years. He made his comments in September 2011. Contributor James Bamford ([email protected]) wrote about the NSA's new Utah Data Center in issue 20.04. Note 1. Drake began communicating with the Baltimore Sun in 2006, not 2005. The documents Drake leaked showed that Trailblazer was over budget and behind schedule, not possibly illegal or a threat to privacy.
"
1,146
2,017
"The Battle for Top AI Talent Only Gets Tougher From Here | WIRED"
"https://www.wired.com/2017/03/intel-just-jumped-fierce-competition-ai-talent"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business The Battle for Top AI Talent Only Gets Tougher From Here Ariel Zambelich/WIRED Save this story Save Save this story Save Andrew Ng helped create two of Silicon Valley's leading artificial intelligence labs. First, he built Google Brain, now the hub of AI research inside the internet giant. Then he built a lab in the Valley for Baidu, the company known as the Google of China. Ng was one of the primary figures behind the enormous and rapid rise of AI over the last five years as everyone from Facebook to Microsoft rebuilt themselves around deep learning. And on Tuesday night, he announced his departure from Baidu. He didn't say where he was going. And he didn't immediately respond to our request for comment. But odds are, he will show up at some other big name sometime soon. AI researchers are among the most prized talent in the modern tech world. A few years ago, Peter Lee, a vice president inside Microsoft Research, said that the cost of acquiring a top AI researcher was comparable to the cost of signing a quarterback in the NFL. Since then, the market for talent has only gotten hotter. Elon Musk nabbed several researchers out from under Google and Facebook in founding a new lab called OpenAI, and the big players are now buying up AI startups before they get off the ground. Intel’s Bold Plan to Reinvent Computer Memory (and Keep It a Secret) OpenAI Joins Microsoft on the Cloud’s Next Big Front: Chips Intel’s 15 Billion Reasons Why an AI Chip Revolution Has Arrived Today, this talent market may have shifted yet again. Chipmaker Intel just announced that it's building a lab for far-looking AI research, and company vice president Naveen Rao says Intel is prepared to pay up for the caliber of talent that now works inside Google Brain or the Facebook Artificial Intelligence Research Lab. "We're looking for researchers that could potentially go to these other places," he says, acknowledging the big dollars this will require. Asked if that could include a top name like Andrew Ng, he said, "Absolutely." Such ambition shows just how large the AI movement has become. Intel is launching a lab not because it wants to ultimately build its own AI, but because it wants to sell the enormous number of computer chips that others will need to build their AI. Today's AI movement revolves around deep neural networks, complex mathematical systems that can learn tasks by analyzing vast amounts of data. If you feed millions of cat photos into a neural network, for instance, it can learn to identify a cat. Typically, when a company like Google or Facebook trains a neural network in this way, it uses hundreds of GPU chips, graphics processors suited to this kind of math. And most of these GPUs come from nVidia, an Intel rival. Intel is hoping to build chips that replace GPUs. Last year, it acquired Rao's chip startup, Nervana, for a reported $400 million, believing its tech can help mount this challenge. Now, with Nervana as an anchor, Intel is creating a new product development group dedicated to AI. 
Rao will oversee the group, and he says this effort will include a research lab that explores entirely new concepts in deep learning and related areas, all with an eye toward building chips that the Googles and Facebooks will want. "We're actually going to have an emphasis on research---three, five, seven years out," he says. In some sense, this move is Intel desperately trying to market itself as a serious alternative to Nvidia GPUs. And at this point, it's just not. But even that desperation underlines the importance of the new AI chip market, which is rapidly remaking computer data centers. If Intel actually hires people like Ng, maybe we can believe its hype---and the AI competition will get even fiercer. (A minimal sketch of the kind of neural network training described above follows this article.) "
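The following is a minimal, illustrative sketch, not taken from the article, of the training process it describes: a small deep neural network learning to label photos (cats, say) from a folder of labeled images, with the heavy math running on a GPU when one is available. It uses PyTorch; the tiny network, the data directory layout, and all hyperparameters are assumptions chosen only to make the idea concrete.

# Illustrative sketch: training a small image classifier on labeled photos.
# The network, the "data/train" layout, and the hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"  # GPUs handle the heavy math

# Labeled photos arranged as data/train/cat/*.jpg, data/train/dog/*.jpg, ... (assumed layout)
transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A very small convolutional network; production models are far deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(train_set.classes)),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a handful of passes over the data
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong the network currently is
        loss.backward()                        # compute gradients
        optimizer.step()                       # nudge the weights toward better answers
    print(f"epoch {epoch}: loss {loss.item():.3f}")

At production scale, as the article notes, the same loop runs across hundreds of GPUs and far larger networks, but the structure stays the same: feed data forward, measure the error, and adjust millions of weights.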
1,147
2,016
"Google Opens Montreal AI Lab in Global Race for Scarce Talent | WIRED"
"https://www.wired.com/2016/11/google-opens-montreal-ai-lab-snag-scarce-global-talent"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google Opens Montreal AI Lab to Snag Scarce Global Talent Getty Images Save this story Save Save this story Save Google is building a new artificial intelligence lab in Montreal dedicated to deep learning, a technology that's rapidly reinventing not only Google but the rest of the internet's biggest players. Hugo Larochelle will run the new lab after joining Google from the Twitter, where he was part of the company's central AI team. It's a homecoming for Larochelle, who earned a PhD in machine learning from the University of Montreal and remains a professor at the Université de Sherbrooke. Yoshua Bengio, one of the founding fathers of the movement, calls him "one of the rising stars of deep learning." Intel Looks to a New Chip to Power the Coming Age of AI Giant Corporations Are Hoarding the World’s AI Talent OpenAI Joins Microsoft on the Cloud’s Next Big Front: Chips At the moment, Larochelle is the new lab's sole hire, but the idea is that he will build a sizable team inside Google's existing engineering office in Montreal. The team will operate as an extension of Google Brain, the central operation that works to spread AI across the entire company. The move is part of a larger effort by the company to strengthen its ties to the deep learning community in Montreal, one of the key talent centers for this technology , a technology that percolated in academia for decades but has recently swept into the biggest internet companies. Today, Google also revealed that it is donating about $3.33 million ($4.5 million CAD) to the Montreal Institute for Learning Algorithms, or MILA, an academic lab that spans the University of Montreal and nearby McGill University, and this isn't the first time the company has funneled money into the program. Over the past ten years, Google had donated about $13 million CAD to academic research in the country and about half was earmarked for AI research. Because deep learning technology has only recently pushed into the commercial world, talent in the field is still quite scarce, and the big players are angling for any advantage they can find in the hunt for top researchers and new ideas. Last year, Facebook opened an AI lab in Paris, another deep learning hotbed , after building its first lab around New York Univeristy professor Yann LeCun in Manhattan. In Canada, Google already has strong ties to the University of Toronto after acqui-hiring Geoff Hinton, another founding father of the deep learning movement, in 2013. Apple, meanwhile, just hired Carnegie Mellon University researcher Russ Salakhutdinov. Amazon is building a new machine learning group around Alex Smola, another notable CMU researcher. And just last week, Google snapped up Stanford professor Fei-Fei Li, who started the ImageNet contest, a competition that helped catalyze the rise of deep neutral networks. Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these companies should be careful not to strip academia of the experts needed to teach the next generation of machine learning researchers. 
But the battle for talent won't likely abate anytime soon. The biggest companies are vacuuming up not just the top academics but also deep learning startups. (In recent years, Twitter bought three such startups, as did Apple.) News of Google's new lab comes just a week after Bengio invited more deep learning researchers to join him north of the border. "In the depressing aftermath of the US elections, I would like to point out that interesting things are happening in the great Canadian North, with a very different kind of government," he said. If US researchers take him up on the offer, that could make it even harder for the small players to hire AI talent. But the big players have it covered. "
1,148
2,016
"Artificial Intelligence Is Driving Huge Changes at Google, Facebook, and Microsoft | WIRED"
"https://www.wired.com/2016/11/google-facebook-microsoft-remaking-around-ai"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google, Facebook, and Microsoft Are Remaking Themselves Around AI Getty Images Save this story Save Save this story Save Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can "see." And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos---technology that has become commonplace on the world's biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world's biggest tech companies are remaking themselves around this breed of artificial intelligence. Intel Looks to a New Chip to Power the Coming Age of AI Giant Corporations Are Hoarding the World’s AI Talent OpenAI Joins Microsoft on the Cloud’s Next Big Front: Chips Facebook Manages to Squeeze an AI Into Its Mobile App Alongside a former Stanford researcher---Jia Li, who more recently ran research for the social networking service Snapchat ---the China-born Fei-Fei will lead a team inside Google's cloud computing operation , building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business. Google is not alone in this rapid re-orientation. Amazon is building a similar group cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain , the team responsible for infusing the search giant's own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher. Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing---efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei's new group because that's good for the company's cloud computing business. But Etzioni says this is also part of very real shift inside these companies, with AI poised to play an increasingly large role in our future. "This isn't just window dressing," he says. Fei-Fei's group is an effort to solidify Google's position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more. Cloud computing doesn't always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. 
Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn't build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way. When it announced Fei-Fei's appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI---and just a day after Microsoft formally unveiled new services for building "chatbots" and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman. Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company's products, including the Bing search engine, the Cortana digital assistant, and Microsoft's forays into robotics. The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products---though not quite successfully. Still, it's the path from research to product the company hopes to accelerate in the years to come. "With AI, we don't really know what the customer expectation is," Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want. In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company's security teams, which are looking for ways to identify security bugs and malware through machine learning. Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group.
Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it's already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook's Applied ML group "a big flywheel that has changed everything" inside the company. "When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people," he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups. The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won't do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data---more like a coach than a player. As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers. Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it's certainly not. And these companies are intent on pushing it across the rest of the tech world too. Update: This story has been updated to clarify Fei-Fei Li's move to Google. She will remain on the faculty at Stanford after joining Google. "
1,149
2,013
"Facebook Taps 'Deep Learning' Giant for New AI Lab | WIRED"
"https://www.wired.com/wiredenterprise/2013/12/facebook-yann-lecun"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Facebook Taps 'Deep Learning' Giant for New AI Lab Yann LeCun WIRED/Josh Valcarcel Save this story Save Save this story Save Facebook is building a research lab dedicated to the new breed of artificial intelligence, after hiring one of the preeminent researchers in the field: New York University professor Yann LeCun. With a post to Facebook this morning, LeCun announced that he had been tapped to run the lab, and the company confirmed the news with WIRED. "Facebook has created a new research laboratory with the ambitious, long-term goal of bringing about major advances in Artificial Intelligence," LeCun wrote, adding that Facebook's AI lab will include operations in Menlo Park, California, at the company's headquarters; in London; and at Facebook's new offices in New York City. In an email to WIRED , he said that he would remain in his position as a professor at NYU, maintaining teaching and research duties part-time, but that he would be based at Facebook's Manhattan office, which is only a a block from NYU's main campus. LeCun sits at the heart of a new AI movement known as "deep learning." The movement began in the academic world, but is now spreading to the giants of the web, including not only Facebook but Google, companies that are constantly looking for new means of building services that can interact with people more like the way we interact with each other. Google is already using deep learning techniques to help analyze and respond to voice commands on its Android mobile operating system. With deep learning, the basic idea is to build machines that actually operate like the human brain -- as opposed to creating systems that merely take a shortcut to solving problems that have traditionally required human intelligence. In the past, for instance, something like the Google's Search engine has tried to approximate human intelligence by rapidly analyzing enormous amounts of data, but people like LeCun aim to build massive "neutral networks" that actually mimic the way the brain works. The trouble is that we don't completely understand how that the brain works. But in recent years, LeCun and others in this field, including, most notably, University of Toronto professor Geoffrey Hinton, have made some significant progress in the area of deep learning, so much so that they're now being hired by the giants of the tech world. As LeCun builds an AI lab at Facebook, Hinton is now on staff at Google, building a system alongside other researchers from Toronto. Andrew Ng, the Stanford researcher who founded Google's deep learning project , known as the Google Brain, says that LeCun and Facebook are a natural fit. "Yann LeCun's move will be an exciting step both for machine learning and for Facebook," Ng says. "Machine learning is already used in hundreds of places throughout Facebook, ranging from photo tagging to ranking articles to your news feed. Better machine learning will be able to help improve all of these features, as well as help Facebook create new applications that none of us have dreamed of yet." 
Additional reporting by Daniela Hernandez. "
1,150
2,013
"Researcher Dreams Up Machines That Learn Without Humans | WIRED"
"https://www.wired.com/wiredenterprise/2013/06/yoshua-bengio"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Daniela Hernandez Business Researcher Dreams Up Machines That Learn Without Humans A visual representation of Yoshua Bengio's "learning model," where machines "think" about faces and emotions. Image: Courtesy Yoshua Bengio Save this story Save Save this story Save Yoshua Bengio recently had a vision -- a vision of how to build computers that learn like people do. It happened at an academic conference in May, and he was filled with excitement -- perhaps more so than he’d ever been during his decades-long career in "deep learning," an emerging field of computer science that seeks to engineer machines that mimic how the human brain processes information. Or, rather, how we assume the brain processes information. In his hotel room, Bengio started furiously scribbling mathematical equations that captured his new ideas. Soon he was bouncing these ideas off various colleagues, including deep learning pioneer Yann LeCun of New York University. Judging from their response, Bengio knew he was onto something big. When he made it back to his laboratory at the University of Montreal -- home to one of the biggest concentrations of deep-learning researchers -- Bengio and his team went to work turning his equations into functional, intelligent algorithms. About a month later, that hotel-room vision morphed into what he believes is one of the most important breakthroughs of his career, one that could accelerate the quest for artificial intelligence. In short, Bengio has developed new ways for computers to learn without much input from us humans. Typically, machine learning requires "labeled data" -- information that's been categorized by real people. If you want a computer to learn what a cat looks like, you must first show it what a cat looks like. Bengio seeks to eliminate this step. Yoshua Bengio. Image: Courtesy Yoshua Bengio Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg "Today’s models can be trained on huge quantities of data, but that’s not enough," says Bengio, who together with LeCun and Google’s Geoffrey Hinton is one of the original musketeers of deep learning. "We need to discover learning algorithms that can take better advantage of all this unlabeled data that’s sitting out there." Currently, the most widely used deep-learning models -- so called artificial neural networks harnessed by the likes of search giants Google and Baidu -- use a combination of labeled and unlabeled data to make sense of the world. But unlabeled information far outweighs the amount people have been able to manually label, and if deep learning is to turn the corner, it must tackle areas where labeled data is scarce, including language translation and image recognition. 
Bengio's new models -- which he's tested only on small data sets -- can teach themselves to capture what he calls the statistical structure of the data. Basically, when a machine learns to recognize faces, it can spew out new images that look like faces too, without human intervention. It can provide answers, too: when shown only part of an image, it can guess the rest, and when shown only some words in a sentence, it can guess the missing ones. Right now, the models don't have a direct commercial application, but if his team can perfect them, he says, then "we can answer arbitrary questions about the variables modeled. Understanding the world means just that: We can have a good guess about any aspect of reality that is hidden to us, given those elements that we observe. That's why this is an important piece." On the surface, these algorithms look very much like the neural nets built by Hinton for Google's image search and photo-tagging systems, he says, but they're much better at exploring data that's thrown at them. In other words, they're much more intuitive. "Intuition is just the part of the computation going on in our brain for which we don't have conscious access. It's really hard to decompose it into little pieces we can explain," he says. "This is the reason why the traditional AI of the 80s and 70s failed -- because it tried to build machines that could explain every single step through reasoning. It turns out it was impossible to do that. It's much easier to train machines to develop intuitions to make the right decisions." [Image: An illustration of how the learned generative model can fill in the missing left half of a picture when given the right half. Each row starts with random pixels on the left, and the model then samples pixels so that the overall configuration is plausible. Courtesy Yoshua Bengio] In the world of machine learning, that's a big deal. If Bengio's initial findings hold up on larger data sets, they could lead to the development of algorithms that have better transfer, meaning they are more easily applied to all types of problems like natural language processing, voice recognition, and image recognition. Think of it like a previous experience you use to intuit what action you should take in a new situation. In engineering terms, the potential time saved on coding task-specific algorithms could be substantial. Unlike other machine-learning methods, deep learning is already endowed with some transfer, or intuitive, qualities, but Bengio and his team have been working towards making improvements for years. Recently, they won two international competitions focused on transfer learning. This resolve to iterate and improve on already existing technologies speaks to Bengio's outlook on AI and, more broadly, on science. An academic through and through, he's made it his life's mission to find a fix for what's holding back his and his colleagues' dreams of building intelligent machines.
"We do experiments whose goal is to figure out why … not necessarily to build something that we can sell tomorrow," says Bengio. "Once you have that understanding, you can answer questions -- you can do all sorts of useful things that are economically valuable." That conviction, fueled by his own intuition that deep learning was the way to move machine learning forward even when it was a dirty concept, keeps him motivated and working with new students, post-docs, and young professors to keep the AI dream alive. He draws inspiration from the myriad exchanges he's had with colleagues like LeCun, Hinton, and Jeff Dean of Google Brain fame. His career, he says, has really been a social endeavor. In that spirit, Bengio has put the code for his new algorithms on Github for other developers to tweak and improve, and details of the findings have been published in a series of papers on the academic research site arXiv.org. "My vision is of algorithms that can make sense of all the kinds of data that we see, that can extract the kind of information in the world around us that humans have," Bengio says. "I'm fairly confident that we'll be able to train machines not just to perform tasks but to understand the world around us." "
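The article above describes models that, shown only some words in a sentence, can guess the missing ones. As a rough illustration of that masked-prediction idea only (not Bengio's actual 2013 algorithms), the toy Python sketch below fills a blank using word co-occurrence counts gathered from unlabeled text; the tiny corpus and the fill_blank helper are invented here purely for illustration.

# Minimal illustration of "guess the missing word from its context".
# This is NOT Bengio's model; it is a toy co-occurrence predictor meant
# only to show learning from unlabeled text, with no human labels at all.
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the mat",
    "a dog slept on the rug",
]

# Count how often each word appears in the same sentence as every other word.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def fill_blank(tokens):
    """Return the most plausible word for the '_' slot given the visible words."""
    context = [t for t in tokens if t != "_"]
    scores = Counter()
    for c in context:
        scores.update(cooc[c])
    # Drop words already present so we propose something new for the blank.
    for c in context:
        scores.pop(c, None)
    return scores.most_common(1)[0][0] if scores else None

print(fill_blank("the _ sat on the mat".split()))  # prints "cat"

Run as-is, the sketch proposes "cat" for the blank, purely from patterns in the unlabeled sentences; the point is only that missing pieces of an input can be guessed from statistical structure, which is the idea the article attributes to Bengio's models.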
1,151
2,013
"'Chinese Google' Unveils Visual Search Engine Powered by Fake Brains | WIRED"
"https://www.wired.com/wiredenterprise/2013/06/baidu-virtual-search"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Daniela Hernandez Business 'Chinese Google' Unveils Visual Search Engine Powered by Fake Brains Save this story Save Save this story Save Chinese search giant Baidu has served up its first ever visual search engine, which allows users to finally query the web using only images as input instead of keywords. Google has long offered this sort of thing, but Baidu continues to show that it's determined to keep pace with Larry Page and company. “We didn’t have any similar kind of product in China because we didn’t have the sufficient technology to handle this,” says Baidu's Kai Yu, who led the project. “In the China market, this is the first of its kind.” Unveiled last week, the tool grew out of Baidu’s newly launched Institute of Deep Learning , the company’s Beijing- and Silicon Valley-based research arm focused on deep learning, a field of computer science that seeks to mimic how the human brain works. The company has already deployed deep-learning algorithms for optical character, face, and voice recognition , online advertising and web search. Yu and engineers at IDL have been working on visual search since September to meet growing demand among their users, Yu says. Baidu’s visual search engine is powered by convolutional neural networks, the same type of deep-learning technology that also underlies Google’s photo tagging system , according to NYU’s Yann LeCun who developed convolutional neural nets in the 1980s and is working on photo-tagging systems based on the same technology. (Google’s neural nets are being developed by Alex Krizhevsky, Ilya Sutskever and Geoffry Hinton, whom Google hired in March to supercharge its deep-learning capabilities.) Convolutional neural nets are particularly useful for this type of application because they are engineered to be able to recognize objects from various angles, assuming the neural network has been trained to recognize it. The technology has also been used for handwriting recognition and for high-speed check-reading systems. They are "designed to recognize visual patterns from pixel images with minimal preprocessing. They can recognize patterns with extreme variability and robustness to distortions," according to LeCun's website. Yu’s team is using Nvidia GPU servers to train their neural nets, but unlike Google , Baidu is sticking to commodity CPU servers for their online deployment. Yu says Baidu engineers have done a “significant job of accelerating the online algorithm” to ensure it runs fast enough to meet user demands and that for now they don’t need to turn to GPUs, which are faster but can be more power-intensive than traditional CPUs. 
Part of their trick has been to develop algorithms that need only compare the query image to a small number of images in Baidu's distributed database, rather than the billions the company has access to. From there, the system can figure out which images are similar to the original input and crank out relevant search results quickly. Baidu has also chosen to index images in main memory in order to retrieve them at speeds internet users have come to expect. "Some special large-scale indexing structure is being used…. We keep everything in memory, otherwise it would be very difficult to serve the query in a fast way. When you access data in hard disk, that will be very painful," says Yu. "We go to memory, so that's even faster than Flash." Baidu's new service compares images using only pixel and image feature information to find images. Typically, search engines also look at images' surrounding text on resident webpages to serve up better results. "This is our first version. We just tried purely image-based search and we found the result was quite amazing," says Yu. "In the future, we'll further improve the product by combining text information." Future iterations will also port the product to mobile. Right now, Baidu's visual-search engine is limited to the web, a move perhaps driven by pressure to bring the product to market. When Google launched its first image-search service, Google Goggles, in 2009, it started off on mobile, and it took the company about two years to bring the service to the web. But Google is Google, and mobile image search can be more engineering-intensive. It presents a unique set of technical challenges, like controlling for cameras of different quality, blurring, color balance, and over-exposure. Now that they've put themselves on the visual-search map, Baidu can concentrate on making a product that's more in line with how people are searching today. "We want to make full use of [mobile] sensors to help users do all kinds of search in the most natural way," Yu says. "Definitely, mobile search is our big target. We are already planning for this." "
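The retrieval strategy described above, keeping compact image representations in memory and ranking stored images against a query, can be illustrated with a toy Python/NumPy sketch. This is not Baidu's system: the random 128-dimensional vectors stand in for real CNN features, and the search function scores every stored vector by cosine similarity instead of using a large-scale approximate index; index_vectors and search are names invented here for illustration.

# Toy sketch of in-memory image retrieval: store compact feature vectors
# in RAM and rank them by cosine similarity to a query vector.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are feature vectors extracted from 10,000 indexed images.
index_vectors = rng.normal(size=(10_000, 128)).astype(np.float32)
index_vectors /= np.linalg.norm(index_vectors, axis=1, keepdims=True)

def search(query_vec, top_k=5):
    """Return indices of the top_k most similar stored images."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = index_vectors @ q          # one in-memory matrix-vector product
    return np.argsort(-scores)[:top_k]  # highest similarity first

# A query resembling stored image 42: its own vector plus a little noise.
query = index_vectors[42] + 0.05 * rng.normal(size=128).astype(np.float32)
print(search(query))  # image 42 should rank at or near the top

A production system would first narrow the candidate set with a specialized indexing structure so that only a small fraction of the stored vectors is compared, which is the point Yu makes about comparing the query to a small number of images rather than billions.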
1,152
2,013
"'Chinese Google' Opens Artificial-Intelligence Lab in Silicon Valley | WIRED"
"https://www.wired.com/wiredenterprise/2013/04/baidu-research-lab"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Daniela Hernandez Business 'Chinese Google' Opens Artificial-Intelligence Lab in Silicon Valley Gianni Zamponi Save this story Save Save this story Save It doesn't look like much. The brick office building sits next to a strip mall in Cupertino, California, about an hour south of San Francisco, and if you walk inside, you'll find a California state flag and a cardboard cutout of R2-D2 and plenty of Christmas decorations -- even though we're well into April. But there are big plans for this building. It's where Baidu -- "the Google of China" -- hopes to create the future. In late January, word arrived that the Chinese search giant was setting up a research lab dedicated to "deep learning" -- an emerging computer science field that seeks to mimic the human brain with hardware and software -- and as it turns out, this lab includes an operation here in Silicon Valley, not far from Apple headquarters, in addition to a facility back in China. The company just hired its first researcher in Cupertino, with plans to bring in several more by the end of the year. Baidu calls its lab The Institute of Deep Learning, or IDL. Much like Google and Apple and others, the company is exploring computer systems that can learn in much the same way people do. "We have a really big dream of using deep learning to simulate the functionality, the power, the intelligence of the human brain," says Kai Yu, who leads Baidu’s speech- and image-recognition search team and just recently made the trip to Cupertino to hire that first researcher. "We are making progress day by day." If you want to compete with Google, it only makes sense to set up shop in Google's backyard. "In Silicon Valley, you have access to a huge talent pool of really, really top engineers and scientists, and Google is enjoying that kind of advantage," Yu says. Baidu first opened its Cupertino office about a year ago, bringing in various other employees before its big move into deep learning. In the '90s and onto the 2000s, deep learning research was at a low ebb. The artificial intelligence community moved toward systems that solved problems by crunching massive amounts of data, rather than trying to build " neural networks " that mimicked the subtler aspects of the human brain. Google's search engine was a prime example of system that took a short-cut around deep learning, and the American search giant is using a similar approach with its self-driving cars. But now, deep learning research is coming back into favor, and Google is among those driving the field forward. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Google recently hired Geoffrey Hinton, the Godfather of deep learning, after some prodding from Stanford’s Andrew Ng, another power-player in the field, and many other companies are exploring the same area. 
IBM has long worked towards a computer model of the human brain. Apple now uses deep learning techniques in the iPhone's Siri voice recognition system. And Google has worked similar concepts into its own voice recognition system as well as Google Street View. [Photo: Kai Yu. Alex Washburn/Wired] Still, Baidu's decision to build an entire research lab dedicated to deep learning "is a bit of a bold move," says New York University's Yann LeCun, a pioneer in the field, pointing out that the technology still has such a long way to go. But the IDL, he says, could be a way for Baidu to attract top talent and let creative engineers explore all sorts of blue-sky innovations -- stuff akin to Google Glass and other projects gestated at Google's secretive X Lab. In fact, one of Yu's researchers is working on Baidu Eye, which many have called a Google Glass knock-off. But for now, Yu says, the lab's main priority is the exploration of deep learning algorithms. "We want to be focused," he says. In November, Baidu released its first voice search service based on deep learning, and it claims the tool has reduced errors by about 30 percent. As Google and Apple have also seen, these improvements can change the way people interact with technology and how often they use it. When voice and image search services work like they're supposed to, we needn't fiddle with the teeny keyboards and small displays on mobile devices. Today, web searches for products or services give you little more than a long list of links, and "then it's your job to read through all of those webpages to figure out what's the meaning," Yu says. But he wants something that works very differently. "We need to fundamentally change the architecture of the whole system," he explains. That means building algorithms that can identify images and understand natural language and then parse the relationships between all the stuff on the web and find exactly what you're looking for. In other words, Baidu wants algorithms that work like people. Only faster. "
1,153
2,013
"Google Hires Brains that Helped Supercharge Machine Learning | WIRED"
"https://www.wired.com/wiredenterprise/2013/03/google_hinton"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robert McMillan Business Google Hires Brains that Helped Supercharge Machine Learning Geoffrey Hinton (right), one of the machine learning scientists hard at work on The Google Brain. Photo: University of Toronto U of T Save this story Save Save this story Save Google has hired the man who showed how to make computers learn much like the human brain. His name is Geoffrey Hinton, and on Tuesday, Google said that it had hired him along with two of his University of Toronto graduate students – Alex Krizhevsky and Ilya Sutskever. Their job: to help Google make sense of the growing mountains of data it is indexing and to improve products that already use machine learning – products such as Android voice search. Google paid an undisclosed sum to buy Hinton's company, DNNresearch. It's a bit of a best-of-both-worlds deal for the researcher. He gets to stay in Toronto, splitting his time between Google and his teaching duties at the University of Toronto, while Krizhevsky and Sutskever fly south to work at Google's Mountain View, California campus. Back in the 1980s, Hinton kicked off research into neural networks, a field of machine learning where programmers can build machine learning models that help them to sift through vast quantities of data and put together patterns, much like the human brain. Once a hot research topic, neural networks had apparently failed to live up to their initial promises until around 2006, when Hinton and his researchers – spurred on by some new kick-ass microprocessors – developed new "deep learning" techniques that fine-tuned the tricky and time consuming process of building neural network models for computer analysis. "Deep learning, pioneered by Hinton, has revolutionized language understanding and language translation," said Ed Lazowska, a computer science professor at the University of Washington. In an email interview, he said that a pretty spectacular December 2012 live demonstration of instant English-to-Chinese voice recognition and translation by Microsoft Research chief Rick Rashid was "one of many things made possible by Hinton's work." "Hinton has been working on neural networks for decades, and is one of the most brilliant minds of the field," said Andrew Ng, the Stanford University professor who set up Google's neural network team in 2011. Ng invited Hinton to Google last summer, where the Toronto academic spent a few months as a visiting professor. "I'm thrilled that he'll be continuing this work there, and am sure he'll help drive forward deep learning research at Google," Ng said via email. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Google didn't want to comment, or let Hinton talk to us about his new job, but clearly, it's going to be important to Google's future. 
Neural network techniques helped reduce the error rate in Google's latest release of its voice recognition technology by 25 percent. And last month Google Fellow Jeff Dean told us that neural networks are becoming widely used in many areas of computer science. "We're not quite as far along in deploying these to other products, but there are obvious tie-ins for image search. You'd like to be able to use the pixels of the image and then identify what object that is," he said. "There are a bunch of other more specialized domains like optical character recognition." "I am betting on Google's team to be the epicenter of future breakthroughs," Hinton wrote in a Google+ post announcing his move. You can watch Rick Rashid's demo in the video embedded in the original article. "
1,154
2,013
"How Google Retooled Android With Help From Your Brain | WIRED"
"https://www.wired.com/wiredenterprise/2013/02/android-neural-network"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robert McMillan Business How Google Retooled Android With Help From Your Brain A picture of the human voice, courtesy the AndroSpectro app. Photo: Ariel Zambelich/Wired Save this story Save Save this story Save When Google built the latest version of its Android mobile operating system, the web giant made some big changes to the way the OS interprets your voice commands. It installed a voice recognition system based on what's called a neural network – a computerized learning system that behaves much like the human brain. For many users, says Vincent Vanhoucke, a Google research scientist who helped steer the effort, the results were dramatic. "It kind of came as a surprise that we could do so much better by just changing the model," he says. Vanhoucke says that the voice error rate with the new version of Android – known as Jelly Bean – is about 25 percent lower than previous versions of the software, and that this is making people more comfortable with voice commands. Today, he says, users tend to use more natural language when speaking to the phone. In other words, they act less like they're talking to a robot. "It really is changing the way that people behave." It's just one example of the way neural network algorithms are changing the way our technology works – and they way we use it. This field of study had cooled for many years, after spending the 1980s as one of the hottest areas of research, but now it's back, with Microsoft and IBM joining Google in exploring some very real applications. When you talk to Android's voice recognition software, the spectrogram of what you've said is chopped up and sent to eight different computers housed in Google's vast worldwide army of servers. It's then processed, using the neural network models built by Vanhoucke and his team. Google happens to be very good at breaking up big computing jobs like this and processing them very quickly, and to figure out how to do this, Google turned to Jeff Dean and his team of engineers, a group that's better known for reinventing the way the modern data center works. Neural networks give researchers like Vanhoucke a way analyzing lots and lots of patterns – in Jelly Bean's case, spectrograms of the spoken word – and then predicting what a brand new pattern might represent. The metaphor springs from biology, where neurons in the body form networks with other cells that allow them to process signals in specialized ways. In the kind of neural network that Jelly Bean uses, Google might build up several models of how language works – one for English language voice search requests, for example – by analyzing vast swaths of real-world data. 
"People have believed for a long, long time -- partly based on what you see in the brain -- that to get a good perceptual system you use multiple layers of features," says Geoffrey Hinton, a computer science professor at the University of Toronto. "But the question is how can you learn these efficiently." Android takes a picture of the voice command, and Google processes it using its neural network model to figure out what's being said. Google's software first tries to pick out the individual parts of speech -- the different types of vowels and consonants that make up words. That's one layer of the neural network. Then it uses that information to build more sophisticated guesses; each layer of these connections drives it closer to figuring out what's being said. Neural network algorithms can be used to analyze images too. "What you want to do is find little pieces of structure in the pixels, like for example like an edge in the image," says Hinton. "You might have a layer of feature-detectors that detect things like little edges. And then once you've done that you have another layer of feature detectors that detect little combinations of edges like maybe corners. And once you've done that, you have another layer and so on." Neural networks promised to do something like this back in the 1980s, but getting things to actually work at the multiple levels of analysis that Hinton describes was difficult. But in 2006, there were two big changes. First, Hinton and his team figured out a better way to map out deep neural networks -- networks that make many different layers of connections. Second, low-cost graphical processing units came along, giving the academics a much cheaper and faster way to do the billions of calculations they needed. "It made a huge difference because it suddenly made things go 30 times as fast," says Hinton. Today, neural network algorithms are starting to creep into voice recognition and imaging software, but Hinton sees them being used anywhere someone needs to make a prediction. In November, a University of Toronto team used neural networks to predict how drug molecules might behave in the real world. Jeff Dean says that Google is now using neural network algorithms in a variety of products -- some experimental, some not -- but nothing is as far along as the Jelly Bean speech recognition software. "There are obvious tie-ins for image search," he says. "You'd like to be able to use the pixels of the image and then identify what object that is." Google Street View could use neural network algorithms to tell the difference between different kinds of objects it photographs -- a house and a license plate, for example. And lest you think this may not matter to regular people, take note.
Last year Google researchers, including Dean, built a neural network program that taught itself to identify cats on YouTube. Microsoft and IBM are studying neural networks too. In October, Microsoft Chief Research Officer Rick Rashid showed a live demonstration of Microsoft's neural network-based voice processing software in Tianjin, China. In the demo, Rashid spoke in English and paused after each phrase. To the audience's delight, Microsoft's software simultaneously translated what he was saying and then spoke it back to the audience in Chinese. The software even adjusted its intonation to make itself sound like Rashid's voice. "There's much work to be done in this area," he said. "But this technology is very promising, and we hope in a few years that we'll be able to break down the language barriers between people. Personally, I think this is going to lead to a better world." "
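The layered "feature detectors" Hinton describes in the article above can be illustrated with a toy forward pass. The Python/NumPy sketch below only shows the idea of stacking layers: the weights are random, the 40-number input stands in for a spectrogram frame, and the ten output classes are placeholders, so it is not Google's Jelly Bean recognizer or any trained model.

# Toy forward pass through a small multi-layer network: each layer combines
# the previous layer's features into something more abstract. A real
# recognizer would learn these weights from labeled speech.
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    """One fully connected layer with a ReLU non-linearity."""
    return np.maximum(0.0, x @ w + b)

# A fake 40-dimensional spectrogram frame as input.
frame = rng.normal(size=40)

w1, b1 = rng.normal(size=(40, 64)), np.zeros(64)   # low-level feature detectors
w2, b2 = rng.normal(size=(64, 32)), np.zeros(32)   # combinations of those features
w3, b3 = rng.normal(size=(32, 10)), np.zeros(10)   # scores for 10 placeholder classes

h1 = layer(frame, w1, b1)
h2 = layer(h1, w2, b2)
logits = h2 @ w3 + b3
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print("most likely class:", int(probs.argmax()))

Training would adjust the weight matrices so that early layers respond to simple acoustic patterns and later layers to combinations of them, which is the layered behavior the article sketches in words.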
1,155
2,018
"25 Years of WIRED Predictions: Why the Future Never Arrives | WIRED"
"https://www.wired.com/story/wired25-david-karpf-issues-tech-predictions"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons David Karpf Backchannel 25 Years of WIRED Predictions: Why the Future Never Arrives Deanne Cheuk Save this story Save Save this story Save On the cover of WIRED's eighth issue, the Pillsbury Doughboy stands against a wall, flanked by two men wearing neckties. All are blindfolded, stricken with terror. Together they face a firing squad of mismatched TV remotes. The cover line reads: “Is Advertising Finally Dead?” February 1994 By all appearances, the cover promised yet another gleeful epitaph for the declining institutions of the analog age. In just over a year, WIRED had already predicted the imminent demise of public education and The New York Times. Michael Crichton proclaimed in the fourth issue that “it is likely that what we now understand as the mass media will be gone within 10 years. Vanished, without a trace.” Advertising, it seemed, was the next industry marked for obsolescence. But the cover story itself—an essay by MIT Media Lab fellow Michael Schrage —was not, in fact, an epitaph at all. Instead, the article imagines how advertisers will adapt to, and eventually come to dominate, digital media. Read with the benefit of hindsight today, the piece has an almost Cassandra-like quality, foretelling a future both unpleasant and unavoidable—a future that feels a bit too much like now. It may be the most eerily prescient story that WIRED published in its early years. How do I know? This past summer, I pulled up a chair—for a time at the Library of Congress—and read every issue of the magazine’s print edition, chronologically and cover to cover. My aim was to engage in a particular kind of time travel. Back when founding editor Louis Rossetto was recruiting the first members of the WIRED team in the early 1990s, he said he was “trying to make a magazine that feels as if it has been mailed back from the future.” I was looking to use WIRED’s back catalog to construct a history of the future—as it was foretold, month after month, in the magazine’s pages. October 2018. Subscribe to WIRED. Plunkett + Kuhr Designers In part, the fun was in recognizing what WIRED saw coming—the flashes of uncanny foresight buried in old print. Back in the mid-’90s, a time when most Americans hadn’t even sent an email, the magazine was already deep into speculation about a world where everyone had a networked computer in their pocket. In 2003, when phones with cameras were just a novelty in the US (but popular in Asia), Xeni Jardin was predicting a “ phonecam revolution ” that would one day capture images of police brutality on the fly. Just as interesting were the things WIRED saw coming that never did. The November 1999 cover story held up a company called DigiScent , which hoped to launch the next web revolution by sending smells through the internet. (“Reekers, instead of speakers.”) But more than just scoring hits and misses, I was interested in identifying those visions of the future that remained always on the horizon, the things that WIRED—and, by extension, the broader culture—kept predicting but which remained always just out of reach. 
Again and again, the magazine held that the digital revolution would sweep away a host of old social institutions, draining them of their power as it rendered them obsolete. In their place, WIRED repeatedly proclaimed, the revolution would bring an era of transformative abundance and prosperity, its foothold in the future secured by the irresistible dynamics of bandwidth, processing power, and the free market. At the same time, an animating tension has always run through the magazine, one that stretches all the way back to Schrage’s 1994 essay. The cover loudly suggests the death of the analog order; the text anticipates how the old order will adapt, graft itself onto the digital revolution, and alter its trajectory. Cutting against the magazine’s exuberance—but also propelled along by it—is a heretical strain of ­gimlet-eyed, anxious ambivalence about who will pay for the future. It’s this tension that has produced some of WIRED’s moments of greatest foresight. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg WIRED's first issue appeared four years after the Berlin Wall fell and two years after the Soviet Union dissolved. The Cold War was over, but WIRED insisted that this would be anything but a period of calm. Networked computers would forge a new world culture and ensure mass prosperity—in a series of dramatic overthrows. The inaugural issue opened with a manifesto : “The Digital Revolution is whipping through our lives like a Bengali typhoon—while the mainstream media is still groping for the snooze button.” Technology promised to unleash “social changes so profound their only parallel is probably the discovery of fire.” That WIRED was rooting for these changes was never in doubt. The magazine’s trademark move throughout the ’90s was to shout, “Brace yourself!” while also promising, with a wild grin, that beyond this patch of turbulence is a better world. Everything was up for transformation. A 1994 profile of the Electronic Frontier Foundation asked, “How hard could it be to hack government?” In 1997, Jon Katz argued that we were witnessing the “primordial stirrings of a new kind of nation—the Digital Nation—and the formation of a new postpolitical philosophy.” The old left-right politics of American democracy were sure to subside in the face of this new digital polity. “The Digital Nation points the way toward a more rational, less dogmatic approach to politics. The world’s information is being liberated, and so, as a consequence, are we.” Somehow, WIRED’s optimism come across not as saccharine, but as swaggering. The notion that the future of politics might, with the internet, become less rational and more dogmatic was scarcely explored. Yet somehow, WIRED’s optimism didn’t come across as saccharine, but as swaggering. For the June 1995 issue, then-executive editor Kevin Kelly sat down with Kirkpatrick Sale , a self-­described “neo-Luddite,” for a tart, extended ideological showdown on the subject of the technological future. Near the end of the Q&A, Sale predicted that industrial civilization would, in the next couple of decades, suffer economic collapse, class warfare, and widespread environmental disaster. In response, Kelly pulled out his checkbook. 
“I bet you US$1,000 that in the year 2020, we’re not even close to the kind of disaster you describe,” Kelly said. “I’ll bet on my optimism.” July 1997 The magazine’s July 1997 cover story announced “ The Long Boom : A History of the Future 1980–2020.” On the cover, a smiling globe holds a flower in its mouth, next to the words: “We’re facing 25 years of prosperity, freedom, and a better environment for the whole world. You got a problem with that?” By the following year, WIRED wasn’t just betting on technological optimism—it was giving readers tips on how to bet on it themselves, with their own money. The magazine launched the “WIRED Index,” a portfolio of companies at the heart of the so-called New Economy, “a broad range of enterprises that are using technology, networks, and information to reshape the world.” It would increase by 81 percent over the course of the next 12 months, outpacing every other broad-based financial index. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg And yet, beneath the boisterous optimism that marked WIRED’s covers and its biggest proclamations, the magazine also trafficked occasionally in dark, deadpan warnings. Back in February 1994, the writer R.U. Sirius mused about the coarse dynamics that had already begun to present themselves in an online world where anyone can be a publisher. “As more and more people get a voice, a voice needs a special stridency to be heard above the din,” he wrote. “On the street, people tolerate diversity because they have to—you’ll get from here to there if you don’t get in anybody’s face. But the new media environment is always urging you to mock up an instant opinion about The Other … You can be part of the biggest mob in history. Atavistic fun, guys. Pile on!” In January 1997, Tom Dowe wrote an essay warning about, well, fake news: “The Net is opening up new terrain in our collective consciousness, between old-fashioned ‘news’ and what used to be called the grapevine—rumor, gossip, word of mouth. Call it paranews —information that looks and sounds like news, that might even be news. Or a carelessly crafted half-truth.” January 1998 Schrage’s 1994 essay on advertising was less dystopian, but it certainly wasn’t boisterous. “To appreciate tomorrow’s multimedia networks, don’t look to the Bob Metcalfes, Ted Nelsons, and Vint Cerfs for ideas and inspiration. Those techno-wonks won’t set the agenda,” he wrote. “The economics of advertising, promotion, and sponsorship—more than the technologies of teraflops, bandwidth, and GUI—will shape the virtual realities we may soon inhabit.” The article imagined a world where smartphones (well, PDAs) were ubiquitous and pulsing with ad-driven content. “No doubt, many PDA digimercials will prove to be the annoying equivalent of junk mail and those idiotic automated telemarketing calls. But so what … there’ll be a nice market in software that screens out the junk and highlights what PDA owners want.” Between those lines, you can catch an early, primordial glimmer of the basic idea behind AdSense and Facebook—a future that is at least more complicated than it is obviously liberating. But in WIRED, exuberance was almost always given the final word. 
In September 1999, the magazine published an essay by Kevin Kelly that squarely acknowledged widespread public fears of an impending stock market crash—and smiled in the face of incipient panic. The tech boom, he insisted, would not end. “Picture 20 more years of full employment, continued stock-market highs, and improving living standards. Two more decades of inventions as disruptive as cell phones, mammal cloning, and the Web. Twenty more years of Quake , index funds, and help-wanted signs. Prosperity not just for CEOs, but for ex-pipe-fitters, nursing students, and social workers as well.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Six months later, the dotcom bubble began to burst. April 2000 For a while, even WIRED sobered up. The April 2000 cover story was a brooding essay by Sun Microsystems cofounder Bill Joy, “ Why the Future Doesn’t Need Us. ” It envisioned a world where artificial intelligence and automation might mark humanity itself for obsolescence. The June 2001 update to the WIRED Index opened, “OUCH.” Not long thereafter, Kevin Kelleher wrote an essay titled “Death of the New Economy, RIP.” “In the end,” he wrote, “what really bruised the notion of a new economy is the figment that, by definition, it was going to make the world better. The real new economy is merely agnostic: If business cycles are shorter, they are also sharper and more painful on the way down.” When the September 11 attacks happened, it shook the magazine’s punchy faith in a networked world even further. November 2003 But by 2003, the digital revolution had turned exciting again. The spread of Wi-Fi and the growth of the open source movement kindled a thousand speculative business ideas. “Software is just the beginning,” WIRED declared in November 2003. “Open source is doing for mass innovation what the assembly line did for mass production. Get ready for the era when collaboration replaces the corporation.” Chris Anderson, who had become WIRED’s editor in chief in 2001, articulated the new optimism of the Web 2.0 era in a series of iconic articles. In “ The Long Tail ” (October 2004) and “ Free ! Why $0.00 Is the Future of Business” (March 2008), he argued that the new reality of infinite, too-cheap-to-meter digital storage would fundamentally change the entire economy. The internet would radically increase the scope and reach of niche entertainment markets, he wrote in “The Long Tail.” Books, music, movies, and television would transcend the limited selection space imposed by the physical inventory of bookstores and record stores. “Free” offered the more radical argument that the new digital economy was fundamentally organized around the economics of abundance rather than scarcity. In this era, the dominant business models would revolve around services that were, in one way or another, free. At the peak of the Web 2.0 era, in June 2008, WIRED celebrated its 15th anniversary. Founding editor Louis Rossetto returned with a reflection on what early WIRED had gotten right and wrong. He admitted that predictions of media’s demise had been premature. “Governments,” he added, “are still here, presumptuous and bossy as ever.” But the Long Boom was a big call that he confidently declared they had gotten right. 
“The boom began with the introduction of the personal computer, and it will continue until at least 2020,” he wrote. “There’s a lot of noise in the media about how the world is going to hell. Remember, the truth is out there, and it’s not necessarily what the politicians, priests, or pundits are telling you.” The Wall Street collapse began three months later. It would seem that invoking the Long Boom in WIRED is a bit like saying “Macbeth” in a theater. It is best not to tempt the fates. William Gibson is said to have remarked that “the future is already here—it’s just not evenly distributed.” Paging through the first 25 years of WIRED, what’s most striking is that the future never becomes evenly distributed. Sure, everyone gets on Facebook and uses Google, but the dinosaurs never die outright, and the new age of abundance never quite gains its inviolable foothold. The future just keeps arriving, mutating, bowing to the fickle pressures of advertising markets and quarterly earnings reports. In 2009, Demand Media was the future of news. This future seemed inevitable, if not particularly desirable. Demand Media was one of the largest content farms on the web, publishing 4,000 videos and articles per day through sites like eHow and Cracked.com. What content did it farm? Whatever its algorithm told it to. As Daniel Roth reported in November 2009 for WIRED, Demand Media tracked what people were searching for on the internet, what search terms advertisers were paying for, and which subjects competing online outlets were publishing about. Once the algorithm selected a topic, articles and video were assigned to an army of freelancers, who were paid rock-bottom rates ($15 for an article, $1 for fact-­checking, 25 to 50 cents for video quality control). As The New York Times seemed to teeter on the brink, Demand was reaping huge profits from digital ads. The future of media, WIRED said, was “fast, disposable, and profitable as hell.” The future just keeps arriving, mutating, bowing to fickle pressures. The videos and how-tos that Demand Media’s freelancer network put together were predictably shoddy. But that didn’t matter. The company didn’t need to provide good answers to your Google search; it just had to provide relevant answers that placed well in search rankings. When the company went public in January 2011, it was valued at $1.5 billion—reportedly worth more than The New York Times ! But its status as the future of media would be short-lived. Soon after the initial public offering, Google announced a change to its search algorithm specifically meant to downgrade content farms. Demand Media’s business model would never recover. Within a few years, the original executive team quietly left. The company sold some of its big domains and rebranded as Leaf Group in 2016. Demand’s brief moment in the zeitgeist proves Schrage’s point—that “the future of media is the future of advertising.” But in Demand Media’s case, the future of advertising was subject to the whims of Google’s engineering team. In retrospect, the larger lesson from Demand Media’s brief reign concerns fragility. In the rush to identify the next industry that will be disrupted by the digital revolution, we underrate how fragile the business models of the disruptors themselves tend to be. They usually have as much to fear as their old and lumbering counterparts. Consider: Napster didn’t kill the music business; the courts killed Napster. Then a dozen Napster-like flowers bloomed in its place. 
But they were a mess, and they had little money to invest in improvements. Then iTunes, Rhapsody, and (later) Spotify built business models that included a (reduced, different) role for record labels. Nothing ever quite seems to fulfill its imagined revolutionary potential, and nothing ever quite seems to die. The New York Times is still alive (and—contra Anderson—doesn’t cost $0.00 online anymore either, having instituted a paywall along with numerous other publications, including WIRED). Webzines, the blogosphere, and Demand Media were all supposed to kill the news business. Each proved at least as fragile as the industry it was disrupting, a leaf on the changing winds of digital advertising markets. Yesterday’s imagined futures just keep accruing, providing sedimentary layers that today’s future can be built atop. Looking back at WIRED’s early visions of the digital future, the mistake that seems most glaring is the magazine’s confidence that technology and the economics of abundance would erase social and economic inequality. Both Web 1.0 and Web 2.0 imagined a future that upended traditional economics. We were all going to be millionaires, all going to be creators, all going to be collaborators. But the bright future of abundance has, time and again, been waylaid by the present realities of earnings reports, venture investments, and shareholder capitalism. On its way to the many, the new wealth has consistently been diverted up to the few. In 2010, Clive Thompson wrote about the potential of “ peer-to-peer renting. ” A French company called Zilok was allowing people to “post possessions they’re willing to rent out, along with a price,” he wrote. “Want to use someone’s car for the day? That’s $60, cheaper than most auto-rental agencies.” These were the innocent, early days of the sharing economy. “We’re seeing a new relationship to property where access trumps ownership,” Thompson wrote. “We’re using bits to help us share atoms.” When Uber and Airbnb first arrived, they wore the halo of this broad sharing phenomenon. In July 2012, Alexia Tsotsis penned a glowing early profile of Uber in WIRED. “If this new model of resource maximization succeeds, it won’t just put extra money in the pockets of everyday people,” she wrote. “It will also change the way we think about work and consumption, with every purchase becoming a potential investment, every idle hour a potential paycheck.” These early views of the sharing economy were accurate depictions of the moment , but poor visions of the future. Within a few short years, many of those Uber drivers would be stuck paying off their cars in sub-minimum-wage jobs with no benefits. What began as an earnest insight about bits and atoms quickly turned into an arbitrage opportunity for venture capitalists eager to undercut large, lucrative markets by skirting regulations. To meet the growth and monetization demands of investors, yesterday’s sharing economy became today’s gig economy. By now, the digital revolution isn’t just the future; it has a history. Digital technology runs our economy. It organizes our daily lives. It mediates how we learn information, tell each other stories, and connect with our neighbors. It’s how we control and harass and encourage one another. It’s a tool of both surveillance and resistance. You can almost never be entirely offline anymore. The internet is setting the agenda for the world around us. The digital revolution’s track record suggests that its arc doesn’t always bend toward abundance—or in a straight line at all. 
It flits about, responding to the gravitational forces of hype bubbles and monopoly power, warped by the resilience of old institutions and the fragility of new ones. Today’s WIRED seems to have learned these lessons. Perhaps because of all that accrued history, the digital present affords less room for open-ended, boisterous optimism. Back in 1995, when Kevin Kelly made his $1,000 bet with Kirkpatrick Sale that in 2020 we wouldn’t even be close to economic collapse, class warfare, or widespread environmental disaster, the pages of WIRED told a story that supported his confidence. Judging from WIRED’s recent reporting—about the climate, discourse on social media, and international relations—the bet has, at the very least, gotten a lot more interesting. (“He is obviously losing,” Kelly says of Sale. “We should find him to make sure his check is still good.”) Old WIRED said the swaggering, optimistic stuff out loud and muttered its critical, dystopian remarks in wry stage whispers. New WIRED has almost reversed that formula. The first issue began by describing a typhoon no one else could see. Today, everyone sees it, and the magazine reports on the effects and movements of the storm. It still voices plenty of enthusiasm around the edges. But WIRED is no longer simply cheering the imminent arrival of the future. It seems to recognize that behind this patch of turbulence is probably another one. Enjoy the ride. David Karpf (@davekarpf) is an associate professor in the School of Media and Public Affairs at the George Washington University. This article appears in the October issue."
1156
2015
"Rewriting the Rules of Turing’s Imitation Game | MIT Technology Review"
"https://www.technologyreview.com/s/535391/rewriting-the-rules-of-turings-imitation-game"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Rewriting the Rules of Turing’s Imitation Game By Simon Parkin archive page We have self-driving cars, knowledgeable digital assistants, and software capable of putting names to faces as well as any expert. Google recently announced that it had developed software capable of learning—entirely without human help—how to play several classic Atari computer games with skill far beyond that of even the most callus-thumbed human player. But do these displays of machine aptitude represent genuine intelligence? For decades artificial-intelligence experts have struggled to find a practical way to answer the question. AI is an idea so commonplace that few of us bother to interrogate its meaning. If we did, we might discover a problem tucked inside it: defining intelligence is far from straightforward. If the ability to carry out complex arithmetic and algebra is a sign of intellect, then is a digital calculator, in some sense, gifted? If spatial reasoning is part of the story, then is a robot vacuum cleaner that’s capable of navigating its way around a building unaided something of a wunderkind? The most famous effort to measure machine intelligence does not resolve these questions; instead, it obscures them. In his 1950 paper Computing Machinery and Intelligence , published six years before the term “artificial intelligence” was coined, the British computer scientist Alan Turing considered the capacity of computers to imitate the human intellect. But he discarded the question “Can machines think?” The act of thinking is, he argued, too difficult to define. Instead, he turned to a black-box definition: if we accept humans as an intelligent species, then anything that exhibits behaviors indistinguishable from human behavior must also be intelligent. Turing also proposed a test, called the “imitation game,” in which a computer would prove its intelligence by convincing a person, through conversation, that it is also human. The imitation game was a thought experiment, not a formal scientific test. But as artificial intelligence advanced, the idea took on a life of its own, and the so-called Turing test was born. In the years since, the Turing test has been widely adopted and also widely criticized—not because of flaws in Turing’s original idea, but because of flaws in its execution. The best-known example is the Loebner Prize , which in 1990 began offering $100,000 for the first computer whose text conversation several judges deemed indistinguishable from that of a human. The Loebner Prize has been derided for allowing entrants to use cheap tricks, like confusing participants with odd diversions, in place of more honest approaches that uphold the spirit of Turing’s premise. A chatbot called Eugene Goostman made headlines last June for supposedly passing the Turing test in a contest organized at the University of Reading in the U.K. The software convinced 30 percent of the human judges involved that it was human. But as many AI experts pointed out at the time, and as transcripts of conversations with Goostman show, the chatbot relies on obfuscation and subterfuge rather than the natural back and forth of intelligent conversation. Here’s an excerpt from one exchange , for example: Scott: Which is bigger, a shoebox or Mount Everest? Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from … Scott: How many legs does a camel have? Eugene: Something between 2 and 4. 
Maybe, three? :-))) By the way, I still don’t know your specialty—or, possibly, I’ve missed it? Scott: How many legs does a millipede have? Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me. “The Turing test as it’s been realized in the past few decades, especially by the Loebner competition, is not a valid test for AI,” says Leora Morgenstern, an expert on artificial intelligence who works at Leidos , a defense contractor headquartered in Virginia. “Turing’s original description mandated a freewheeling conversation that could range over any subject, and there was no nonsense allowed,” she says. “If the test taker was asked a question, it needed to answer that question.” Even more tangible advances, such as Google’s game-playing software, merely emphasize the way AI has fragmented in the decades since the field’s birth as an academic discipline in the 1950s. AI’s earliest proponents hoped to work toward some form of general intelligence. But as the complexity of the task unfurled, research fractured into smaller, more manageable tasks. This produced progress, but it also turned machine intelligence into something that could not easily be compared with human intellect. “Asking whether an artificial entity is ‘intelligent’ is fraught with difficulties,” says Mark Riedl , an associate professor at Georgia Tech. “Eventually a self-driving car will outperform human drivers. So we can even say that along one dimension, an AI is super-intelligent. But we might also say that it is an idiot savant, because it cannot do anything else, like recite a poem or solve an algebra problem.” Most AI researchers still pursue highly specialized areas, but some are now turning their attention back to generalized intelligence and considering new ways to measure progress. For Morgenstern, a machine will demonstrate intelligence only when it can show that once it knows one intellectually challenging task, it can easily learn another related task. She gives the example of AI chess players, which are able to play the game at a level few human players can match but are unable to switch to simpler games, such as checkers or Monopoly. “This is true of many intellectually challenging tasks,” says Morgenstern. “You can develop a system that is great at performing a single task, but it is likely that it won’t be able to do seemingly related tasks without a whole lot of programming and tinkering.” Riedl agrees that the test should be broad: “Humans have broad capabilities. Conversation is just one aspect of human intelligence. Creativity is another. Problem solving and knowledge are others.” With this in mind, Riedl has designed one alternative to the Turing test, which he has dubbed the Lovelace 2.0 test (a reference to Ada Lovelace, a 19th-century English mathematician who programmed a seminal calculating machine). Riedl’s test would focus on creative intelligence, with a human judge challenging a computer to create something: a story, poem, or drawing. The judge would also issue specific criteria. “For example, the judge may ask for a drawing of a poodle climbing the Empire State Building,” he says. “If the AI succeeds, we do not know if it is because the challenge was too easy or not. Therefore, the judge can iteratively issue more challenges with more difficult criteria until the computer system finally fails. The number of rounds passed produces a score.” Riedl’s test might not be the ideal successor to the Turing test. 
But it seems better than setting any single goal. “I think it is ultimately futile to place a definitive boundary at which something is deemed intelligent or not,” Riedl says. “Who is to say being above a certain score is intelligent or being below is unintelligent? Would we ever ask such a question of humans?” Why does the Turing test remain so well known outside of scientific circles if it is seemingly so flawed? The source of its fame is, perhaps, that it plays on human anxiety about being fooled by our own technology, of losing control of our creations (see “Our Fear of Artificial Intelligence”). So long as we can’t be imitated, we feel that we are, in a sense, safe. A more rigorous test may prove more practically useful. But for a test to replace Turing’s imitation game in the wider public consciousness it must first capture the public imagination. by Simon Parkin"
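The Lovelace 2.0 procedure described above is, at its core, a simple loop: issue a constrained creative challenge, check the result, tighten the constraints, and count the rounds passed. The sketch below only illustrates that loop under stated assumptions; it is not code from Riedl's paper, and the agent and judge objects with their create, accepts, and make_harder methods are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Challenge:
    """A creative task plus the judge's constraints (hypothetical structure)."""
    task: str               # e.g. "produce a drawing"
    constraints: List[str]  # e.g. ["a poodle", "climbing the Empire State Building"]

def lovelace2_score(agent, judge, challenge: Challenge, max_rounds: int = 20) -> int:
    """Count how many increasingly difficult creative challenges the agent passes.

    Mirrors the protocol described above: the judge keeps adding criteria
    until the system fails; the number of rounds passed is the score.
    """
    score = 0
    for _ in range(max_rounds):
        artifact = agent.create(challenge)          # agent returns a story, poem, or drawing
        if not judge.accepts(artifact, challenge):  # judge checks the stated criteria
            break                                   # first failure ends the test
        score += 1
        challenge = judge.make_harder(challenge)    # add more difficult criteria
    return score
```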
1157
2013
"Inside Paul Allen's Plan to Reverse-Engineer the Human Brain | WIRED"
"https://www.wired.com/wiredscience/2013/10/paulallenqa"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Greg Miller Science Inside Paul Allen's Plan to Reverse-Engineer the Human Brain Save this story Save Save this story Save Rick Dahms In 2003, Microsoft cofounder Paul Allen spent $100 million to build the Allen Institute for Brain Science in Seattle. With laser-equipped microscopes and custom brain-slicers, the institute has mapped the brains of mice, monkeys, and humans, showing which genes are turned on—and where—to better understand vision, memory, autism, and other neural phenomena. Last year Allen ponied up another $300 million to aim the institute at a narrower but more ambitious goal: a complete understanding of how the mouse brain interprets visual information. To succeed, they’ll have to go beyond static gene maps and learn how to watch a living brain in action. The new method will track electrical activity in neurons—not just in one mouse but many. Called high-throughput electrophysiology, it’s the sort of big-science approach that the federal government is pushing with its Brain Research through Advancing Innovative Neurotechnologies initiative (yes, the acronym is indeed Brain), which the Allen Institute has been instrumental in planning. Allen talked with WIRED about his institute’s first decade and what he hopes it will do in the next. Of all the things you could have invested in, why brain research? Well, as a programmer you’re working with very simple structures compared to the brain. So I was always fascinated by how the brain works. I had a retreat with a bunch of scientists and basically polled them about what could be done to move the whole field ahead, and very quickly consensus formed around the idea of doing a complete genetic assay of the mouse brain. It’s an example of industrial-scale science where you bring together a team that’s focused on producing a database just like the Human Genome Project did. How do you think your investment has paid off so far? Oh, I think it’s had a real impact. If you talked to neuroscientists, they would say that everybody in the field who has a genetic component to their research uses our database. So that’s rewarding and heartening. Allen wants to visualize the brain at an unprecedented level of detail. | Allen Institute for Brain Science But big discoveries didn’t exactly flow from the research. Look, the genetic data is a big piece of the puzzle, but it’s not the whole. The brain has this amazing level of almost fractal complexity to it. When you start looking at any part of it in detail you realize that it’s much more complex than you thought. If you ask me five years from now, then I’ll be able to say either “I’m excited we had this breakthrough” or “We’ve come up with zero breakthroughs—I’m disappointed.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Tell me about the new direction the institute is taking. 
The basic idea is to instrument the brain at a very fine level of detail and measure all the parameters—from the diversity of cell types to the electrophysiology—in the mouse visual system, and from that reverse-engineer how it works. That’s an amazing challenge, and no one’s done it yet. We know a certain amount about neurons. You can do fMRI and watch parts of the brain light up. But what happens in the middle is poorly understood. We’re hoping for breakthroughs in understanding cell communication and information flow in the visual system. That’s what I placed a large bet on. Has it proven more difficult than you thought to translate research into treatments? We’re not focused on disease pathologies ourselves; we’re trying to focus on basic science. If we understand the basic science, that will help you bring treatments forward. My mother passed away because of Alzheimer’s, so I have a particular interest in helping these things move forward. How do you decide what to support? Anybody doing philanthropy has to find something that appeals to them from their own personal background or from intellectual curiosity. It depends on what resonates with you. You have a basketball team, a football team, and the guitar Jimi Hendrix played at Woodstock. You’ve invested in spaceships, brain research, and gorilla conservation. If my 13-year-old self were a billionaire, this is stuff that would have resonated with me. I think there’s a through-line from what inspired you when you were younger. Sometimes those things stay with you. If I sit down and say, OK, what are the most exciting problems to work on intellectually, then given my background, these are the ones that appeal the most to me. I just hope they change the world in a positive way."
1158
2017
"A Clever AI-Powered Robot Learns to Get a Grip | WIRED"
"https://www.wired.com/story/grasping-robot"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science A Clever AI-Powered Robot Learns to Get a Grip Save this story Save Save this story Save You remember claw machines, those confounded scams that bilked you out of your allowance. They were probably the closest thing you knew to an actual robot, really. They're not, of course, but they do have something very important in common with legit robots: They're terrible at handling objects with any measure of dexterity. You probably take for granted how easy it is to, say, pick up a piece of paper off a table. Now imagine a robot pulling that off. The problem is that a lot of robots are taught to do individual tasks really well, with hyper-specialized algorithms. Obviously, you can’t get a robot to handle everything it’ll ever encounter by teaching it how to hold objects one by one. Nope, that’s an AI’s job. Researchers at the University of California Berkeley have loaded a robot with an artificial intelligence so it can figure out how to robustly grip objects it’s never seen before, no hand-holding required. And that’s a big deal if roboticists want to develop truly intelligent, dexterous robots that can master their environments. The secret ingredient is a library of point clouds representing objects, data that the researchers fed into a neural network. “The way it's trained is on all those samples of point clouds, and then grasps,” says roboticist Ken Goldberg , who developed the system along with postdoc Jeff Mahler. “So now when we show it a new point cloud, it says, ‘This here is the grasp, and it's robust.’” Robust being the operative word. The team wasn’t just looking for ways to grab objects, but the best ways. Related Stories robotics Matt Simon Robots Matt Simon Artificial Intelligence Cade Metz Using this neural network and a Microsoft Kinect 3-D sensor, the robot can eyeball a new object and determine what would be a robust grasp. When it’s confident it’s worked that out, it can execute a good grip 99 times out of 100. “It doesn't actually even know anything about that the object is,” Goldberg says. “It just says it's a bunch of points in space, here's where I would grasp that bunch of points. So it doesn’t matter if it's a crumpled up ball of tissue or almost anything." Imagine a day when robots infiltrate our homes to help with chores, not just vacuuming like Roombas but doing dishes and picking up clutter so the elderly don’t fall and find themselves unable to get up. The machines are going to come across a whole lot of novel objects, and you, dear human, can’t be bothered to teach them how to grasp the things. By teaching themselves, they can better adapt to their surroundings. And precision is pivotal here: If a robot is doing dishes but can only execute robust grasps 50 times out of 100, you’ll end up with one embarrassed robot and 50 busted dishes. Here’s where the future gets really interesting. Robots won’t be working and learning in isolation—they’ll be hooked up to the cloud so they can share information. So say one robot learns a better way to fold a shirt. 
It can then distribute that knowledge to other robots like it and even entirely different kinds of robots. In this way, connected machines will operate not only as a global workforce, but as a global mind. At the moment, though, robots are still getting used to our world. And while Goldberg’s new system is big news, it ain’t perfect. Remember that the robot is 99 percent precise when it’s already confident it can manage a good grip. Sometimes it goes for the grasp even when it isn’t confident, or it just gives up. “So one of the things we're doing now is modifying the system,” Goldberg says, “and when it's not confident rather than just giving up it's going to push the object or poke it, move it some way, look again, and then grasp.” Fascinating stuff. Now if only someone could do something about those confounded claw machines."
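Goldberg's description above amounts to a confidence-gated control loop: plan a grasp from the current point cloud, execute only if the planner is confident, and otherwise perturb the object and look again. The following is a minimal, hypothetical sketch of that flow, not the Berkeley team's actual code; the robot and camera objects and their methods are placeholder assumptions, and the 0.98 threshold is an illustrative value rather than a published parameter.

```python
def grasp_with_retry(robot, camera, confidence_threshold: float = 0.98, max_attempts: int = 5):
    """Confidence-gated grasping with a poke-and-look-again fallback.

    All objects and methods here are hypothetical stand-ins for the pipeline
    described above: a depth camera, a learned grasp planner that returns a
    candidate grasp plus a robustness score, and simple motion primitives.
    """
    for _ in range(max_attempts):
        point_cloud = camera.capture()                     # current 3-D view of the object
        grasp, confidence = robot.plan_grasp(point_cloud)  # neural net scores a candidate grasp
        if confidence >= confidence_threshold:
            return robot.execute(grasp)                    # commit only when the grasp looks robust
        robot.poke(point_cloud)                            # nudge the object, then look again
    return None  # give up after repeated low-confidence views
```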
1159
2016
"Come on, Let's Give the Robots Hands Already | WIRED"
"https://www.wired.com/2016/06/robot-body-parts"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Clive Thompson Gear Come on, Let's Give the Robots Hands Already Zohar Lazar Save this story Save Save this story Save Sure, Alphago—a Google computer that plays the game Go— beat Lee Sedol , the world's reigning master of the game. AI once again effortlessly outmaneuvered us poor bags of flesh. The machine revolution is nigh! Except there's one crucial thing AlphaGo couldn't do: pick up those black and white Go stones and put them down on the board. A Google programmer had to do that. “Maybe the hardest part is not playing the game but moving the pieces,” says Siddhartha Srinivasa, a roboticist at Carnegie Mellon University. He's only half kidding. Srinivasa is an expert in robot manipulation—the art of grabbing, holding, and using objects. And this, it turns out, is the real challenge for our emerging Skynet. Robots are increasingly able to understand the world, but they're terrible at handling it. If robots are really going to start helping us out in everyday life, they're going to have to get more than smart. They're going to have to get physical. As an example, take a look at the Amazon Picking Challenge. In this contest, robots had to grab loose objects—like a package of Oreos or a rubber duck—and put them in a container. The winner took fully 20 minutes to grapple with a mere 10 items. “Like watching paint dry,” as one observer noted. The other teams did far worse; a toddler could have beaten them all. Check In With the Velociraptor at the World’s First Robot Hotel Robots Will Steal Our Jobs, But They’ll Give Us New Ones Robots Can’t End Amazon’s Labor Woes Because They Don’t Have Hands The Sadness and Beauty of Watching Google’s AI Play Go The physical world defeats our bots because it's been designed by and for humans. We're masterful at dealing with mess and uncertainty. We intuitively grok the behavior of stacks of crap, things that roll over on their sides. Bots don't. “Just look at your own desk,” Srinivasa says. “It's filled with clutter, because humans are expert at dealing with clutter.” Today's workplace robots—like the droids that move stuff around in Amazon warehouses or the robots that weld parts on automobile assembly lines—work in super-clean “structured environments” designed to accommodate their potent but narrow set of capabilities. In other words, they're mollycoddled. When they reach to pick something up, we make sure it's exactly where they expect it to be. And when uncertainty arises, humans have to step in. Mercedes-Benz has lately been replacing some robots with humans because customers increasingly want their cars customized—and robots can't rejigger auto trim on the fly. So how can we give these robots a hand? One approach is “soft pneumatics,” designed to cushion a grab at everyday objects, says Oliver Brock, head of the Robotics and Biology Lab at the Technical University of Berlin (which won the Amazon Picking Challenge). 
Another would be better guidance algorithms for navigating the hard-to-predict physics of, say, piles of apples or stacks of pens. But either of those angles will require gathering tons more data on such objects—“orders of magnitude more” than we have now, says Stefanie Tellex, a Brown University roboticist. She's trying to get all the academic labs around the world that use one popular two-handed robot—known as Baxter—to network the machines together, so they can learn from one another. (Which, yes, sounds a little Skynetish.) Now, one note of caution: Do we want robots to be nimble enough to fold origami? Machines like that could take over nearly any manual-labor or service job from humans. But they'd also be our helpmates. As Srinivasa points out, millions of people struggle with mobility problems as a result of issues ranging from spinal-cord injuries to just sheer old age. Dexterous robots could help them feed and clothe themselves. “I think it's really important that we enable these people to have dignity of life,” he says. Nimble bots could do that. Plus, they could finally slap down their own Go pieces. Or petulantly wipe them all off the board in frustration when some human beats them, someday."
1160
2019
"What Boston Dynamics' Rolling 'Handle' Robot Really Means | WIRED"
"https://www.wired.com/story/what-boston-dynamics-rolling-handle-robot-really-means"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science What Boston Dynamics' Rolling 'Handle' Robot Really Means Boston Dynamics Save this story Save Save this story Save For internet-goers, Boston Dynamics is that company that uploads insane videos of the humanoid Atlas robot doing backflips , of four-legged SpotMini opening doors and fighting off stick-wielding men , and as of last week, of a Segway-on-mescaline called Handle jetting around picking up and stacking boxes with a vacuum arm. For journalists and industry watchers, however, Boston Dynamics is that company that almost never talks about where all of this work is ultimately headed. That’s beginning to change. The company is now teasing its ambitions as the four-legged SpotMini nears its commercial release. Today, Boston Dynamics is getting even more explicit about its vision with an announcement that it’s acquired a Silicon Valley startup called Kinema Systems , which builds vision software that helps industrial robot arms manipulate boxes. This acquisition is giving the Handle robot the gray matter it needs to follow SpotMini to market. What for years has been fodder for internet video gold is now taking shape as a unified vision of the robotic future. [#video: https://www.youtube.com/embed/5iV_hB08Uns One of the biggest obstacles holding robots back has been their limited perception. We humans enjoy a rich constellation of senses that help us navigate our surroundings. Robots need the same, lest they destroy themselves. Go to pick up a box, for example, and you as a human probably don’t think deeply about the lighting and how it may cast shadows that throw off your hand placement. Kinema’s software—which is robot-agnostic, meaning it already works on a range of robots beyond Handle—helps the machine through all these challenges. “Their system is able to look at a stack of boxes,” says Michael Perry, vice president of business development at Boston Dynamics, “and no matter how ordered or disordered the boxes are, or the markings on top, or the lighting conditions, they're able to figure out which boxes are discrete from each other and to plan a path for grabbing the box.” That’s a huge part of what Handle, a robot designed to work in warehouses, needs to do. But the robot will also rely on its overall shape to do its new job. This is where BD’s larger strategy gets even more interesting: Although Handle, Atlas, and SpotMini look almost nothing alike, they are in fact intimately connected. “Handle isn't entirely different from Atlas,” says Boston Dynamics boss Marc Raibert. Indeed, a video of Atlas three years ago showed the robot picking up boxes with two arms that ended in stubs, arms that Handle wielded in its own video a year later. The challenges of bipedal locomotion are largely the same, namely the balance problems that a four-legged robot like SpotMini doesn’t share, as are the challenges of manipulation with two arms, which SpotMini (being the dog to Atlas’ human form) also doesn’t share. But this is the beauty of robots. You can iterate on their shapes to tailor them to different tasks and environments. 
Atlas walks on two legs and Handle rolls on two wheels, but either way, that bipedal locomotion cuts down on the robots’ footprint. “If it was a four-wheeled robot, it would have to be much larger in order to get that level of reach and lift boxes,” says Perry. “So this is a robot that's designed to go into human-purposed environments and still be able to complete a task.” The reason BD is able to riff on its robot shapes with relative ease boils down to one big thing: repurposed software. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg When you think Boston Dynamics, you probably don’t first marvel at the code that’s running these machines—BD is famous for its hardware. But Raibert takes issue with that characterization. “I think it's a misconception that we're a hardware company,” he says. “The only reason any of our machines do what they do is because of the controls and perception and the systems that coordinate with the hardware. It's just that our hardware is so strong, that's what makes us look like that.” Someone, after all, has to program Atlas to do those backflips. SpotMini needs software to autonomously navigate its world. And two-wheeled Handle needs finely tuned control algorithms to keep from falling on its face. BD works out these algorithms across its platforms. “There's a lot of stuff that flows,” says Raibert. “The next group uses a lot and then creates their own stuff, and then that flows back.” With a cognitive core that's developed over time and shared across platforms, BD has been able to devote energy to honing each of its robots' specialties. In SpotMini's case, it's about becoming an expert at navigating challenging terrain. “When we've been looking at applications for Spot,” Perry says, “we're very careful to screen out tasks we think a wheeled or tracked robot could do even better.” SpotMini is a good match for environments that transition from one terrain to another. “So street to curb, stairs, lips between rooms,” he says. A relatively structured environment like a warehouse, on the other hand, tends to be a great place for a wheeled robot. Clutter can make such places chaotic, sure, but in general the robot can rely on a flat, smooth surface to glide across. In such an environment, wheels are often more efficient than legs: Handle can manage four hours of operation on a charge, whereas with SpotMini it’s more like an hour and a half. And Handle could potentially go even longer. Swinging around Handle’s backside is a counterweight that could hold even more batteries, Raibert says. The previous iteration of Handle had stump arms instead of a single vacuum arm. Also notice that the bulk of the weight is in the torso, whereas the new version has a swinging counterweight on its rear end for balance. Boston Dynamics Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Plus, a human worker can wield Handle as a unique kind of tool. 
“It's also got a mode where it can squat down and you can manually wheel it around,” says Raibert. To be clear, BD doesn’t intend Handle to be a particularly collaborative robot—it’ll likely work in isolation from humans, unloading pallets autonomously while humans take care of other tasks in the warehouse. At least, that’s the plan. Accordingly, Handle is a bit simpler as far as perception is concerned. It’s got one camera to localize itself in space, another for obstacle avoidance, and another looking for the best place to grab a box. SpotMini, on the other hand, “is trying to be a little more general purpose,” says Raibert. “So we have cameras looking in all directions.” With Handle stacking boxes and SpotMini wandering more widely, perhaps inspecting oil and gas operations, Atlas’ destiny might lie somewhere in between. Its legs allow it to stomp over difficult terrain, but its humanoid form might make it better suited to navigating indoor spaces designed for humans. It could one day, for instance, climb ladders, which would befuddle Handle and SpotMini. But all that hardware we’ve been marveling at over the years has been a kind of illusion—sophisticated machinery, to be sure, that obscures equally sophisticated software. With the acquisition of Kinema Systems, BD not only bolsters the software side of things, it can now sell that system for use in warehouse robots it doesn’t manufacture itself. Oh, and it means Boston’s most famous robotics company now has a base of operations on the West Coast. “We'll have machines out there, but they'll be for the development of the applications and perception and software," says Raibert. "Our current plan is to keep the core of the hardware engineering here. We'll see how that evolves.” "
1161
2019
"Meet Blue: The Cheap and Manipulative (in a Good Way) Robot | WIRED"
"https://www.wired.com/story/blue-the-robot"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science Meet Blue: The Cheap and Manipulative (in a Good Way) Robot Philip Downey Save this story Save Save this story Save In a tiny lab at UC Berkeley, next to the whirring 3D printers on the wall, in front of an old Persian-rug-patterned couch, stands Blue the robot. It’s a pair of bulky humanoid arms—only with pincers for hands—attached to a metal stand. Wielding a pair of VR motion controllers, I wave my arms around, and Blue follows me faithfully. It’s my own robotic doppelgänger, kind of like the human-piloted, monster-fighting bots of Pacific Rim , only way cheaper. That’s the beautiful thing about Blue. Research on robots has for decades been hamstrung by extravagant costs—the popular research robot PR2 , a pair of arms not dissimilar from Blue, will set a lab back $400,000. Blue’s reliance on 3D-printed components puts its price tag much lower, at just $3,000 in materials per arm, and the goal is to get the total cost, with manufacturing at scale, to $5,000 per arm. If Blue’s creators have their way, that price point will launch the robot into research stardom, forging a future in which Blue’s descendants do our dishes, fold our laundry, and pick up around the house. And who knows, maybe one day they'll fight giant monsters making a mess of San Francisco. Project Blue Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Historically, if you wanted to operate a robot arm, you had to keep humans far, far away, lest the machine fling them across the room. That’s why industrial robots have been literally kept in cages. But robots have been getting a lot better at sensing their world, in particular reacting to human contact by stopping before they hurt us. This has led to a boom in collaborative robotics, where humans work right alongside machines. “That’s worked pretty well for a lot of existing robots,” says UC Berkeley mechanical engineer David Gealy, who leads the Blue project. “But the challenge is you take an expensive industrial robot, and then you add sensors and feedback control to it and make it even more expensive.” The author pilots and (temporarily) breaks the robotic system by getting in the way of the VR motion sensors. Blue, on the other hand, isn’t particularly sensitive to human touch. Instead, it’s elastic, in a sense. As I pilot the arms around, Gealy can push on them, and the arms give way a bit instead of shutting down. This is because the robot’s relatively cheap motors are “backdrivable,” meaning a human can grab the arms and move them around even when the machine is powered off. 
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Being on the cheaper side, the motors aren’t supremely accurate. Blue won’t hold its own against an assembly robot that has to, for example, put a tiny screw in place over and over. But Blue is accurate enough for the tasks it will need to perform. Those tasks will involve exploring the frontier of how robots grasp, manipulate, and interact with all kinds of objects. “This robot is designed for the assumption that in the future, robots will be controlled much more intelligently by AI systems that use visual feedback, that use force feedback, much like how humans control their own arms,” says UC Berkeley's Pieter Abbeel, a robotics researcher who's overseeing the project. Project Blue Say you want Blue to learn to fold a towel. For a sensitive collaborative robot, that might be a tough task, because bumping into the surface of the table might trigger it to stop. But being particularly flexible, Blue can put force on the table when reaching for the towel without freaking out. This is how we humans do it, and how we want future machines to do it as well: We first eyeball an object, then combine that vision with a sense of touch as we begin to manipulate the object. We don’t bump into something unexpected and then shut down—we adapt and feel our way through the world. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The thing is, being super cautious isn’t ideal for either us or the machines. If you’re afraid of bumping up against the table, folding a towel gets a whole lot more difficult. “If something is totally safe, it's not useful,” says UC Berkeley roboticist Stephen McKinley, Blue's cocreator. “If you think about the environment we live in every day, most of the objects we interact with are not safe unless they're useless. Everything is out there to hurt you if you want to actually fulfill a function.” Bicycles and cars are two obvious examples. The trick with robots is to mitigate that danger, which is a matter of getting them to interact more effectively with the objects in their world. One perk of a $5,000 Blue is that labs could buy several of the robots and run learning tasks on them in parallel, speeding up the rate at which their understanding of the world improves. “Unlike children, where each has to learn their own way, with robots you can have the same brain for all of them,” says Abbeel. One robot might stumble upon a solution quicker than the others, then share that knowledge, making learning that much more efficient. Plus, because Blue is tough, researchers can push it harder than they would a pricier machine that’s more sensitive to the world around it. “The price point is amazing,” says Brown University roboticist Stefanie Tellex. “Like, whoa. It really opens up the availability of manipulator robots to a much broader audience. 
$5,000, that's two laptops.” Roboticists’ gain may eventually be humanity’s gain, if Blue can help push robotic manipulation research forward. Giant monsters in San Francisco Bay, take note. "
1162
2021
"Can the Metaverse Thrive If It’s Fully Owned by Facebook? | WIRED"
"https://www.wired.com/story/gadget-lab-podcast-518"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter WIRED Staff Gear Can the Metaverse Thrive If It’s Fully Owned by Facebook? Photograph: Getty Images Save this story Save Save this story Save The metaverse. A simulated world, controlled with inputs from our reality to merge cyberspace and meatspace into one plane of existence. If this sounds like a sci-fi fantasy from the early ’90s, that’s because it is. But now Facebook is trying to make the metaverse a reality. The company has been exploring AR and VR tech with the goal of manufacturing a virtual experience that allows users from all over the world to interact in a shared dimension. So far, the most promising metaverse concept the company has shown off is a VR conference room for business meetings. Not super exciting, folks! However, Facebook has demonstrated that its tech has the potential to reframe how we interact in the future—provided we all use Facebook headsets and apps from the Oculus store to meet up within the confines of Facebook’s platform. This week on Gadget Lab , we talk with Peter Rubin, WIRED contributor and author of the book Future Presence, about Facebook’s grand vision and whether an open, platform-agnostic version of the metaverse will ever fully materialize. Read Peter’s story about Facebook’s Horizon Workrooms. Also, his story about the metaverse in Ready Player One. Peter’s book, Future Presence , is now out in paperback. Read Lauren’s story about Facebook’s wrist wearables. And Gilad Edelman has a take on cargo pants , obviously. Peter recommends the show Reservation Dogs. Lauren recommends taking a staycation, because you deserve it. Mike recommends Peter’s newsletter, The Peter Principle. PeterRubin can be found on Twitter @ provenself. Lauren Goode is @ LaurenGoode. Michael Calore is @ snackfight. Bling the main hotline at @ GadgetLab. The show is produced by Boone Ashworth (@ booneashworth ). Our theme music is by Solar Keys. If you have feedback about the show, or just want to enter to win a $50 gift card, take our brief listener survey here. You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed. Michael Calore : Lauren. Lauren Goode : Mike. MC : Lauren, have you ever visited the metaverse? Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft LG : Yeah, I think so. 
I think there was this time when I was meeting with a Microsoft executive in a HoloLens 2 headset, and then I had to switch between that in the HP Reverb G2 VR headset, which was connected to some giant high-powered PC. I walked into my kitchen counter and I was like, "I think I just hit the metaverse." That sound right? MC : Yeah, that sounds good to me. I'll take it. LG : Yay. [ Gadget Lab intro theme music plays ] MC : Hi, everyone. Welcome to Gadget Lab. I am Michael Calore, senior editor at WIRED. LG : And I'm Lauren Goode. I'm a senior writer at WIRED. MC : We are also joined today by WIRED writer Peter Rubin. Hello, Peter. Welcome back to the show. LG : Hey, Peter. Peter Rubin : Hey, guys. It is great to be here again. MC : Peter, we have you on because, yes, we are talking about the metaverse and we are talking about VR, and you've written a book about VR. It's called Future Presence: How Virtual Reality Is Changing Human Connection, Intimacy, and the Limits of Ordinary Life. How did I do? That's the full title. PR : You did great. And it's out in paperback now too, and there are additions of it all over the world. So even if you're listening to this in Korea or anywhere else, you can get a copy. LG : That doesn't sound very hig- tech. Paperback, what's that? PR : I know. There's audio and there's an ebook too. MC : Peter used to be an editor at WIRED, but even though he has moved on from our virtual four walls, he is still a regular contributor to WIRED and a regular guest here on the show, so it's good to have you, man. PR : Oh, man. It's so great to be back. I was just telling you before we started rolling, I miss our knees bumping together under the table and the too-small studio that we used to use to record this. MC : And sharing our lung juice. PR : And sharing, as Lauren put it, our lung juice, which— Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft LG : I have to give Alan Henry credit for that from our WIRED team. He's the one who first said “lung juice” at one point, and now I just cannot get it out of my mind. PR : Even if that had been coined in 2019, it would have been gross, but now it's almost too much to take. MC : Doubly gross. Well, we could be recording this in person, but instead, we are recording it virtually. We're all in our own spaces right now, which is sort of fitting for today, because we're talking about virtual reality in the workplace. It sounds really boring, but stay with me here. A few days ago, Facebook showed off a new beta VR experience called Horizon Workrooms. It's a combination of virtual- and augmented-reality technology that lets you interact with both the real world and a simulated environment at the same time. It sounds cool, but it's for meetings, so it's sort of like Ready Player One if Ready Player One took place entirely in an office conference room with PowerPoints and whiteboards. But Facebook's new VR experience is exciting because it melds the real world with the virtual world in new and interesting ways. It's an idea that hints at new types of human-computer interaction that proponents have dubbed the metaverse. 
And later on in the show, we're going to get back to the metaverse and we're going to talk about exactly what that means, and why there's so much hype attached to that word. But before we get meta, I think we need to hear all about Facebook's demo. So, Peter, you jacked into the Zuckerverse. Tell us about it. PR : I did, which sounds a lot more legally actionable than it is, thankfully, and I'm sure William Gibson would thank you. I like how that phrase became obsolete so long ago, but we can't stop ourselves from using it. Yes, I jacked in last week, and what that involves, this only runs on the Quest 2 headset, which of course is the most recent all-in-one headset that Oculus and Facebook have been selling. So, the first order of business is like the original Quest. The Quest 2 lets you sort of define a place, based on wherever you are using the hand controllers, but instead of drawing out a space on the floor, it asks you to trace the outline of your desk, the front edge of your desk, so it gets a sense of the width of your desk. It asks you to clear most things away from your desk and then to sort of put your laptop open in front of you. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft And then if you are using either a MacBook Pro or the trackable Logitech keyboard that became available earlier this year, it will actually show you a virtual sort of simulacrum of the keyboard, so you can reach out and touch-type if you want to. You're looking down, you're at the VR version of a desk, it pairs with your laptop, so you see the screen of your actual computer superimposed on your virtual environment. Nice and big, as big as you want it, and it's super readable because the display and the Quest 2 is kind of ridiculously good, especially compared to earlier generations of VR. And so if you have one of these trackable keyboards, you see it, and when you reach out … The other thing I should point out is the Quest and the Quest 2 work, not just with hand controllers, but with hand tracking, so the sensors that are on the outside of the headset can actually see your hands and space really well, the detail of each individual finger, and you can use gestural controls in space to select things in the usual sort of a UI of the Quest ecosystem. So, instead of having your controllers in your hands as you traditionally would in VR, you can just reach out. It sees your hands, and when they get in range of the trackable keyboard, that's when the AR overlay comes into place. And so what you end up seeing is kind of a ghostly, gray version of your real hands. The pass-through cameras kick on, and you see them hovering over this virtual superimposed keyboard, and so it allows you to touch-type in VR while still seeing your keyboard. And of course, because your computer screen's in there with you, you can kind of work as you normally might. Now, I was using a MacBook Air, and so it was millimeters off. And so there were ways that I had to kind of rely on muscle memory instead of taking the tracking for granted, but it did sub in really well. So, that's the kind of home office setup of Horizon's Workrooms. And then you go into a meeting room, and that's when the collaboration and the real fun begins. 
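Meta hasn't published the compositor logic behind this, but the proximity gating Rubin describes — ghostly pass-through hands fading in as they near the tracked keyboard — comes down to a distance test against the keyboard's tracked bounds. The sketch below is an assumption-laden illustration: the function names, thresholds, and the tracking calls in the comments are all made up for the example.

```python
# Not Meta's code -- a sketch of the proximity gate described above: fade in the
# camera pass-through layer when any tracked fingertip nears the keyboard's box.
import numpy as np

def passthrough_opacity(fingertips_m, kb_center_m, kb_half_extents_m,
                        near=0.04, far=0.20):
    """Return 0..1 overlay opacity from the closest fingertip's distance (meters)
    to the keyboard's axis-aligned bounding box, in the headset's frame."""
    pts = np.atleast_2d(np.asarray(fingertips_m, dtype=float))   # (N, 3) fingertips
    offset = np.abs(pts - np.asarray(kb_center_m)) - np.asarray(kb_half_extents_m)
    dist = np.linalg.norm(np.maximum(offset, 0.0), axis=1)       # 0 inside the box
    d = float(dist.min()) if dist.size else np.inf
    # Fully visible inside `near`, fully hidden beyond `far`, linear in between.
    return float(np.clip((far - d) / (far - near), 0.0, 1.0))

# Each frame (hypothetical tracking API):
# opacity = passthrough_opacity(hands.fingertip_positions(),
#                               keyboard.center(), keyboard.half_extents())
# passthrough_layer.set_opacity(opacity)
```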
Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft LG : And what was it like interacting with other people in this space? Since you took this briefing with other journalists, and I think with Mark Zuckerberg himself, right? PR : Yeah. It was a room full of people, and it felt very much like a room full of people. There were probably six or seven journalists there. Andrew Bosworth, who's the VP of Facebook Reality Labs, was there. Mark Zuckerberg started on a video screen because he was on a video call and then he put on a headset, and so he came in as his avatar, and there were some other folks there as well. I saw a lot of this when the news came out, because of course I did. People were like, "Why would you want this? We already have Zoom." I made this point years ago in the book Mike mentioned, Future Presence. Now, we're so used to Zoom meetings at this point, but if I want to look as though I'm looking at you, I look into a camera. And if I want to look at your faces in front of me on the computer screen, you're going to look at it as my eyes are looking down, unless you have a camera that is kind of hung down into the middle of your monitor. You're never going to make any sort of simulation of eye contact. So yeah, we're sort of here, but we're never looking directly at each other, not once. It's impossible to do. We can't make eye contact on a Zoom call. That is one of the major things that VR overcomes. When you're in there and you make eye contact with somebody, you're really making eye contact with somebody. And of course like the fidelity of the hand tracking means that all of our mannerisms come into play as well. So you see someone's head moving around, their avatar's head is moving around as it does in real life, and you see their hands moving around just like they do in real life. It's because they are happening in real life, but they're translating into VR, which makes their avatar feel incredibly, incredibly real. Fidelity of their facial expression aside, we can abstract out the way people look and we can recognize them as long as they have the distinguishing features, but if you hear someone's voice, and their very personal, idiosyncratic movements are coming through as well and you're making eye contact, your brain kind of leaps past the fact that you're talking to cartoon versions of each other, and the conversational dynamics are a hundred percent as though you're actually in a room with that person. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft MC : This is something that you've referred to in your story that you wrote for WIRED about this experience and previously as “social presence,” is that right? PR : It is. So presence by itself is sort of ... it's short for co-presence, as the absolute bedrock foundation of a realistic VR experience. It's when your brain accepts all the stimuli that you're getting and your reactions kind of follow from that. 
So, as long as you're fooled enough by the fidelity of the visuals, or the way things sound, or the way people appear to move, your body begins to respond as though you're actually there. So social presence is just, "Do your movements and mannerisms come across in such a way as to give a person a sense of presence, not just being in the world of VR, but of being there with the other person?" LG : And then your most recent dispatch from this Facebook beta app presentation, you wrote that it's not exactly creepy, but it's also not not creepy. Explain this. PR : Well, it's very creepy when something goes wrong. And in a multi-user environment, that layers in so many other things, the way Horizon Workrooms does. And if you think about it, there's the sort of AR thing of being able to see through your hands when you're taking notes at this meeting. You've got your laptop paired, and so does everybody else who's in the meeting. You've got spatial audio for all the people in the meeting, you've got the environment itself, and then you've got these other sorts of capabilities, like you can use a hand controller, turn it upside down, and hold it like a pen and doodle on your desk, and what you are doodling will show up on the whiteboard in the meeting room. So, there's sound, there's visual, there's all these paired devices, there's AR pass-through. It's a huge pipeline of information that every Quest has to sort of process and come together. All of which is to say, social presence is kind of balancing on a knife edge, right? If one thing goes wrong, and in this case, that one wrong thing was when Mark Zuckerberg came into the room, his mouth wasn't moving when he was talking, which vaulted you so far into the depths of the uncanny valley. His avatar has big unblinking blue eyes and a haircut just like his. And so it already feels like you're dealing with a cartoonish version of this person whose image you have seen day after day in the news for years. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft So not only is it a cartoonish version of that, but then something goes wrong and his mouth isn't moving. And meanwhile, his hands are moving and his head is moving and you're hearing him. I think in the piece, I said it was like a Hummel figurine was trying to explain the metaverse to you. It really doesn't take much to go from feeling like you're there, to feeling like you're there and also being really sort of repulsed by what you're seeing. That's the body's response. LG : So these avatars … On the upside, they can offer us a means of endless personal expression. And I think that the idea of this metaverse that we're going to talk a little about more is that there are these infinite spaces in which app developers can build apps, right? It's this endless creative space, but Facebook has also been pitching this whole idea as the "Infinite Office," which honestly makes a lot of people's skin crawl, including my own. So, what does Facebook mean by this? And does anyone want an Infinite Office space? PR : Yeah, so I think there's two ways to hear that phrase, right? One is late-stage capitalist dystopia of, no matter where you go, your office follows you. 
And I think the way they are thinking of it as … I'm maybe overly generous right now, let me just preface what I'm about to say with that. This is— LG : Right, and let's also add to that. We also are kind of living in an Infinite Office space, right? PR : Exactly. LG : Our phones are the Infinite Office. PR : A hundred percent. And that's the other thing too, there's this sort of knee-jerk response to everything that Facebook announces about VR with, "No thanks, uncle Mark," or 1984. You are being surveilled all the time, and your fears about a Facebook account being tied to this, or that it feels like you're just choosing one hill of many to die on here, which feels a little disingenuous. But Infinite Office, I think in their view, is leveraging this combination of AR and VR to be able to work anywhere you choose to. It's not work following you. I think there's a more maybe valid concern about if we're working more in VR, then the sort of attentional and psychological profiling of our time spent in there is more invasive, but the Infinite Office, I think in their view, is very much, "I don't need to bring a desk somewhere to feel like I am at a desk. I can sit down at a coffee shop." Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft And Lauren, you wrote about it when Facebook Reality Labs unveiled a lot of their AR roadmap, using the electrical impulses from your forearm to be able to type and control using micro movements in space. So, this idea, if you want to extrapolate what we saw with Horizon Workrooms, being able to set up an imaginary keyboard anywhere, have a monitor maybe pair to your phone, if not your laptop appear in space. Your laptop can stay in your bag, you can be sitting at an empty table, and you can get whatever work done that you want to, and then jump into a meeting space with other people, and it's basically the Infinite Office. The only tether is as long as you've got a data connection, you should be able to simulate all the affordances of what you have come to think of as your decentralized remote work life. MC : Real quick before we take a break. You mentioned briefly that people might feel uncomfortable having their Facebook identity connected to a virtual meeting, where they're sharing all kinds of sensitive information that they may not want Facebook to know about. So, has the company put in any controls to protect user privacy and data in Horizon Workrooms? PR : Not in a way that I think has been fully vetted yet.? They have a terms of service and they have agreements, and they're very, very clear that when you pair your laptop to your headset, everything stays local. None of the processing goes anywhere, but between your headset and your laptop. Nothing gets shared, that nothing that happens in a meeting or when you're using the app is given to anyone, definitely not third-party developers. The information isn't made available to anybody. I think whatever Facebook does get for processing purposes is anonymized. So they do a lot in the TOS to really assure you that security or at least privacy is recognized as a concern. Now, that's one part of it, but the other part is, a lot of people work for companies where you're on a proxy connection all day long, right? 
A lot of companies, especially ones that are used to working in a distributed fashion and work with sensitive data, have all these things in place that I can't imagine would play nicely with the need to pair your computer, bring it into a virtual, a shared virtual space, and then jump into a meeting with other people tied to your Facebook login. That said, a lot of these same secure companies are still using Zoom and still using Google Meet. So it stands to reason that there will be a way to make that work, but I think at least in this beta stage, Facebook is just trying to convince people, if not possible business clients or business users, that they're not here to hoover any data. They're just here to let you collaborate in a virtual space. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft MC : All right. Well, thanks for that. We're going to take a break right now. And when we come back, we will step into the metaverse. [ Break ] MC : Welcome back. So, Facebook's Horizon Workrooms is part of a broader pitch around a word you've probably heard a lot lately. The “metaverse.” It's not a new term, but recently, a combination of widely available technologies like movement-tracking sensors and VR headsets, and compute power that can handle all these heavy apps, have pushed the notion of the metaverse into the forefront. And of course, a lot of technologists are sharing their visions for what the metaverse will be, partly because they want to capitalize on it. So, Peter, I'm going to put the question to you. What the hell is a metaverse? PR : Well, I'm amazed I didn't hear this from you guys at the top of the show, but the only correct answer to that question is, I never met a verse I didn't like. LG : Oh, no. PR : I am here all week. LG : I appreciate the dad joke, Peter. I appreciate it. PR : That's what I'm here for. I also wrote about puns at WIRED, so it's not exactly off-brand. You know, what's interesting about this is people have been talking about the metaverse since the rebirth of VR, but it didn't become this kind of gross-feeling buzzword until Mark Zuckerberg started giving interviews talking about it. And I think rightfully, you had people who've been working in AR and VR being like, "Why now? Why now did the drum get loud, and why now did it become this sort of shortcut term?" I mean, so the idea of the metaverse really goes back to Snow Crash , and a lot of people like to point to Ready Player One too. It's really just this idea of a universe of realities, right? It is a way to go from real-world, to virtual, to AR, anywhere on the continuum and the permutation of those three things together. But what's important about the metaverse that seems to be at odds with the way Mark Zuckerberg is talking about it is, the metaverse by nature can't be walled, right? It has to be built on an open framework. In a true metaverse you should be able to jump from a meeting in Horizon Workrooms to something that's hosted on another platform, whether it's VRChat, or Rec Room, or what have you. But because of the way the corporate world works, everyone wants their pocket metaverses, to borrow a term from fiction and comic books. So this is the problem, if you're going to have walls around it, it's not a metaverse. 
A metaverse has to be basically an internet of reality, but as we've seen with the internet from 15 years ago until now, it's been kind of irredeemably and fundamentally changed by the need to monetize the thing that you built that uses it. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft A few years ago in WIRED, I made this very point. If you want a metaverse, you have to drop these walls. Facebook is in this amazing situation of having limitless resources to throw at it—which they are doing, because they know what the upside is there. They're making everything a viable product before anybody else can. Their research is way ahead of what anybody else can do, because of who they've been able to hire and how much time and money they've been able to put into it. On one hand, it's amazing that they are productizing and concretizing the promise of VR in ways people haven't been able to, but on the other side, they aren't building the metaverse, they are building Facebook 2.0. I take issue with that word being used so loosely, and with everybody now, everybody and their board of directors calling it a metaverse. A metaverse should be as utopian and borderless as the internet once was. LG : Just to be clear, the metaverse is not new infrastructure, right? It's not new networking. It's a layer that exists over our existing internet infrastructure? PR : I think our existing internet structure is the foundation on which it is built. And this is where I run into a wall, because there may be sort of practical components to this phrase that I'm not deeply versed enough in, but I mean, you have people like Philip Rosedale who cofounded Second Life and founded High Fidelity after that, which a year or two ago pivoted to become sort of enterprise only, but it had been beating the drum to make this an open framework. There've been a lot of people in the AR and VR world who have been beating the drum to make this an open framework. I'm going off an imperfect memory here, but there have definitely been symposia and conferences devoted to this idea, that if the metaverse is going to exist, it has to be open. MC : And I think we've seen this pattern before, because when the social web first launched 15 years ago, it was all based on XML data streams, and anybody can code an app against any data stream coming from any social network. And then what happened is all the companies … Well, I mean, it was very complicated what happened, but one of the big things that happened is a lot of the companies building those open tools just got purchased by Google or Facebook, and then got subsumed, and their technology just got used for proprietary platforms. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft PR : Yeah, that's exactly right. It's this oligopolistic approach to the metaverse that I don't think serves anybody but the oligopolies. MC : Yeah. And you can't have true interoperability if you're talking about one company making the hardware and building the app store. 
PR : Yeah. I wrote an essay for WIRED a few years ago—this was right when Ready Player One , the movie, was coming out. And the metaverse in there is an acronym called the OASIS. And the point that I was making is, everybody wants an OASIS, but all we have now are a series of puddles. And I, of course, turned puddle into a tortured acronym, just like OASIS was. So, this idea that we have all of these kinds of smaller, shallower versions of this rich, deep ideal that we've been dreaming about for 30-plus years, but in practice it's falling prey to exactly what you're saying. LG : What I hear you describing, Peter, is the threat of a closed universe when something is actually supposed to be quite open and inclusive. And that makes me wonder about the cultural impact of this too, because when I think about some of the names we've just mentioned in this conversation, I mean, Neal Stephenson, Ernest Cline, who wrote Ready Player One , Philip Rosedale, who founded Second Life , Mark Zuckerberg, Andrew Bosworth, John Hanke from Niantic has weighed in on this, two of the most prominent analysts who have written about the metaverse are Matthew Ball and Ben Thompson. Now, I'm going to assume our listeners are pretty smart here and I don't have to— PR : All of these things are exactly like the others, is what you're saying? LG : Right, right. But I'll draw the line for you in case you haven't picked up on this: It's all men. What does it say from a hegemonic perspective, that the people who are at the forefront of imagining the metaverse and analyzing it and are seemingly being most vocal about it are men? And by the way, there are a lot of brilliant women and nonbinary inventors and VR developers out there, who I've also had the pleasure of speaking to for my job, but it just seems like there's a certain group that's really dominating the conversation about the metaverse right now. And I wonder what that implies for how inclusive this new layer of the internet will actually be. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft PR : Yeah. One of the things that made me so hopeful when I first wrote the book Future Presence , and this was like three years ago, it was before the Rift had come out, was there was a chance to build this thing without falling victim to the thing that had doomed the internet, which was that everybody who became an architect was coming from a standpoint that didn't necessitate that they consider the needs of others. Let's say it that way. What you're saying is kind of the sort of hegemonic potential of this thing. And we saw that in the way social networks grew and evolved, we saw that in the problems that became intractable issues that plagued these social networks. And the hope was that as these first social worlds were beginning to be built in VR, people had seen this movie before and they didn't want it to play out the same way. And so there was a lot of thought given to user safety and inclusion. 
But fast-forward, that even in these smaller social worlds, let alone the massive companies and the sort of thinkers and pundits who are furthering a lot of this conversation—you mentioned Matthew Ball, who was a VC but also thinks really deeply about this stuff, or science fiction writers—that they are sharing an unfortunate degree of monolithic, at least demographic identity, right? That's not to say that they're not empathetic people, and that is not to say that they don't want VR to solve the problems of the internet or build itself without the problems that crept into the internet, but sometimes the food tastes the way it does because of the cook, right? It just is that way. Like you said, there have always been women and folks of color in the AR and VR space who have been building and creating and advocating for those considerations. And a lot of them are working at the companies that we're talking about, too, which is also helpful, but where it comes down to it, who's on camera talking about this? Who are the voices that are being quoted in the pieces about this? Who are the executives that are making the presentations to other executives about this? That's where the rubber meets the road; we're talking about something that has the potential to be so much more than it's sounding like, because it doesn't have a need to sound any different than it did the first time. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft MC : Well, Peter, thank you for that delightful and informative conversation. I'm not just saying this to make you feel good, but I completely understand what metaverse is now and what the stakes are. PR : I'm glad you do, because I don't think I do, but it was great to be here and take a stab at it. LG : Thank you for taking this meet-a-verse with us, Peter. Ahh, I stole that from your story. I cannot take credit for that. MC : Nice. PR : I'm sorry about that. MC : All right. We're going to take a break. And when we come back, we'll go through our recommendations. [ Break ] MC : Welcome back. This is the final part of our show, where we all recommend things that our listeners might enjoy. And Peter, you've done this before. What is your recommendation? PR : My recommendation is for a television show. It's made by FX, so I think it is exclusively on Hulu. It's called Reservation Dogs. I think there's a real propensity these days to describe a show like, "Oh, it's like Atlanta , but x." Years ago, there was this trend that every movie would get pitched. It was Speed , but x. Ever since Donald Glover made Atlanta , it's become a really handy measuring tool. Dave came out, and people were like, "Oh, it's like a white Atlanta. " I think with Reservation Dogs , which follows a group of native teens on a reservation in Oklahoma, I would be surprised that people aren't saying, "Oh, it's like a native Atlanta. " Not the case at all. It's made by a writer-director named Sterlin Harjo, and it's almost an entirely native cast, writing team, directors, all that. It's really, really good. It's funny and it's touching. And it's also the second comedy to come out in the past year that is sort of about and starring native life. The first was Rutherford Falls , which was exclusive to Peacock. 
It was an NBC show that ended up on Peacock. I watched it and I didn't really connect with it. Reservation Dogs really has me. The pilot is a little uneven, but from episode 2, it's the show that I look forward to most, by far, each week, and I watch a lot of TV. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft MC : That's great. PR : That's it. Reservation Dogs by FX on Hulu. MC : Nice. Pilots are always a little bit uneven. I feel like you got to get to episode 4 before you really feel what the show is actually like. PR : But I don't like telling people, like, "OK, but you gotta sit through three and a half hours, and then it really picks up." No, no, no, no, no. Watch the first half hour. It's good. It's just that the show hits another level from that point on. MC : It's not like anybody's going anywhere. I mean, come on. PR : Yeah, but if you tell me it's going to take me two movies worth of enjoyment before I get to a payoff, then I'm out. MC : See, I'd be in. I'd be like, "Ooh, really?" PR : Oh, it's a slow burn? OK. MC : Lauren, what is your recommendation? LG : My recommendation is very Gilad-like. For those of you who have listened to prior episodes of Gadget Lab , you know that when Gilad Edelman joins us on the show, he usually has a big lead up to his recommendation. He's like, "This is going to change your life," and then all of a sudden he recommends sliced lemons or unbuttoning the top button of your shirt. My recommendation is staycations. I think staycations are great. I've taken two this summer. I stayed right here in the great state of California, which yes, it's still a great state despite our insane gubernatorial recall election that shouldn't be happening and the wildfire smoke. And yes, we have problems here, but I have had a wonderful time just staying nearby this summer and exploring local vacation spots and I don't know, staying close to home and just feeling like … There's something really nice about not totally disrupting your flow, kind of like what you do in your every day, or changing your time zone or having to go to the airport and things like that, and having to pack a bunch of stuff. There's something really nice about just appreciating what you have nearby, and getting to do the things that you always say you wish you could do if you just had a little bit of time off from work or family obligations. And so I highly recommend just enjoying the staycation as much as you can. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft PR : Big staycation fan. MC : What are your guys' philosophy on staycations? Because I just use mine to catch up on chores. LG : Oh, Mike. Oh, Mike. MC : I'm like, "Ooh. I had three days off. I could bring so much crap to Goodwill right now." LG : I mean, yes. That's part of it. PR : I think one day of this … Yeah. LG : Yeah. PR : One day of a staycation, it's good to get a bunch of stuff done. I mean, I took a two-and-a-half month vacation that I'm fresh off, so I'm a big believer, right? 
I tried to keep the shape of my day the same, but walks and changing things up. I will say that was a lot better than Gilad's last recommendation, which was cargo pants. So highly support this. I think you have iterated and improved upon Gilad's recommendation process. LG : But you can use your staycation to take your cargo pants to Goodwill, if you're over them. PR : She tied it all together. That was amazing. Nicely done. MC : Professional. LG : Mike, I highly recommend the next time you take a staycation, I would say at the max, two days should be spent on getting stuff done around the house or in your local neighborhood, and cleaning, and picking up prescriptions, all those things you have to do. And then, really, you should at least get two or three days equivalent or more of just checking out a hiking spot or picnic spot, or a lake, or a beach, or something that we have access to here, and you haven't been able to explore. MC : Vipassana. LG : Yes. Yeah, do a little silent meditation. Yeah, why not? All right, Mike, what's your recommendation? MC : Well, I would like to recommend Peter Rubin's newsletter. PR : Oh my God. LG : And now, we tied it all together. MC : Yes. PR : This truly is the best podcast episode. MC : Peter Rubin started a newsletter not too long ago. It's currently on Substack. It's called The Peter Principle, and I love the premise of this, because you talk about other Peters in your life, right? Give us some examples of the other Peters who you have talked about in the newsletter. Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft PR : Well, they're not in my life directly, but these are all Peters who are kind of intimately connected to an obsession or a pattern in my life. So, on past ones, I've done Peter Mayhew, who played Chewbacca, which was just sort of a way to talk about Star Wars and storytelling, and why it blew so many kids' minds when the movies came out. Peter Tosh, who is a favorite reggae artist, I think, of mine and yours both, Mike. And then there are also some really contorted back doors just to be able to talk about things I love. So, one of them was Peter Hickok who was, I believe, a costume designer on the Netflix comedy show, sketch show I Think You Should Leave. He was the only Peter involved with the production at all, and that just took an IMDb search, because I refuse to accept that I wouldn't be able to write about that show. MC : Right. I love this that you use these Peters as windows into your own life experience, and it's very touching and it's fun because it also exposes you to things that you normally would not have heard of. If you're not a big reggae fan, you wouldn't know about Peter Tosh, necessarily. If you're not a fan of excellent sketch comedy on Netflix, you probably wouldn't know about I Think You Should Leave , so it's kind of a nice way to learn about things. Props to you, first of all, but also, listeners, you should subscribe to it because it is a delightful newsletter, and newsletters are the new blogs. We love newsletters, so we like to tell people to subscribe to them, because the good ones are free. Yours is free. PR : It is free, and it will forever remain free. MC : Love that. 
LG : But that doesn't mean that paying for good stories is bad by the way. You should subscribe to WIRED if you're listening to this. PR : I do. I write for WIRED and I still subscribe! MC : And you should also give money to all those hardworking Substackers out there who are charging money for their newsletters, but also you should subscribe to the free ones too, because why not? Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft LG : Yes. Yeah. MC : What do you have to lose? LG : Yeah, just do it on … principle. PR : Hey! MC : Hey now! So, it's peterprinciple.substack.com. There's no “the,” it's just peterprinciple.substack.com, so check it out. PR : I'm glad you looked that up because I would have gotten it wrong, which is sad. MC : I am a professional man. Well, thank you, Peter Rubin for joining us this week. The book is called Future Presence. It is out in paperback now in your local language. Thanks for being on the show. PR : Thanks. It was great to see you guys. I had no idea it was going to become a shill hour for my projects, but I am not mad about it. Great to see you guys. LG : Great to see you, Peter. MC : And thank you all for listening. If you have feedback, you can find all of us on Twitter. Just check the show notes. This show is produced by Boone Ashworth. Goodbye, and we will be back next week. [ Gadget Lab outro theme music plays ] 📩 The latest on tech, science, and more: Get our newsletters ! A son is rescued at sea. But what happened to his mother? Without code for DeepMind's AI , this lab wrote its own Shopping for a router sucks. Here's how to choose one Twelve Minutes is a diabolical dive into the human psyche What Airbnb's boost reveals about Covid-19 recovery 👁️ Explore AI like never before with our new database 🎮 WIRED Games: Get the latest tips, reviews, and more 🎧 Things not sounding right? Check out our favorite wireless headphones , soundbars , and Bluetooth speakers Topics Gadget Lab Podcast podcasts Facebook VR augmented reality Oculus Metaverse Simon Hill Jaina Grey Adrienne So Simon Hill Jaina Grey Eric Ravenscraft Brenda Stolyar Reece Rogers WIRED COUPONS TurboTax Service Code TurboTax coupon: Up to an extra $15 off all tax services h&r block coupon H&R Block tax software: Save 20% - no coupon needed Instacart promo code Instacart promo code: $25 Off your 1st order + free delivery Doordash Promo Code 50% Off DoorDash Promo Code + Free Delivery Finish Line Coupon Take $10 off Your Order - Finish Line Coupon Code Groupon Promo Code Groupon promo code: Extra 30% off any amount Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. 
"
1,163
2,019
"Facebook's Logo Gets a Face-Lift | WIRED"
"https://www.wired.com/story/facebook-logo-brand-facelift"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Arielle Pardes Business Facebook's Logo Gets a Face-Lift Facebook, the company, is much more than just Facebook, the app. Photograph: Michael Short/Bloomberg/Getty Images Save this story Save Save this story Save In the 15 years since Mark Zuckerberg created Facebook , the platform has undergone more than a few costume changes. It’s grown from dorm room hijinks to measure the relative hotness of Harvard undergraduates to the online pulpit of American politics. When Facebook filed to go public in 2012, Zuckerberg explained that Facebook was never meant to be just an app, or even just a company. Instead, it was built to do something much more ambitious: “to make the world more open and connected.” The scorecard on that mission is checkered. But, today, at least one thing is clear: Facebook, the company, is much more than just Facebook, the app—and it wants you to know it. Facebook today introduced a brand redesign that will extend across the company’s many products, like a set of matching outfits for a family portrait. The Facebook logo now shines with new typography and an “empathetic color palette” —pink for Instagram, green for WhatsApp—that features more prominently across Zuckerberg’s vast dominion. Instagram and WhatsApp will now tell you they’re “from FACEBOOK,” newly in all caps, as if shouting to remind you who’s in charge. Courtesy of Facebook Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg “The new branding was designed for clarity,” writes Antonio Lucio, Facebook’s chief marketing officer, in a blog post introducing the new designs. Another blog post on the company’s design hub goes into more detail about how the team used “custom typography, rounded corners, open tracking and capitalization to create visual distinction between the company and the app.” This apparent faith in the world-changing power of a good font will be familiar to anyone who has ever read a design brief. “The subtle softening of corners and diagonals adds a sense of optimism,” the reader is told, although it isn’t specified what we’re to be optimistic about. All of these design tweaks add up to one clear takeaway: Facebook is more than just Big Blue. It’s the social technology that rules your life, from WhatsApp and Messenger and Instagram and Threads and Oculus and Portal and Workplace. Soon, if the company can navigate its way through a maze of regulatory and public perception challenges, it may also include Calibra, the digital wallet for its new cryptocurrency, Libra; one day, it could even include a Facebook-branded brain-computer interface. In the future, who knows what else Facebook will swallow up. Whatever it does, you won’t forget it was built by Facebook. Do you have a tip about Facebook's kerning decisions? Email Arielle Pardes at [email protected]. 
WIRED protects the confidentiality of its sources, but if you wish to conceal your identity, here are the instructions for using SecureDrop. You can also mail us materials at 520 Third Street, Suite 350, San Francisco, CA 94107. Zuckerberg has referred to his empire as a “family” of apps for years , but recently, after a difficult couple of years at the company , those familial ties seems tighter than ever. The rebranding follows Facebook’s plan, from January, to integrate its various messaging services on the backend, which would stitch together communication on Messenger, WhatsApp, and Instagram. This summer, the company furthered this assimilation by adding the Facebook name to more of the products it owns. Instagram became “Instagram from Facebook,” like a designer collection sold exclusively by a big box department store. It was curious timing for a company that is currently facing several separate antitrust investigations, from the US Department of Justice, the Federal Trade Commission, and 47 attorneys general across the United States. Presidential candidate Elizabeth Warren has made unwinding Facebook’s various acquisitions a major part of her platform. Even Chris Hughes, Facebook’s cofounder, has called for regulators to break up the company, and has launched his own fund to support academic research and policy on antitrust matters. Labeling the Facebook-owned apps and adding more cross-platform integration doesn’t make Facebook seem like less of a monopoly. It makes Facebook seem bigger than ever—and now with open letterforms and capitalization! But the rebrand also continues a kind of transparency that Facebook hasn’t always prioritized. Consider the push to #deletefacebook earlier this year, after which many migrated their social presence to Instagram, perhaps without realizing that their platform overlords remained the same. Now that connection is being slapped across products in all caps and bright colors. FACEBOOK. The grand unification of Facebook’s products might serve as a reminder of all the ways it’s lapped up its competition and combined it into one massive communication stew. But it also signals how Facebook is trying to move forward with its family of products, as one company under one design. That’s especially important as the Facebook app itself stalls in growth and the company struggles with its reputation. Make no mistake, it is still minting money. But Facebook’s future is especially reliant on the likes of Instagram, WhatsApp, and whatever else comes next. Where the new “from Facebook” language once appeared quietly, subtly at the bottom of the Instagram app, it now shows up in all caps, in a font that makes it impossible to ignore—a visual representation of the idea that Facebook, the company, is only getting bigger. The shady cryptocurrency boom on the post-Soviet frontier A new Crispr technique could fix almost all genetic diseases The quest to get photos of the USSR's first space shuttle The death of cars was greatly exaggerated Why one secure platform passed on two-factor authentication 👁 Prepare for the deepfake era of video ; plus, check out the latest news on AI ✨ Optimize your home life with our Gear team’s best picks, from robot vacuums to affordable mattresses to smart speakers. Senior Writer X Topics Facebook Morgan Meaker Reece Rogers Paresh Dave David Gilbert Kari McMahon Nelson C.J. 
"
1,164
2,019
"Facebook Will Crack Down on Anti-Vaccine Content | WIRED"
"https://www.wired.com/story/facebook-anti-vaccine-crack-down"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Louise Matsakis Business Facebook Will Crack Down on Anti-Vaccine Content Yana Tatevosian/Getty Images Save this story Save Save this story Save As Clark County, Washington, combats an ongoing measles outbreak , Facebook announced Thursday that it’s diminishing the reach of anti-vaccine information on its platform. It will no longer allow it to be promoted through ads or recommendations, and will make it less prominent in search results. The social network will not take down anti-vaccine posts entirely, however. The company also said it was exploring ways to give users more context about vaccines from “expert organizations.” The decision was widely anticipated: Facebook, along with YouTube and Amazon , has faced criticism from journalists and lawmakers in recent weeks for allowing vaccine misinformation to flourish on their sites. Facebook also told media outlets in February that it was looking into how it should address anti-vaccination content. Last month, Adam Schiff, a Democratic representative from California, sent letters to the CEOs of YouTube and Facebook demanding they answer questions about the spread of anti-vaccine information on their company’s platforms. He followed up with a similar letter to Amazon CEO Jeff Bezos last week. On Wednesday, an 18-year-old from Ohio testified before the Senate that his mother primarily received misinformation about vaccines on Facebook and opted not to inoculate him. (A major study released Monday found no link between the MMR vaccine—which protects against measles, mumps, and rubella—and autism.) In a blog post written by Monika Bickert, Facebook’s vice president of global policy management, Facebook said it will begin rejecting ads that include false information about vaccinations. The company also removed targeting categories such as “vaccine controversies” from its advertising tools. Last month, the Daily Beast reported that more than 150 anti-vaccine ads had been bought on Facebook, which often targeted women over 25. Some of the ads were shown to users “interested in pregnancy.” In total, they were viewed at least 1.6 million times. YouTube similarly announced last month that it would begin preventing ads from running on videos featuring anti-vaccine content. Facebook will also reduce the ranking of pages and groups that spread misinformation about vaccines in search results and in its News Feed. In February, The Guardian found that anti-vaccination propaganda often ranked higher and outperformed accurate information from more reliable sources on Facebook. The social network’s effort to fight vaccine disinformation extends to Instagram, where the company says it will stop recommending content that includes vaccine misinformation on the app’s Explore page. Instagram will also stop displaying vaccination misinformation in hashtag search results. It’s not clear how long these new controls will take to roll out: An Instagram search for #vaccine Thursday afternoon surfaced the hashtag #vaccineskill as the number one result, for instance. 
Last month, Pinterest received praise for its decision to stop displaying search results for vaccines entirely, even if they are medically accurate. (In 2017, Pinterest previously banned “anti-vaccination advice” from its platform.) Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg As The Atlantic has pointed out, the majority of anti-vaccination content on Facebook appears to originate from only a handful of fringe sources. It likely won’t require a herculean effort for Facebook to tackle this strain of misinformation. The question is why the company waited until it became the subject of media reports and criticism from lawmakers to finally act. Facebook increased its efforts to fight false information more broadly on the platform in the wake of the 2016 presidential election, including with initiatives like third-party fact-checking. The company admits it won’t catch everything, and demonstrably fake stories still do go viral. While there is little public data about user behavior on Facebook, researchers have found signs that the reach of fake news declined between 2016 and 2018 midterm elections. (Though they also say there remains plenty to be concerned about when it comes to misinformation.) It’s not yet clear whether the proliferation of anti-vaccination content online has led to a significant decrease in vaccination rates in the United States. Unscientific information about vaccines has been circulating on- and offline for well over a decade. But as Slate has pointed out, the number of children under 3 who have received their first dose of the MMR vaccination has remained steady for years, according to data from the Centers for Disease Control and Prevention. The World Health Organization named vaccine hesitancy one of its “ten threats to global health in 2019,” but cites “complacency and inconvenience in accessing vaccines” as two of the key reasons why people choose not to vaccinate, in addition to “lack of confidence.” There’s still little doubt that social media platforms like Facebook, but also YouTube and Amazon, have indeed made anti-vaccination talking points more accessible to wider audiences. The proponents of this misinformation were aided by recommendation and search ranking algorithms, which often promoted anti-vax content to the top of the pile. Facebook’s announcement today is further acknowledgment of its role in that ecosystem, and the idea that free speech is not the same as free reach. How to keep parents from fleeing STEM careers Machine learning can use tweets to spot security flaws Ways to get text onto your screen— without a keyboard Gene mutation that could cure HIV has a checkered past Anarchy, bitcoin, and murder in Acapulco 👀 Looking for the latest gadgets? 
"
1,165
2,021
"5 Years After the Oculus Rift, Where Do VR and AR Go Next? | WIRED"
"https://www.wired.com/story/oculus-rift-five-year-anniversary"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Peter Rubin Culture 5 Years After the Oculus Rift, Where Do VR and AR Go Next? Photograph: Jennifer Leahy Save this story Save Save this story Save Atman Binstock was working late one night in the summer of 2015 when he saw a door open that shouldn’t have been open. There were only two keys to the room, and for good reason: That’s where the Oculus team kept the Toybox demo. The Facebook-owned VR company had just come back from the E3 video game trade show, where it had used the demo to show off the capabilities of its new handheld controllers. In Toybox , you could build a house of blocks, set off mini-rockets, even play ping-pong, just by reaching out and using your hands the way you normally would. Perhaps best of all, you could do all those things with another person. Toybox showed not only that VR wouldn’t feel like playing a video game, but that it wasn’t going to be isolating—that, as WIRED wrote around that time, VR could allow people to be alone, together. Gadget Lab Podcast WIRED Staff Virtual Reality Lauren Goode Virtual Reality Peter Rubin After E3, Binstock’s team had rebuilt a demo pod back in the office so that more Oculus employees could try it, but there had been … incidents. Hence the locked door, and hence the two keys. Can’t have a free-for-all. But now that door was open. Aw, man , Binstock thought. I’m going to go ruin somebody’s night. He poked his head in the room, ready to drop the hammer, and instead found his boss. His boss’s boss, really. There was Mark Zuckerberg, who the year before had (in)famously bought Oculus for around $2 billion. “Oh, hi, Mark,” said the chief architect of Oculus. “Need any help?” “No, I’m good,” said the chief executive of Facebook. So Binstock watched as Zuckerberg practiced hosting a Toybox demo with a prototype headset and prototype controllers. “You’ve got to remember,” Binstock says now, “these things are cranky. It takes forever to even start them up and debug what’s going wrong.” But as he watched, it became clear that Zuckerberg wasn’t there to try the demo; he was there to practice. He had a routine. He had a patter. Binstock realized that the man who had once said VR would “change the way we work, play, and communicate” had spent hours getting good at this, just so he could be able to share his vision of VR personally. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg To say that a lot of things have happened since then—to Oculus, to VR, to Facebook, and to people’s trust in all three—would be an understatement on the order of “2020 was weird, huh?” All of Oculus’ original founders have moved on, a scrappy team giving way to Facebook Reality Labs, a massive AR/VR division that may constitute as much as 20 percent of Facebook’s entire workforce. 
The Oculus Quest 2, VR’s multimillion-selling device of the moment, is half the price and far more powerful than the Rift, the company’s first mass-produced dedicated headset. Facebook has waded deeper into the hardware space with the Portal video-call device, and a year of pandemic lockdown has been very kind to both. What time has been less kind to is public sentiment; between its complicity in the disinformation campaigns of the 2016 election, privacy issues that arise from its ad-driven business model, concerns about AI bias , and other issues, Facebook has found itself on defense far more often than any company would like. Yet all that change has made this week in particular a good time to take stock: It just happens to be the five-year anniversary of the Oculus Rift. Over those five years, despite everything, Facebook has solved an astonishing number of problems. And as the company looks ahead, those issues—as well as ones yet unsolved—figure prominently. From its Luxxotica smart glasses coming later this year to the far-flung future Facebook is imagining in plain view, Zuckerberg has maintained his convictions about AR and VR’s inevitable ubiquity. The technology has survived its initial lean years, but going from a few million users to a billion means far more than just adding a couple of commas. The question is if the bet pays off. Think back to those first few years of the current age of virtual reality. The first Rift prototype showed up behind closed doors at E3 in 2012. That fall, Kickstarter users ponied up nearly $2.5 million to get their hands on the first developer version of the headset. Headquartered in Southern California at the time, Oculus began to grow. Fast. 2013 brought nearly $100 million in funding. As it grew, it started working out many of the kinks that had plagued VR the first time around in the ’90s. When the Rift finally came out (with the HTC Vive and PlayStation VR not too far behind), the headset managed to do something no predecessor had: deliver stable and comfortable virtual reality for the price of a game console. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But that delivery wasn’t easy. The headset needed a high-end gaming PC to power it, and its cables snaked everywhere. It needed external sensors to track its position in space, which added yet more cables and hardware. It wasn’t uncommon for early adopters to encounter driver updates and USB port errors that demanded a monk’s patience to figure out. Just because some problems had been solved didn’t mean the solutions weren’t stopgap measures, and there were miles to go before VR headsets would be as intuitive and turnkey as a smartphone. So the work continued, as it did at other companies. But acquiring Oculus had only been the beginning; Facebook had also begun pouring resources into supercharging the company’s internal research pipeline. “It felt like if we put a lot of time and energy behind it, we could accelerate this into something that could get wide adoption—because that's the only way we would all be interested in it,” says Mike Schroepfer, Facebook’s CTO. “If it was some super-niche high-end very expensive toy, it just wouldn't fit with what Facebook was trying to do. 
So from the very beginning, it was, ‘Can we take this thing and turn it into something that everyone can have?’” Something “everyone can have” had been a priority for Zuckerberg since well before he acquired Oculus in 2014. “I talked to Zuck in 2012, when I was originally recruited to Facebook,” says Caitlin Kalinowski, who’s now head of hardware for Oculus, “and he understood already where the company would need to go in terms of owning a hardware portion of the next platform. I don’t think he knew what it was yet, but he really understood VR’s potential.” First had come a dedicated team in Seattle, where chief scientist Michael Abrash and Binstock had begun digging into VR’s thorniest problems; later came facilities in nearby Redmond and across the country in Pittsburgh, where an ever growing phalanx of PhD-level specialists sought to untether VR and push immersion as far as possible. Back in Menlo Park, Kalinowski and her colleagues worked to turn the emergent technologies into product form. By Peter Rubin and Jaina Grey As money flowed in, progress flowed out. First came the Oculus Go, in May 2018. It was wireless but couldn’t track itself in space, constraining the experience to something more like a cell-phone-powered device. (Remember Samsung Gear VR ? Google Cardboard ?) A year later, though, the Quest fixed that too; the company had figured out how to integrate outward-looking sensors, finally getting over the hurdle of “inside-out” tracking. Then, in December 2020, a sequel followed: the Quest 2. In the space of five years, Facebook had increased its annual R&D spending from $5.9 billion to nearly $18.5 billion. It had also turned its flagship VR headset into something that was a big step closer to a mainstream device, at half the price of the Rift. Perhaps more significant than the headset itself, though, is the financial potential that the VR ecosystem has begun to realize on the software side of the equation. In 2016 a game called Raw Data became the first VR title to bring in $1 million in revenue. By the beginning of 2020, more than 100 others had joined it. And that’s across all VR platforms; on the Quest line specifically, fully one-third of titles for sale have done the same. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg That means breakout games like Beat Saber and Onward , but it also means two of VR’s most interesting use cases: social worlds and fitness apps. Rec Room, a multiuser social platform that lets users build their own worlds (and even get married ), was recently valued at $1.25 billion after sextupling its user base in 2020 alone, making it one of VR’s first unicorns. Facebook's own social app, Horizon, is getting closer to a wide release—"Now we have enough people to really bring those communities out and populate those worlds," says Meaghan Fitzgerald, head of product marketing for AR/VR—and the company recently announced a new avatar system that uses your own speech to drive your avatar's mouth movements and expressions. (It’s almost a given that eye- and face-tracking will be in some future version of VR headsets to improve that further. 
"If you don't have eye-tracking so you can make eye contact with someone, and if you don't have face tracking so you can naturally emote," Zuckerberg says, "it's not going to be the best social platform.") Exercise-focused titles like Supernatural and FitXR are seeing impressive growth in both audience and results. Supernatural —which charges users a Peloton-like $20 monthly subscription fee for the privilege of coach-led cardio classes, lunging and swinging at multicolored orbs to the beat of curated playlists—boasts a thriving Facebook community where users upload videos of their daily workouts. “We continue to see a really wide spread in our demographic, not only in terms of age and gender, but also in terms of fitness ability,” says FitXR cofounder Sam Cole. “We have people who say things like ‘I’m a bodybuilder and I hate doing cardio, but this is the way I get my fix’ through to people who are sedentary and have really struggled with fitness their entire lives.” Yet the app’s users average a ring-closing 35 minutes of activity per day. “One of our customers said to us recently that this feels like the best thing to happen to exercise since exercise,” Cole says. For Mike Verdu, a games executive who came to Facebook in 2019 to head AR/VR content, that’s a telling inflection point. “I think we finally got a use case that can go broad and lends itself to sustained use over a long period of time,” he says of the fitness sector. “You weave it into the fabric of your life, you do it every day, and companies have to get good at delivering fresh workouts and music and content.” VR as a service, in other words. Verdu sees other use cases on the horizon like creative tools and productivity utilities—some of the teams in the org hold weekly meetings in VR, using an internal app that lets them gather in virtual conference rooms—but the important part for him is that content creators are finally seeing the fruits of their labor on a promising but untested platform. “I think we're just scratching the surface on what VR is capable of,” he says. “It’s just exciting to see developers leaning in on experiences that will be around for a long time.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg What “long time” means in the world of games and apps is one thing. What it means in the larger timeline for VR and AR, though, is another. Companies continue to sink money into the technologies; Apple in particular is the subject of another story seemingly every week detailing its explorations around a high-end VR headset or AR glasses. Facebook might have been a first mover, but it also wants to be the last mover, so it’s trying something different: It’s playing a very patient game, out in the open where everyone can see. When the first Quest headset hit the market, the teams that were still grouped together under the Oculus umbrella had a new leader—and their old ones had long since departed. Boy wonder founder Palmer Luckey had left Facebook in 2017 under a company-imposed cloud of secrecy. Onetime CEO Brendan Iribe departed the following year, and VP of product Nate Mitchell moved on in 2019. 
By that point, Mark Zuckerberg had tapped Andrew “Boz” Bosworth, who was leading Facebook’s ads and business platform, to head the company’s larger AR/VR division. (Bosworth credits Michael Abrash, who he calls “the keeper of the flame” of Facebook’s ambitions, with helping convince him to take the gig.) Under Bosworth, Facebook’s hardware ambitions have swelled. According to the Information , his division—which last year changed its name to Facebook Reality Labs—oversees nine teams that include VR, AR, Portal, and Devices. As many as one in five Facebook employees reportedly work on them. Some of the fruits of those teams are obviously already on the marketplace; the Portal family, now four items deep, has proven to be a favorite among the work-from-home set (read: everyone). Others … aren’t. Facebook’s first smart glasses, a Ray-Ban–branded collaboration with Luxottica, are expected later this year. A smartwatch reportedly may not be far behind. Zuckerberg has alluded to future iterations of the Quest. And then there’s the Big Kahuna. The mythical augmented-reality glasses that represent Facebook’s endgame. (Let’s just forget about brain implants, shall we?) It’s a vision Abrash has spooled out at developer conferences for years, and one that the company has grown increasingly comfortable talking about. Imagine a piece of eyewear that can overlay virtual content on top of the real world. That could mean simple things like games or a keyboard, or it could mean the photorealistic avatars that the company has been working on in its Pittsburgh research lab. You know that scene in Kingsman: The Secret Service where they have a meeting with a bunch of people who aren’t even there ? That. Even better, imagine that the glasses’ computer vision capabilities—which will by then be generations past those used for the Quest’s inside-out tracking or the Portal’s auto-framing technology—can see the world the way you do, and utilize an unobtrusive but powerful assistant that can do things like reduce background noise in your earbuds or translate signs in other languages. That’s something that doesn’t just get you off your phone; it replaces all your digital devices. (“Why have a TV when the glasses can display whatever you want wherever you’d like?” Abrash asks me rhetorically in an email.) Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg By everyone’s estimate, that’s a long, long way off. “I knew if we were going to build this out as a long-term platform, that’s a 10- or 15-year commitment,” Zuckerberg says. But he’s talking about his thinking when he first met the Oculus folks. In fact, it’s probably another decade until we’re closer to something like Abrash’s goal—and the problems that need to be solved between now and then are some of the hardest ones yet. It’s not just the computer vision and general compute power capable of delivering lifelike AR effects on the fly, but how to shrink it down into a not-totally-unattractive pair of glasses. Which then has enough battery power to last all day. And doesn’t generate ridiculous heat. 
Oh, and you have to be able to manipulate all the virtual objects and information that are getting overlaid on your real-world surroundings without lugging around a game controller. Many of those things might be possible—"We've got 'proof of experience' for a lot of things," Schroepfer says—but packaging it all down is where the work really is. Which makes the path from now to then somewhat of a shifting course. “I have a sense of what it's going to look like 10 years from now,” says Bosworth. “I have less of a sense of what's going to happen in those 10 years.” As evidence, he mentions a breakthrough one of his team had just two weeks ago. They’d spent a year trying to get a type of sensor package into a VR headset, only to realize that from a thermal and compute perspective it was too expensive—but then discovered that an alternative technology that they’d dismissed had had huge gains, so it became the front-runner. “The sequencing is going to vary based on trade-offs between form factor, cost, weight, and functionality, and those things all are very zero-sum today,” he says. “It's easy to keep your North Star. It's the middle parts that change the most.” Two weeks ago, Facebook Reality Labs held a media briefing to show off its North Star. You’ve likely read the stories by now , but if not, the magic word is “wristband.” Specifically, it’s an electromyography (EMG) neural interface wrist device, meaning it translates the electrical signals your muscles make as you move. The hope is that it unlocks the ability to manipulate the interfaces of your decade-hence AR world with tiny movements of your fingers—or none at all. The FRL briefing also included footage of an employee playing a simple video game without moving his hands; the EMG device read the nearly imperceptible signals his brain sent when he thought about pressing the spacebar. (Before you ask: Yes, Mark Zuckerberg has tried it. “I talk to the people on the Labs team every week,” he says. “They send me pelican cases of different gear—I’m sitting in my office right now, and I have two on the floor next to me, and one has the wrist device.”) Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg That may not be invasive technology in the traditional sense—again, let’s just leave that whole brain-implant thing alone—but it’s yet another reminder that AR and VR’s power depends on data. Lots and lots of data. Where you’re looking, how you’re looking at it, what your face and others’ faces are doing. In VR, that's a fount of psychographic information that has in the past proven very attractive to companies like Cambridge Analytica. And when you can identify people by their movement patterns alone , anonymity dies. In AR, the proposition gets even more fraught. When you leave a party or a store, you're likely to forget many more details than you remember; your glasses are picking up everything you are, and quite possibly much more. The result, Katitza Rodriguez and Kurt Opsahl of the Electronic Frontier Foundation wrote last year , can all too easily become a "global panopticon society of constant surveillance in public or semi-public spaces." 
And when the company that’s building those systems is the same company that hasn’t exactly inspired trust in the past, and tech-ethics bugaboos like facial recognition are still on the table , that’s all the more reason to cast a skeptic’s eye at the future. Which is exactly why Facebook is showing off its research and trying to engage with those bugaboos many years before they ever make it into a product. Schroepfer points to FRL’s “ responsible innovation principles ,” the foremost of which is: Never surprise people. “The bar we’re being held to is very high,” he says. “That means we have to do an exceptional job in the details of how these products will actually work, and how people understand them. The reason the Portal took us so long is because the pose-tracking algorithm had to run locally on the device—because then we don’t have to explain that we’re processing the video on the server for pose detection, but not other stuff. What I’ve learned over the past five years is that it’s really painful to have a problem figured out after you’ve launched the product.” Perhaps tougher than that is the narrative Facebook has found itself battling: While there may be good people doing good work trying to solve problems, the company, at its core, has lost its way. If all those people are going to realize this vision, this thing that led Mark Zuckerberg to Atman Binstock's demo room all those years ago, they're going to need to do it in a way that makes people trust again. And that's its own brand of work. “There's no magic bullet,” says Bosworth. “Trust is not a thing that you swoop in and solve with a great speech or a great product spec. It's a thing that you solve by setting expectations consistently and meeting and exceeding those expectations consistently over time. There's no shortcut—and I'm not looking for one.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg “People understandably have a lot of concerns on the internet today about how our data is used,” Zuckerberg says. “But at the same time, Facebook has probably built some of the most advanced privacy tools and controls for people and infrastructure of any company that's out there. We're going to take this extremely seriously as we're in the foundational stages of building up these next platforms. But I'm pretty confident, given the experiences that we've had, that if we do this in an open way and show our work along the way, that the solutions will create a lot of value for people.” Granted, we've heard this before. And when the Portal first came out in 2018, you'd have been hard pressed to find a reviewer who felt good about recommending it without qualification. But fast-forward to now, and the refrain goes something more like this: "I will happily throw all my principles out the window if Facebook will alleviate the torture of long-distance grandparent hell." Proof of experience is one thing, but quality of experience—when coupled with a good-faith effort to rehabilitate trust, that is—can go a long way. Even if the distance is just between two avatars. 📩 The latest on tech, science, and more: Get our newsletters ! 
"
1,166
2,021
"Mark Zuckerberg’s Metaverse Already Sucks | WIRED"
"https://www.wired.com/story/mark-zuckerberg-facebook-metaverse-sucks"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gian M. Volpicelli Culture Mark Zuckerberg’s Metaverse Already Sucks Photograph: George Frey/Bloomberg/Getty Images Save this story Save Save this story Save The Zuckerverse is coming. Just over a week ago, Facebook CEO Mark Zuckerberg announced, in a long interview with the Verge, that his social network is readying itself to become “a metaverse company.” This story originally appeared on WIRED UK. First floated in Neal Stephenson’s 1992 sci-fi novel Snow Crash , the metaverse is an idealized immersive successor of the internet—a virtual space where billions of users will move, interact, and operate across myriad different but interoperable worlds and situations, always retaining their avatar identities, virtual possessions, and digital currencies. It is hard to pin the metaverse down (more on this later), but the shape one can make out amid the cyberpunk mist is some version of Ernest Cline’s novel Ready Player One meets Fortnite meets virtual reality meets blockchain. A game-y galaxy that seamlessly fuses with the meatspace. What matters is that metaverse is now the buzzword du jour and that Facebook wants a piece of it. The bad news is that Zuckerberg’s metaverse ambitions sound boring as hell. Time and again during the interview, Zuckerberg dropped language that seemed to have been cribbed straight out of some stuffy consultancy’s 40-page insights report. He waxed lyrical about the metaverse’s ability to increase “f​​ocus time and individual productivity.” He coined the dreary formula “infinite office,” a supposedly desirable scenario in which metaverse-dwellers conjure up multiple virtual screens on their Oculus VR headsets in order to multitask like pros. Zuck was “excit[ed]” (!) about the metaverse’s potential for organizing VR office meetings. Metaverse evangelists and open source advocates have been fretting about Big Tech’s invasion of the metaverse , about how the usual suspects—Facebook, Google, etc.—would consolidate their stranglehold on the digital world, harvesting our data and reenacting the rote practices of surveillance capitalism and the attendant ills of misinformation, manipulation, and gatekeeping. But Big Tech’s incursion into the metaverse might end up being much less of a super-villainous power grab and simply make the metaverse an uncool snoozefest—a hybrid between Heavy Rain ’s goofy detective-work ARI glasses and a cringey rendering of an Accenture blog post. When Microsoft starts talking about the endless opportunities of an “enterprise metaverse,” you know that there will be no fun to be had. The idea of a metaverse was always liable to be captured by corporate squares, if anything because there is no clear-cut definition of what it is even supposed to be. The metaverse’s ur-texts— Snow Crash and arguably Ready Player One —are sci-fi novels that cannot really form the base for rigorous research. Venture capitalist Matthew Ball has come closest to a systematic study of what makes a metaverse , while leaving some room for interpretation of what we’ll eventually see when the thing comes through. 
It is only natural that Facebook and Microsoft decided to propose their vision for whatever the buzzword will transmogrify into, but it is also dispiriting that they were so unimaginative. One crucial element that seems to be always spoken quietly in almost all analyses of the metaverse is its nature as crisis technology. While most meta-prophets expect this virtual universe to evolve almost naturally from technological progress and societal dynamics, they don’t really explain why someone would want to spend all this time there. In its fictional incarnations, however, the metaverse is desirable because the alternative —i.e. Earth—is insufferably dark. In Snow Crash , people run amok in the metaverse while the world is a violence-ridden anarchical mess dogged by mafia cartels and hyperinflation; in Ready Player One , a global underclass living in squalid shanty towns plug into the Oasis (Cline’s version of the metaverse) for days on end in the hope of winning an in-game scavenger hunt. While the metaverse-as-nuclear-shelter narrative might be a tad too catastrophist, it is not by chance that the metaverse really started entering the public discourse in 2020, as the pandemic was raging across the globe, forcing most people indoors and outlawing all but essential human contact. Of course you want to jump into the metaverse when you’re idling away your days at home glued to a screen, and when a lot of the things you do—“researching” QAnon conspiracy theories, trading GameStonks on Robinhood , playing with Technoking Musk and his doggie memes—already look like mixed-reality games anyway. More to the point, of course you’ll want better meetings and better multitasking chops when you’re once again stuck indoors as the Omega variant circulates, in 2051. Zuckerberg and Microsoft are nodding to that world, the forever virus world, when they envision their enterprise metaverse of infinite offices and exciting meetings. But in the same breath they are erasing what’s exciting and even liberating about the metaverse and what it could offer to the people seeking refuge from a crisis. Not only the fun and the experimentation with different identities and appearances—but the creation of entirely new professions, economic models, and political communities. In his analysis, Ball posits the possibility to “create, own, invest, sell, and be rewarded for an incredibly wide range of ‘work’ that produces ‘value’” as one of the lynchpins of the metaverse. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg What exactly this work will look like is anybody’s guess at this stage; less of a guess is who will be doing that work. The thing is—there is another crisis apart from the pandemic that might encourage a lot of people to repair to the metaverse, and it is essentially generational in its nature. 
In a world—mostly the Western world—dogged by gerontocracy, one where the levers of political and economic power are firmly in the hands of unmovable boomers, one where Donald Trump can realistically consider a 2024 run and where Alan Rusbridger can become an editor in chief again, younger generations might as well decide to up sticks, untether themselves from the world economy, and build a new economy somewhere else—one where they can literally create the top positions they always aspired to, earn oodles of digital currency, and maybe buy a virtual house that looks almost as good as the brick-and-mortar one they struggled to afford. In a way, this is exactly what a lot of second-wave cryptocurrency projects, from DeFi to NFTs, claimed to be about. (Of course, that did not always pan out. ) The fashioning of an entirely new economy remains the most revolutionary promise of the metaverse. Zuckerberg sort of hinted at that in his interview, claiming that the metaverse he plans to contribute to building might be a boon for creators, content producers, and developers. But his general keynote—his infinite office spiel—is a red flag, one that looks alarmingly like productivity software kitted out with VR glitter. If that were to prevail, the real threat to the metaverse will not come from the overbearing Zuckerverse—but from Zuckerberg's lame Suckerverse. This story originally appeared on WIRED UK. "
1,167
2,019
"Google CEO Sundar Pichai on achieving quantum supremacy | MIT Technology Review"
"https://www.technologyreview.com/s/614608/google-ceo-quantum-supremacy-interview-with-sundar-pichai"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google CEO Sundar Pichai on achieving quantum supremacy By Gideon Lichfield archive page Photograph of Sundar Pichai standing next to a quantum computer at Google Google for MIT Technology Review In a paper today in Nature , and a company blog post , Google researchers claim to have attained “quantum supremacy” for the first time. Their 53-bit quantum computer, named Sycamore, took 200 seconds to perform a calculation that, according to Google, would have taken the world’s fastest supercomputer 10,000 years. (A draft of the paper was leaked online last month. ) The calculation has almost no practical use—it spits out a string of random numbers. It was chosen just to show that Sycamore can indeed work the way a quantum computer should. Useful quantum machines are many years away, the technical hurdles are huge, and even then they’ll probably beat classical computers only at certain tasks. (See “ Here’s what quantum supremacy does—and doesn’t—mean for computing. ”) But still, it’s an important milestone—one that Sundar Pichai, Google’s CEO, compares to the 12-second first flight by the Wright brothers. I spoke to him to understand why Google has already spent 13 years on a project that could take another decade or more to pay off. The interview has been condensed and edited for clarity. (Also, it was recorded before IBM published a paper disputing Google’s quantum supremacy claim. ) MIT TR: You got a quantum computer to perform a very narrow, specific task. What will it take to get to a wider demonstration of quantum supremacy? Sundar Pichai: You would need to build a fault-tolerant quantum computer with more qubits so that you can generalize it better, execute it for longer periods of time, and hence be able to run more complex algorithms. But you know, if in any field you have a breakthrough, you start somewhere. To borrow an analogy—the Wright brothers. The first plane flew only for 12 seconds, and so there is no practical application of that. But it showed the possibility that a plane could fly. A number of companies have quantum computers. IBM, for example, has a bunch of them online that people can use in the cloud. Why can their machines not do what Google’s has done? The main thing I would comment on is why Google, the team, has been able to do it. It takes a lot of systems engineering—the ability to work on all layers of the stack. This is as complicated as it gets from a systems engineering perspective. You are literally starting with a wafer, and there is a team which is literally etching the gates, making the gates and then [working up] layers of the stack all the way to being able to use AI to simulate and understand the best outcome. The last sentence of the paper says “We’re only one creative algorithm away from valuable near-term applications.” Any guesses as to what those might be? The real excitement about quantum is that the universe fundamentally works in a quantum way, so you will be able to understand nature better. It’s early days, but where quantum mechanics shines is the ability to simulate molecules, molecular processes, and I think that is where it will be the strongest. Drug discovery is a great example. Or fertilizers—the Haber process produces 2% of carbon [emissions] in the world [ see Note 1 ]. In nature the same process gets done more efficiently. 
Note 1: The Haber process The Haber-Bosch process, which makes ammonia for fertilizer by combining nitrogen from the air with hydrogen from natural gas and steam, produces an estimated 1.44% of global carbon dioxide emissions and just over 1% of total greenhouse gas emissions. So how far away do you think an application like improving the Haber process might be? I would think a decade away. We are still a few years away from scaling up and building quantum computers that will work well enough. Other potential applications [could include] designing better batteries. Anyway, you're dealing with chemistry. Trying to understand that better is where I would put my money on. Even people who care about them say quantum computers could be like nuclear fusion: just around the corner for the next 50 years. It seems almost an esoteric research project. Why is the CEO of Google so excited about this? Google wouldn't be here today if it weren't for the evolution we have seen in computing over the years. Moore's Law has allowed us to scale up our computational capacity to serve billions of users across many products at scale. So at heart, we view ourselves as a deep computer science company. Moore's Law is, depending on how you think about it, at the end of its cycle. Quantum computing is one of the many components by which we will continue to make progress in computing. The other reason we're excited is—take a simple molecule. Caffeine has 2^43 states or something like that [actually 10^48—see Note 2]. We know we can't even understand the basic structure of molecules today with classical computing. So when I look at climate change, when I look at medicines, this is why I am confident one day quantum computing will drive progress there. Note 2: Caffeine Caffeine, with 24 atoms, can exist in 10^48 distinct quantum states, i.e., configurations of those atoms. That means that for a classical computer to perfectly represent caffeine, it would require 10^48 bits—close to the number of atoms in the entire Earth (10^49 or 10^50). A 1-gigabyte memory chip has about 10^10 bits. A profile of you in Fast Company described you as feeling a sense of "premonition" when you saw an AI learning to identify cat pictures all by itself, back in 2012. ["This thing was going to scale up and maybe reveal the way the universe works," Pichai is quoted as saying. "This will be the most important thing we work on as humanity."] Does quantum computing feel as important? Absolutely. Being able to be in the lab and actually physically manipulate the qubit and being able to put it in a superposition state was an equally profound moment for me because, to my earlier point, it's how nature works. It opens up a whole new range of possibilities which didn't exist until today. It could take a very long time to get to quantum systems that can do something serious. How do you manage patience at a company that is used to very fast progress? You know, I was spending time with Hartmut [Neven], who leads the quantum team along with John Martinis, the chief hardware scientist. And I mentioned that I dropped out of my PhD in materials science, and people around me were working on high-temperature superconductors. This was 26 years ago, and I was sitting in the lab and I'm like, "Wow, this is going to need a lot of patience to go through." And I felt like I didn't have quite that kind of patience. I have deep respect for the people in the team who have stayed on this journey for a long time.
But pretty much all fundamental breakthroughs work that way, and you need that kind of a long-term vision to build it. The reason I’m excited about a milestone like this is that, while things take a long time, it’s these milestones that drive progress in the field. When Deep Blue beat Garry Kasparov, it was 1997. Fast-forward to when AlphaGo beat [Lee Sedol in 2016]—you can look at it and say, “Wow, that’s a lot of time.” But each milestone rewards the people who are working on it and attracts a whole new generation to the field. That’s how humanity makes progress. And to my earlier systems engineering point—we are pushing at many layers of the stack. So we are driving progress which will be used in many, many different ways. For example, us building our own data centers is what allowed us to build something like TPUs [tensor processing units, specialized chips for Google’s deep-learning framework, TensorFlow], which makes our algorithms go faster. So it’s a virtuous cycle. One of the great things about working on moonshots is even your failures along the side are worth something, and even interim milestones have other applications. So yes, you’re right, we have to be patient. But there is a lot of real gratification along the way. How much are you investing in quantum computing at the moment? It’s a relatively small team. But it builds on all the investments we've made across many years at various layers of Google. It’s built on the company’s years of research and the applied work we have done on top of it. Can you talk about the difference in approach between Google and IBM? For one thing, IBM has a bunch of quantum machines that it puts in the cloud for people to program, whereas you’re doing it as an in-house research project [ see Note 3 ]. Note 3: IBM on quantum supremacy On October 21, IBM researchers published a paper disputing Google's claim to have achieved quantum supremacy. They argued that, by using a modified form of Google's technique, it should be possible to simulate Sycamore's calculation on a classical system in just two and a half days instead of 10,000 years. A Google spokesperson says, "We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have." He also noted that, since the complexity of quantum computers increases exponentially, adding just a few more qubits would put the task definitively out of bounds of a classical machine. It’s great that IBM is providing it as a cloud facility and attracting other developers. I think we as a team have been focused on making sure we prove to ourselves and to the community that you can cross this important milestone of quantum supremacy. IBM also says the term “quantum supremacy” is misleading, because it implies that quantum computers will eventually do everything better than classical computers, when in fact they will probably always have to work together on different bits of a problem. They’re accusing you of overhyping this. My answer on that would be, it is a technical term of art. People in the community understand exactly what the milestone means. But the contention is, the public may see it as a sign that quantum computers have now vanquished classical computers. I mean, it’s no different from when we all celebrate AI. There are people who conflate it with general artificial intelligence. Which is why I think it’s important we publish. 
It’s important that people who are explaining these things help the public understand where we are, how early it is, and how you’re always going to apply classical computing to most of the problems you need in the world. That will still be true in the future. AI generates business for Google at very many levels. It’s in services like Translate and Search. You provide AI tools to people through your cloud. You provide an AI framework, TensorFlow, that allows people to build their own tools. And you provide specialized chips [the TPUs mentioned above] that people can then use to run their tools on. Do you think of quantum computing as eventually being that pervasive for Google? I absolutely do. And if you step back, we invested in AI and developed AI before we knew it would work for us across all layers of the stack. Down the line, on all the practical applications you talked about—we don’t use AI technology just for ourselves; we provide it to customers around the world. We care about democratizing AI access. The same would be true for quantum computing, too. What do you think quantum computing might mean for AI itself? Could it help us unlock the barrier to artificial general intelligence, for instance, if you combine quantum computing and AI? I think it’ll be a very powerful symbiotic thing. Both fields are in early phases. There is exciting work in AI in terms of building larger models, more generalizable models, and what kind of computing resources you need to get there. I think AI can accelerate quantum computing and quantum computing can accelerate AI. And collectively, I think it’s what we would need to, down the line, solve some of the most intractable problems we face, like climate change. You mentioned democratizing the technology. Google has run into some ethical controversies around AI—who should have access to these tools and how they should be used. What have you learned from handling those issues, and how is it informing your thinking on quantum technology, which is much earlier in its development? Publishing and engaging with the academic community at these stages is very important. We work hard to engage. We’ve published our comprehensive AI principles. If you take an area like AI bias, I think we have published over 75 research papers in the last few years. So, codifying our ethics and engaging proactively. I think there are areas where regulation may make sense. We want to constructively participate and help get the right regulations. And finally, there’s a process of engaging externally and getting feedback. These are all technologies which will impact society. There’s no one company which can figure out what the right thing is. There’s no silver bullet, but this is early enough that, over the next 10 years, we have to engage and work together on all of this. Isn’t there a bit of a contradiction between, on the one hand, saying you won’t develop AI for certain purposes [as per the AI principles] and, on the other, creating a platform that enables people to use AI for whatever purpose they want? AI safety is one of our most important ethical principles. You want to build and test systems for safety. That’s inherent in our development. If you’re worried about quantum systems breaking cryptography over time, you want to develop better quantum encryption technologies. When we built search, we had to solve for spam. 
The stakes are clearly higher with these technologies, but part of it is the technical approach you take, and part of it, over time, is global governance and ethical agreements. You would need to arrive at global frameworks which result in outcomes we want. We are committed to doing what we can to help develop [the technology], not just responsibly, but to use it to safeguard safety, democracy, etc. And we would do that collectively with the institutions. Is there any other technology that you’re also really excited about right now? For me, just as a person, radically better ways to generate clean renewable energy have a lot of potential. But I’m excited just broadly about the combinations of all of this and how we practically apply it. In health care, I think we are on the verge of breakthroughs over the next decade or so which will be profound. But I would also say AI itself—the next generation of AI breakthroughs, new algorithms, better generalizable models, transfer learning, etc., are all equally exciting to me. "
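The arithmetic in Note 2 is easy to check by hand. The short Python sketch below reproduces it; the caffeine state count, the Earth atom estimate, and the chip capacity are the figures quoted in the note, not values derived from physics.

```python
# Back-of-envelope check of the arithmetic in Note 2 (all figures taken from the note).
caffeine_states = 10**48        # distinct configurations quoted for one caffeine molecule
bits_in_1gb_chip = 8 * 10**9    # 1 gigabyte = 8e9 bits (~10^10, as the note says)
atoms_in_earth = 10**50         # upper end of the note's 10^49 - 10^50 range

chips_needed = caffeine_states / bits_in_1gb_chip
print(f"1 GB chips needed to store one bit per state: {chips_needed:.1e}")
print(f"Caffeine states per atom on Earth: {caffeine_states / atoms_in_earth:.3g}")
```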
1,168
2,019
"Quantum supremacy from Google? Not so fast, says IBM. | MIT Technology Review"
"https://www.technologyreview.com/s/614604/quantum-supremacy-from-google-not-so-fast-says-ibm"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Quantum supremacy from Google? Not so fast, says IBM. By Gideon Lichfield archive page Konstantin Kakaes archive page Googles quantum computer Google; Edited by MIT Technology Review A month ago, news broke that Google had reportedly achieved “quantum supremacy”: it had gotten a quantum computer to run a calculation that would take a classical computer an unfeasibly long time. While the calculation itself—essentially, a very specific technique for outputting random numbers—is about as useful as the Wright brothers’ 12-second first flight, it would be a milestone of similar significance, marking the dawn of an entirely new era of computing. But in a blog post published today , IBM disputes Google’s claim. The task that Google says might take the world’s fastest classical supercomputer 10,000 years can actually, says IBM, be done in just days. As John Preskill, the CalTech physicist who coined the term “quantum supremacy,” wrote in an article for Quanta magazine , Google specifically chose a very narrow task that a quantum computer would be good at and a classical computer is bad at. “This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative,” he wrote. Google’s research paper hasn’t been published ( Update : it came out two days after this story), but a draft was leaked online last month. In it, researchers say they got a machine with 53 quantum bits, or qubits, to do the calculation in 200 seconds. They also estimated that it would take the world’s most powerful supercomputer, the Summit machine at Oak Ridge National Laboratory, 10,000 years to repeat it with equal “fidelity,” or the same level of uncertainty as the inherently uncertain quantum system. The problem is that such simulations aren’t just a matter of porting the code from a quantum computer to a classical one. They grow exponentially harder the more qubits you’re trying to simulate. For that reason, there are a lot of different techniques for optimizing the code to arrive at a good enough equivalent. And that’s where Google and IBM differ. The IBM researchers propose a method that they say would take just two and a half days on a classical machine “with far greater fidelity,” and that “with additional refinements” this could come down even further. The key difference? Hard drives. Simulating a quantum computer in a classical one requires storing vast amounts of data in memory during the process to represent the condition of the quantum computer at any given moment. The less memory you have available, the more you have to slice up the task into stages, and the longer it takes. Google’s method, IBM says, relied heavily on storing that data in RAM, while IBM’s “uses both RAM and hard drive space.” It also proposes using a slew of other classical optimization techniques, in both hardware and software, to speed up the computation. To be fair, IBM hasn't tested it in practice, so it's hard to know if it would work as proposed. (Google declined to comment.) So what’s at stake? Either a whole lot or not much, depending on how you look at it. As Preskill points out, the problem Google reportedly solved is of almost no practical consequence, and even as quantum computers get bigger, it will be a long time before they can solve any but the narrowest classes of problems. 
Ones that can crack modern codes will likely take decades to develop, at a minimum. Moreover, even if IBM is right that Google hasn’t achieved it this time, the quantum supremacy threshold is surely not far off. The fact that simulations get exponentially harder as you add qubits means it may only take a slightly larger quantum machine to get to the point of being truly unbeatable at something. Still, as Preskill notes, even limited quantum supremacy is “a pivotal step in the quest for practical quantum computers.” Whoever ultimately achieves it will, like the Wright brothers, get to claim a place in history. "
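A rough calculation shows why the RAM-versus-disk question is central to IBM's argument. A brute-force state-vector simulation of an n-qubit machine stores 2^n complex amplitudes, so 53 qubits already exceed the main memory of any single existing supercomputer. The Summit capacities in the sketch below are approximate figures assumed for illustration only, not specifications taken from either paper.

```python
# Memory needed to hold the full state vector of a 53-qubit system.
n_qubits = 53
bytes_per_amplitude = 16                         # one double-precision complex number
state_vector_bytes = (2 ** n_qubits) * bytes_per_amplitude

PIB = 1024 ** 5                                  # one pebibyte
print(f"Full 53-qubit state vector: {state_vector_bytes / PIB:.0f} PiB")

assumed_summit_ram_pib = 3                       # assumed: a few PiB of RAM
assumed_summit_disk_pib = 250                    # assumed: a couple hundred PiB of file storage
print("Fits in RAM alone? ", state_vector_bytes / PIB <= assumed_summit_ram_pib)
print("Fits in RAM + disk?", state_vector_bytes / PIB <= assumed_summit_disk_pib)
```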
1,169
2,019
"Here’s what quantum supremacy does—and doesn’t—mean for computing | MIT Technology Review"
"https://www.technologyreview.com/s/614423/quantum-computing-and-quantum-supremacy"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Here’s what quantum supremacy does—and doesn’t—mean for computing By Martin Giles archive page A Google quantum computer Google Google has reportedly demonstrated for the first time that a quantum computer is capable of performing a task beyond the reach of even the most powerful conventional supercomputer in any practical time frame—a milestone known in the world of computing as “quantum supremacy.” ( Update : It confirmed the news on October 23. ) The ominous-sounding term, which was coined by theoretical physicist John Preskill in 2012, evokes an image of Darth Vader–like machines lording it over other computers. And the news has already produced some outlandish headlines, such as one on the Infowars website that screamed, “Google’s ‘Quantum Supremacy’ to Render All Cryptography and Military Secrets Breakable.” Political figures have been caught up in the hysteria, too: Andrew Yang, a presidential candidate, tweeted that “Google achieving quantum computing is a huge deal. It means, among many other things, that no code is uncrackable.” Nonsense. It doesn’t mean that at all. Google’s achievement is significant, but quantum computers haven’t suddenly turned into computing colossi that will leave conventional machines trailing in the dust. Nor will they be laying waste to conventional cryptography in the near future—though in the longer term, they could pose a threat we need to start preparing for now. Here’s a guide to what Google appears to have achieved—and an antidote to the hype surrounding quantum supremacy. What do we know about Google’s experiment? We still haven’t had confirmation from Google about what it’s done. The information about the experiment comes from a paper titled “Quantum Supremacy Using a Programmable Superconducting Processor,” which was briefly posted on a NASA website before being taken down. Its existence was revealed in a report in the Financial Times—and a copy of the paper can be found here. The experiment is a pretty arcane one, but it required a great deal of computational effort. Google’s team used a quantum processor code-named Sycamore to prove that the figures pumped out by a random number generator were indeed truly random. They then worked out how long it would take Summit, the world’s most powerful supercomputer, to do the same task. The difference was stunning: while the quantum machine polished it off in 200 seconds, the researchers estimated that the classical computer would need 10,000 years. When the paper is formally published, other researchers may start poking holes in the methodology, but for now it appears that Google has scored a computing first by showing that a quantum machine can indeed outstrip even the most powerful of today’s supercomputers. “There’s less doubt now that quantum computers can be the future of high-performance computing,” says Nick Farina, the CEO of quantum hardware startup EeroQ. Why are quantum computers so much faster than classical ones? 
In a classical computer, bits that carry information represent either a 1 or a 0 ; but quantum bits, or qubits—which take the form of subatomic particles such as photons and electrons—can be in a kind of combination of 1 and 0 at the same time, a state known as “superposition.” Unlike bits, qubits can also influence one another through a phenomenon known as “entanglement,” which baffled even Einstein, who called it “spooky action at a distance.” Thanks to these properties, which are described in more detail in our quantum computing explainer , adding just a few extra qubits to a system increases its processing power exponentially. Crucially, quantum machines can crunch through large amounts of data in parallel, which helps them outpace classical machines that process data sequentially. That’s the theory. In practice, researchers have been laboring for years to prove conclusively that a quantum computer can do something even the most capable conventional one can’t. Google’s effort has been led by John Martinis, who has done pioneering work in the use of superconducting circuits to generate qubits. Doesn’t this speedup mean quantum machines can overtake other computers now? No. Google picked a very narrow task. Quantum computers still have a long way to go before they can best classical ones at most things—and they may never get there. But researchers I’ve spoken to since the paper appeared online say Google’s experiment is still significant because for a long time there have been doubts that quantum machines would ever be able to outstrip classical computers at anything. Until now, research groups have been able to reproduce the results of quantum machines with around 40 qubits on classical systems. Google’s Sycamore processor, which harnessed 53 qubits for the experiment, suggests that such emulation has reached its limits. “We’re entering an era where exploring what a quantum computer can do will now require a physical quantum computer … You won’t be able to credibly reproduce results anymore on a conventional emulator,” explains Simon Benjamin, a quantum researcher at the University of Oxford. Isn’t Andrew Yang right that our cryptographic defenses can now be blown apart? Again, no. That’s a wild exaggeration. The Google paper makes clear that while its team has been able to show quantum supremacy in a narrow sampling task, we’re still a long way from developing a quantum computer capable of implementing Shor’s algorithm, which was developed in the 1990s to help quantum machines factor massive numbers. Today’s most popular encryption methods can be broken only by factoring such numbers—a task that would take conventional machines many thousands of years. But this quantum gap shouldn’t be cause for complacency, because things like financial and health records that are going to be kept for decades could eventually become vulnerable to hackers with a machine capable of running a code-busting algorithm like Shor’s. Researchers are already hard at work on novel encryption methods that will be able to withstand such attacks (see our explainer on post-quantum cryptography for more details). Why aren’t quantum computers as supreme as “quantum supremacy” makes them sound? The main reason is that they still make far more errors than classical ones. Qubits’ delicate quantum state lasts for mere fractions of a second and can easily be disrupted by even the slightest vibration or tiny change in temperature—phenomena known as “noise” in quantum-speak. This causes mistakes to creep into calculations. 
Qubits also have a Tinder-like tendency to want to couple with plenty of others. Such “crosstalk” between them can also produce errors. Google’s paper suggests it has found a novel way to cut down on crosstalk, which could help pave the way for more reliable machines. But today’s quantum computers still resemble early supercomputers in the amount of hardware and complexity needed to make them work, and they can tackle only very esoteric tasks. We’re not yet even at a stage equivalent to the ENIAC, the first general-purpose electronic computer, which was put to work in 1945. So what’s the next quantum milestone to aim for? Besting conventional computers at solving a real-world problem—a feat that some researchers refer to as “quantum advantage.” The hope is that quantum computers’ immense processing power will help uncover new pharmaceuticals and materials, enhance artificial-intelligence applications, and lead to advances in other fields such as financial services, where they could be applied to things like risk management. If researchers can’t demonstrate a quantum advantage in at least one of these kinds of applications soon, the bubble of inflated expectations that’s blowing up around quantum computing could quickly burst. When I asked Google’s Martinis about this in an interview for a story last year, he was clearly aware of the risk. “As soon as we get to quantum supremacy,” he told me, “we’re going to want to show that a quantum machine can do something really useful.” Now it’s time for his team and other researchers to step up to that pressing challenge. 
"
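Superposition and measurement collapse, as described above, can be simulated for a single qubit in a few lines of NumPy. This toy sketch applies a Hadamard gate to the |0> state and samples repeated measurements; it runs entirely on a classical machine and is only meant to make the underlying arithmetic concrete, not to model Google's experiment.

```python
# Toy illustration of superposition: put one qubit into an equal superposition
# with a Hadamard gate, then measure it many times.
import numpy as np

ket0 = np.array([1.0, 0.0])                        # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                            # amplitudes (1/sqrt(2), 1/sqrt(2))
probabilities = np.abs(state) ** 2                 # Born rule: probability = |amplitude|^2

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=probabilities)
print("P(0), P(1):", probabilities)                # roughly [0.5, 0.5]
print("Measured frequencies:", np.bincount(samples) / samples.size)
```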
1,170
2,019
"Explainer: What is a quantum computer? | MIT Technology Review"
"https://www.technologyreview.com/s/612844/what-is-quantum-computing"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Explainer: What is a quantum computer? How it works, why it’s so powerful, and where it’s likely to be most useful first By Martin Giles archive page Image courtesy Rigetti Computing. Photo by Justin Fantl. This is the first in a series of explainers on quantum technology. The other two are on quantum communication and post-quantum cryptography. A quantum computer harnesses some of the almost-mystical phenomena of quantum mechanics to deliver huge leaps forward in processing power. Quantum machines promise to outstrip even the most capable of today’s—and tomorrow’s—supercomputers. They won’t wipe out conventional computers, though. Using a classical machine will still be the easiest and most economical solution for tackling most problems. But quantum computers promise to power exciting advances in various fields, from materials science to pharmaceuticals research. Companies are already experimenting with them to develop things like lighter and more powerful batteries for electric cars, and to help create novel drugs. The secret to a quantum computer’s power lies in its ability to generate and manipulate quantum bits, or qubits. What is a qubit? Today's computers use bits—a stream of electrical or optical pulses representing 1 s or 0 s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos are essentially long strings of these binary digits. Quantum computers, on the other hand, use qubits, which are typically subatomic particles such as electrons or photons. Generating and managing qubits is a scientific and engineering challenge. Some companies, such as IBM, Google, and Rigetti Computing, use superconducting circuits cooled to temperatures colder than deep space. Others, like IonQ, trap individual atoms in electromagnetic fields on a silicon chip in ultra-high-vacuum chambers. In both cases, the goal is to isolate the qubits in a controlled quantum state. Qubits have some quirky quantum properties that mean a connected group of them can provide way more processing power than the same number of binary bits. One of those properties is known as superposition and another is called entanglement. What is superposition? Qubits can represent numerous possible combinations of 1 and 0 at the same time. This ability to simultaneously be in multiple states is called superposition. To put qubits into superposition, researchers manipulate them using precision lasers or microwave beams. Thanks to this counterintuitive phenomenon, a quantum computer with several qubits in superposition can crunch through a vast number of potential outcomes simultaneously. The final result of a calculation emerges only once the qubits are measured, which immediately causes their quantum state to “collapse” to either 1 or 0. What is entanglement? Researchers can generate pairs of qubits that are “entangled,” which means the two members of a pair exist in a single quantum state. Changing the state of one of the qubits will instantaneously change the state of the other one in a predictable way. This happens even if they are separated by very long distances. Nobody really knows quite how or why entanglement works. It even baffled Einstein, who famously described it as “spooky action at a distance.” But it’s key to the power of quantum computers. In a conventional computer, doubling the number of bits doubles its processing power. 
But thanks to entanglement, adding extra qubits to a quantum machine produces an exponential increase in its number-crunching ability. Quantum computers harness entangled qubits in a kind of quantum daisy chain to work their magic. The machines’ ability to speed up calculations using specially designed quantum algorithms is why there’s so much buzz about their potential. That’s the good news. The bad news is that quantum machines are way more error-prone than classical computers because of decoherence. What is decoherence? The interaction of qubits with their environment in ways that cause their quantum behavior to decay and ultimately disappear is called decoherence. Their quantum state is extremely fragile. The slightest vibration or change in temperature—disturbances known as “noise” in quantum-speak—can cause them to tumble out of superposition before their job has been properly done. That’s why researchers do their best to protect qubits from the outside world in those supercooled fridges and vacuum chambers. But despite their efforts, noise still causes lots of errors to creep into calculations. Smart quantum algorithms can compensate for some of these, and adding more qubits also helps. However, it will likely take thousands of standard qubits to create a single, highly reliable one, known as a “logical” qubit. This will sap a lot of a quantum computer’s computational capacity. And there’s the rub: so far, researchers haven’t been able to generate more than 128 standard qubits (see our qubit counter here ). So we’re still many years away from getting quantum computers that will be broadly useful. That hasn’t dented pioneers’ hopes of being the first to demonstrate “quantum supremacy.” What is quantum supremacy? It’s the point at which a quantum computer can complete a mathematical calculation that is demonstrably beyond the reach of even the most powerful supercomputer. It’s still unclear exactly how many qubits will be needed to achieve this because researchers keep finding new algorithms to boost the performance of classical machines, and supercomputing hardware keeps getting better. But researchers and companies are working hard to claim the title, running tests against some of the world’s most powerful supercomputers. There’s plenty of debate in the research world about just how significant achieving this milestone will be. Rather than wait for supremacy to be declared, companies are already starting to experiment with quantum computers made by companies like IBM, Rigetti, and D-Wave, a Canadian firm. Chinese firms like Alibaba are also offering access to quantum machines. Some businesses are buying quantum computers, while others are using ones made available through cloud computing services. Where is a quantum computer likely to be most useful first? One of the most promising applications of quantum computers is for simulating the behavior of matter down to the molecular level. Auto manufacturers like Volkswagen and Daimler are using quantum computers to simulate the chemical composition of electrical-vehicle batteries to help find new ways to improve their performance. And pharmaceutical companies are leveraging them to analyze and compare compounds that could lead to the creation of new drugs. The machines are also great for optimization problems because they can crunch through vast numbers of potential solutions extremely fast. Airbus, for instance, is using them to help calculate the most fuel-efficient ascent and descent paths for aircraft. 
And Volkswagen has unveiled a service that calculates the optimal routes for buses and taxis in cities in order to minimize congestion. Some researchers also think the machines could be used to accelerate artificial intelligence. It could take quite a few years for quantum computers to achieve their full potential. Universities and businesses working on them are facing a shortage of skilled researchers in the field—and a lack of suppliers of some key components. But if these exotic new computing machines live up to their promise, they could transform entire industries and turbocharge global innovation. "
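Entanglement, as described in the explainer, can also be illustrated with a tiny classical simulation. The sketch below builds a two-qubit Bell state with a Hadamard followed by a CNOT and samples joint measurements: the two qubits always agree, which is the correlation the explainer alludes to. This is a toy state-vector calculation, not a model of any real hardware.

```python
# Toy sketch of entanglement: Hadamard on the first qubit, then CNOT, gives a Bell state.
import numpy as np

# Two-qubit basis order: |00>, |01>, |10>, |11>
state = np.array([1.0, 0.0, 0.0, 0.0])             # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
h_on_first = np.kron(H, I)                         # Hadamard applied to the first qubit
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                     # flips the second qubit when the first is 1

bell = cnot @ (h_on_first @ state)                  # (|00> + |11>) / sqrt(2)
probs = np.abs(bell) ** 2

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print("Bell-state probabilities:", probs.round(3))  # only 00 and 11 ever occur
print("Sampled joint measurements:", list(outcomes))
```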
1,171
2,019
"Google researchers have reportedly achieved “quantum supremacy” | MIT Technology Review"
"https://www.technologyreview.com/f/614416/google-researchers-have-reportedly-achieved-quantum-supremacy"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google researchers have reportedly achieved “quantum supremacy” By Martin Giles archive page Google's quantum computer Google | Erik Lukero The news: According to a report in the Financial Times, a team of researchers from Google led by John Martinis have demonstrated quantum supremacy for the first time. This is the point at which a quantum computer is shown to be capable of performing a task that’s beyond the reach of even the most powerful conventional supercomputer. The claim appeared in a paper that was posted on a NASA website, but the publication was then taken down. Google did not respond to a request for comment from MIT Technology Review. Why NASA? Google struck an agreement last year to use supercomputers available to NASA as benchmarks for its supremacy experiments. According to the Financial Times report, the paper said that Google’s quantum processor was able to perform a calculation in three minutes and 20 seconds that would take today’s most advanced supercomputer, known as Summit, around 10,000 years. In the paper, the researchers said that, to their knowledge, the experiment “marks the first computation that can only be performed on a quantum processor.” Quantum speed-up: Quantum machines are so powerful because they harness quantum bits, or qubits. Unlike classical bits, which represent either a 1 or a 0 , qubits can be in a kind of combination of both at the same time. Thanks to other quantum phenomena, which are described in our explainer here , quantum computers can crunch large amounts of data in parallel that conventional machines have to work through sequentially. Scientists have been working for years to demonstrate that the machines can definitively outperform conventional ones. How significant is this milestone? Very. In a discussion of quantum computing at MIT Technology Review’s EmTech conference in Cambridge, Massachusetts, this week before news of Google’s paper came out, Will Oliver, an MIT professor and quantum specialist, likened the computing milestone to the first flight of the Wright brothers at Kitty Hawk in aviation. He said it would give added impetus to research in the field, which should help quantum machines achieve their promise more quickly. Their immense processing power could ultimately help researchers and companies discover new drugs and materials, create more efficient supply chains, and turbocharge AI. But, but: It’s not clear what task Google’s quantum machine was working on, but it’s likely to be a very narrow one. In an emailed comment to MIT Technology Review, Dario Gil of IBM, which is also working on quantum computers, says an experiment that was probably designed around a very narrow quantum sampling problem doesn’t mean the machines will rule the roost. “In fact quantum computers will never reign ‘supreme’ over classical ones,” says Gil, “but will work in concert with them, since each have their specific strengths.” For many problems, classical computers will remain the best tool to use. And another but: Quantum computers are still a long way from being ready for mainstream use. The machines are notoriously prone to errors, because even the slightest change in temperature, or a tiny vibration, can destroy the delicate state of qubits. Researchers are working on machines that will be easier to build, manage, and scale , and some computers are now available via the computing cloud. 
But it could still be many years before quantum computers that can tackle a wide range of problems are widely available. "
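For scale, the comparison reported from the leaked paper, three minutes and 20 seconds on the quantum processor versus an estimated 10,000 years on Summit, works out to a speedup on the order of a billion. A quick sketch of that arithmetic, using only the figures quoted above:

```python
# Speedup implied by the figures reported from the leaked paper.
sycamore_seconds = 3 * 60 + 20                   # 3 minutes 20 seconds = 200 s
summit_seconds = 10_000 * 365.25 * 24 * 3600     # roughly 10,000 years

speedup = summit_seconds / sycamore_seconds
print(f"Claimed speedup factor: {speedup:.2e}")  # roughly 1.6e9
```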
1,172
2,013
"Google Says It Has Proved Its Controversial Quantum Computer Really Works | MIT Technology Review"
"https://www.technologyreview.com/s/544276/google-says-it-has-proved-its-controversial-quantum-computer-really-works"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google Says It Has Proved Its Controversial Quantum Computer Really Works By Tom Simonite archive page Google says it has proof that a controversial machine it bought in 2013 really can use quantum physics to work through a type of math that’s crucial to artificial intelligence much faster than a conventional computer. Governments and leading computing companies such as Microsoft, IBM, and Google are trying to develop what are called quantum computers because using the weirdness of quantum mechanics to represent data should unlock immense data-crunching powers. Computing giants believe quantum computers could make their artificial-intelligence software much more powerful and unlock scientific leaps in areas like materials science. NASA hopes quantum computers could help schedule rocket launches and simulate future missions and spacecraft. “It is a truly disruptive technology that could change how we do everything,” said Rupak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California. Biswas spoke at a media briefing at the research center about the agency’s work with Google on a machine the search giant bought in 2013 from Canadian startup D-Wave systems, which is marketed as “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer. A quantum annealer is hard-coded with an algorithm suited to what are called “optimization problems,” which are common in machine-learning and artificial-intelligence software. However, D-Wave’s chips are controversial among quantum physicists. Researchers inside and outside the company have been unable to conclusively prove that the devices can tap into quantum physics to beat out conventional computers. Hartmut Neven, leader of Google’s Quantum AI Lab in Los Angeles, said today that his researchers have delivered some firm proof of that. They set up a series of races between the D-Wave computer installed at NASA against a conventional computer with a single processor. “For a specific, carefully crafted proof-of-concept problem we achieve a 100-million-fold speed-up,” said Neven. Google posted a research paper describing its results online last night, but it has not been formally peer-reviewed. Neven said that journal publications would be forthcoming. Google’s results are striking—but even if verified, they would only represent partial vindication for D-Wave. The computer that lost in the contest with the quantum machine was running code that had it solve the problem at hand using an algorithm similar to the one baked into the D-Wave chip. An alternative algorithm is known that could have let the conventional computer be more competitive, or even win, by exploiting what Neven called a “bug” in D-Wave’s design. Neven said the test his group staged is still important because that shortcut won’t be available to regular computers when they compete with future quantum annealers capable of working on larger amounts of data. Matthias Troyer , a physics professor at the Swiss Federal Institute of Technology, Zurich, said making that come true is crucial if chips like D-Wave’s are to become useful. 
“It will be important to explore if there are problems where quantum annealing has advantages over even the best classical algorithms, and to find if there are classes of application problems where such advantages can be realized,” he said, in a statement with two colleagues. Last year Troyer’s group published a high-profile study of an earlier D-Wave chip that concluded it didn’t offer advantages over conventional machines. That question has now been partially resolved, they say. “Google’s results indeed show a huge advantage on these carefully chosen instances.” Google is competing with D-Wave to make a quantum annealer that could do useful work. Last summer the Silicon Valley giant opened a new lab in Santa Barbara, headed by a leading academic researcher, John Martinis (see “Google Launches Effort to Build Its Own Quantum Computer”). Martinis is also working on quantum hardware that would not be limited to optimization problems, as annealers are. A universal quantum computer, as such a machine would be called, could be programmed to take on any problem and would be much more useful but is expected to take longer to perfect. Government and university labs, Microsoft (see “Microsoft’s Quantum Mechanics”), and IBM (see “IBM Shows Off a Quantum Computing Chip”) are also working on that technology. John Giannandrea, a VP of engineering at Google who coördinates the company’s research, said that if quantum annealers could be made practical, they would find many uses powering up Google’s machine-learning software. “We’ve already encountered problems in the course of our products impractical to solve with existing computers, and we have a lot of computers,” he said. However, Giannandrea noted, “it may be several years before this research makes a difference to Google products.” Update: An earlier version of this story incorrectly stated that NASA bought the quantum computer with Google. Google bought it and NASA hosts it. The story has also been updated to include comments from Matthias Troyer. 
"
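The D-Wave machine described above is hard-coded for optimization problems. As a purely classical point of comparison, the sketch below runs simulated annealing, the conventional algorithm that quantum annealing is named after, on a small randomly generated Ising-style energy function. It does not use D-Wave hardware or any D-Wave API; the problem size, couplings, and cooling schedule are arbitrary illustrative choices.

```python
# Classical simulated annealing on a toy Ising-style optimization problem.
import math
import random

random.seed(0)

n = 6  # six spins, each +1 or -1
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    # Total energy: sum of coupling * spin_i * spin_j over all pairs.
    return sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)

spins = [random.choice([-1, 1]) for _ in range(n)]
temperature = 2.0
for step in range(5_000):
    i = random.randrange(n)
    # Energy change if spin i were flipped.
    delta = -2 * spins[i] * sum(J[min(i, k), max(i, k)] * spins[k]
                                for k in range(n) if k != i)
    # Accept moves that lower the energy, or occasionally ones that raise it.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        spins[i] = -spins[i]
    temperature *= 0.999                        # cool down gradually

print("Final spin configuration:", spins)
print("Final energy:", energy(spins))
```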
1,173
2,014
"Microsoft’s Quantum Mechanics | MIT Technology Review"
"https://www.technologyreview.com/s/531606/microsofts-quantum-mechanics"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Microsoft’s Quantum Mechanics By Tom Simonite archive page In 2012, physicists in the Netherlands announced a discovery in particle physics that started chatter about a Nobel Prize. Inside a tiny rod of semiconductor crystal chilled cooler than outer space, they had caught the first glimpse of a strange particle called the Majorana fermion, finally confirming a prediction made in 1937. It was an advance seemingly unrelated to the challenges of selling office productivity software or competing with Amazon in cloud computing, but Craig Mundie, then heading Microsoft’s technology and research strategy, was delighted. The abstruse discovery—partly underwritten by Microsoft—was crucial to a project at the company aimed at making it possible to build immensely powerful computers that crunch data using quantum physics. “It was a pivotal moment,” says Mundie. “This research was guiding us toward a way of realizing one of these systems.” Microsoft is now almost a decade into that project and has just begun to talk publicly about it. If it succeeds, the world could change dramatically. Since the physicist Richard Feynman first suggested the idea of a quantum computer in 1982, theorists have proved that such a machine could solve problems that would take the fastest conventional computers hundreds of millions of years or longer. Quantum computers might, for example, give researchers better tools to design novel medicines or super-efficient solar cells. They could revolutionize artificial intelligence. Progress toward that computational nirvana has been slow because no one has been able to make a reliable enough version of the basic building block of a quantum computer: a quantum bit, or qubit, which uses quantum effects to encode data. Academic and government researchers and corporate labs at IBM and Hewlett-Packard have all built them. Small numbers have been wired together, and the resulting devices are improving. But no one can control the physics well enough for these qubits to serve as the basis of a practical general-purpose computer. Microsoft has yet to even build a qubit. But in the kind of paradox that can be expected in the realm of quantum physics, it may also be closer than anyone else to making quantum computers practical. The company is developing a new kind of qubit, known as a topological qubit, based largely on that 2012 discovery in the Netherlands. There’s good reason to believe this design will be immune from the flakiness plaguing existing qubits. It will be better suited to mass production, too. “What we’re doing is analogous to setting out to make the first transistor,” says Peter Lee, Microsoft’s head of research. His company is also working on how the circuits of a computer made with topological qubits might be designed and controlled. And Microsoft researchers working on algorithms for quantum computers have shown that a machine made up of only hundreds of qubits could run chemistry simulations beyond the capacity of any existing supercomputer. In the next year or so, physics labs supported by Microsoft will begin testing crucial pieces of its qubit design, following a blueprint developed by an outdoorsy math genius. If those tests work out, a corporation widely thought to be stuck in computing’s past may unlock its future. Stranger still: a physicist at the fabled but faded Bell Labs might get there first. 
Tied Up in Knots In a sunny room 100 yards from the Pacific Ocean, Michael Freedman, the instigator and technical mastermind of Microsoft’s project, admits to feeling inferior. “When you start thinking about quantum computing, you realize that you yourself are some kind of clunky chemical analog computer,” he says. Freedman, who is 63, is director of Station Q, the Microsoft research group that leads the effort to create a topological qubit, working from a dozen or so offices on the campus of the University of California, Santa Barbara. Fit and tanned, he has dust on his shoes from walking down a beach path to lunch. If his mind is a clunky chemical computer, it is an extraordinary one. A mathematical prodigy who entered UC Berkeley at the age of 16 and grad school two years later, Freedman was 30 when he solved a version of one of the longest-standing problems in mathematics, the Poincaré conjecture. He worked it out without writing anything down, visualizing the distortion of four-dimensional shapes in his head. “I had seen my way through the argument,” Freedman recalls. When he translated that inner vision into a 95-page proof, it earned the Fields Medal, the highest honor in mathematics. That cemented Freedman’s standing as a leading light in topology, the discipline concerned with properties of shapes that don’t change when those shapes are distorted. (An old joke has it that topologists can’t distinguish a coffee cup from a doughnut—both are surfaces punctured by a single hole.) But he was drawn into physics in 1988 after a colleague discovered a connection between some of the math describing the topology of knots and a theory explaining certain quantum phenomena. “It was a beautiful thing,” says Freedman. He immediately saw that this connection could allow a machine governed by that same quantum physics to solve problems too hard for conventional computers. Ignorant that the concept of quantum computing already existed, he had independently reinvented it. Freedman kept working on that idea, and in 1997 he joined Microsoft’s research group on theoretical math. Soon after, he teamed up with a Russian theoretical physicist, Alexei Kitaev, who had proved that a “topological qubit” formed by the same physics could be much more reliable than qubits that other groups were building. Freedman eventually began to feel he was onto something that deserved attention beyond his rarefied world of deep math and physics. In 2004, he showed up at Craig Mundie’s office and announced that he saw a way to build a qubit dependable enough to scale up. “I ended up sort of making a pitch,” says Freedman. “It looked like if you wanted to start to build the technology, you could.” Mundie bought it. Though Microsoft hadn’t been trying to develop quantum computers, he knew about their remarkable potential and the slow progress that had been made toward building them. “I was immediately fascinated by the idea that maybe there was a completely different approach,” he says. “Such a form of computing would probably turn out to be the basis of a transformation akin to what classical computing has done for the planet in the last 60 years.” He set up an effort to create the topological qubit, with a slightly nervous Freedman at the helm. “Never in my life had I even built a transistor radio,” Freedman says. Distant Dream In some ways, a quantum computer wouldn’t be so different from a conventional one. Both deal in bits of data represented in binary form. 
And both types of machine are made up of basic units that represent bits by flipping between different states like a switch. In a conventional computer, every tiny transistor on a chip can be flipped either off to signify a 0 or on for a 1. But because of the quirky rules of quantum physics, which govern the behavior of matter and energy at extremely tiny scales, qubits can perform tricks that make them exceedingly powerful. A qubit can enter a quantum state known as superposition, which effectively represents 0 and 1 at the same time. Once in a superposition state, qubits can become linked, or “entangled,” in a way that means any operation affecting one instantly changes the fate of another. Because of superposition and entanglement, a single operation in a quantum computer can execute parts of a calculation that would take many, many more operations for an equivalent number of ordinary bits. A quantum computer can essentially explore a huge number of possible computational pathways in parallel. For some types of problems, a quantum computer’s advantage over a conventional one grows exponentially with the amount of data to be crunched. “Their power is still an amazement to me,” says Raymond Laflamme , executive director of the Institute for Quantum Computing at the University of Waterloo, in Ontario. “They change the foundation of computer science and what we mean by what is computable.” In the next year or so, physics labs supported by Microsoft will begin testing its qubit design. But pure quantum states are very fragile and can be observed and controlled only in carefully contrived circumstances. For a superposition to be stable, the qubit must be shielded from seemingly trivial noise such as random bumping from subatomic particles or faint electrical fields from nearby electronics. The two best current qubit technologies represent bits in the magnetic properties of individual charged atoms trapped in magnetic fields or as the tiny current inside circuits of superconducting metal. They can preserve superpositions for no longer than fractions of a second before they collapse in a process known as decoherence. The largest number of qubits that have been operated together is just seven. Since 2009, Google has been testing a machine marketed by the startup D-Wave Systems as the world’s first commercial quantum computer, and in 2013 it bought a version of the machine that has 512 qubits. But those qubits are hard-wired into a circuit for a particular algorithm, limiting the range of problems they can work on. If successful, this approach would create the quantum-computing equivalent of a pair of pliers—a useful tool suited to only some tasks. The conventional approach being pursued by Microsoft offers a fully programmable computer—the equivalent of a full toolbox. And besides, independent researchers have been unable to confirm that D-Wave’s machine truly functions as a quantum computer. Google recently started its own hardware lab to try to create a version of the technology that delivers. The search for ways to fight decoherence and the errors it introduces into calculations has come to dominate the field of quantum computing. For a qubit to truly be scalable, it would probably need to accidentally decohere only around once in a million operations, says Chris Monroe , a professor at the University of Maryland and co-leader of a quantum computing project funded by the Department of Defense and the Intelligence Advanced Research Projects Activity. 
Today the best qubits typically decohere thousands of times that often. Microsoft’s Station Q might have a better approach. The quantum states that lured Freedman into physics—which occur when electrons are trapped in a plane inside certain materials—should provide the stability that a qubit builder craves, because they are naturally deaf to much of the noise that destabilizes conventional qubits. Inside these materials, electrons take on strange properties at temperatures close to absolute zero, forming what are known as electron liquids. The collective quantum properties of the electron liquids can be used to signify a bit. The elegance of the design, along with grants of cash, equipment, and computing time, has lured some of the world’s leading physics researchers to collaborate with Microsoft. (The company won’t say what fraction of its $11 billion annual R&D spending goes to the project.) The catch is that the physics remains unproven. To use the quantum properties of electron liquids as bits, researchers would have to manipulate certain particles inside them, known as non-Abelian anyons, so that they loop around one another. And while physicists expect that non–Abelian anyons exist, none have been conclusively detected. Majorana particles, the kind of non-Abelian anyons that Station Q and its collaborators seek, are particularly elusive. First predicted by the reclusive Italian physicist Ettore Majorana in 1937, not long before he mysteriously disappeared, they have captivated physicists for decades because they have the unique property of being their own antiparticles, so if two ever meet, they annihilate each other in a flash of energy. No one had reported credible evidence that they existed until 2012, when Leo Kouwenhoven at Delft University of Technology in the Netherlands, who had gotten funding and guidance from Microsoft, announced that he had found them inside nanowires made from the semiconductor indium antimonide. He had coaxed the right kind of electron liquid into existence by connecting the nanowire to a chunk of superconducting electrode at one end and an ordinary one at the other. It offered the strongest support yet for Microsoft’s design. “The finding has given us tremendous confidence that we’re really onto something,” says Microsoft’s Lee. Kouwenhoven’s group and other labs are now trying to refine the results of the experiment and show that the particles can be manipulated. To speed progress and set the stage for possible mass production, Microsoft has begun working with industrial companies to secure supplies of semiconductor nanowires and the superconducting electronics that would be needed to control a topological qubit. For all that, Microsoft doesn’t yet have its qubit. A way must be found to move Majorana particles around one another in the operation needed to write the equivalent of 0 s and 1 s. Materials scientists at the Niels Bohr Institute in Copenhagen recently found a way to build nanowires with side branches, which could allow one particle to duck to the side while another passes. Charlie Marcus, a researcher there who has worked with Microsoft since its first design, is now preparing to build a working system with the new wires. “I would say that is going to keep us busy for the next year,” he says. Success would validate Microsoft’s qubit design and put an end to recent suggestions that Kouwenhoven may not have detected the Majorana particle in 2012 after all. 
But John Preskill, a professor of theoretical physics at Caltech, says the topological qubit remains nothing more than a nice theory. “I’m very fond of the idea, but after some years of serious effort there’s still no firm evidence,” he says. Competitive Physics At Bell Labs in New Jersey, Bob Willett says he has seen the evidence. He peers over his glasses at a dull black crystal rectangle the size of a fingertip. It has hand-soldered wires around its edges and fine zigzags of aluminum on its surface. And in the middle of the chip, in an area less than a micrometer across, Willett reports detecting non-Abelian anyons. If he is right, Willett is farther along than anyone who is working with Microsoft. And in his series of small, careworn labs, he is now preparing to build what—if it works—will be the world’s first topological qubit. “We’re making the transition from the science to the technology now,” he says. His effort has historical echoes. Down the corridor from his labs is a glass display case with the first transistor inside, made on this site in 1947. Willett’s device is a version of a design that Microsoft has mostly given up on. By the time the company’s project began, Freedman and his collaborators had determined that it should be possible to build a topological qubit using crystals of ultrapure gallium arsenide that trap electrons. But in four years of experiments, the physics labs supported by Microsoft didn’t find conclusive evidence of non-Abelian anyons. Willett had worked on similar physics for years, and after reading a paper of Freedman’s on the design, he decided to have a go himself. In a series of papers published between 2009 and 2013, he reported finding those crucial particles in his own crystal-based devices. When one crystal is cooled with liquid helium to less than 1 Kelvin (−272.15 °C) and subjected to a magnetic field, an electron liquid forms at its center. Willett uses electrodes to stream the particles around its edge; if they are non-Abelian anyons looping around their counterparts in the center, they should change the topological state of the electron liquid as a whole. He has published results from several different experiments in which he saw telltale wobbles, which theorists had predicted, in the current of those flowing particles. He’s now moved on to building a qubit design. It is not much more complex than his first experiment: just two of the same circuits placed back to back on the same crystal, with extra electrodes that link electron liquids and can encode and read out quantum states that represent the equivalent of 0 s and 1 s. Willett hopes that device will squelch skepticism about his results, which no one else has been able to replicate. Microsoft’s collaborator Charlie Marcus says Willett “saw signals that we didn’t see.” Willett counters that Marcus and others have made their devices too large and used crystals with important differences in their properties. He says he recently confirmed that by testing some devices made to the specifications used by other researchers. “Having worked with the materials they’re working with, I can see why they stopped doing it, because it is a pain in the ass,” he says. Bell Labs, now owned by the French telecommunications company Alcatel-Lucent, is smaller and poorer than it was back when AT&T, unchallenged as the American telephone monopoly, let many researchers do pretty much anything they desired. 
Some of Willett’s rooms overlook the dusty, scarred ground left when an entire wing of the lab was demolished this year. But with fewer people around than the labs had long ago, it’s easier to get access to the equipment he needs, he says. And Alcatel has begun to invest more in his project. Willett used to work with just three other physicists, but recently he began collaborating with mathematicians and optics experts too. Bell Labs management has been asking about the kinds of problems that might be solved with a small number of qubits. “It’s expanding into a relatively big effort,” he says. Willett sees himself as an academic colleague of the Microsoft researchers rather than a corporate competitor, and he still gets invited to Freedman’s twice-yearly symposiums that bring Microsoft collaborators and other leading physicists to Santa Barbara. But Microsoft management has been more evident at recent meetings, Willett says, and he has sometimes felt that his being from another corporation made things awkward. It would be more than just awkward if Willett beat Microsoft to proving that the idea it has championed can work. For Microsoft to open up a practical route to quantum computing would be surprising. For the withered Bell Labs, owned by a company not even in the computing business, it would be astounding. Quantum Code On Microsoft’s leafy campus in Redmond, Washington, thousands of software engineers toil to fix bugs and add features to Windows and Microsoft Office. Tourists pose in the company museum for photos with a life-size cutout of a 1978 Bill Gates and his first employees. In the main research building, Krysta Svore leads a dozen people working on software for computers that may never exist. The team is figuring out what the first generation of quantum computers could do for us. The group was established because although quantum computers would be powerful, they cannot solve every problem. And only a handful of quantum algorithms have been developed in enough detail to suggest that they could be practical on real hardware. “Quantum computing is possibly very disruptive, but we need to understand where the power is,” Svore says. “We believe that there’s a chance to do something that could be the foundation of a whole new economy.” No quantum computer is ever going to fit into your pocket, because of the way qubits need to be supercooled (unless, of course, someone uses a quantum computer to design a better qubit). Rather, they would be used like data centers or supercomputers to power services over the Internet, or to solve problems that allow other technologies to be improved. One promising idea is to use quantum computers for superpowered chemistry simulations that could accelerate progress on major problems in areas such as health or energy. A quantum computer could simulate reality so precisely that it could replace years of plodding lab work, says Svore. Today roughly a third of U.S. supercomputer time is dedicated to simulations for chemistry or materials science, according to the Department of Energy. Svore’s group has developed an algorithm that would let even a first-generation quantum computer tackle much more complex problems, such as virtually testing a catalyst for removing carbon dioxide from the atmosphere, in just hours or minutes. “It’s a potential killer application of quantum computers,” she says. But it’s possible to envision countless other killer applications. 
Svore’s group has produced some of the first evidence that quantum computers can be used for machine learning, a technology increasingly central to Microsoft and its rivals. Recent advances in image and speech recognition have triggered a frenzy of new research in artificial intelligence. But they rely on clusters of thousands of computers working together, and the results still lag far behind human capabilities. Quantum computers might overcome the technology’s limitations. Work like that helps explain how the first company to build a quantum computer might gain an advantage virtually unprecedented in the history of technology. “We believe that there’s a chance to do something that could be the foundation of a whole new economy,” says Microsoft’s Peter Lee. As you would expect, he and all the others working on quantum hardware say they are optimistic. But with so much still to do, the prize feels as distant as ever. It’s as if qubit technology is in a superposition between changing the world and decohering into nothing more than a series of obscure research papers. That’s the kind of imponderable that people working on quantum technology have to handle every day. But with a payoff so big, who can blame them for taking a whack at it? This story was updated on October 10 to delete an erroneous reference to a bust of Thomas Edison.
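The state-space bookkeeping behind the article's description of superposition and entanglement can be made concrete with a few lines of NumPy. This is a minimal illustrative sketch of a two-qubit Bell state simulated on a classical machine, not a model of the topological hardware discussed above; it simply shows why the classical cost of tracking a quantum state doubles with every added qubit.

```python
# Toy state-vector illustration of superposition and entanglement.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT: creates entanglement
I = np.eye(2)

# Two qubits start in |00>, i.e. the 4-amplitude vector (1, 0, 0, 0).
state = np.array([1, 0, 0, 0], dtype=complex)

# Hadamard on the first qubit, then CNOT, yields the entangled Bell state
# (|00> + |11>) / sqrt(2): measuring one qubit fixes the other.
state = CNOT @ (np.kron(H, I) @ state)
print(np.round(state, 3))   # amplitudes ≈ [0.707, 0, 0, 0.707]

# The classical bookkeeping doubles with every qubit -- the scaling that
# makes even a few hundred reliable qubits interesting.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes")
```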
"
1,174
2,018
"Otoy's RNDR harnesses 14,000 GPUs to render cloud-based graphics | VentureBeat"
"https://venturebeat.com/2018/10/17/otoys-rndr-harnesses-14000-gpus-to-render-cloud-based-graphics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Otoy’s RNDR harnesses 14,000 GPUs to render cloud-based graphics Share on Facebook Share on X Share on LinkedIn Otoy's OctaneRender, a complex Nvidia CUDA application, automatically runs on an AMD FirePro GPU (W9100) without changing a single line of code: Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Blockchain-based rendering platform RNDR has harnessed the graphics processing power of more than 14,000 graphics processing units (GPU) provided by individuals who contribute the GPUs to the cloud network. Los Angeles-Based Otoy , which created software used to make special effects in Westworld and The Avengers, wanted to enable everyday content creators to make awesome graphics that we all admire in video games and movies. So it launched RNDR, which uses cloud, blockchain, and cryptocurrency technologies to marshal millions of unused PCs and their graphics capabilities to quickly render cool images for the everyday content creator. The goal is to take the cost, time, and labor out of the process by creating an economy for 3D assets, which could be rendered via the shared hardware, hosted in the cloud, and then sold and traded in a decentralized fashion. RNDR pays the computer owners for their contributions to the cloud rendering. During the second quarter, Otoy did a beta test survey of 1,200 contributors that supports its conclusion that its global decentralized network of GPUs is powering is the world’s largest cloud network of its kind. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Color Noise by Linus Zoll. The RNDR network said that 11,000 of the 14,000 GPUs are still compatible with the platform, yielding a combined rendering power of 1.5 million OctaneBench (OB), which is a benchmarking utility for Otoy’s OctaneRender software. For comparison, a 0.60 cents/hour G2 instance on Amazon Web Services (AWS) delivers around 37 OB, and a centralized version of the RNDR service, which launched in 2013 on Amazon G2 instances in partnership with AWS, Autodesk, and later, Google and other major cloud providers, is capacity limited at under 1 million OB. RNDR’s decentralized technology allows the network to utilize the computing power of everyday GPU owners distributed across the world, eliminating ceilings related to capacity and network speed that exist within systems currently offered by the biggest cloud players. 
The project has received early support from industry luminaries such as Hollywood producer JJ Abrams, and famed talent agent Ari Emanuel. “We have been working for quite some time now to build the RNDR network and have reached unprecedented levels in cloud computing, putting us at the top with the largest tech companies in the world,” said Jules Urbach, CEO of Otoy, in a statement. “Our vision when we first began this journey was to scale and democratize rendering, creating more efficient processes and to reach not only high-power Hollywood studios, but also everyday content creators who might not otherwise have access to this technology. ” He added, “RNDR is the key to ushering in the increasingly virtual future of entertainment — from AR to VR to video games to film. This significant milestone further demonstrates that we are that much closer to reaching our goal, and we can’t wait to show the world what our platform can do in the near future.” In an email, Urbach said, “In one week, we have amassed more GPU rendering power, from small individual users on the RNDR network, than we have ever been able to offer in the five years since we launched Otoy’s public cloud services. This indicates that our network will only continue to grow and expand, especially once we allow larger mining facilities onto the network, and also offer MESA/MPAA certification guidelines for studio work. The cost and scale of GPU rendering and compute has been turned on its head through the RNDR platform, just 10 years after I first made the case for this with the CEO of AMD on stage at CES 2009. Through RNDR, we now have the capacity to process jobs for holographic rendering (100 times the compute of VR or film rendering) for games and volumetric media (such as for our partnership with Facebook 360).” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,175
2,018
"Otoy's RNDR brings sophisticated graphics rendering to masses via blockchain | VentureBeat"
"https://venturebeat.com/2018/07/12/otoys-rndr-brings-sophisticated-graphics-rendering-to-masses-via-blockchain"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Otoy’s RNDR brings sophisticated graphics rendering to masses via blockchain Share on Facebook Share on X Share on LinkedIn Color Noise by Linus Zoll. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. We all love the great images and special effects in our favorite films, video games, and animated TV shows. But it’s still a pretty difficult and complex process to create these fanciful scenes that are better than real life. Hollywood studios can do it with colossal budgets, as can video game publishers with huge teams. But Los Angeles-Based Otoy , which created software used to make special effects in Westworld and The Avengers, wants to enable everyday content creators to do it too. And so it has launched RNDR, which uses cloud, blockchain, and cryptocurrency technologies to marshal millions of unused PCs and their graphics capabilities to quickly render cool images for the everyday content creator. The goal is to take the cost, time, and labor out of the process by creating an economy for 3D assets, which could be rendered via the shared hardware, hosted in the cloud, and then sold and traded in a decentralized fashion. Jules Urbach, CEO of Otoy (maker of the Octane graphics renderer ), believes that blockchain can be useful in this collective rendering machine. Last fall, the company held an Initial Coin Offering for Render Token. It’s a blockchain-based currency that people can invest in, as it represents a distributed graphics processing unit (GPU) rendering network. Everyone with a computer can contribute the spare cycles to a collective rendering machine when their computers aren’t being used. Artists can submit their work to be rendered, and then these computers will get their job done. Above: Jules Urbach, CEO of Otoy Back in the fall, Urbach said in an interview with GamesBeat, “Render is a way of solving a problem that I foresaw years ago. I had always imagined, five or 10 years down the line, that things would be getting built through Octane for rendering. The rendering power for that needed to be more than one person could provide or even a couple. And so I came up with a few ideas and patents around getting this to run through the millions of graphics cards that were out there. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! 
RNDR is now available, granting creators access to the largest graphics processing unit (GPU) cloud network in the world, from the comfort of their bedrooms. Above: Otoy and blockchain. Otoy hopes that RNDR will help fill the growing demand for virtual reality, augmented reality, mixed reality, and other complex graphics, and make these tools more accessible to the broader community of creators. RNDR has officially launched Phase II of its platform to the public. By leveraging a distributed network of idle GPUs across its peer-to-peer network, RNDR makes it possible to scale rendering speed and simplify the transactional process of rendering and streaming 3D environments, models, and objects. RNDR's team and advisory board include notable players such as Hollywood director and producer J.J. Abrams; Brendan Eich, founder of Brave and Basic Attention Token; and famed talent agent Ari Emanuel. The network allows any individual to anonymously lend their GPU for rendering tasks in exchange for RNDR tokens, providing as little information as a cryptocurrency wallet address. “Currently less than 1 percent of the world's GPU power is accessible to creators, leaving a huge gap of wasted idle computing power in addition to stifling innovation and prohibiting the creation of incredibly complex graphics,” said Kalin Stoyanchev, head of blockchain and project lead at RNDR, in a statement. “Our goal is to have the demand covered with no expense in terms of hardware, allowing for accessible costs and the democratization of rendering resources, and with today's launch of the Render Network, we are taking a significant step toward providing access to exponentially more computing power, at a more affordable cost to creators, with the utmost security of their digital asset ownership.” Above: Godrays. This is an animated rendering by Otoy. Once a rendering job begins, a creator's payment of RNDR tokens will be held in escrow until the job is completed, at which time the tokens will be approved for withdrawal. Throughout the rendering process, users can watch the status of a job and updates of the render, such as scene previews, or maximize the preview mode to see finer detail in the image. As the job progresses, RNDR token usage increases, and once the job is complete, frames can be downloaded and the token transfer occurs. Following today's launch, RNDR's next phase of development will be devoted to expanding its global partnerships and growing the platform to reach rendering-streaming through smart contracts and blockchain technology. "
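The escrow flow described above — tokens locked when a job starts, watchable progress while it renders, release on completion — can be sketched as a small state machine. The Python below is purely illustrative under those assumptions; the class, states, and fields are invented for the example and are not RNDR's actual smart-contract code.

```python
# Illustrative sketch of a token-escrow lifecycle for a render job.
from enum import Enum, auto

class JobState(Enum):
    SUBMITTED = auto()
    RENDERING = auto()
    COMPLETE  = auto()

class EscrowedRenderJob:
    def __init__(self, creator_wallet: dict, price_tokens: float):
        self.creator = creator_wallet        # e.g. {"balance": 100.0}
        self.price = price_tokens
        self.escrow = 0.0
        self.provider_payout = 0.0
        self.frames_done = 0
        self.frames_total = 0
        self.state = JobState.SUBMITTED

    def start(self, frames_total: int):
        # Lock the creator's tokens for the duration of the job.
        if self.creator["balance"] < self.price:
            raise ValueError("insufficient tokens")
        self.creator["balance"] -= self.price
        self.escrow = self.price
        self.frames_total = frames_total
        self.state = JobState.RENDERING

    def report_progress(self, frames_done: int):
        # The creator can watch status while tokens stay in escrow.
        self.frames_done = min(frames_done, self.frames_total)
        if self.state is JobState.RENDERING and self.frames_done == self.frames_total:
            self._complete()

    def _complete(self):
        # Only now are the escrowed tokens approved for withdrawal.
        self.provider_payout += self.escrow
        self.escrow = 0.0
        self.state = JobState.COMPLETE

wallet = {"balance": 100.0}
job = EscrowedRenderJob(wallet, price_tokens=40.0)
job.start(frames_total=240)
job.report_progress(240)
print(wallet["balance"], job.provider_payout, job.state)  # 60.0 40.0 JobState.COMPLETE
```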
1,176
2,018
"Light Field Lab raises $7 million for Holodeck-like holographic displays | VentureBeat"
"https://venturebeat.com/2018/01/25/light-field-lab-raises-7-million-for-holodeck-like-holographic-displays"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Light Field Lab raises $7 million for Holodeck-like holographic displays Share on Facebook Share on X Share on LinkedIn Light Field Lab Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Light Field Lab has raised $7 million for holographic display technologies that the company says will make you wonder whether what you’re seeing is real or an illusion. The money comes from Khosla Ventures and Sherpa Capital, with participation from R7 Partners. It’s a large amount of money for a seed round, but the startup is very ambitious. San Jose, California-based Light Field Lab will use the funding to complete a prototype of its light-field display system, which it says will enable real holographic objects to appear as if they are floating in space without the aid of accessories or head-mounted gear. Jon Karafin, CEO of Light Field Lab, said in an interview with GamesBeat that the hope is to eventually create something akin to the fictional Star Trek Holodeck , where illusion and reality are indistinguishable. In the near term, the company is creating prototype displays that will show very high-resolution images in a 3D space. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “We are very excited to work with these investors,” Karafin said. “We are building a truly holographic projection wall.” The walls could be placed on floors or ceilings, and then they would project holographic images into a 3D space. “Light Field Lab has the potential to change the way we view and interact with media,” said Khosla Ventures founder Vinod Khosla, in a statement. “This is essentially the holy grail of optical display technology, enabling things that seem like science fiction to be possible today. We are thrilled to be in on the ground floor with the team, and look forward to helping evolve this exciting technology.” The initial modules are 6 inches by 4 inches, and they can project images into a 3D space. The production modules will be something like 2 feet by 2 feet, and they will have a resolution of 16K by 10K, far more dense than the 4K, two-dimensional screens we use today in high-end TVs, Karafin said. “When you get these types of resolutions, you are no longer able to tell the difference between the real and the synthetic,” he said. “When you look at a display, you know it is a display. 
This is a true window into a world.” Those 2-feet-by-2-feet modules will be stitched together to make 100-feet wide screens, with huge images that could be used at venues such as theme park attractions, concerts, and other events. The initial customers in the space will be theme parks or location-based entertainment, Karafin said. “Projecting holograms is just the beginning,” said Karafin. “We are building the core modules to enable a real-world Holodeck. The strategic guidance offered by our investors is critical to enable these breakthrough technologies.” Light Field Lab will target its real-world holographic experiences at both professional and consumer markets. Eventually, it hopes to build holographic video walls with hundreds of gigapixels of resolution. The company was founded in 2017 by Karafin, Brendan Bevensee, and Ed Ibe. But Karafin said he had been thinking about the challenge for a decade. The company has a handful of employees and contractors now, and it would expand during 2018. The team had experience working at light field capture and display maker Lytro in the past. “Our core premise is to take the accessories, like glasses and headsets, off the body,” Karafin said. “They don’t give you a true immersive experience. With us, you can project out life-size things that are directly in the room with you. It is as if you have a digital blank canvas and can transport anyone to any world. When you have that, you have the Holodeck. That is what we are building toward.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,177
2,004
"The secret origin story of the iPhone - The Verge"
"https://www.theverge.com/2017/6/13/15782200/one-device-secret-history-iphone-brian-merchant-book-excerpt"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech The secret origin story of the iPhone An exclusive excerpt from The One Device By Brian Merchant Illustrations by William Joel and Garret Beard Jun 13, 2017, 2:00 PM UTC | Comments Share this story If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. This month marks 10 years since Apple launched the first iPhone, a device that would fundamentally transform how we interact with technology, culture, and each other. Ahead of that anniversary, Motherboard editor Brian Merchant embarked on an investigation to uncover the iPhone’s untold origin. The One Device: The secret history of the iPhone , out on June 20th, traces that journey from Kenyan mines to Chinese factories all the way to One Infinite Loop. The following excerpt has been lightly condensed and edited. If you worked at Apple in the mid-2000s, you might have noticed a strange phenomenon afoot: people were disappearing. It happened slowly at first. One day there’d be an empty chair where a star engineer used to sit. A key member of the team, gone. Nobody could tell you exactly where they went. “I had been hearing rumblings about, well, it was unclear what was being built, but it was clear that a lot of the best engineers from the best teams had been slurped over to this mysterious team,” says Evan Doll, who was then a software engineer at Apple. Here’s what was happening to those star engineers. First, a couple of managers had shown up in their office unannounced and closed the door behind them. Managers like Henri Lamiraux, a director of software engineering, and Richard Williamson, a director of software. One such star engineer was Andre Boule. He’d been at the company only a few months. “Henri and I walked into his office,” Williamson recalls, “and we said, ‘Andre, you don’t really know us, but we’ve heard a lot about you, and we know you’re a brilliant engineer, and we want you to come work with us on a project we can’t tell you about. And we want you to do it now. Today.’ ” Boule was incredulous, then suspicious. “Andre said, ‘Can I have some time to think about it?’ ” Williamson says. “And we said, ‘No.’ ” They wouldn’t, and couldn’t, give him any more details. Still, by the end of the day, Boule had signed on. “We did that again and again across the company,” Williamson says. Some engineers who liked their jobs just fine said no, and they stayed in Cupertino. Those who said yes, like Boule, went to work on the iPhone. And their lives would never be the same — at least, not for the next two and a half years. Not only would they be working overtime to hammer together the most influential piece of consumer technology of their generation, but they’d be doing little else. Their personal lives would disappear, and they wouldn’t be able to talk about what they were working on. Steve Jobs “didn’t want anyone to leak it if they left the company,” says Tony Fadell, one of the top Apple executives who helped build the iPhone. “He didn’t want anyone to say anything. He just didn’t want — he was just naturally paranoid.” More on The One Device / Hear Nilay Patel interview Brian Merchant about the reporting behind The One Device on a special episode of the Vergecast. Jobs told Scott Forstall, who would become the head of the iPhone software division, that even he couldn’t breathe a word about the phone to anyone, inside Apple or out, who wasn’t on the team. 
“He didn’t want, for secrecy reasons, for me to hire anyone outside of Apple to work on the user interface,” Forstall said. “But he told me I could move anyone in the company into this team.” So he dispatched managers like Henri and Richard to find the best candidates. And he made sure potential recruits knew the stakes upfront. “We’re starting a new project,” he told them. “It’s so secret, I can’t even tell you what that new project is. I cannot tell you who you will work for. What I can tell you is if you choose to accept this role, you’re going to work harder than you ever have in your entire life. You’re going to have to give up nights and weekends probably for a couple years as we make this product.” And “amazingly,” as Forstall put it, some of the top talent at the company signed on. “Honestly, everyone there was brilliant,” Williamson tells me. That team — veteran designers, rising programmers, managers who’d worked with Jobs for years, engineers who’d never met him — would end up becoming one of the great, unheralded creative forces of the twenty-first century. One of Apple’s greatest strengths is that it makes its technology look and feel easy to use. There was nothing easy about making the iPhone, though its inventors say the process was often exhilarating. “The iPhone is the reason I’m divorced.” Forstall’s prediction to the iPhone team would be borne out. “The iPhone is the reason I’m divorced,” Andy Grignon, a senior iPhone engineer, tells me. I heard that sentiment more than once throughout my dozens of interviews with the iPhone’s key architects and engineers. “Yeah, the iPhone ruined more than a few marriages,” says another. “It was really intense, probably professionally one of the worst times of my life,” Grignon says. “Because you created a pressure cooker of a bunch of really smart people with an impossible deadline, an impossible mission, and then you hear that the future of the entire company is resting on it. So it was just like this soup of misery,” Grignon says. “There wasn’t really time to kick your feet back on the desk and say, ‘This is going to be really fucking awesome one day.’ It was like, ‘Holy fuck, we’re fucked.’ Every time you turned around there was some just imminent demise of the program just lurking around the corner.” Making the iPhone The iPhone began as a Steve Jobs–approved project at Apple around the end of 2004. But its DNA began coiling long before that. “I think a lot of people look at the form factor and they think it’s not just like any other computer, but it is — it’s just like any other computer,” Williamson says. “In fact, it’s more complex, in terms of software, than many other computers. The operating system on this is as sophisticated as the operating system on any modern computer. But it is an evolution of the operating system we’ve been developing over the last thirty years.” Like many mass-adopted, highly profitable technologies, the iPhone has a number of competing origin stories. There were as many as five different phone or phone-related projects — from tiny research endeavors to full-blown corporate partnerships — bubbling up at Apple by the middle of the 2000s. But if there’s anything I’ve learned in my efforts to pull the iPhone apart, literally and figuratively, it’s that there are rarely concrete beginnings to any particular products or technologies — they evolve from varying previous ideas and concepts and inventions and are prodded and iterated into newness by restless minds and profit motives. 
Even when the company’s executives were under oath in a federal trial, they couldn’t name just one starting place. “There were many things that led to the development of the iPhone at Apple,” Phil Schiller, senior vice president of worldwide marketing, said in 2012. “First, Apple had been known for years for being the creator of the Mac, the computer, and it was great, but it had small market share,” he said. “And then we had a big hit called the iPod. It was the iPod hardware and the iTunes software. And this really changed everybody’s view of Apple, both inside and outside the company. And people started asking, Well, if you can have a big hit with the iPod, what else can you do? And people were suggesting every idea, make a camera, make a car, crazy stuff.” And make a phone, of course. Open the Pod Bay Doors When Steve Jobs returned to take the helm of a flailing Apple in 1997, he garnered acclaim and earned a slim profit by slashing product lines and getting the Mac business back on track. But Apple didn’t reemerge as a major cultural and economic force until it released the iPod, which would mark its first profitable entry into consumer electronics and become a blueprint and a springboard for the iPhone in the process. “There would be no iPhone without the iPod,” says Tony Fadell, who helped build both of them. Fadell, sometimes dubbed “the Podfather” by the media, was a driving force in creating Apple’s first bona fide hit device in years, and he’d oversee hardware development for the iPhone. As such, there are few better people to explain the bridge between the two hit devices. We met at Brasserie Thoumieux, a swank eatery in Paris’s gilded seventh arrondissement, where he was living at the time. He’s been called “Tony Baloney,” and one former Apple exec advised me “not to believe a single word Tony Fadell says.” Fadell is a looming figure in modern Silicon Valley lore, and he’s divisive in the annals of Apple. Brian Huppi and Joshua Strickon, key members of Apple’s input engineering team, who’d prototyped the earliest drafts of the iPhone, praise him for his audacious, get-it-done management style (“Don’t take longer than a year to ship a product” is one of his credos) and for being one of the few people strong enough of will to stand up to Steve Jobs. Others chafe at the credit he takes for his role in bringing the iPod and iPhone to market; he’s been called “Tony Baloney,” and one former Apple exec advised me “not to believe a single word Tony Fadell says.” After he left Apple in 2008, he co-founded Nest, a company that crafted smart home gadgets, like learning thermostats, which was later acquired by Google for $3.2 billion. Right on time, Fadell strode in; shaved head save for some stubble, icy blue eyes, snug sweater. He was once renowned for his cyberpunk style, his rebellious streak, and a fiery temper that was often compared to Jobs’s. Fadell is still undeniably intense, but here, speaking easy French to the waitstaff, he was smack in the overlap of a Venn diagram showing Mannered Parisian Elite and Brash Tech Titan. “The genesis of the iPhone, was — well, let’s get started with — was iPod dominance,” Fadell says. “It was fifty percent of Apple’s revenue.” But when iPods initially shipped in 2001, hardly anyone noticed them. “It took two years,” Fadell says. “It was only made for the Mac. It was less than one percent market share in the U.S. 
They like to say ‘low single digits.’ ” Consumers needed iTunes software to load and manage the songs and playlists, and that software ran only on Macs. “Over my dead body are you gonna ship iTunes on a PC,” Steve Jobs told Fadell, he says, when Fadell pushed the idea of offering iTunes on Windows. Nonetheless, Fadell had a team secretly building out the software to make iTunes compatible with Windows. “It took two years of failing numbers before Steve finally woke up. Then we started to take off, then the music store was able to be a success.” That success put iPods in the hands of hundreds of millions of people— more than had ever owned Macs. Moreover, the iPod was hip in a fashionably mainstream way that lent a patina of cool to Apple as a whole. Fadell rose in the executive ranks and oversaw the new product division. Launched in 2001, a hit by 2003, the iPod was deemed vulnerable as early as 2004. The mobile phone was seen as a threat because it could play MP3s. “So if you could only carry one device, which one would you have to choose?” Fadell says. “And that’s why the Motorola Rokr happened.” Rokring Out In 2004, Motorola was manufacturing one of the most popular phones on the market, the ultrathin Razr flip phone. Its new CEO, Ed Zander, was friendly with Jobs, who liked the Razr’s design, and the two set about exploring how Apple and Motorola might collaborate. (In 2003, Apple execs had considered buying Motorola outright but decided it’d be too expensive.) Thus the “iTunes phone” was born. Apple and Motorola partnered with the wireless carrier Cingular, and the Rokr was announced that summer. Publicly, Jobs had been resistant to the idea of Apple making a phone. “The problem with a phone,” Steve Jobs said in 2005, “is that we’re not very good going through orifices to get to the end users.” By orifices , he meant carriers like Verizon and AT&T, which had final say over which phones could access their networks. “Carriers now have gained the upper hand in terms of the power of the relationship with the handset manufacturers,” he continued. “So the handset manufacturers are really getting these big thick books from the carriers telling them here’s what your phone’s going to be. We’re not good at that.” Jobs “wasn’t convinced that smartphones were going to be for anyone but the ‘pocket protector crowd.’” Privately, Jobs had other reservations. One former Apple executive who had daily meetings with Jobs told me that the carrier issue wasn’t his biggest hang-up. He was concerned with a lack of focus in the company, and he “wasn’t convinced that smartphones were going to be for anyone but the ‘pocket protector crowd,’ as we used to call them.” Partnering with Motorola was an easy way to try to neutralize a threat to the iPod. Motorola would make the handset; Apple would do the iTunes software. “It was, How can we make it a very small experience, so they still had to buy an iPod? Give them a taste of iTunes and basically turn it into an iPod Shuffle so that they’ll want to upgrade to an iPod. That was the initial strategy,” Fadell says. “It was, ‘Let’s not cannibalize the iPod because it’s going so well.’ ” As soon as the collaboration was made public, Apple’s voracious rumor mill started churning. With an iTunes phone on the horizon, blogs began feeding the anticipation for a transformative mobile device that had been growing for some time already. Inside Apple, however, expectations for the Rokr could not have been lower. “We all knew how bad it was,” Fadell says. 
“They’re slow, they can’t get things to change, they’re going to limit the songs.” Fadell laughs aloud when discussing the Rokr today. “All of these things were coming together to make sure it was really a shitty experience.” But there may have been another reason that Apple’s executives were tolerating the Rokr’s unfurling shittiness. “Steve was gathering information during those meetings” with Motorola and Cingular, Richard Williamson says. He was trying to figure out how he might pursue a deal that would let Apple retain control over the design of its phone. He considered having Apple buy its own bandwidth and become its own mobile virtual network operator, or MVNO. Apple approached Verizon, but the two companies were unable to ink a deal; telecoms still wanted too much control over how a handset was designed. An executive at Cingular, meanwhile, began to cobble together an alternative deal Jobs might actually embrace: Give Cingular exclusivity, and we’ll give you complete freedom over the device. Fix What You Hate From Steve Jobs to Jony Ive to Tony Fadell to Apple’s engineers, designers, and managers, there’s one part of the iPhone mythology that everyone tends to agree on: Before the iPhone, everyone at Apple thought cell phones “sucked.” They were “terrible.” Just “pieces of junk.” We’ve already seen how Jobs felt about phones that dropped calls. “Apple is best when it’s fixing the things that people hate,” Greg Christie, who was head of Apple’s Human Interface Group at the time, tells me. Before the iPod, nobody could figure out how to use a digital music player; as Napster boomed, people took to carting around skip-happy portable CD players loaded with burned albums. And before the Apple II, computers were mostly considered too complex and unwieldy for the layperson. “For at least a year before starting on what would become the iPhone project, even internally at Apple, we were grumbling about how all of these phones out there were all terrible,” says Nitin Ganatra, who managed Apple’s email team before working on the iPhone. It was water-cooler talk. But it reflected a growing sense inside the company that since Apple had successfully fixed — transformed, then dominated — one major product category, it could do the same with another. “At the time,” Ganatra says, “it was like, ‘Oh my God, we need to go in and clean up this market too — why isn’t Apple making a phone?’ ” Calling All Pods Andy Grignon was restless. The versatile engineer had been at Apple for a few years, working in different departments on various projects. He’s a gleefully imposing figure— shaved-head bald, cheerful, and built like a friendly bear. He had a hand in everything from creating the software that powered the iPod to working on the software for a videoconferencing program and iChat. He’d become friends with rising star Tony Fadell when they’d built the iSight camera together. After wrapping up another major project — writing the Mac feature Dashboard, which Grignon affectionately calls “his baby” (it’s the widget-filled screen with the calculator and the calendar and so on) — he was looking for something fresh to do. “Fadell reached out and said, ‘Do you want to come join iPod? We’ve got some really cool shit. I’ve got this other project I really want to do but we need some time before we can convince Steve to do it, and I think you’d be great for it.’” Grignon is boisterous and hardworking. He’s also got a mouth like a Silicon Valley sailor. “So I left,” Grignon says, “to work on this mystery thing. 
So we just kind of spun our wheels on some wireless speakers and shit like that, but then the project started to materialize. Of course what Fadell was talking about was the phone.” Fadell knew Jobs was beginning to come around to the idea, and he wanted to be prepared. “We had this idea: Wouldn’t it be great to put WiFi in an iPod?” Grignon says. Throughout 2004, Fadell, Grignon, and the rest of the team worked on a number of early efforts to fuse iPod and internet communicator. It was also the first time Steve Jobs had seen the internet running on an iPod. “And he was like, ‘This is bullshit.’” “That was one of the very first prototypes I showed Steve. We gutted an iPod, we had hardware add in a WiFi part, so it was a big plastic piece of junk, and we modified the software.” There were click-wheel iPods that could clumsily surf the web as early as 2004. “You would click the wheel, you would scroll the web page, and if there was a link on the page, it would highlight it, and you could click on it and you could jump in,” Grignon says. “That was the very first time where we started experimenting with radios in the form factor.” It was also the first time Steve Jobs had seen the internet running on an iPod. “And he was like, ‘This is bullshit.’ He called it right away… ‘I don’t want this. I know it works, I got it, great, thanks, but this is a shitty experience,’ ” Grignon says. Meanwhile, Grignon says, “The exec team was trying to convince Steve that building a phone was a great idea for Apple. He didn’t really see the path to success.” One of those trying to do the convincing was Mike Bell. A veteran of Apple, where he’d worked for fifteen years, and of Motorola’s wireless division, Bell was positive that computers, music players, and cell phones were heading toward an inevitable convergence point. For months, he lobbied Jobs to do a phone, as did Steve Sakoman, a vice president who had worked on the ill-fated Newton. “We were spending all this time putting iPod features in Motorola phones,” Bell says. “That just seemed ass-backwards to me. If we just took the iPod user experience and some of the other stuff we were working on, we could own the market.” It was getting harder to argue with that logic. The latest batches of MP3 phones were looking increasingly like iPod competitors, and new alternatives for dealing with the carriers were emerging. Meanwhile, Bell had seen Jony Ive’s latest iPod designs. On November 7, 2004, Bell sent Jobs a late-night email. “Steve, I know you don’t want to do a phone,” he wrote, “but here’s why we should do it: Jony Ive has some really cool designs for future iPods that no one has seen. We ought to take one of those, put some Apple software around it, and make a phone out if ourselves instead of putting our stuff on other people’s phones.” Jobs called him right away. They argued for hours, pushing back and forth. Bell detailed his convergence theory — no doubt mentioning the fact that the mobile phone market was exploding worldwide — and Jobs picked it apart. Finally, he relented. “Okay, I think we should go do it,” he said. “So Steve and I and Jony and Sakoman had lunch three or four days later and kicked off the iPhone project.” Reviving the Apple Tablet At 2 Infinite Loop, an older touchscreen-tablet research project was still chugging along. Bas Ording, Imran Chaudhri, and company were still exploring the contours of a basic touch-focused user interface. One day, Bas Ording got a call from Steve. 
He said, “We’re gonna do a phone.” Years ago, a handful of input engineers and key designers had prototyped multitouch-focused interaction demos, followed by the Q79 tablet project — an experimental early stab at an iPad-like device. But a tangle of obstacles, not least of which was that it was too expensive, shut it down. (“You’ve got to give me something I can sell,” he told Imran.) But with a smaller screen and scaled-down system, Q79 might work as a phone. “It’s gonna have a small screen, it’s gonna be just a touchscreen, there’s not gonna be any buttons, and everything has to work on that,” Jobs told Ording. He asked the UI wiz to make a demo of scrolling through a virtual address book with multitouch. “I was super-excited,” Ording says. “I thought, Yeah, it seems kind of impossible, but it would be fun to just try it.” He sat down, “moused off” a phone-size section of his Mac’s screen, and used it to model the iPhone surface. He and a scant few other designers had spent years experimenting with touch-based user interfaces — and those years in the touchscreen wilderness were paying off. “We already had some other demos, a web page, for example — it was just a picture you could scroll with momentum,” Ording says. “That’s sort of how it started.” The famous effect where your screen bounces when you hit the top or bottom of a page was born because Ording couldn’t tell when he’d hit the top of a page. “I thought my program wasn’t running because I tried to scroll and nothing would happen,” he says, and then he’d realize he was scrolling in the wrong direction. “So that’s when I started to think, How can I make it so you can see or feel that you’re at the end? Right? Instead of feeling dead, like it’s not responding.” These small details, which we now take for granted, were the product of exhaustive tinkering, of proof-of-concept experimenting. Like inertial scrolling, the wonky-sounding but now-universal effect that makes scrolling down your contact list feel satisfyingly tactile; the names fly by in a burst after you swipe down, then slow to a tick-tock as if bound by real-world physics. “I had to try all kinds of things and figure out some math,” Ording says. “Not all of it was complicated, but you have to get to the right combinations, and that’s the tricky thing.” Eventually, Ording got it to feel natural. “He called me back a few weeks later, and he had inertial scrolling working,” Jobs said. “And when I saw the rubber band, inertial scrolling, and a few of the other things, I thought, ‘My God, we can build a phone out of this.’ ” Scott Forstall walked into Greg Christie’s office near the end of 2004 and gave him the news too: Jobs wanted to do a phone. He’d been waiting about a decade to hear those words. Christie is intense and brusque; his stocky build and sharp eyes feel loaded with kinetic energy. He joined Apple in the 1990s, when the company was in a downward spiral, just to work on the Newton— then one of the most promising mobile devices on the market. Then, he’d even tried to push Apple to do a Newton phone. “I’m sure I proposed it a dozen times,” Christie says. “The internet was popping too— this is going to be a big deal: mobile, internet, phone.” Now, his Human Interface team — his knobs-and-dials crew — was about to embark on its most radical challenge yet. 
Its members gathered on the second floor of 2 Infinite Loop, right above the old user-testing lab, and set to work expanding the features, functionality, and look of the old ENRI tablet project. The handful of designers and engineers set up shop in a drab office replete with stained carpet, old furniture, a leaky bathroom next door, and little on the walls but a whiteboard and, for some reason, a poster of a chicken. Jobs liked the room because it was secure, windowless, tucked away from straying eyes. The CEO was already imbuing the nascent iPhone project with top-to-bottom secrecy. “You know, the cleaning crews weren’t allowed in here because there were these sliding whiteboards along the wall,” Christie says. The team would sketch ideas on them, and the good ones stayed put. “We wouldn’t erase them. They became part of the design conversation.” That conversation was about how to blend a touch-based UI with smartphone features. Fortunately, they’d had a head start. There were the ENRI crew’s multitouch demos, of course. But Imran Chaudhri had also led the design for Dashboard, which was full of widgets — weather, stocks, calculator, notes, calendar — that would be ideal for the phone. “The early idea for the phone was all about having these widgets in your pocket,” Chaudhri says. So they ported them over. The original design for many of those icons was actually created in a single night, back during the development of Dashboard. “It was one of those fucking crazy Steve deadlines,” Imran says, “where he wanted to see a demo of everything.” So he and Freddy Anzures, a recent hire to the HI team, spent a long night coming up with the rectilinear design concepts for the widgets — which would, years later, become the designs for the iPhone icons. “It’s funny, the look of smartphone icons for a decade to come was hashed out in a few hours.” And they had to establish the fundamentals; for instance, What should it look like when you fire up your phone? A grid of apps seems like the obvious way to organize a smartphone’s functions today — now that it’s like water, as Chaudhri says — but it wasn’t a foregone conclusion. “We tried some other stuff,” Ording says. “Like, maybe it’s a list of icons with the names after them.” But what came to be called Springboard emerged early on as the standard. “They were little Chiclets, basically,” Ording says. “Now, that’s Imran too, that was a great idea, and it looked really nice.” Chaudhri had the Industrial Design team make a few wooden iPhone-like mock-ups so they could figure out the optimal size of the icons for a finger’s touch. The multitouch demos were promising, and the style was coming together. But what the team lacked was cohesion — a united idea of what a touch-based phone would be. “It was really just sketches,” Christie says. “Little fragments of ideas, like tapas. A little bit of this, a little of that. Could be part of Address Book, a slice of Safari.” Tapas wouldn’t sate Jobs, obviously; he wanted a full course. So he grew increasingly frustrated with the presentations. “In January, in the New Year, he blows a gasket and tells us we’re not getting it,” Christie says. The fragments might have been impressive, but there was no narrative drawing the disparate parts together; it was a jumble of half-apps and ideas. There was no story. 
“It was as if you delivered a story to your editor and it was a couple of sentences from the introductory paragraph, a few from the body, and then something from the middle of the conclusion — but not the concluding statement.” It simply wasn’t enough. “Steve gave us an ultimatum,” Christie recalls. “He said, You have two weeks. It was February of 2005, and we kicked off this two-week death march.” So Christie gathered the HI team to make the case that they should all march with him. “Doing a phone is what I always wanted to do,” he said. “I think the rest of you want to do this also. But we’ve got two weeks for one last chance to do this. And I really want to do it.” He wasn’t kidding. For a decade, Christie had believed mobile computing was destined to converge with cell phones. This was his opportunity not only to prove he was right, but to drive the spark. The small team was on board: Bas, Imran, Christie, three other designers — Stephen LeMay, Marcel van Os, and Freddy Anzures — and a project manager, Patrick Coffman. They worked around the clock to tie those fragments into a full-fledged narrative. “We basically went to the mattresses,” Christie says. Each designer was given a fragment to realize — an app to flesh out — and the team spent two sleepless weeks perfecting the shape and feel of an inchoate iPhone. And at the end of the death march, something resembling the one device emerged from the exhausted fog of the HI floor. “I have no doubt that if I could resurrect that demo and show it to you now, you would have no problem recognizing it as an iPhone,” Christie says. There was a home button — still software-based at this point — scrolling, and the multitouch media manipulations. “We showed Steve the outline of the whole story. Showed him the home screen, showed him how a call comes in, how to go to your Address Book, and ‘this is what Safari looks like,’ and it was a little click through. It wasn’t just some clever quotes, it told a story.” And Steve Jobs did love a good story. “It was a smashing success,” Christie says. “He wanted to go through it a second time. Anyone who saw it thought it was great. It was great.” It meant that the project was immediately deemed top secret. After the February demo, badge readers were installed on either end of the Human Interface group’s hallway, on the second floor of 2 Infinite Loop. “It was lockdown,” Christie says. “That’s what you say when there’s a prison riot, right? That was the phrase. Yeah, we’re on lockdown.” It also meant they had a lot more work to do. If the touch interface research meetings were prologue, the tablet prototyping the beginning, then this was the second act of the iPhone, and there was much left to be written. But now that Jobs was invested in the narrative, he wanted to show it off, in high style, to the rest of the company. “We had this ‘big demo’ — that’s what we called it,” Ording says. Steve wanted to show the iPhone prototype at the Top 100 meeting inside Apple. “They have this meeting every once in a while with all the important people, saying what the direction of the company is,” Ording tells me. Jobs would invite the people he considered his top one hundred employees to a secret retreat, where they’d present and discuss upcoming products and strategies. For rising Applers, it was a make-or-break career opportunity. For Jobs, the presentations had to be as carefully calibrated as public-facing product launches. 
“From then until May, it was another brutal haul, to, well… come up with connecting paragraphs,” Christie says. “Okay, what are the apps we’re going to have? What should a calendar in your hand look like? Email? Every step on this journey was just making it more and more concrete and more real. Playing songs out of your iTunes. Media playback. iPhone software started as a design project in my hallway with my team.” Christie hacked the latest model of the iPod so the designers could get a feel for what the applications might look like on a device. The demo began to take shape. “You could tap on the mail app and see how that kind of works, and the web browser,” Ording says. “It wasn’t fully working, but enough that you could get the idea.” Christie uses one word to describe how the team toiled around the clock, you might have noticed, above all others. It was “brutal, grueling work. I put people in hotel rooms because I didn’t want them driving home. People crashed at my house,” he says, but “it was exhilarating at the same time.” Steve Jobs had been blown away by the results. And soon, so was everyone else. The presentation at Top 100 was another smash success. The Bod of an iPod When Fadell heard that a phone project was taking shape, he grabbed his own skunkworks prototype design of the iPod phone before he headed into an executive meeting. “There was a meeting where they were talking about the formation of the phone project on the team,” Grignon says. “Tony had [it] in his back pocket, a team already working on the hardware and the schematics, all the design for it. And once they got the approval for it from Steve, Tony was like, ‘Oh, hold on, as a matter of fact’— whoochaa! Like he whipped it out, ‘Here’s this prototype that we’ve been thinking about,’ and it was basically a fully baked design.” On paper, the logic looks impeccable: The iPod was Apple’s most successful product, phones were going to eat the iPod’s lunch, so why not an iPod phone? “Take the best of the iPod and put a phone in it,” Fadell says. “So you could do mobile communications and have your music with you, and we didn’t lose all the brand awareness we’d built into the iPod, the half a billion dollars we were spending getting that known around the world.” It was that simple. Remember that while it was becoming clear inside Apple that they were going to pursue a phone, it wasn’t clear at all what that phone should look or feel like. Or how it would work, on just about every level. “Early 2005, around that time frame, Tony started saying there’s talk about them doing a phone,” says David Tupman, who was in charge of iPod hardware at the time. “And I said, ‘I really want to do a phone. I’d like to lead that.’ He said, ‘No.’ ” Tupman laughs. “ ‘You can’t do that.’ But they did a bunch of interviews, and I guess they couldn’t find anybody, so I was like, ‘Hello, I’m still here!’ Tony was like, ‘Okay, you’re it.’ ” The iPod team wasn’t privy to what had been unfolding in the HI group. “We were gonna build what everyone thought we should build at the time: Let’s bolt a phone onto an iPod,” says Andy Grignon. And that’s exactly what they started to do. What’s It Going to Be? Richard Williamson found himself in Steve Jobs’s office. He’d gone in to discuss precisely the kind of thing that nobody wanted to discuss with Steve Jobs — leaving Apple. 
For years, he had been in charge of the team that developed the framework that powered Safari, called WebKit. Here’s a fun fact about WebKit: Unlike most products developed and deployed by Apple, it’s open-source. Here’s another: Until 2013, Google’s own Chrome browser was powered by WebKit too. It’s big-deal software, in other words. And Williamson was, as Forbes put it, “what’s commonly referred to as a ‘@#$ rock star’ in Silicon Valley.” But he was getting burned out on upgrading the same platform. “We had gone through three or four versions of WebKit, and I was thinking of moving to Google,” he says. “That’s when Steve invited me.” And Steve wasn’t happy. When you think “successful computer engineer,” the stock photo that springs to mind is pretty much what Williamson looks like — bespectacled, unrepentantly geekish, brainy, wearing a button-down shirt. We met for an interview at a Palo Alto sushi joint that eschewed waiters in favor of automated service via table-mounted iPads. Seemed fitting. Williamson is soft-spoken, with a light British accent. He seems affable but shy — there’s a slightly anxious undercurrent to his speech — and unmistakably sharp. He’s apt to rattle off ideas pulled from a deep knowledge of code, industry acumen, and the philosophy of technology, sometimes in the same breath. In the mid-‘80s, a friend convinced Williamson to start a company writing software for the Commodore Amiga, an early PC. “We wrote a program called Marauder, which was a program to make archival backups of copy-protected disks.” He laughs. “That’s kind of the diplomatic way of describing the program.” Basically, they created a tool that allowed users to pirate software. “So we had a little bit of a recurring revenue stream,” he says slyly. In 1985, Steve Jobs’s post-Apple company, NeXT, was still a small operation, and hungry for good engineers. There, Williamson met with two NeXT officers and one Steve Jobs. He showed them the work that he’d done on the Amiga, and they hired him on the spot. The young programmer would go on to spend the next quarter of a century in Jobs’s— and the NeXT team’s— orbit, working on the software that would become integral to the iPhone. “Don’t leave,” Jobs said, according to Williamson. “We’ve got a new project I think you might be interested in.” So Williamson asked to see it. “At this point, there was nobody on the project from a software perspective, it was all just kind of an idea in Steve’s mind.” It didn’t seem like a convincing reason for Williamson to pass up an enticing new offer. “Google was interested in giving me some very interesting work too, so it was a very pivotal moment,” he says. “So I said, ‘Well, the screen isn’t there, the display tech is kind of not really there.’ But Steve convinced me it was. That the path would be there.” Williamson pauses for a second. “It’s all true about Steve,” Williamson says with a quick smile. “I was with him since NeXT, and I’ve fallen under his glare many times.” What would it be, then? Of course, Williamson would stay. “So I became an advocate at that point of building a device to browse the web.” Which Phone “Steve wanted to do a phone, and he wanted to do it as fast as he could,” Williamson says. But which phone? 
There were two options: (a) take the beloved, widely recognizable iPod and hack it to double as a phone (that was the easier path technologically, and Jobs wasn’t envisioning the iPhone as a mobile computing device but as a souped-up phone), or (b) transmogrify a Mac into a tiny touch-tablet that made calls (which was an exciting idea but fraught with futuristic abstraction). “After the big demo,” Ording says, “the engineers started to look into, What would it take to actually make this real? On the hardware side but also the software side.” To say the engineers who first examined it were skeptical about its near-term viability would be an understatement. “They went, ‘Oh my God, this is— we don’t know, this is going to be a lot of work. We don’t even know how much work.’ ” There was so much that needed to be done to translate the multitouch Mac mass into a product, and one with so many new, unproven technologies, that it was difficult even to put forward a roadmap, to conceive of all of its pieces coming together. For Those About to Rokr Development had continued on the Rokr throughout 2005. “We all thought the Rokr was a joke,” Williamson says. The famously hands-on CEO didn’t see the finished Rokr until early September 2005, right before he was supposed to announce it to the world. And he was aghast. “He was like, ‘What else can we do, how can we fix it?’ He knew it was subpar but he didn’t know how bad it was going to be. When it finally got there, he didn’t even want to show it onstage because he was so embarrassed by it,” Fadell says. During the demonstration, Jobs held the phone like an unwashed sock. At one point the Rokr failed to switch from making calls to playing music, leaving him visibly agitated. So, at about the same moment that Jobs was announcing “the world’s first mobile phone with iTunes” to the media, he was resolving to make it obsolete. He helped by lavishing praise on the new iPod Nano, clearly elevating it as the star of the show and reportedly leaving Motorola execs fuming. “When he got offstage he was just like, ‘Ugh,’ really upset,” Fadell says. The Rokr was such a disaster that it landed on the cover of Wired with the headline “You Call This the Phone of the Future?” and it was soon being returned at a rate six times higher than the industry average. Its sheer shittiness took Jobs by surprise — and his anger helped motivate him to squeeze the trigger harder on an Apple-built phone. “It wasn’t when it failed. It was right after it launched,” Fadell says. “This is not gonna fly. I’m sick and tired of dealing with bozo handset guys,” Jobs told Fadell after the demo. “That was the ultimate thing,” Fadell says. “It was, ‘Fuck this, we’re going to make our own phone.’ ” “Steve called a big meeting in the boardroom,” Ording says. “Everyone was there, Phil Schiller and Jony Ive and whoever.” He said, “Listen. We’re going to change plans… We’re going to do this iPod-based thing, make that into a phone because that’s a much more doable project. More predictable.” That was Fadell’s project. The touchscreen effort wasn’t abandoned, but while the engineers worked on whipping it into shape, Jobs directed Ording, Chaudhri, and members of the UI team to design an interface for an iPod phone, a way to dial numbers, select contacts, and browse the web using that device’s tried-and-true click wheel. There were now two competing projects vying to become the iPhone — a “bake-off,” as some engineers put it. 
The two phone projects were split into tracks, code-named P1 and P2, respectively. Both were top secret. P1 was the iPod phone. P2 was the still-experimental hybrid of multitouch technology and Mac software. If there’s a ground zero for the political strife that would later come to engulf the project, it’s likely here, in the decision to split the two teams — Fadell’s iPod division, which was still charged with updating that product line in addition to prototyping the iPod phone, and Scott Forstall’s Mac OS software vets — and drive them to compete. (The Human Interface designers, meanwhile, worked on both P1 and P2.) Eventually, the executives overseeing the most important elements of the iPhone — software, hardware, and industrial design — would barely be able to tolerate sitting in the same room together. One would quit, others would be fired, and one would emerge solidly — and perhaps solely — as the new face of Apple’s genius in the post-Jobs era. Meanwhile, the designers, engineers, and coders would work tirelessly, below the political fray, to turn the Ps into working devices in any way possible. The Purple People Leader Every top secret project worth its salt in intrigue has a code name. The iPhone’s was Purple. “One of the buildings we have up in Cupertino, we locked it down,” said Scott Forstall, who had managed Mac OS X software and who would come to run the entire iPhone software program. “We started with one floor”— where Greg Christie’s Human Interface team worked — “We locked the entire floor down. We put doors with badge readers, there were cameras, I think, to get to some of our labs, you had to badge in four times to get there.” He called it the Purple Dorm because, “much like a dorm, people were there all the time.” They “put up a sign that said ‘Fight Club’ because the first rule of Fight Club in the movie is that you don’t talk about Fight Club, and the first rule about the Purple Project is you do not talk about that outside of those doors,” Forstall said. Why Purple? Few seem to recall. One theory is it was named after a purple aardvark toy that Scott Herz — one of the first engineers to come to work on the iPhone — had as a mascot for Radar, the system that Apple engineers used to keep track of software bugs and glitches throughout the company. “All the bugs are tracked inside of Radar at Apple, and a lot of people have access to Radar,” says Richard Williamson. “So if you’re a curious engineer, you can go spelunking around the bug-tracking system and find out what people are working on. And if you’re working on a secret project, you have to think about how to cover your tracks there.” Scott Forstall, born in 1969, had been downloading Apple into his brain his entire life. By junior high, his precocious math and science skills landed him in an advanced-placement course with access to an Apple IIe computer. He learned to code, and to code well. Forstall didn’t fit the classic computer-geek mold, though. He was a debate team champ and a performer in high-school musicals; he played the lead in Sweeney Todd, that hammy demon barber. Forstall graduated from Stanford in 1992 with a master’s in computer science and landed a job at NeXT. After releasing an overpriced computer aimed at the higher education market, NeXT flailed as a hardware company, but it survived by licensing its powerful NeXTSTEP operating system. 
In 1996, Apple bought NeXT and brought Jobs back into the fold, and the decision was made to use NeXTSTEP to overhaul the Mac’s aging operating system. It became the foundation on which Macs— and iPhones— still run today. At Jobs-led Apple, Forstall rose through the ranks. He mimicked his idol’s management style and distinctive taste. BusinessWeek called him “the Sorcerer’s Apprentice.” One of his former colleagues praised him as a smart, savvy leader but said he went overboard on the Jobs-worship: “He was generally great, but sometimes it was like, just be yourself.” Forstall emerged as the leader of the effort to adapt Mac software to a touchscreen phone. Though some found his ego and naked ambition distasteful — he was “very much in need of adulation,” according to one peer, and called “a starfucker” by another — few dispute the caliber of his intellect and work ethic. “I don’t know what other people have said about Scott,” Henri Lamiraux says, “but he was a pleasure to work with.” Forstall led many of the top engineers he’d worked with since his NeXT days — Henri Lamiraux and Richard Williamson among them — into the P2 project. Williamson jokingly called the crew “the NeXT mafia.” True to the name, they would at times behave in a manner befitting a close-knit, secretive (and highly efficient) organization. P1 Thing After Another Tony Fadell was Forstall’s chief competition. “From a politics perspective, Tony wanted to own the entire experience,” Grignon says. “The software, the hardware… once people started to see the importance of this project to Apple, everyone wanted to get their fingers in it. And that’s when the epic fight between Fadell and Forstall began.” Having worked with Forstall on Dashboard, Grignon was in a unique position to interface with both groups. “From our perspective, Forstall and his crew, we always viewed them as the underdogs. Like they were trying to wedge their way in,” Grignon says. “We had complete confidence that our stack was going to happen because this is Tony’s project, and Tony’s responsible for millions upon millions of iPod sales.” So, the pod team worked to produce a new pod-phone cut from the mold of Apple’s ubiquitous music player. Their idea was to produce an iPod that would have two distinct modes: the music player and a phone. “We prototyped a new way,” Grignon says of the early device. “It was this interesting material… it still had this touch sensitive click wheel, right, and the Play/Pause/Next/Previous buttons in blue backlighting. And when you put it into phone mode through the UI, all that light kind of faded out and faded back in as orange. Like, zero to nine in the click wheel in an old rotary phone, you know, ABCDEGF around the edges.” When the device was in musicplaying mode, blue backlighting would show iPod controls around the touch wheel. The screen would still be filled with iPod-style text and lists, and if you toggled it to phone mode, it’d glow orange and display numbers like the dial of a rotary phone. “We put a radio inside, effectively an iPod Mini with a speaker and headphones, still using the touch-wheel interface,” Tupman says. “And when you texted, it dialed — and it worked!” Grignon says. “So we built a couple hundred of them.” The problem was that they were difficult to use as phones. “After we made the first iteration of the software, it was clear that this was going nowhere,” Fadell says. “Because of the wheel interface. 
It was never gonna work because you don’t want a rotary dial on the phone.” The design team tried mightily to hack together a solution. “I came up with some ideas for the predictive typing,” Bas Ording says. There would be an alphabet laid out at the bottom of the screen, and users would use the wheel to select letters. “And then you can just, like, click-click-click-click — ‘Hello, how are you.’ So I just built an actual thing that can learn as you type — it would build up a database of words that follow each other.” But the process was still too tedious. “It was just obvious that we were overloading the click wheel with too much,” Grignon says. “And texting and phone numbers — it was a fucking mess.” “We tried everything,” Fadell says. “And nothing came out to make it work. Steve kept pushing and pushing, and we were like, ‘Steve.’ He’s pushing the rock up a hill. Let’s put it this way: I think he knew, I could tell in his eyes that he knew; he just wanted it to work,” he says. “He just kept beating this dead horse.” “C’mon, there’s gotta be a way,” Jobs would tell Fadell. “He didn’t just want to give up. So he pushed until there was nothing there,” Fadell says. They even filed for a patent for the ill-fated device, and in the bowels of Cupertino, there were offices and labs littered with dozens of working iPod phones. “We actually made phone calls,” Grignon says. The first calls from an Apple phone were not, it turns out, made on the sleek touchscreen interface of the future but on a steampunk rotary dial. “We came very close,” Ording says. “It was, like, we could have finished it and made a product out of it… But then I guess Steve must have woken up one day like, ‘This is not as exciting as the touch stuff.’ ” “For us on the hardware team, it was great experience,” David Tupman says. “We got to build RF radio boards, it forced us to select suppliers, it pushed us to get everything in place.” In fact, elements of the iPod phone wound up migrating into the final iPhone; it was like a version 0.1, Tupman says. For instance: “The radio system that was in that iPod phone was the one that shipped in the actual iPhone.” Hands Off The first time Fadell saw P2’s touch-tablet rig in action, he was impressed — and perplexed. “Steve pulled me in a room when everything was failing on the iPod phone and said, ‘Come and look at this.’ ” Jobs showed him the ENRI team’s multitouch prototype. “They had been getting, in the background, the touch Mac going. But it wasn’t a touch Mac; literally, it was a room with a PingPong table, a projector, and this thing that was a big touchscreen,” Fadell says. “This is what I want to put on the phone,” Jobs said. “Steve, sure,” Fadell replied. “It’s not even close to production. It’s a prototype, and it’s not a prototype at scale — it’s a prototype table. It’s a research project. It was like eight percent there,” Fadell says. David Tupman was more optimistic. “I was like, ‘Oh, wow, yeah, we have to find out a way to make this work.’ ” He was convinced the engineering challenges could be solved. “I said, ‘Let’s just sit down and go through the numbers and let’s work it out.’ ” The iPod phone was losing support. The executives debated which project to pursue, but Phil Schiller, Apple’s head of marketing, had an answer: Neither. He wanted a keyboard with hard buttons. The BlackBerry was arguably the first hit smartphone. It had an email client and a tiny hard keyboard. 
After everyone else, including Fadell, started to agree that multitouch was the way forward, Schiller became the lone holdout. He “just sat there with his sword out every time, going, ‘No, we’ve got to have a hard keyboard. No. Hard keyboard.’ And he wouldn’t listen to reason as all of us were like, ‘No, this works now, Phil.’ And he’d say, ‘You gotta have a hard keyboard!’ ” Fadell says. Schiller didn’t have the same technological acumen as many of the other execs. “Phil is not a technology guy,” Brett Bilbrey, the former head of Apple’s Advanced Technology Group, says. “There were days when you had to explain things to him like a grade-school kid.” Jobs liked him, Bilbrey thinks, because he “looked at technology like middle America does, like Grandma and Grandpa did.” When the rest of the team had decided to move on multitouch and a virtual keyboard, Schiller put his foot down. “There was this one spectacular meeting where we were finally going in a direction,” Fadell says, “and he erupted.” “We’re making the wrong decision!” Schiller shouted. “Steve looked at him and goes, ‘I’m sick and tired of this stuff. Can we get off of this?’ And he threw him out of the meeting,” Fadell recalls. Later, he says, “Steve and he had it out in the hallway. He was told, like, Get on the program or get the fuck out. And he ultimately caved.” That cleared it up: the phone would be based on a touchscreen. “We all know this is the one we want to do,” Jobs said in a meeting, pointing to the touchscreen. “So let’s make it work.” Round Two “There was a whole religious war over the phone” between the iPod team and the Mac OS crew, one former Apple executive told me. When the iPod wheel was ruled out and the touch ruled in, the new question was how to build the phone’s operating system. This was a critical juncture — it would determine whether the iPhone would be positioned as an accessory or as a mobile computer. “Tony and his team were arguing we should evolve the operating system and take it in the direction of the iPod, which was very rudimentary,” Richard Williamson says. “And myself and Henri and Scott Forstall, we were all arguing we should take OS X” — Apple’s main operating system, which ran on its desktops and laptops — “and shrink it down.” “There were some epic battles, philosophical battles about trying to decide what to do,” Williamson says. The NeXT mafia saw an opportunity to create a true mobile computing device and wanted to squeeze the Mac’s operating system onto the phone, complete with versions of Mac apps. They knew the operating system inside and out — it was based on code they’d worked with for over a decade. “We knew for sure that there was enough horsepower to run a modern operating system,” Williamson says, and they believed they could use a compact ARM processor — Sophie Wilson’s low-power chip architecture — to create a stripped-down computer on a phone. The iPod team thought that was too ambitious and that the phone should run a version of Linux, the open-source system popular with developers and open-source advocates, which already ran on low-power ARM chips. “Now we’ve built this phone,” says Andy Grignon, “but we have this big argument about what was the operating system it should be built on. ’Cause we were initially making it iPod-based, right? 
And nobody cares what the operating system in an iPod is. It’s an appliance, an accessory. We were viewing the phone in that same camp.” Remember, even after the iPhone’s launch, Steve Jobs would describe it as “more like an iPod” than a computer. But those who’d been in the trenches experimenting with the touch interface were excited about the possibilities it presented for personal computing and for evolving the human-machine interface. “There was definitely discussion: This is just an iPod with a phone. And we said, no, it’s OS X with a phone,” Henri Lamiraux says. “That’s what created a lot of conflict with the iPod team, because they thought they were the team that knew about all the software on small devices. And we were like, no, okay, it’s just a computer.” “At this point we didn’t care about the phone at all,” Williamson says. “The phone’s largely irrelevant. It’s basically a modem. But it was ‘What is the operating system going to be like, what is the interaction paradigm going to be like?’ ” In that comment, you can read the roots of the philosophical clash: The software engineers saw P2 not as a chance to build a phone, but as an opportunity to use a phone-shaped device as a Trojan horse for a much more complex kind of mobile computer. The Incredible Shrinking Operating System When the two systems squared off early on, the mobile-computing approach didn’t fare so well. “Uh, just the load time was laughable,” Andy Grignon says. Grignon’s Linux option was fast and simple. “It’s just kind of prrrrrt and it’s up.” When the Mac team first got their system compiling, “it was like six rows of hashtags, dink-dink-dink-dink-dink , and then it just sat there and it would shit the bed for a little bit, and then it would finally come back up and you’d be like, Are you even kidding me? And this is supposed to be for a device that just turns on? Like, for real?” “At that point it was up to us to prove” that a variant of OS X could work on the device, Williamson says. The mafia got to work, and the competition heightened. “We wanted our vision for this phone that Apple was going to release to become a reality,” Nitin Ganatra says. “We didn’t want to let the iPod team have an iPod-ish version of the phone come out before.” One of the first orders of business was to demonstrate that the scrolling that had wowed Jobs would work with the stripped-down operating system. Williamson linked up with Ording and hashed it out. “It worked and looked amazingly real. When you touched the screen, it would track your finger perfectly, you would pull down, it would pull down.” That, Williamson says, put the nail in the Linux pod’s coffin. “Once we had OS X ported and these basic scrolling interactions nailed, the decision was made: We’re not going to go with the iPod stack, we’re going to go with OS X.” The software for the iPhone would be built by Scott Forstall’s NeXT mafia; the hardware would go to Fadell’s group. The iPhone would boast a touchscreen and pack the power of a mobile computer. That is, if they could get the thing to work. Brian Merchant is the author of The One Device: the Secret History of the iPhone. He's an editor at Motherboard , Vice's science and technology arm; the founder / editor of Terraform, its online fiction outlet; and his work has appeared in The Guardian , Slate , VICE Magazine , Fast Company , Discovery , and beyond. This excerpt is from the book THE ONE DEVICE: The Secret History of the IPhone by Brian Merchant. Copyright © 2017 by Brian Merchant. 
Reprinted by permission of Little, Brown and Company, New York, NY. All rights reserved. "
1,178
2,016
"Soon We Won't Program Computers. We'll Train Them Like Dogs | WIRED"
"https://www.wired.com/2016/05/the-end-of-code"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jason Tanz Ideas Soon We Won't Program Computers. We'll Train Them Like Dogs Edward C. Monaghan Save this story Save Save this story Save Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject's behavior— ring bell, dog salivates —but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades. Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn't a black box at all. It was more like a computer. June 2016. Subscribe now. The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud's idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it's a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. 
Companies use code to understand our most intimate ties; Facebook's Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits. In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don't run the world, they run the things that run the world.” Tomato, tomahto.) But whether you like this state of affairs or hate it—whether you're a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don't get used to it. Our machines are starting to speak a different language now, one that even the best coders can't fully understand. Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don't encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don't tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don't rewrite the code. You just keep coaching it. This approach is not new—it's been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft's Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google's search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. 
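To make the contrast concrete, here is a minimal sketch of what "training instead of programming" can look like, written in the PyTorch style that many deep-learning engineers use today. Everything specific in it is an illustrative assumption rather than anything from the reporting: the folder of labeled photos, the choice of network, the learning rate. The point is only that no rule about whiskers or ears appears anywhere in the program.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Hypothetical folder layout: data/train/cat/*.jpg and data/train/not_cat/*.jpg
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)           # a small off-the-shelf network
model.fc = nn.Linear(model.fc.in_features, 2)   # two outputs: cat / not cat

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# "Coaching": show labeled photos, measure the mistakes, nudge the weights, repeat.
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# If the model keeps misclassifying foxes as cats, the fix is more and better
# labeled data and more training rounds, not rewriting the loop above.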
In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don't have to write these rules anymore.” But here's the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network's operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it. If in the old view programmers were like gods, authoring the laws that govern computer systems, now they're like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in. Andy Rubin is an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.” Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer. “People don't linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can't go in and look at it and see how that happened. It's just like your brain. You can't cut your head off and see what you're thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world. Artificial intelligence wasn't supposed to work this way. Until a few years ago, mainstream AI researchers assumed that to create intelligence, we just had to imbue a machine with the right logic. Write enough rules and eventually we'd create a system sophisticated enough to understand the world. They largely ignored, even vilified, early proponents of machine learning, who argued in favor of plying machines with data until they reached their own conclusions. For years computers weren't powerful enough to really prove the merits of either approach, so the argument became a philosophical one. 
“Most of these debates were based on fixed beliefs about how the world had to be organized and how the brain worked,” says Sebastian Thrun, the former Stanford AI professor who created Google's self-driving car. “Neural nets had no symbols or rules, just numbers. That alienated a lot of people.” The implications of an unparsable machine language aren't just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies. But a world run by neurally networked deep-learning machines requires a different workforce. Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves. “I was just having a conversation about that this morning,” says tech guru Tim O'Reilly when I ask him about this shift. “I was pointing out how different programming jobs would be by the time all these STEM-educated kids grow up.” Traditional coding won't disappear completely—indeed, O'Reilly predicts that we'll still need coders for a long time yet—but there will likely be less of it, and it will become a meta skill, a way of creating what Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, calls the “scaffolding” within which machine learning can operate. Just as Newtonian physics wasn't obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us. Of course, humans still have to train these systems. But for now, at least, that's a rarefied skill. The job requires both a high-level grasp of mathematics and an intuition for pedagogical give-and-take. “It's almost like an art form to get the best out of these systems,” says Demis Hassabis, who leads Google's DeepMind AI team. “There's only a few hundred people in the world that can do that really well.” But even that tiny number has been enough to transform the tech industry in just a couple of years. Whatever the professional implications of this shift, the cultural consequences will be even bigger. If the rise of human-written software led to the cult of the engineer, and to the notion that human experience can ultimately be reduced to a series of comprehensible instructions, machine learning kicks the pendulum in the opposite direction. The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company's own engineers can't say exactly how its search algorithms work in the first place. This explosion of indeterminacy has been a long time coming. 
It's not news that even simple algorithms can create unpredictable emergent behavior—an insight that goes back to chaos theory and random number generators. Over the past few years, as networks have grown more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason. Seemingly unpreventable flash crashes in the stock market. Rolling blackouts. These forces have led technologist Danny Hillis to declare the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we're shifting to what he calls the age of Entanglement. “As our technological and institutional creations have become more complex, our relationship to them has changed,” he wrote in the Journal of Design and Science. “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.” The rise of machine learning is the latest—and perhaps the last—step in this journey. This can all be pretty frightening. After all, coding was at least the kind of thing that a regular person could imagine picking up at a boot camp. Coders were at least human. Now the technological elite is even smaller, and their command over their creations has waned and become indirect. Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company's blunt first fix was to keep the system from labeling anything as a gorilla. To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” But don't be too scared; this isn't the dawn of Skynet. We're just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what's going on under the hood of a deep-learning system. But even if we never fully understand how these new machines think, that doesn't mean we'll be powerless before them. In the future, we won't concern ourselves as much with the underlying sources of their behavior; we'll learn to focus on the behavior itself. The code will become less important than the data we use to train it. If all this seems a little familiar, that's because it looks a lot like good old 20th-century behaviorism. In fact, the process of training a machine-learning algorithm is often compared to the great behaviorist experiments of the early 1900s. 
Pavlov triggered his dog's salivation not through a deep understanding of hunger but simply by repeating a sequence of events over and over. He provided data, again and again, until the code rewrote itself. And say what you will about the behaviorists, they did know how to control their subjects. In the long run, Thrun says, machine learning will have a democratizing influence. In the same way that you don't need to know HTML to build a website these days, you eventually won't need a PhD to tap into the insane power of deep learning. Programming won't be the sole domain of trained coders who have learned a series of arcane languages. It'll be accessible to anyone who has ever taught a dog to roll over. “For me, it's the coolest thing ever in programming,” Thrun says, “because now anyone can program.” For much of computing history, we have taken an inside-out view of how machines work. First we write the code, then the machine expresses it. This worldview implied plasticity, but it also suggested a kind of rules-based determinism, a sense that things are the product of their underlying instructions. Machine learning suggests the opposite, an outside-in view in which code doesn't just determine behavior, behavior also determines code. Machines are products of the world. Ultimately we will come to appreciate both the power of handwritten linear code and the power of machine-learning algorithms to adjust it—the give-and-take of design and emergence. It's possible that biologists have already started figuring this out. Gene-editing techniques like Crispr give them the kind of code-manipulating power that traditional software programmers have wielded. But discoveries in the field of epigenetics suggest that genetic material is not in fact an immutable set of instructions but rather a dynamic set of switches that adjusts depending on the environment and experiences of its host. Our code does not exist separate from the physical world; it is deeply influenced and transmogrified by it. Venter may believe cells are DNA-software-driven machines, but epigeneticist Steve Cole suggests a different formulation: “A cell is a machine for turning experience into biology.” And now, 80 years after Alan Turing first sketched his designs for a problem-solving machine, computers are becoming devices for turning experience into technology. For decades we have sought the secret code that could explain and, with some adjustments, optimize our experience of the world. But our machines won't work that way for much longer—and our world never really did. We're about to have a more complicated but ultimately more rewarding relationship with technology. We will go from commanding our devices to parenting them. Editor at large Jason Tanz (@jasontanz) wrote about Andy Rubin's new company, Playground, in issue 24.03. This article appears in the June issue. 
"
1,179
2,016
"Building AI Is Hard—So Facebook Is Building AI That Builds AI | WIRED"
"https://www.wired.com/2016/05/facebook-trying-create-ai-can-create-ai"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Building AI Is Hard—So Facebook Is Building AI That Builds AI Getty Images/WIRED Save this story Save Save this story Save Deep neural networks are remaking the Internet. Able to learn very human tasks by analyzing vast amounts of digital data , these artificially intelligent systems are injecting online services with a power that just wasn't viable in years past. They're identifying faces in photos and recognizing commands spoken into smartphones and translating conversations from one language to another. They're even helping Google choose its search results. All this we know. But what's less discussed is how the giants of the Internet go about building these rather remarkable engines of AI. Part of it is that companies like Google and Facebook pay top dollar for some really smart people. Only a few hundred souls on Earth have the talent and the training needed to really push the state-of-the-art forward, and paying for these top minds is a lot like paying for an NFL quarterback. That's a bottleneck in the continued progress of artificial intelligence. And it's not the only one. Even the top researchers can't build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers must first try countless options that don't work , running each one across dozens and potentially hundreds of machines. Microsoft Neural Net Shows Deep Learning Can Get Way Deeper Finally, Neural Networks That Actually Work Inside the Artificial Brain That’s Remaking the Google Empire "It's almost like being the coach rather than the player," says Demis Hassabis, co-founder of DeepMind, the Google outfit behind the history-making AI that beat the world's best Go player. "You're coaxing these things, rather than directly telling them what to do." That's why many of these companies are now trying to automate this trial and error---or at least part of it. If you automate some of the heavily lifting, the thinking goes, you can more rapidly push the latest machine learning into the hands of rank-and-file engineers---and you can give the top minds more time to focus on bigger ideas and tougher problems. This, in turn, will accelerate the progress of AI inside the Internet apps and services that you and I use every day. In other words, for computers to get smarter faster, computers themselves must handle even more of the grunt work. The giants of the Internet are building computing systems that can test countless machine learning algorithms on behalf of their engineers , that can cycle through so many possibilities on their own. Better yet, these companies are building AI algorithms that can help build AI algorithms. No joke. Inside Facebook, engineers have designed what they like to call an "automated machine learning engineer," an artificially intelligent system that helps create artificially intelligent systems. It's a long way from perfection. But the goal is to create new AI models using as little human grunt work as possible. 
After Facebook's $104 billion IPO in 2012, Hussein Mehanna and other engineers on the Facebook ads team felt an added pressure to improve the company's ad targeting, to more precisely match ads to the hundreds of millions of people using its social network. This meant building deep neural networks and other machine learning algorithms that could make better use of the vast amounts of data Facebook collects on the characteristics and behavior of those hundreds of millions of people. According to Mehanna, Facebook engineers had no problem generating ideas for new AI, but testing these ideas was another matter. So he and his team built a tool called Flow. "We wanted to build a machine-learning assembly line that all engineers at Facebook could use," Mehanna says. Flow is designed to help engineers build, test, and execute machine learning algorithms on a massive scale, and this includes practically any form of machine learning---a broad technology that covers all services capable of learning tasks largely on their own. Basically, engineers could readily test an endless stream of ideas across the company's sprawling network of computer data centers. They could run all sorts of algorithmic possibilities---involving not just deep learning but other forms of AI, from logistic regression to boosted decision trees---and the results could feed still more ideas. "The more ideas you try, the better," Mehanna says. "The more data you try, the better." It also meant that engineers could readily reuse algorithms that others had built, tweaking these algorithms and applying them to other tasks. Soon, Mehanna and his team expanded Flow for use across the entire company. Inside other teams, it could help generate algorithms that could choose the links for your Facebook News Feed, recognize faces in photos posted to the social network, or generate audio captions for photos so that the blind can understand what's in them. It could even help the company determine what parts of the world still need access to the Internet. With Flow, Mehanna says, Facebook trains and tests about 300,000 machine learning models each month. Whereas it once rolled a new AI model onto its social network every 60 days or so, it can now release several new models each week. The idea is far bigger than Facebook. It's common practice across the world of deep learning. Last year, Twitter acquired a startup, WhetLab, that specializes in this kind of thing, and recently, Microsoft described how its researchers use a system to test a sea of possible AI models. Microsoft researcher Jian Sun calls it "human-assisted search." Mehanna and Facebook want to accelerate this. The company plans to eventually open source Flow, sharing it with the world at large, and according to Mehanna, outfits like LinkedIn, Uber, and Twitter are already interested in using it. Mehanna and team have also built a tool called AutoML that can remove even more of the burden from human engineers. 
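Flow and AutoML are internal Facebook tools and their code is not published, so the following is only a minimal sketch of the two ideas this article describes: an assembly line that trains and scores many candidate models the same way, and, anticipating the "predict the result before the training" behavior attributed to AutoML in the next paragraph, a small surrogate model fit on the logged results to rank untried configurations. The dataset, model families, configuration features, and scoring below are assumptions made for illustration, not Facebook's method.

```python
# Minimal sketch, not Facebook's Flow or AutoML: (1) sweep several model
# families over one dataset and log the results, the way an internal
# "machine-learning assembly line" might, and (2) fit a small surrogate model
# on that log to guess which untried configuration looks most promising.
# Dataset, model families, and configuration features are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# (1) The assembly line: every candidate is trained and scored the same way,
# so results are comparable and reusable by the next engineer with an idea.
candidates = [
    ("logistic_regression", LogisticRegression(max_iter=1000), [0, 0]),
    ("boosted_trees_small", GradientBoostingClassifier(n_estimators=50, max_depth=2), [50, 2]),
    ("boosted_trees_large", GradientBoostingClassifier(n_estimators=300, max_depth=3), [300, 3]),
]
log = []  # (numeric description of the config, measured score)
for name, model, features in candidates:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cv AUC = {auc:.3f}")
    log.append((features, auc))

# (2) The surrogate: learn from the log which settings tend to score well,
# then rank untried configurations without paying for their training runs.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit([f for f, _ in log], [s for _, s in log])
untried = [[100, 2], [200, 3], [400, 4]]  # hypothetical boosted-tree configs
predicted = surrogate.predict(untried)
best = max(zip(predicted, untried))
print("most promising untried config (predicted AUC, [n_estimators, max_depth]):", best)
```

In a real system the log would hold thousands of past runs and far richer configuration features; three runs are only enough here to show the shape of the loop.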
Running atop Flow, AutoML can automatically "clean" the data needed to train neural networks and other machine learning algorithms---prepare it for testing without any human intervention---and Mehanna envisions a version that could even gather the data on its own. But more intriguingly, AutoML uses artificial intelligence to help build artificial intelligence. As Mehanna says, Facebook trains and tests about 300,000 machine learning models each month. AutoML can then use the results of these tests to train another machine learning model that can optimize the training of machine learning models. Yes, that can be a hard thing to wrap your head around. Mehanna compares it to Inception. But it works. The system can automatically choose algorithms and parameters that are likely to work. "It can almost predict the result before the training," Mehanna says. Inside the Facebook ads team, engineers even built that automated machine learning engineer, and this too has spread to the rest of the company. It's called Asimo, and according to Facebook, there are cases where it can automatically generate enhanced and improved incarnations of existing models---models that human engineers can then instantly deploy to the net. "It cannot yet invent a new AI algorithm," Mehanna says. "But who knows, down the road..." It's an intriguing idea---indeed, one that has captivated science fiction writers for decades: an intelligent machine that builds itself. No, Asimo isn't quite as advanced---or as frightening---as Skynet. But it's a step toward a world where so many others, not just the field's sharpest minds, will build new AI. Some of those others won't even be human. "
1,180
2,016
"The Best AI Still Flunks 8th Grade Science | WIRED"
"https://www.wired.com/2016/02/the-best-ai-still-flunks-8th-grade-science"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business The Best AI Still Flunks 8th Grade Science Then One/WIRED Save this story Save Save this story Save In 2012, IBM Watson went to medical school. So said The New York Times , announcing that the tech giant's artificially intelligent question-and-answer machine had begun a "stint as a medical student" at the Cleveland Clinic Lerner College of Medicine. This was just a metaphor. Clinicians were helping IBM train Watson for use in medical research. But as metaphors go, it wasn't a very good one. Three years later, our artificially intelligent machines can't even pass an eighth-grade science test, much less go to medical school. The top performers successfully answered about 60 percent of the questions. In other words, they flunked. So says Oren Etzioni, a professor of computer science at the University of Washington and the executive director of the Allen Institute for Artificial Intelligence , the AI think-tank funded by Microsoft co-founder Paul Allen. Etzioni and the non-for-profit Allen Institute recently ran a contest, inviting nearly 800 teams of researchers to build AI systems that could take an eighth grade science test, and today, the Institute released the results: The top performers successfully answered about 60 percent of the questions. In other words, they flunked. For Etzioni, this five-month-long contest serves as a reality check for the state of artificial intelligence. Yes, thanks to the rise of deep neural networks , networks of hardware and software that approximate the web of neurons in the human brain, companies like Google and Facebook and Microsoft have achieved human-like performance in identifying images and recognizing spoken words , among other tasks. But we're still a long way from machines that can really think, from AI that can carry on a real conversation, even from systems that can pass a basic science test. You might say that, way back in 2011, IBM Watson beat the best humans on Earth at Jeopardy! , the venerable TV trivia game show. And it did. Google just built a system that could top a professional at the ancient game of Go. But for a machine, these are somewhat easier tasks than taking a science test. " Jeopardy! is [about] finding a single fact, while I would imagine---and hope---that 8th-grade science asks students to solve problems that require several steps, and combine multiple facts to show understanding," says Chris Nicholson, CEO and founder of AI startup Skymind. The Allen Institute's science test includes more than just trivia. It asks that machines understand basic ideas, serving up not only questions like "Which part of the eye does light hit first?" but more complex questions that revolve around concepts like evolutionary adaptation. "Some types of fish live most of their adult lives in salt water but lay their eggs in freshwater," one question read. "The ability of these fish to survive in these different environments is an example of [what]?" These were multiple-choice questions---and the machines still couldn't pass, despite using state-of-the-art techniques, including deep neural nets. 
"Natural language processing, reasoning, picking up a science textbook and understanding---this presents a host of more difficult challenges," Etzioni says. "To get these questions right requires a lot more reasoning." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Yes, most of the contestants were academics, independent researchers, or computer scientists outside the largest tech companies. But Etzioni isn't sure the tech giants would preform all that much better, despite employing some of the top researchers in the field. "It's entirely possible that the scores would have gone higher had companies like Google and others put their 'big guns' to work," he says. "[But] the 'wisdom of the crowds' is quite powerful and there some very talented folks engaged in these contests." Chaim Linhart, an Israeli researcher who participated in the competition, agrees. "In most competitions, I think the winning models are very specific to the test dataset, so even companies that work in the same domain don't necessarily have a significant advantage," he says. What about Watson? According to Etzioni, IBM declined to participate (the company says it has turned its attentions away from contests like this and towards "real world" applications). But Watson is perhaps not the best litmus test. Watson was good at Jeopardy!. That's what it was built for. But today, Watson is really just a brand name for a wide range of AI tools offered by IBM, and those tools aren't necessarily state of the art. Etzioni's eighth grade science test is really a test of natural language understanding---how well a machine understands the natural way humans speak and write. IBM's services do include natural language processing, but since Watson's arrival, this kind of tech has received a new boost from deep neural nets. Just as you can teach a neural net to recognize a cat by feeding it myriad cat photos, you can teach it to understand natural language using mountains of digital dialogue. Google, for instance, has used neural nets to build a chatbot that debates the meaning of life. But this chatbot wasn't completely convincing. As it stands, the state of the art lies beyond any one technology. "So far, there is no universal method," says Dutch researcher Benedikt Wilbertz, another participant in the Allen AI contest. "This challenge needed its own mix of machine learning and [other] AI tools." Indeed, the top participants in the Allen AI challenge used deep learning as well as various other techniques. And the end result was still well below perfect. Doug Lenat, who runs an AI project called Cyc, says that teaching today's machines to take basic science tests doesn't even make much sense. We should be striving for something more---something much further out. "If you're talking about passing multiple choice science tests, I always felt that was not actually the test AI should be aiming to pass," he says. "The focus on natural language understanding----science tests, and so on---is something that should follow from a program being actually intelligent. Otherwise, you end up hitting the target but producing the veneer of understanding." In other words, a machine that passes an eighth grade science test isn't all that smart. 
So, we've yet to build a machine that's even sorta close to real intelligence. But work will continue. "
1,181
2,012
"Did a Computer Bug Help Deep Blue Beat Kasparov? | WIRED"
"https://www.wired.com/2012/09/deep-blue-computer-bug"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Klint Finley Culture Did a Computer Bug Help Deep Blue Beat Kasparov? Chess Grand Master Garry Kasparov, left, comtemplates his next move against IBM's Deep Blue chess computer while Chung-Jen Tan, manager of the Deep Blue project looks on during the first game of a six-game rematch between Kasparov and Deep Blue in this file photo from 1997. The computer program made history by becoming the first to beat a world chess champion, Kasparov, at a serious game. Photo: Adam Nadel/Associated Press Save this story Save Save this story Save In May 1997, an IBM supercomputer known as Deep Blue beat then chess world champion Garry Kasparov, who had once bragged he would never lose to a machine. Kasparov and other chess masters blamed the defeat on a single move made by the IBM machine. Either at the end of the first game or the beginning of the second, depending on who's telling the story, the computer made a sacrifice that seemed to hint at its long-term strategy. Kasparov and many others thought the move was too sophisticated for a computer, suggesting there had been some sort of human intervention during the game. "It was an incredibly refined move, of defending while ahead to cut out any hint of countermoves," grandmaster Yasser Seirawan told Wired in 2001 , "and it sent Garry into a tizzy." Fifteen years later, one of Big Blue's designers says the move was the result of a bug in Deep Blue's software. The revelation was published in a book by statistician and New York Times journalist Nate Silver titled The Signal and the Noise — and promptly highlighted by Ezra Klein of the Washington Post. For his book, Silver interviewed Murray Campbell, one of the three IBM computer scientists who designed Deep Blue, and Murray told him that the machine was unable to select a move and simply picked one at random. At the time, Deep Blue versus Kasparov was hailed as a seminal moment in the history of computer science — and lamented as a humiliating defeat for the human intellect. But it may have just been a lesson that as humans, we tend to blow things way out of proportion. Many chess masters have long claimed that Kasparov was at a significant disadvantage during the match. Deep Blue's designers had the opportunity to tweak Deep Blue's programming between matches to adapt to Kasparov's style and strategy. They also had access to the full history of his previous public matches. Kasparov had no similar record of Big Blue's performance. Because the machine had been heavily modified since he had last played it, he was essentially going in blind. That strange move was chalked up to these advantages. The IBM team did tweak the algorithms between games, but part of what they were doing was fixing the bug that resulted in that unexpected move. The machine made a mistake, then they made sure it wouldn't do it again. The irony is that the move had messed with Kasporav's mind, and there was no one to fix this bug. 
"Kasparov had concluded that the counterintuitive play must be a sign of superior intelligence," Campbell told Silver. "He had never considered that it was simply a bug." It's tempting to think there's a lesson here about human nature. After all, a human mistake in the development of the software led to the machine's victory. It's sort of reassuring to think that a human flaw is actually what made Deep Blue successful. But it's not clear that things would have turned out all that differently had that bug never surfaced. Years after the final Deep Blue match, both Kasparov and Vladimir Kramnik, his successor as world chess champion, played against various versions of the chess program Fritz. But in these matches, no code modifications were permitted between games. Kramnik even had the chance to play against the software in advance of the matches, and had the right to adjourn a game until the next day if it went past 56 moves. The results aren't that encouraging for humans. Kasparov's match against X3D Fritz in 2003 ended in a draw. So did Kramnik's first match against Fritz in 2002. And Kramnik lost to Fritz due to a blunder in 2006. These weren't decisive victories for the machines, but the humans still couldn't win. Even though humans can conceive of strategies to counteract the computational advantage of computers, we get tired, make blunders, and suffer from anxiety. Machines never get tired or flustered. But the relationship between chess players and computers is actually more symbiotic than adversarial. Today's chess masters use computers extensively as learning aids. That said, today's computers make Deep Blue look puny. Maybe it's time for a rematch. "
1,182
2,011
"IBM's Watson Supercomputer Wins Practice Jeopardy Round | WIRED"
"https://www.wired.com/2011/01/ibm-watson-jeopardy"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Sam Gustin Business IBM's Watson Supercomputer Wins Practice Jeopardy Round Save this story Save Save this story Save YORKTOWN HEIGHTS, NY - It's man vs. machine -- for real. IBM's celebrated supercomputer Watson will square off against Jeopardy champions Ken Jennings and Brad Rutter in a first-of-its-kind competition to be aired over three nights in February. The grand prize is $1 million; second place wins $300,000; third place receives $200,000. Jennings and Rutter have pledged 50 percent of their winnings to charity; IBM will donate all of its prize. During a demonstration round Thursday, Watson handily defeated the two Jeopardy champions. The IBM Jeopardy Challenge represents a milestone in the development of artificial intelligence , and is part of Big Blue's centennial celebration. "We are at a very special moment in time," said Dr. John E. Kelly III, IBM Senior Vice President and Director of IBM Research. "We are at a moment where computers and computer technology now have approached humans. We have created a computer system that has the ability to understand natural human language, which is a very difficult thing for computers to do." Named after IBM founder Thomas J. Watson, the supercomputer is one of the most advanced systems on Earth and was programmed by 25 IBM scientists over the last four years. Researchers scanned some 200 million pages of content -- or the equivalent of about one million books -- into the system, including books, movie scripts and entire encyclopedias. Watson is not your run-of-the-mill computer. The system is powered by 10 racks of IBM POWER 750 servers running Linux, and uses 15 terabytes of RAM, 2,880 processor cores and can operate at 80 teraflops. That's 80 trillion operations per second. Watson scans the 2 million pages of content in its "brain" in less than three seconds. The system is not connected to the internet, but totally self-contained. The machine is the size of 10 refrigerators. Watson. Photo by Sam Gustin/Wired.com Sam Gustin/WIRED Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg "This is the culmination of four years of hard work and we didn't know that we'd get here," said David A. Ferrucci, the principal investigator for IBM's Watson project. Watson follows Deep Blue, the IBM supercomputer that ultimately defeated chess grandmaster Garry Kasparov in 1997. Kelly said the lessons IBM learned from developing Watson would be applicable across industries, including law, business, and especially medicine. "Watson can read all of the health-care texts in the world in seconds," Kelly said. "And that's our first priority, creating a Dr. Watson, if you will." "Imagine if a doctor in Africa could access all of the world's medical texts from the cloud, in seconds, to learn about potential drug interactions," he added. 
During a press conference Thursday morning at IBM Research headquarters in Yorktown Heights, New York, the company showcased Watson and held a practice Jeopardy round between the supercomputer, Jennings, who won over $2.5 million on a 74-game run in 2004-2005, and Rutter, the all-time money leader at $3,255,102. The scene was slightly surreal. Watson "stood" in between the two champions, its "avatar" -- which the company describes as "a global map projection with a halo of 'thought rays'" -- flickering and flashing, as if it was thinking. "The threads and thought rays that make up Watson's avatar change color and speed depending on what happens during the game," according to Watson's official "bio." "For example, when Watson feels confident in an answer the rays on the avatar turn green; they turn orange when Watson gets the answer wrong. Viewers will see the avatar speed up and activate when Watson's algorithms are working hard to answer a clue." Watson jumped out to an early lead. For the first four questions of the round, the supercomputer "read" the clue, "pressed" its buzzer, and provided the correct answer. Its human opponents tried valiantly to catch up, but the end of the round, Watson was in first place with $4,400. Jennings was second, with $3,400. Rutter was third, with $1,200. None of the three contestants appeared rattled. Of course, Watson lacks the capacity to get rattled. "Watson doesn't have any emotions, but it knows that humans do," Ferrucci said. Asked if he was nervous to be competing against a computer, Rutter quipped, "Not nervous, but I will be when Watson's progeny comes back from the future to kill me. " Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Thursday's round was just a demonstration. Watson will go head-to-head with Jennings and Rutter in two matches, to be aired February 14, 15, and 16. Follow us for disruptive tech news: Sam Gustin and Epicenter on Twitter. Photos: Sam Gustin/Wired.com See Also: IBM Watson Challenges Jeopardy Contestants - Wired.com IBM and the Jeopardy! Challenge - Video - Wired A Decade After Kasparov's Defeat, Deep Blue Coder Relives Victory Wired 9.10: This Time It's Personal May 11, 1997: Machine Bests Man in Tournament-Level Chess Match X Topics Future Shock IBM Watson WIRED Staff Will Knight Steven Levy Will Knight Steven Levy Will Knight Will Knight Will Knight Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "
1,183
2,023
"Alphabet’s Layoffs Aren’t Very Googley | WIRED"
"https://www.wired.com/story/plaintext-alphabets-layoffs-arent-very-googley"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Business Alphabet’s Layoffs Aren’t Very Googley Photograph: Leonardo Munoz/Getty Images Save this story Save Save this story Save In 2004, Google cofounders Larry Page and Sergey Brin engaged in a comically passive-aggressive IPO road show. They eschewed business suits for casual garb, refused to answer many questions from finance bigwigs, and warned investors that instead of focusing on profits, the newly public company might apply its resources “to ameliorate a number of the world’s problems.” Both founders dreaded the restrictions of a public company and vowed that Google would never sing to Wall Street’s tune. To ensure they could do this, the founders structured the company so that they controlled the majority of voting shares. Instead of kicking back money to shareholders, Google would pamper the talent that drove its innovations, providing perks like in-house massages, free food, and lavish compensation. For instance, at the end of 2010, Page and Brin blew their workers’ minds by announcing an across-the-board 10 percent raise, a doubling of the generous annual bonus, and a $1,000 Christmas present, just for the hell of it. The beneficiaries already had top-of-market salaries augmented by lucrative equity shares. But the founders’ largesse made clear that they meant it when they said employees were the heart of the company. Brin and Page haven't been deeply involved for years, but in the company’s 25-year history, a lot of that convention-defying legacy has remained. At least until this month, when Google’s parent company Alphabet laid off 12,000 employees, about 6 percent of its workforce, including many senior leaders and some people who had worked there since its early days. For a company renowned for coddling its workers, the layoffs were a psychic shock. Especially since some of the victims were dispatched coldly, with their email access cut off before they could even say goodbye to long-term colleagues. Alphabet isn’t the only company dismissing workers. Top executives at Meta, Microsoft, Salesforce, Amazon, and others are doing the same thing—dealing with what they suddenly perceive as excessive headcount by lopping off heads. Current CEO Sundar Pichai’s memo was so similar to other corporate dispatches that it seems that all of them fed the same prompts into ChatGPT: Hey sorry I was too optimistic in hiring when we were raking in dough during the pandemic, so some of you will have to go. But this is just a blip in our trajectory. I’m really excited about the future that not all of you will be part of! Yet, the bloodletting at Alphabet is different. Aside from letting go a few hundred sales employees in 2009, the company had never experienced a major layoff. And along with it are signals that the age of limitless perks is gone. (Among those rolfed by the cuts were 27 of the company’s in-house massage therapists. ) And it’s not like the company is in financial peril. Though growth has slowed and the stock is down—like at every other tech company lately—Alphabet is still pulling in plenty of money. 
In the most recent quarter it reported, the company managed to eke out $14 billion in profits. It also has $116 billion sitting around in its vaults. And in the past few years it has spent over $100 billion to buy back its own stock, something Wall Street loves but that does nothing for the business itself. Pichai does have a case to make for the layoffs and a cutback in perks. With 187,000 employees, there were undeniably thousands whose jobs were not integral to the company—likely not only the massage therapists but also hundreds of middle managers performing nonessential projects. (Brin and Page always felt that middle managers slowed down innovation.) As you might expect, those working in the hotly competitive area of AI, including the Google Brain research group, were spared from the layoffs. In fact, Pichai argued that the cuts were performed so Google could spend more resources on AI. But in some ways the layoffs represent what seems like a gradual shift in philosophy. For years, Alphabet has funded projects—and created entire divisions—devoted to producing novel forms of technology. One of those was an in-house incubator called Area 120 that was basically shut down by this month’s cutbacks. There were also some trimming in Alphabet’s X division that works on “moonshots.” Wall Street has griped for years about the unprofitability of the company’s aspirational “other bets,” and now the company seems more focused on its more concrete businesses. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg It’s certainly true that Alphabet has set fire to billions of dollars in its quest for the Next Big Thing. But they call those moonshots for a reason—one success can cancel out a hundred failures. And you can argue it’s already happened. Google Brain began at X and is now not only integrated into Google but is a key component in almost all the company’s software, and a pivotal advantage in the coming wars over generative AI. What’s more, investing in new in-house businesses is even more important now that the US government and the EU frown on acquisitions by Big Tech. Google’s most successful move since search itself was buying YouTube for $1.6. billion in 2006—a purchase that Federal Trade Commission head Lina Khan would squash like a dung beetle if it happened today. It’s also disheartening that Alphabet seems more inclined to count pennies on employee perks. It’s easy to mock the grandiose goodies that Google bestows on its employees, especially when you see them laid out as lurid entitlements on TikTok videos. It’s also true that not many companies can generate the profit that pays for all that. But Brin and Page had a core belief that treating workers like royalty was good business. What a concept! A disruptive innovation in its own right, it became the template for nearly all of Silicon Valley’s contenders—not just tech giants but also well-funded startups competed for top-notch chefs as fiercely as they did for machine learning adepts. It was a grand experiment that flew in the face of Wall Street’s belief that the best workforces are ones that are brutally deprived and pitilessly culled. 
That experiment isn’t looking as great now, and that’s to the detriment of workers everywhere as well as those of us hungry to see some crazy idea become the next big thing. (Guess that will now be more likely to come from a startup.) Coincidentally—or maybe not—Alphabet’s moves come as one of the company’s biggest shareholders, hedge fund mogul Christopher Hohn, has been communicating with Pichai. He has been publicly complaining that the company should drastically cut its workforce—the current layoffs of 6 percent were only “a step in the right direction” he wrote , arguing for a 20 percent evisceration. He also griped about high salaries and too much money spent on Other Bets. The whole point of Brin and Page maintaining a majority of voting shares, of course, was so they wouldn’t have to listen to hedge fund multibillionaires arguing to fire workers or cut their salaries. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg While the remaining Googlers are still well paid and well fed, this episode may well lead some of them to explore other options. Though Pichai and his team attempted in a company town hall this week to provide some rationale for who was let go, people I spoke to still often had little idea why person X was cut and person Y remained. But here’s what is clear: Person Y, and everyone else in the company (except maybe its AI wizards), are now a little less certain about their status. “It feels like a shift in the company,” says one long-time software engineer who can’t figure out why he got the pink slip. “I definitely get the sense that even long-term high-performing employees who are left will now be looking over their shoulders.” In his memo, Pichai promised that Google will continue its “healthy regard for the impossible that’s been core to our culture from the beginning.” Unfortunately, it has proved impossible to do that without firing people, freaking out the survivors, and calling into question the company’s unique values. In my 2011 book, In the Plex , I wrote about Brin and Page's reluctant move to take the company public. Google would go public. But Larry and Sergey would do it their way. It was the values of Google squaring off against the values of Wall Street, which embodied everything its founders despised about tradition-bound, irrational corporate America … Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Page and Brin drafted a personal letter to potential investors explaining in simple language why Google was special and therefore would have a different relationship with its shareholders than other companies did … “We wanted to get people to know what to expect,” says Brin … “Google is not a conventional company,” began Page’s letter, released on April 29, 2004. “We do not intend to become one.” It was an explicit warning to potential shareholders: Fasten your seat belts! 
Michael asks, “Why do people fly in private jets to talk about climate change in Davos?” Thanks for the question, though I suspect that it is more a comment about what you perceive as hypocrisy than a mystery that’s keeping you up at night. Not being rich enough to have my own plane, or even to regularly book my way into a pricier seat on commercial aviation, I can’t answer firsthand. But from what I know of eco-conscious billionaires, I imagine that they would say that they need to fly private because of security and the value of their time. Probably they also ask themselves, What’s the point of being a billionaire if I can’t fly in my own plane? But that puts them in an awkward position when they go all Cassandra in their climate statements. In his book about the environment, Bill Gates admits that his own carbon footprint is “absurdly high” and vows to do something about it. But while he says the pandemic did cut his travel —he’s not joining Greta Thunberg in hitching a ride on a freight ship in his oceanic jaunts. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But while it’s easy to scorn the general aviation crowd for being two-faced, fat cats are not the only ones who don’t live by their principles. Lots of us who aren’t rich have to grapple with the same conflict between comfort and climate. I travel by air quite frequently, even though I realize that all those planes spewing contrails aren’t good for the environment. Messing up the earth is a collective enterprise and we all contribute. Giant private yachts, however, have no justification. If you own one, pipe down about the climate. Your word means nothing. You can submit questions to [email protected]. Write ASK LEVY in the subject line. The Doomsday Clock now shows only 90 seconds before apocalypse midnight. Does this mean we won’t ever get to see Apple’s AR headset ? All popular platforms start out great and become, well, shitty. TikTok will be no exception. Meanwhile, some Youngs are using TikTok for search. Our favorite cat owner tried it out. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Nuclear blast in your region? No problem! If you’re got the right bomb shelter. ChatGPT won’t ruin education. And no, it’s not a bot making that claim. Don't miss future subscriber-only editions of this column. Subscribe to WIRED (50% off for Plaintext readers) today. If you buy something using links in our stories, we may earn a commission. This helps support our journalism. Learn more. 
You Might Also Like … 📨 Make the most of chatbots with our AI Unlocked newsletter Taylor Swift, Star Wars, Stranger Things , and Deadpool have one man in common Generative AI is playing a surprising role in Israel-Hamas disinformation The new era of social media looks as bad for privacy as the last one Johnny Cash’s Taylor Swift cover predicts the boring future of AI music Your internet browser does not belong to you 🔌 Charge right into summer with the best travel adapters , power banks , and USB hubs Editor at Large X Topics Plaintext Silicon Valley Google Jobs Search IPOs Paresh Dave Steven Levy Peter Guest Amanda Hoover Niamh Rowe Paresh Dave Steven Levy Vittoria Elliott Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "
1,184
2,020
"Portland City Council votes to ban facial recognition technologies in public places | VentureBeat"
"https://venturebeat.com/2020/09/09/portland-city-council-votes-to-ban-facial-recognition-technologies-in-public-places"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Portland City Council votes to ban facial recognition technologies in public places Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The Portland, Oregon City Council today unanimously voted to adopt two of the strongest bans on facial recognition technologies in the U.S. One prohibits the public use of facial recognition by city bureaus, including the Portland Police Department, while the other bans all private use in places of “public accommodation,” like parks and buildings. The ordinances originally contained an amendment that would have allowed airlines in partnership with U.S. Customs and Border Protection to collect facial recognition data on travelers at the Portland International Airport. But the proposals voted on today make exemptions only for Portland public schools. The ban on Portland government agencies’ use of facial recognition technology goes into effect immediately, while the ban on private use takes effect starting January 1, 2021. The state of Oregon had already prohibited police use of body cameras with facial recognition technology. In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco, California and Somerville, Massachusetts are among the metros where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including face images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts are considering a suspension of government use of any biometric surveillance system within the commonwealth. As OneZero’s Kate Kaye notes , the newly adopted pair of Portland ordinances ban the use of facial recognition at stores, banks, restaurants, public transit stations, homeless shelters, doctors’ offices, rental properties, retirement homes, and a variety of other types of businesses. The legislation allows people to sue noncompliant private and government entities for $1,000 per day of violation or for damages sustained as a result of the violation, whichever is greater, and establishes a new chapter of city code sharply constraining the use of facial recognition by private entities. The ordinances also give city bureaus 90 days to provide an assessment ensuring they’re not using facial recognition for any purpose. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The bans fall short of preventing facial recognition use in private clubs, places of worship, and households, and they don’t limit the technology’s deployment at workplaces, like factories or office buildings (excepting publicly accessible lobbies within those workplaces). In addition, government staff will still be permitted to use facial recognition to unlock a phone, tag someone on social media, and obscure faces in law enforcement images released to the public. Individuals can also set up facial recognition technology at home or on personal devices, like Apple’s Face ID feature on iPhones. But in spite of the exemption for Portland public schools, the ordinances do cover private schools, from nursery schools through elementary, secondary, undergraduate, and post-graduate institutions. “With these concerning reports of state surveillance of Black Lives Matter activists and the use of facial recognition technology to aid in the surveillance, it is especially important that Portland prohibits its bureaus from using this technology,” City Commissioner Jo Ann Hardesty said in a statement. “Facial recognition tech, with its gender and racial bias and inaccuracies, is an intrusion on Portlanders’ privacy. No one should have something as private as their face photographed, stored, and sold to third parties for a profit. No one should be unfairly thrust into the criminal justice system because the tech algorithm misidentified an innocent person.” Amazon was among the technology vendors who sought to block or weaken the city’s legislation. According to OneZero , the company paid lobbyists $24,000 to contact and meet with key Portland councilmember staffers and mayoral staffers. Amazon reportedly wanted to influence language in the draft, including how the term “facial recognition” was defined. Beyond Amazon, some Portland businesses, including the Oregon Bankers Association, urged councilmembers ahead of the vote to consider a temporary ban on specific uses of facial recognition software rather than a blanket ban on the technology. For instance, Jackson officials said they used the technology at three stores in the city to protect employees and customers from people who have threatened clerks or shoplifted. “Talking to some businesses that we work with, as well as the broader business community, there are definitely some who would be opposed to the city restricting their ability to use that technology,” Technology Association of Oregon president Skip Newberry told Oregon Live. “It can range from security of sites or critical infrastructure to people coming into a store and it being used to provide an experience tailored to that individual.” Numerous studies and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to bias. One issue is that the data sets used to train the algorithms skew white and male. IBM found that 81% of people in the three face-image collections most widely cited in academic studies have lighter-colored skin. Academics have found that photographic technology and techniques can also favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras. The algorithms are often misused in the field, as well, which tends to amplify their underlying biases. 
A report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The New York Police Department and others reportedly edit photos with blur effects and 3D modelers to make them more conducive to algorithmic face searches. Amazon, IBM, and Microsoft have self-imposed moratoriums on the sale of facial recognition systems. But some vendors, like Rank One Computing and Los Angeles-based TrueFace, are aiming to fill the gap with customers, including the City of Detroit and the U.S. Air Force. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,185
2,020
"Tech execs urge Washington to accelerate AI adoption for national security | VentureBeat"
"https://venturebeat.com/2020/07/22/tech-execs-urge-washington-to-accelerate-ai-adoption-for-national-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tech execs urge Washington to accelerate AI adoption for national security Share on Facebook Share on X Share on LinkedIn United States Congress in Washington DC Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tech company CEOs may be heading to Washington, D.C. next week to take part in antitrust hearings in Congress , but this week high-profile executives from companies like Amazon, Microsoft, and Google gave the president, Pentagon, and Congress advice on how the United States can maintain AI supremacy over other nations. Today, the National Security Commission on AI released a set of 35 recommendations , ranging from the creation of an accredited university for training AI talent to speeding up Pentagon applications of AI in an age of algorithmic warfare. The National Security Council on AI ( NSCAI ) was created by Congress in 2018 to advise national AI strategy as it relates to defense, research investments, and strategic planning. Commissioners include AWS CEO Andy Jassy, Google Cloud chief AI scientist Andrew Moore, and Microsoft chief scientist Eric Horvitz. Former Google CEO Eric Schmidt acts as chair of the group. Coming amid concerns over China’s rise as an economic and military power and AI’s increasing use in businesses and governments, the group’s recommendations may have a long-lasting impact on the United States government and the world. To bolster U.S. competitiveness in AI, the council recommends steps such as creating a National Reserve Digital Corps, modeled on military reserve corps, to give machine learning practitioners a way to contribute to government projects on a part-time basis. Unlike the U.S. Digital Service, which asks tech workers to serve for one full year, the NRDC would ask for a minimum of 38 days a year. Commissioners also recommend creating an accredited university called the U.S. Digital Services Academy. Graduates would pay for their education with five years of work as civil servants. Classes would include American history, as well as mathematics and computer science. Students would participate in internships at government agencies and in the private sector. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A joint Stanford-NYU study found that only a small percentage of federal agencies are using complex forms of machine learning and that trustworthiness of systems used for what it calls algorithmic governance will be critical to citizen trust. 
Released in February, the report urges federal agencies to acquire more internal AI expertise. The quarterly report includes an outline for putting ethics principles into practice with the goal of aligning U.S. principles with engineering practices in major parts of the AI lifecycle. “We hope that the key considerations we’re laying out now have the potential to form the foundation for international dialogue on areas of cooperation and collaboration, even with potential adversaries,” said Horvitz, who gave a presentation on the subject Monday during a meeting where commissioners unanimously approved the recommendations. Among the recommendations: Train U.S. State Department employees in emerging technologies like AI that, as an NSCAI staff member put it, “define global engagement strategies.” Encourage the Department of Defense to adopt commercial AI systems for things like robotic process automation. (At VentureBeat’s Transform 2020 conference last week, Joint AI Center acting director Nand Mulchandani, a former Silicon Valley executive, stressed that the military will grow its reliance on private industry. ) Build a certified AI software repository for the U.S. military to accelerate creation of AI and support research and development. Create a database to track research and development projects within the U.S. military. Have military leaders adopt an open innovation model for the DoD to accelerate the Pentagon’s ability to create AI. Integrate AI-enabled applications into “all major joint and service exercises,” as well as war games and table-top exercises. Invest in research and development for testing AI systems for compliance and verify results. Google Cloud’s Moore said testing is important “because it won’t be long before 90% of the entire length of an AI project [pipeline] is testing and validation and only 10% will be the initial development. So we have to be good at this, or else we will see our country’s speed of innovation grind to a halt.” Former deputy secretary of Defense and NSCAI commissioner Robert Work referred to the competition in AI as a competition in values but stressed that testing and validation to prove results is also important in allowing military leaders to confidently adopt AI applications. Following the release of the NSCAI interim report to Congress last fall, the group began releasing quarterly recommendations to advise national leaders on how to maintain the country’s edge in AI. Recommendations in the first quarterly report ranged from building public-private partnerships and government funding of semiconductor development to using the ASVAB military entrance exam to identify recruits with “computational thinking.” A major topic of discussion throughout the meeting Monday was international relations — how the U.S. cooperates with allies and how it treats adversaries. While many commissioners in the meeting stressed the need to defend AI supremacy over that of other nation-states, Microsoft’s Horvitz said, “Our biggest competitor is status quo and actually innovation, to be honest.” NSCAI Commissioner Gilman Louie is founder of In-Q-Tel, the CIA’s investment arm. He said he welcomes healthy competition in the development of AI for exploration, science, health, and the environment, but being the best in AI is a matter of national security, particularly with the rise of adversarial machine learning. Louie said increased adoption of government use of AI is not just a matter of technical expertise or compute resources, but also of cultural change. 
Once that change happens, he said, it can have a drastic and disruptive impact. “I think there’s going to be a point somewhere maybe five or six years from now, when we get our hands around the basic uses of AI, that we will have a choice to make: whether or not we’ll continue to use AI for incremental improvement versus highly disruptive change,” he said. “When you think about offensive uses, defensive uses, support uses of AI, we tend to liken these new technologies within the department and national security apparatus in a way that doesn’t change the wiring diagram. It makes us a little bit faster, a little bit better, but we don’t want to change the way we think about our mental models of operating or constructs of organizations. I think the power of AI is that it could disrupt all of that, and if we’re not willing to disrupt ourselves, we’re going to let potential adversaries and competitors disrupt us.” Katharina McFarland, chair of the National Academies of Science Board of Army Research and Development, said she’s seen machine learning deployments accelerate inside and outside the military during the COVID-19 pandemic. “There’s some hope here because people are starting by having to — not because they want to, but because they have to — to start having and developing some confidence in these tools,” she said. The commission also discussed potential next steps, such as testing and validation framework recommendations and ways to put ethical principles into practice. Additional NSCAI recommendations are due out in the fall. The NSCAI is a temporary group that is scheduled to deliver a final report to Congress next spring and will dissolve in October 2021. In other news at the intersection of AI and policy, today the Senate Committee on Commerce, Science, and Transportation advanced two bills that if passed into law would help shape U.S. AI policy. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1186
2020
"AWS, Google, and Mozilla back national AI research cloud bill in Congress | VentureBeat"
"https://venturebeat.com/2020/06/30/aws-google-and-mozilla-back-national-ai-research-cloud-bill-in-congress"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS, Google, and Mozilla back national AI research cloud bill in Congress Share on Facebook Share on X Share on LinkedIn Stanford HAI codirector Dr. Fei-Fei Li and Hoover Institution director and former Secretary of State Condoleeza Rice talking about AI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A group of more than 20 organizations , including tech giants like AWS, Google, IBM, and Nvidia, joined schools like Stanford University and Ohio State University today in backing the idea of a national AI research cloud. Nonprofit groups like Mozilla and the Allen Institute for AI also support the idea. The cloud would help researchers across the United States gain access to compute power and data sets freely available to companies like Google, but not researchers in academia. Compute resources available to academics could grow even more scarce in the near future as COVID-19 fallout constricts university budgets. The National AI Research Resource Task Force Act was first introduced earlier this month by the founding cochairs of the Senate AI Caucus, U.S. Senators Rob Portman (R-OH) and Martin Heinrich (D-NM), together with a bipartisan group in the House of Representatives. If passed, the bill will bring together experts from government, industry, and academia to devise a plan for the creation of a national AI research cloud. The National Security Commission on Artificial Intelligence (NSCAI) chair and former Google CEO Eric Schmidt also supports the plan. In reports written by tech executives and delivered to Congress in the past year, the NSCAI has recommended more cooperation between academia, industry , and government as part of a broader strategy to keep the United States’ edge in tech compared to other nations. The idea of a national AI research cloud was first proposed last year by Stanford Institute for Human-Centered Artificial Intelligence (HAI) codirectors Dr. Fei-Fei Li and John Etchemendy, who said its creation was essential to U.S. competitiveness and the nation’s status as a leader in AI. In a March blog post , Li and Etchemendy called the creation of such a cloud potentially “one of the most strategic research investments the federal government has ever made.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Leaders at Stanford joined more than 20 other universities in sending a joint letter to President Trump and Congress last year backing a national AI research cloud. 
Previous bills also recommended the creation of AI centers and a national AI coordination office as part of a comprehensive U.S. AI strategy. Increased data sharing and ideas like a national center of excellence also came up last year when the Computing Community Consortium laid out its 20-year AI research road map. Li talked about AI, China, health care, and other topics today in a conversation with former Secretary of State and soon-to-be Hoover Institution director Condoleezza Rice. After stating that a U.S. lead in tech is important to national security, Rice asked Li about how the U.S. can lead in AI if China has more data and fewer privacy concerns. In response, Li said AI applications like speech or facial recognition may be data heavy, but other forms of AI that require less data may supply fruitful ground for U.S. progress. “Data is a first-class citizen of today’s AI research. We should admit that, but it’s not the only thing that defines AI,” Li said. “Rare disease understanding, genetic study of rare disease, drug discovery, treatment management — they are by definition not necessarily data heavy, and AI can play a huge role. Human-centered design, I think about elder care and that kind of nuanced technological help. That’s not necessarily data heavy as well, so I think we need to be very thoughtful about how to use data.” The future of work, ethics, and AI bias were also major topics of discussion. Li urged the development of AI that brings together interdisciplinary teams, gathers insights from people impacted by AI, and is made by more than computer science school graduates. “America’s strength is our people, and the more people who participate in this technology, to guide and develop it, the stronger we are,” she said. Li also stressed the need to stay ahead of the ethical implications of AI and suggested computer scientists throw away the notion of independent machine values, asserting that “Machine values are human values.” In a separate policy proposal made by Stanford HAI last year, Li and Etchemendy urged the federal government to grow its national AI investments to $12 billion a year for the next decade. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1187
2020
"Tech leaders highlight military AI and 5G investments they call essential to U.S economy and national security | VentureBeat"
"https://venturebeat.com/2020/04/03/tech-leaders-highlight-military-ai-and-5g-investments-they-call-essential-to-u-s-economy-and-national-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tech leaders highlight military AI and 5G investments they call essential to U.S economy and national security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Some of the biggest names in AI at companies like Amazon, Google, and Microsoft are making recommendations about how the U.S. military and federal government should fund 5G adoption and AI initiatives as part of the forward-looking work of the National Security Council on AI ( NSCAI ). The NSCAI gives a select group of tech executives agency to make recommendations with the power to affect not just military policy, but, with public-private partnerships, their own businesses as well. Congress formed the independent NSCAI Commission as part of the 2019 military budget to advise on matters at the intersection of AI and national defense. Commissioners include AWS CEO Andy Jassy, Microsoft chief science officer Eric Horvitz, former Google CEO Eric Schmidt, and Google Cloud AI chief Andrew Moore. The report makes more than two dozen preliminary judgments ranging from modifications to a military entrance exam to discovering AI talent to creating a national 5G strategy within 6 months as part of the next annual defense spending bill. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “It is a national security imperative for the U.S. military to have access to a powerful 5G network to enable future AI capabilities, and ensure the network is trusted to prevent competitors from accessing our AI systems,” the report reads. The report frames 5G as an adjacent technology to AI and ML. For that same reason, biotechnology and quantum recommendations will also be part of the final report, which is due out in March 2021. But this quarterly report includes recommendations on issues like 5G that NSCAI commissioners believe deserve action now, and it underscores the importance of doing so. The report also calls for an expansion of 5G spectrum sharing between private businesses and the U.S. military, and for Congress to pass the $750 million USA Telecommunications Act to encourage 5G research and development and open up access to radio networks. Quick action is called for, the NSCAI Commission said, to compete with Huawei. “5G networks will form the connective tissue between AI platforms. Ensuring the United States maintains access to trusted and robust 5G networks is a critical component of overall leadership in AI. 
This is particularly true as microelectronics continue to advance, and the capability to run sophisticated AI models at the edge will increase. As AI becomes more dispersed throughout the network, the need for a secure and effective 5G network will increase even more,” the Commission said in the report. The report also emphasizes the need for military and federal government funding to produce, assemble, and test semiconductor hardware like FPGAs, GPUs, and ASIC chips. The Commission prescribes simultaneous creation of government pathways to trusted, state-of-the-art integrated circuit chips and continued investment in microelectronics to ensure progress despite the slowdown of Moore’s law. To accomplish this, the Commission says Congress should invest $500 million in DARPA’s Electronics Resurgence Initiative and $100 million in the U.S. Navy’s Trusted and Assured Microelectronics program and State-of-the-Art Heterogeneously Integrated Packaging (SHIP). “At present, the U.S. government does not have trusted access to state-of-the-art microelectronics manufacturing,” the report reads. “With an additional $50 million, SHIP could expand the existing pilot prototype program to include heterogeneous integration of multi-chip packages incorporating AI specific chips and configurations.” The report acknowledges that the United States has enjoyed a strategic advantage in chips since the field was created decades ago but says that advantage is eroding, and that risk related to the semiconductor supply chain in the U.S. is on the rise. Related to the Navy’s trusted microelectronics programs, the report states, the Office of the Director of National Intelligence (ODNI) has assessed ways private semiconductor companies can work with the government to establish state-of-the-art semiconductor design. “While building a cutting-edge, high-capacity semiconductor fabrication plant for dedicated-government use would likely cost approximately $20 billion, the ODNI approach calls for a security-based split-manufacturing facility and partnering with a private sector firm to build a facility, which would produce both commercial use and government-use chips,” the report reads. Leading up to a final report scheduled to be sent to Congress next year, the NSCAI Commission said this week it now plans to release quarterly reports on how the U.S. military and intelligence agencies can prepare for a future of algorithmic warfare or otherwise use AI for their purposes. A draft NSCAI report released last fall asserted that AI supremacy is essential to U.S. national security and economic might. Virtually all recommendations in this week’s report are intended for Congress or the executive branch for reprogramming 2020 spending or allocation in the 2021 budget. Other recommendations in the report include: Modify the Armed Services Vocational Aptitude Battery ( ASVAB ) test, an entrance exam for military recruits, to seek out people with “computational thinking” — that is, people with minds that define a problem, create models to solve the problem, then iterate. The same test should be extended to civilian DoD employees to find AI talent. Establish AI ethics training for the DoD as well as the Department of Homeland Security, FBI, and intelligence agencies. This training should later be shared with state and local law enforcement agencies. The DoD should also create an expert panel to advise the federal government on AI ethics issues. 
This week’s report comes after DoD issued recommendations on AI ethics principles in February. Establish general AI training for government procurement officials and HR professionals who are hiring software developers, data scientists, or AI practitioners. Create unclassified workspaces so recent hires who do not yet have a classified security clearance can stay busy. Launch a task force study and pilot program to establish a National AI Research Resource for research. Establish deeper AI collaborations with Australia, Canada, New Zealand, and the United Kingdom. Hire more university professors as part-time government researchers in order to attract young talent. On matters of privacy and ethics, the Commission singles out Clearview AI, a company that scraped billions of images from the web to create a facial recognition system, as an example of invasive AI-powered tech that should be avoided. Tech companies that track the location of individuals — including the employers of some commission members — were also mentioned as problematic. “These developments only confirm that we need to develop best practices, policies, and laws aimed at ensuring the responsible development and fielding of AI-enabled systems and tools consistent with democratic norms and values,” the report reads. The Commissioners, and a staff of experts advising their work, said they switched to a quarterly approach in order to inform elected officials like the president and Congress about issues that deserve near-term or immediate action, like 5G competition with China. “The NSCAI is on track to submit its final report in March 2021. However, the pace of AI development, the geopolitical situation, and the relevant authorization and budget timelines in 2020 represent important opportunities for the Commission to contribute to ongoing efforts to foster research and development, accelerate AI applications, and responsibly grapple with the implications of AI for our security, economy, and society.” The report makes a single mention of coronavirus, citing the global pandemic as a reason the Commission must remain flexible and act fast. Like AI, defense officials have referred to COVID-19 as a national security threat. The Commission will share a series of recommendations deemed classified with executive and legislative branches of government related to specific threats to the United States from foreign state and non-state actors. The laundry list of funding requests in the report could face some obstacles in the future. As the U.S. and global economy continue to falter, economists expect that a recession in the months ahead may lead to the largest reduction in U.S. GDP since World War II. "
1188
2020
"Stanford and NYU: Only 15% of AI federal agencies use is highly sophisticated | VentureBeat"
"https://venturebeat.com/2020/02/19/only-15-of-ai-federal-agencies-use-is-highly-sophisticated-according-to-stanford-and-nyu-report"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stanford and NYU: Only 15% of AI federal agencies use is highly sophisticated Share on Facebook Share on X Share on LinkedIn U.S. Capitol Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. More than 40% of U.S. federal agencies and departments have experimented with AI tools, but only 15% currently use highly sophisticated AI, according to analysis by Stanford University computer scientists published today in “Government by Algorithm,” a joint report from Stanford and New York University. “This is concerning because agencies will find it harder to realize gains in accuracy and efficiency with less sophisticated tools. This result also underscores AI’s potential to widen, not narrow, the public-private technology gap,” the report reads. The warning comes from an analysis released today of 142 federal agencies and departments and the legal and policy implications of government use of machine learning or “algorithmic governance.” The report excludes analysis of military and intelligence agencies and any federal agency with less than 400 employees. AI in use today include an autonomous vehicle project at the U.S. Postal Service; Food and Drug Administration detection of adverse drug events ; and facial recognition by the U.S. Department of Homeland Security and ICE. Major use cases today focus heavily on enforcement of regulatory mandates, adjudicating benefits and privileges, service delivery, citizen engagement, regulation analysis, and personnel management. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The “Government by Algorithm” report found that 53% of AI use is a product of in-house use by agency technologists, and the remainder comes from contractors. It recommends that federal agencies get more in-house AI talent to vet systems from contractors and create AI that’s policy compliant, customized to meet agency needs, and accountable. It also warns that AI use by government raises the potential to “fuel political anxieties” and creates the risk of AI systems being gamed by “better-heeled groups with resources and know-how.” “An enforcement agency’s algorithmic predictions, for example, may fall more heavily on smaller businesses that, unlike larger firms, lack a stable of computer scientists who can reverse-engineer the agency’s model and keep out of its cross-hairs. 
If citizens come to believe that AI systems are rigged, political support for a more effective and tech-savvy government will evaporate quickly,” the report reads. The report, put together by a group of lawyers, computer scientists, and social scientists, also acknowledges concerns that more use of AI in the public sector can lead to the growth of government power and the disempowerment of marginalized groups, something AI Now Institute’s Meredith Whittaker and Algorithmic Justice League’s Joy Buolamwini talked about in relation to facial recognition in testimony before Congress over the course of the past year. The report calls its systematic survey of federal government use of AI essential for lawmakers to create “sensible and working prescriptions.” “To achieve meaningful accountability, concrete and technically informed thinking within and across contexts — not facile calls for prohibition, nor blind faith in innovation — is urgently needed,” the report reads. Drawing on resources from Stanford Law School, the Stanford Institute for Human-Centered AI, and Stanford Institute for Economic Policy Research, the report comes at a time when lawmakers from Washington state to Washington D.C. are considering facial recognition regulation. Last week, Senators Cory Booker (D-NJ) and Jeff Merkley (D-OR) proposed the Ethical Use of AI Act, which would require a facial recognition moratorium for federal agencies and employees until limits can be put in place. The European Union Commission today presented a set of initiatives to attract billions in AI investment in member nations and require that high-risk AI used by police and law enforcement, health care, or things related to people’s rights be tested and certified. “We want the application of these new technologies to deserve the trust of our citizens,” EU Commission president Ursula von der Leyen said in a statement. The Trump administration is drafting its own set of regulatory AI principles for federal agencies that White House CTO Michael Kratsios said other nations should emulate. A previous Stanford Institute for Human-Centered AI report called for a $120 billion federal government investment in AI by the federal government to maintain U.S. supremacy in AI, something government officials have called essential to U.S. national defense and economy. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1189
2019
"KPMG warns executives must move beyond 'lip service' to address AI-driven job loss | VentureBeat"
"https://venturebeat.com/2019/03/28/kpmg-warns-executives-must-move-beyond-lip-service-to-address-ai-driven-job-loss"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages KPMG warns executives must move beyond ‘lip service’ to address AI-driven job loss Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. More than half of business executives plan to implement some form of artificial intelligence within the next 12 months, according to the 2019 State of Intelligent Automation report released today by KPMG International in collaboration with HFS Research. KPMG defines intelligent automation as a collection of terms and practices under the artificial intelligence umbrella — ranging from deep learning to robotic process automation, cognitive computing, and smart analytics. One of the challenges the report highlights is difficulty moving AI implementation beyond pilot projects. In fact, only 17 percent of respondents said they have scaled up or industrialized AI implementation in their organization. The survey also pointed to a less-than-clear understanding of the financial investment needed to scale AI. Still, more than 50 percent of executives surveyed expect to scale intelligent automation at the enterprise level within the next two years, while over 30 percent are investing more than $50 million in AI, primarily toward costs like cloud computing. The survey also found that organizations are likely to underestimate costs beyond the tech needed for AI adoption — such as those related to human resources or retraining employees. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Other insights include: More than 60 percent are applying multiple forms of intelligent automation tech, but only 11 percent are using an integrated solution or a company-wide approach to coordinate efforts across an organization. About 1 in 4 said they want to use AI to drive revenue growth, while 30 percent said they want AI to help them improve the quality of interactions with customers. Organizations in quick-moving industries that need to remain agile have seen the most benefit from AI adoption, while legacy organizations used to operating in a specific way can bring baggage to their AI initiatives. The majority of AI initiatives are headed by IT departments. Less than one-fifth of businesses have an approach that brings together both IT and business leaders in a company. Fundamental shifts in workplace culture are necessary to achieve results from automation beyond cost savings, according to the authors. 
“If all an organization gains through IA [intelligent automation] is incremental cost savings, it is missing out on IA’s full potential. To get the most from IA efforts beyond cost savings, broad-ranging transformation is needed, not just in a piecemeal way,” the report reads. Nearly 600 business executives in 13 countries in North America, Europe, Asia, and Africa participated in the study and subsequent interviews to elaborate on survey responses. Participating companies include Ericsson Group, InterContinental Hotels Group, and USAA Bank in the United States. The study also found that virtually all organizations need help preparing employees for changes ahead. “Change management strategies and plans are typically inadequate, and too much lip service is being paid to talk down the potential for job loss, as well as the potential for retraining and reskilling,” the report reads. To succeed in bringing AI services to a business, the report suggests a “top-level champion” be appointed to spearhead initiatives, someone who understands the value of AI within the organization. The study also suggests organizations begin conditioning their employees to understand that their jobs are going to change as part of their AI strategy. About 3 out of 4 organizations surveyed expect intelligent automation to significantly impact 10 to 50 percent of their employees in the next two years. While acknowledging that robotic software can partially or fully eliminate many work roles in an organization, the report notes that only 1 percent of executives surveyed said their goal with AI adoption is to eliminate full-time employees. Despite tension about job loss, the authors implore businesses to continue to deploy AI since they found a correlation between speed of intelligent automation implementation and company success. Nearly 65 percent of the best-performing companies surveyed will scale AI use this year, while nearly 60 percent of poor-performing companies plan to scale AI use in the next two to five years. KPMG survey results mirror a study commissioned by Microsoft and released earlier this month that stresses the need to change company culture in order to successfully implement AI and also notes the correlation between AI adoption rates and the performance of high-growth businesses in the United States and Europe. “It takes patience when pushing forward with IA efforts, especially given [that] the whole transition may face resistance from managers and staff, who may naturally resist and feel threatened by change, especially when it might lead to job loss and changes to roles and operating models. Despite these challenges, organizations must press ahead with their IA efforts,” the report reads. “Intelligent automation will span quickly across all industries and will disrupt businesses at an accelerated pace. The competitive businesses of the future will be far along the IA curve of development.” The study also refers to AI not just as a potential job disrupter or killer, but as key to addressing skills shortages in countries and regions with aging workforces, like Japan, the United States, and Europe. 
"
1190
2019
"Google Lens, Augmented Reality, and the Future of Learning | WIRED"
"https://www.wired.com/story/google-lens-augmented-reality-future-of-learning"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Lauren Goode Gear Google Lens, Augmented Reality, and the Future of Learning Cera Hensley Save this story Save Save this story Save Did you know that the painter Rockwell Kent, whose splendorous Afternoon on the Sea, Monhegan hangs in San Francisco's de Young Museum, worked on murals and advertisements for General Electric and Rolls-Royce? I did not, until I visited Gallery 29 on a recent Tuesday afternoon, phone in hand. Because the de Young's curators worked with Google to turn some of the informational placards that hang next to paintings into virtual launchpads, any placard that includes an icon for Google Lens —the name of the company's visual search software—is now a cue. Point the camera at the icon and a search result pops up, giving you more information about the work. (You can access Google Lens on the iPhone within the Google search app for iOS or within the native camera app on Android phones.) Related Stories How We Spend Lauren Goode Plot Reversal Nitasha Tiku AUGMENTED REALITY Jason Tanz The de Young's augmented-reality add-ons extend beyond the informational. Aim your camera at a dot drawing of a bee in the Osher Sculpture Garden and a quirky video created by artist Ana Prvacki plays—she attempts to pollinate flowers herself with a bizarrely decorated gardening glove. It wasn't so long ago that many museums banned photo-taking. And smartphones and tablets were disapproved of in classrooms. But technology is winning, and the institutions of learning and discovery are embracing screens. AR , with its ability to layer digital information on top of real-world objects, makes that learning more engaging. Of course, these ARtistic addenda don't pop out in the space in front of you; they're not volumetric, to borrow a term from VR. They appear as boring, flat web pages in your phone's browser. Using Google Lens in its current form in a museum, I discovered, requires a lot of looking up, looking down, looking up, looking down. AR isn't superimposing information atop the painting yet. Then again, Lens isn't just for museums; you can use it anywhere. Google's AR spans maps, menus, and foreign languages. And Google's object-recognition technology is so advanced, the thing you're scanning doesn't need a tag or QR code—it is the QR code. Your camera simply ingests the image and Google scans its own database to identify it. Apple, loath to be outdone by Google, has been hyping AR capabilities via the iPhone and iPad, though not directly in its camera. Instead, Apple has created ARKit , an augmented-reality platform for app makers who want to plug camera-powered intelligence into their own creations. The platform has turned into an early-stage playground for educational apps. Take Froggipedia, which lets teachers lead students through a frog dissection without having to explain the senseless death of the amphibian. Or Plantale, which allows a student to explore the vascular system of a plant by pointing their iPad camera at one. 
Katie Gardner, who teaches English as a second language at Knollwood Elementary in Salisbury, North Carolina, says her kindergarten students “just scream with excitement” when they see their drawings come to life in the iPad app AR Makr. It takes a 2D drawing and renders it as a 3D object that can be placed in the physical world, as viewed through the iPad's camera. Gardner uses the app for story-retelling exercises: The kids listen to a tale like Sneezy the Snowman and then use AR Makr on their iPads to illustrate a snippet of the narrative. In the real classroom, there is nothing on the table in the corner. But when the kids point their iPads at the table, their creations appear on it. It's too early to say how well we learn things through augmented reality. AR lacks totality by definition—unlike VR, it enhances the real world but doesn't replace it—and it's hard to say what that means for memory retention, says Michael Tarr, a cognitive science researcher at Carnegie Mellon University. “There is a difference between the emotional and visceral responses that happen when something is experienced as a real event or thing and when something is experienced as a digital or pictorial implementation of a thing,” he says. Last year, I used Google Lens to identify a fading houseplant, hoping to save it. I now know everything about philodendrons, even though mine didn't make it. During long hikes, I've started using Lens to identify everything from blue gum eucalyptus trees to blue-tailed skinks. But not all of this new knowledge sticks. I still find myself Googling trees and lizards again and again. When I want to really learn something, I put down my $1,000 smartphone and scribble handwritten notes in my $3 notebook. 
"
1191
2020
"AI Can Help Patients—but Only If Doctors Understand It | WIRED"
"https://www.wired.com/story/ai-help-patients-doctors-understand"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business AI Can Help Patients—but Only If Doctors Understand It Photograph: Getty Images Save this story Save Save this story Save Application Human-computer interaction End User Big company Sector Health care Source Data Text Speech Images Technology Machine learning Nurse Dina Sarro didn’t know much about artificial intelligence when Duke University Hospital installed machine learning software to raise an alarm when a person was at risk of developing sepsis, a complication of infection that is the number one killer in US hospitals. The software, called Sepsis Watch, passed alerts from an algorithm Duke researchers had tuned with 32 million data points from past patients to the hospital’s team of rapid response nurses, co-led by Sarro. But when nurses relayed those warnings to doctors, they sometimes encountered indifference or even suspicion. When docs questioned why the AI thought a patient needed extra attention, Sarro found herself in a tough spot. “I wouldn’t have a good answer because it’s based on an algorithm ,” she says. Sepsis Watch is still in use at Duke—in no small part thanks to Sarro and her fellow nurses reinventing themselves as AI diplomats skilled in smoothing over human-machine relations. They developed new workflows that helped make the algorithm’s squawks more acceptable to people. A new report from think tank Data & Society calls this an example of the “repair work” that often needs to accompany disruptive advances in technology. Coauthor Madeleine Clare Elish says that vital contributions from people on the frontline like Sarro are often overlooked. “These things are going to fail when the only resources are put towards the technology itself,” she says. By Tom Simonite The human-machine mediation required at Duke illustrates the challenge of translating a recent surge in AI health research into better patient care. Many studies have created algorithms that perform as well as or better than doctors when tested on medical records, such as X-rays or photos of skin lesions. But how to usefully employ such algorithms in hospitals and clinics is not well understood. Machine learning algorithms are notoriously inflexible, and opaque even to their creators. Good results on a carefully curated research dataset don’t guarantee success in the chaotic clockwork of a hospital. A recent study on software for classifying moles found its recommendations sometimes persuaded experienced doctors to switch from a correct diagnosis to a wrong one. When Google put a system capable of detecting eye disease in diabetics with 90 percent accuracy into clinics in Thailand, the system rejected more than 20 percent of patient images due to problems like variable lighting. Elish recently joined the company, and says she hopes to keep researching AI in health care. Duke’s sepsis project started in 2016, early in the recent AI health care boom. It was supposed to improve on a simpler system of pop-up sepsis alerts, which workers overwhelmed by notifications had learned to dismiss and ignore. 
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Researchers at the Duke Institute for Health Innovation reasoned that more targeted alerts, sent directly to the hospital’s rapid response nurses, who in turn informed doctors, might fare better. They used deep learning, the AI technique favored by the tech industry, to train an algorithm on 50,000 patient records , and built a system that scans patient charts in real time. “These things are going to fail when the only resources are put towards the technology itself.” Madeleine Clare Elish, formerly of Data & Society Sepsis Watch got an anthropological close up because the Duke developers knew there would be unknowns in the hospital’s hurly burly and asked Elish for help. She spent days shadowing and interviewing nurses and emergency department doctors and found the algorithm had a complicated social life. The system threw up alerts on iPads monitored by the nurses, flagging patients deemed moderate or high risk for sepsis, or to have already developed the deadly condition. Nurses were supposed to call an emergency department doctor immediately for patients flagged as high risk. But when the nurses followed that protocol, they ran into problems. Some challenges came from disrupting the usual workflow of a busy hospital—many doctors aren’t used to taking direction from nurses. Others were specific to AI, like the times Sarro faced demands to know why the algorithm had raised the alarm. The team behind the software hadn’t built in an explanation function, because as with many machine learning algorithms, it’s not possible to pinpoint why it made a particular call. One tactic Sarro and other nurses developed was to use alerts that a patient was at high risk of sepsis as a prompt to review that person’s chart so as to be ready to defend the algorithm’s warnings. The nurses learned to avoid passing on alerts at certain times of day, and how to probe whether a doctor wasn’t in the mood to hear the opinion of an algorithm. “A lot of it was figuring out the interpersonal communication,” says Sarro. “We would gather more information to arm ourselves for that phone call.” Elish also found that in the absence of a way to know why the system flagged a patient, nurses and doctors developed their own, incorrect, explanations—a response to inscrutable AI. One nurse believed the system looked for keywords in a medical record, which it does not. One doctor advised coworkers that the system should be trusted because it was probably smarter than clinicians. By Tom Simonite Mark Sendak, a data scientist and leader on the project, says that incorrect characterization is an example of how Elish’s findings were more eye opening—and concerning—than expected. His team changed their training and documentation for the sepsis alert system as a result of feedback from Sarro and other nurses. Sendak says the experience has convinced him that AI health care projects should devote more resources to studying social as well as technical performance. “I would love to make it standard practice,” he says. “If we don’t invest in recognizing the repair work people are doing, these things will fail.” Sarro says the tool ultimately appeared to improve the hospital’s sepsis care. 
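To illustrate the tiered alert routing described above, here is a minimal, hypothetical sketch: a model's risk score is bucketed into tiers, and only the moderate and high tiers surface an alert to the rapid response team. The thresholds, tier names, and notify() stub are assumptions for illustration, not Duke's actual Sepsis Watch implementation.

```python
# Hypothetical sketch of tiered alert routing, loosely modeled on the
# workflow described above. Thresholds, tier names, and the notify()
# stub are illustrative assumptions, not Duke's Sepsis Watch code.
from dataclasses import dataclass

@dataclass
class Alert:
    patient_id: str
    risk_score: float  # model output in [0, 1]
    tier: str          # "low", "moderate", or "high"

def triage(patient_id: str, risk_score: float,
           moderate_cutoff: float = 0.4, high_cutoff: float = 0.7) -> Alert:
    """Bucket a model score into a risk tier (cutoffs are made up)."""
    if risk_score >= high_cutoff:
        tier = "high"       # nurse calls the emergency department doctor
    elif risk_score >= moderate_cutoff:
        tier = "moderate"   # nurse reviews the chart and keeps watching
    else:
        tier = "low"        # no alert surfaced on the iPad
    return Alert(patient_id, risk_score, tier)

def notify(alert: Alert) -> None:
    """Stand-in for pushing the alert to the rapid response team's iPads."""
    if alert.tier in ("moderate", "high"):
        print(f"[{alert.tier.upper()}] patient {alert.patient_id}: "
              f"score={alert.risk_score:.2f}")

for pid, score in [("A102", 0.82), ("B221", 0.55), ("C330", 0.12)]:
    notify(triage(pid, score))
```

The tiers matter because only some alerts demand an immediate call to a doctor, which is where the interpersonal work Sarro describes came in.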
Many more AI projects may soon enter the tricky territory Duke encountered. Amit Kaushal, an assistant professor at Stanford, says that in the past decade advances in machine learning and larger medical datasets have made it almost routine to do things researchers once dreamed of, like have algorithms make sense of medical images. But integrating them into patient care may prove more challenging. “For some fields technology is no longer the limiting factor, it’s these other issues,” Kaushal says. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Kaushal has contributed to a Stanford project testing camera systems that can alert health workers when they don’t sanitize their hands and says results are promising. Yet while it’s tempting to see AI as a quick fix for health care, proving a system’s worth comes down to conventional and often slow research. “The real proof is in the study that says ‘Does this improve outcomes for our patients?’” Kaushal says. Results from a clinical trial completed last year should go some way to answering that question for Duke’s sepsis system, which has been licensed to a startup called Cohere Med. Sarro, now a nurse practitioner in a different health system, says her experience makes her open to working with more AI tools, but also wary of their limitations. “They’re helpful but just one part of the puzzle.” 📩 Want the latest on tech, science, and more? Sign up for our newsletters ! The Trump team has a plan to not fight climate change To clean up comments, let AI tell users their words are trash Mental health in the US is suffering— will it go back to normal ? Why teens are falling for TikTok conspiracy theories Stop yelling about a rushed vaccine, and start planning for it 📱 Torn between the latest phones? Never fear—check out our iPhone buying guide and favorite Android phones Senior Editor X Topics artificial intelligence health machine learning healthcare Khari Johnson Caitlin Harrington Steven Levy Will Knight Khari Johnson Khari Johnson Vittoria Elliott Will Knight Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "
1,192
2,020
"Kai-Fu Lee Gives AI a B-Minus Grade in the Covid-19 Fight | WIRED"
"https://www.wired.com/story/kai-fu-lee-ai-b-minus-grade-covid-19"
"Kai-Fu Lee Gives AI a B-Minus Grade in the Covid-19 Fight While in quarantine in Beijing, Kai-Fu Lee (pictured with WIRED editor in chief Nicholas Thompson) says his meals were delivered by robot, with no human contact. Photograph: Phillip Faraone/Getty Images This past week, as part of the Aspen Ideas Festival, I spoke with Kai-Fu Lee, the president and chair of Sinovation Ventures and a pioneer in artificial intelligence. We discussed his recent argument that AI has been of limited use in the response to the coronavirus crisis. And then we talked about the future of work and why he thinks that Covid-19 is going to accelerate trends toward automation. Because of the virus, and because of the way we all work now, we're going to have many more robots and other machines in our factories, restaurants, and kitchens. A lightly edited transcript is below. You can watch the original video here. Nicholas Thompson: You're a pioneer in artificial intelligence. You wrote my favorite book on artificial intelligence. You've taught us all a lot about artificial intelligence. And now we have perhaps the greatest crisis of our lifetime and you've given AI a B-minus in helping to resolve it! Why is that? Why such a low grade? Kai-Fu Lee: Well, B-minus is a lot better than passing. It's not ideal. The reason is, AI works by accumulating a lot of data and seeing recurrence of similar events in order to make accurate predictions. And a pandemic is a once-a-century activity. There isn't a lot of experience building models and there isn't a lot of data. Despite that, there are many places where AI has added value. So therefore the B-minus. Can you walk us through the places where artificial intelligence has been helpful in combating the coronavirus and the places where it hasn't done that much? Two personal examples: I live in China, and contact tracing is working quite well. I get a red, yellow, or green signal on my phone telling me whether I may have been in contact with someone who has the virus and therefore need to do a checkup. That is a way of informing people about their status. The second thing is I was in quarantine when I returned to Beijing, and all the things I ordered by ecommerce—including takeout food—were delivered by robot. So I was really in zero contact with people because robotics are now working well enough within structured environments like apartment buildings, hospitals, stores, and office buildings. Also, AI has made some contributions in helping drug discovery, discovering new antivirus vaccines, and AI has been used in warehouses to handle the massive number of packages that are sent by ecommerce. There are also prediction engines. Early on, there was a company called BlueDot that actually predicted that there may be a serious pandemic coming. Of course, it wasn't released to the world. 
Now that we've seen the damage of a pandemic, I think if there is a next pandemic or a second wave, the predictions will be much more accurate. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg One area where people may have expected AI to be very useful where it hasn't been is in vaccine development. There are a few examples. For example, Moderna , when they were trying to figure out which protein to build for a potential vaccine, did use AI to model the options. But we did not have what many people hoped we'd have, which is the ability to simulate a human being and to identify potential vaccines. How far away are we from that? I think that's impossible to predict right now. We're in a very urgent situation, so all the people working on vaccine development are using the low-hanging fruits; they are not really going all-out for that. One company we invested in called Insilico Medicine used generative chemistry aided by AI to predict the types of compounds that may block the spread of the virus. So there are similar techniques to that, basically retargeting existing AI technologies in places that are low-hanging fruits. I think what you're talking about, a brand new way of doing vaccine development, hopefully the pandemic will give us a running start and we'll make progress towards that but it would be, I think, too aggressive to predict that there will be a huge difference made by AI this time. How high is the fruit? In the next pandemic, will we be able to very quickly develop a vaccine because of AI? Will it take 10 years until we're there? Well I'm going to be an optimist and say yes, but I really don't have the solid grounding to prove that. Let's talk about the industries that are going to change. The coronavirus is going to turn our economy upside-down. It's changing all kinds of industries. It's massively changing my industry, media. It's changing online education in crazy ways. It's changing telemedicine in fascinating ways. What are the industries that you think will change and will be most affected by artificial intelligence? Clearly health care is one. AI has so much to offer in terms of personalized, targeted diagnosis, more accurate due to genome sequencing, new technologies like Crispr coming out, potentially combined with AI; also, there are a lot of inefficiencies in health care. Insurance was not designed with all the health care data. So I think all of these will compound and make health care plus AI the biggest potential. There is one issue with health care, which is whether the data can become accessible. In countries where there are strong protections, such as HIPAA, even with anonymized data, it may be hard to aggregate the data for training AI. And AI really runs on data. "I also think the pandemic will potentially cause certain types of jobs to be more accelerated in their replacement." Kai-Fu Lee Another is education, that you talked about. People are changing their habits about going to school. A billion kids across the world are learning online. 
And suddenly we see all these ways of using online AI technologies, whether it's AI teachers, AI to help you fix your pronunciation, AI to figure out what areas you're having trouble, in math or English, that can all be added to make the human-to-human interaction more about learning the methods, helping to motivate learning, individualize, but using AI for the routine part of education. Lastly, I think work as a general category is shifting online. We're conducting all of our meetings, we're making investments potentially without ever meeting the entrepreneur. People are making deals online. This change of habit, of being willing to have meetings and make decisions, and helping to change the work process into a digitized process, this digitalization turns everything into data. Once you have data, you have AI potentially coming in to improve the margins, improve the efficiency. A huge potential challenge is you have AI potentially coming in to say, well, everything's digitized, why don't we use AI to do this workload instead of people? So that will accelerate automation and potentially cause a faster churn in terms of AI replacing people. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg This is fascinating. Right now you and I are talking across screens. You're on the other side of the world, I'm in an attic in upstate New York. We should be in Aspen talking to each other, but here we are. So the fact that this conversation is all digital, and all the other work conversations I've had today and will have tomorrow are all digital, you think will lead to some kind of unknown advance in AI that will make human work disappear more quickly? Yes. To give you an example, in the pre-pandemic world there are lots of companies that require people-to-people interaction. So people go to work, they have meetings, people take notes, they write on paper, they have records, and they call each other. But now that all the work is essentially operated and run online, everything from meetings to decisions to workflow becomes digital. And once it's digital, the company's management will see, oh, there's that part of expense report decision-making that could be done by AI. There's that part of customer service, we could simply have an AI agent rather than a human agent. Oh, the sales process, all this telesales could be done by AI with either automated speech generation or even synthesis of digital humans. So pieces of corporations and their workflow will become automated faster because it's already online and digital. In your book, AI Superpowers —and let me just pause to say it's superb—you talk about the future of work, and you have two charts. One is about job and workforce, and you have a y-axis which is human interaction and an x-axis which is creativity. So a job that requires a lot of creativity and a lot of human interaction is not likely to be replaced by machines, like a CEO job. And a job that has not a lot of creativity and not a lot of human interaction, like a telemarketer, will be replaced by machines. How does your chart change in the post-coronavirus era? I think the chart basically remains the same. The replacement curve, if you will, taking over the jobs, that's going to go faster. 
I also think the pandemic will potentially cause certain types of jobs to be more accelerated in their replacement. Because we would think health care, hospital jobs, we would want the human touch, empathy, etc. But human touch in the period of the pandemic increases the likelihood of spreading the virus. So if there were a smart robot that could move medical supplies and help the patients with testing their blood pressure, etc., we as humanity would be more willing to embrace that. Similarly, waiters and waitresses in restaurants. We would think there's value in the human interaction. However, those are dangerous jobs. And certainly at the fast food and lower-cost restaurants, more will be replaced by automation and AI. And actually in China, in many of the lower-cost restaurants, you see people taking orders on mobile phones. And the waiters or waitresses are just delivering the food and you'll also pay on your phones. So that already reduces the cost. And there are also many restaurants that have AI robots delivering the food because the restaurant, like the apartment building and the hospital, are structured environments. Robots’ work moving around in a structured environment is much easier. These are not robots that have feet and hands. Think of them as just carts that basically move to your table, make some sound to let you know to take your own order. So those kinds of jobs that we would think require some human interaction are potentially going to turn into automated jobs faster because of the pandemic. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Do you think using similar logic that we're soon going to have self-driving taxis or that the move toward self-driving vehicles will be accelerated because we don't want to be near human drivers? Not as much, because in order to get to L5, which is the highest of the five levels of autonomous vehicles, there are still a lot of technical challenges to be overcome. So I think the taxis and Uber will continue to be needed. We will see AI taking over, automating human jobs in the warehouse, some of the manufacturing, driving on highways and buses, probably in that order. Taxis and Uber will be the last to be fully automated. I just want to make sure I fully understand your argument, which is, we've all known that AI and robotics would be replacing human jobs. But there are two factors here that are accelerating it. One, we don't want as much human interaction, we don't want to be near the waiter. And two, so much more of our life is digitized and AI runs off data. Is that correct? Is there a third factor, or just those two? No, I think those are the two main factors from the pandemic. Let me ask you about something you mentioned a few minutes before. You were talking about health care and data and HIPAA, and you mentioned that countries that have strong health care regulations, like the United States, won't have as much data and therefore may not have as many advances in artificial intelligence. My view is that we're going to care a lot less about privacy in the future than we do now. So perhaps countries like the United States will start loosening up access to data, for better or for worse. Is that fair? 
I think that would be a plausible outcome because I think people are starting to realize privacy is not a binary issue. It's also not an issue that trumps everything else. It needs to be considered in the context of public health, greater social good, and personal security. So while we want everyone to have their privacy safe from companies, as much as possible, when it provides a solution to the public health or greater security for each individual and perhaps some incredible convenience for people, then we should really consider it in the context of how much benefit it is providing and provide each person with some degree of choice. Because there will always be people who feel privacy is the most important. So to the extent that each country develops a [balanced] set of regulations, then the appropriate amount of data collection, anonymized data, can be aggregated and AI can be trained. China is quite a few months ahead of the United States in its response to the coronavirus. The countries are clearly on different paths. Tell us what we have to expect for when we eventually get on the China trajectory. What do we have to expect in terms of AI and technology that will come along the way? Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg There are many things that are fundamentally cultural and very difficult. I think China has gone through SARS, as have many Asian countries. And also, culturally speaking, people are more willing to be disciplined for their own safety and for the safety of society. That's going to be hard to change until a country has experienced some of the challenges. But some of the possibly doable things include, I think, certainly contact tracing based on an understanding that privacy is important but public health is equally or maybe even more important. Second, when there is a significant spread, actually wearing masks is important, but it's really most important when everyone wears it. So when it's purely voluntary it's not going to be as effective. So when the pandemic was in a serious situation in China and other Asian countries, you'd go on the street and everyone is wearing a mask, because the purpose of a mask is mostly not to protect you, but to protect other people in case you have it. Therefore everyone has to wear it. These, I think, are the two biggest things, in case there is a continuing challenge or a second wave. If people would do that and get it close to zero, then actually the new normal as we see in China today is not that far from normal. Actually, of the things that you've described in this conversation, I'm fine with wearing a mask, I'm fine with contact tracing , but I do really look forward to the robot delivering my dinner. Well that actually works incredibly well. It's a little bit related to how people live. Most people like myself in China live in urban condos or apartment buildings. That makes the robot delivery much easier. You've got the delivery from the restaurant to the apartment building, then the apartment building manager handles delivering the last mile from the first floor to your apartment. Because in the US most people live in houses, it's a little more complex, because then the robot has to figure out how to deal with the last mile. 
Thank you Kai-Fu. It's always a pleasure to talk with you. "
1,193
2,021
"Understanding the mind | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1032153/mind-editor-letter"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Understanding the mind By Michael Reilly archive page Simon Simard Inside the three-pound lumps of mostly fat and water inside our heads we can, in a very real sense, find the root of everything we know and ever will know. Sure, the universe gave rise to our brains. But what good is the cosmos without brains and, more specifically, minds? Without them, there’d be no understanding, no appreciation, no probing of great mysteries. Which is what this issue is all about: our quest to understand what’s between our ears, and in so doing, better understand ourselves. A friendly warning: you are in for some mind-bending stuff. As Lisa Feldman Barrett notes in our opening essay, our brains create our minds specifically to preserve our bodies and pilot them through our environment. “Your brain did not evolve to think, feel, and see,” she writes. “It evolved to regulate your body. Your thoughts, feelings, senses, and other mental capacities are consequences of that regulation.” Basically, our minds create a fiction for us to live in. Nathan McGee knows a thing or two about having his mind bent. After suffering from PTSD since early childhood, he enrolled in a clinical trial in his 40s to test whether the psychedelic drug MDMA could help him. The result was nothing short of transformative. “I’m seeing life as a thing to be explored and appreciated rather than something to be endured,” he told Charlotte Jee in an intimate interview about his experience. Similarly, for those of us experiencing pandemic fatigue, Dana Smith has some good news: our brains definitely took a hit as we social-distanced and Zoomed ourselves into oblivion, but they’re also really, really good at bouncing back. Your pandemic brain will heal; just give it time. Messing with our heads can also be fun, as Neel Patel tells us. He writes about a talent he developed as a teenager: lucid dreaming. The science behind it is still being worked out, but it’s proving useful for helping people unlock their creativity and deal with fears and traumatic memories. It is perhaps in dreams where the power of our minds to hold sway over what we believe is “real” is most clearly on display. In a roundup of three fascinating new books on human perception , writer Matthew Hutson quotes one author: “You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality." There’s still the question of what it means to be conscious. For a long time, we humans clung to the idea that we were the only conscious animals. It’s one of several misunderstandings about brains that David Robson and David Biskup put the lie to in comic-strip form. Not only is consciousness hard to define, but it has been extremely difficult to measure. Yet there is now a consciousness meter to detect it in people, as Russ Juskalian finds out. Consciousness in silicon form is on Will Douglas Heaven’s brain these days; he ponders whether we’d know it if we managed to build a conscious machine. Dan Falk asks researchers whether they think a brain is a computer in the first place. And Emily Mullin takes a look at two multibillion-dollar efforts to study the human brain in unprecedented detail—one of which involved trying to simulate one from scratch. 
No issue on the mind would be complete without a chance to gaze upon the gray matter itself, and there are brains aplenty in our haunting photo essay documenting a library of malformed specimens. If that's too much, zoom in on our infographic that depicts what happens in Tate Ryan-Mosley's brain when she sees her boyfriend's face. And finally, we've included a rare treat indeed: a selection of poetry curated by our news editor, Niall Firth. It's guaranteed to jangle your neurons into a new way of viewing this thing we call "reality." This story was part of our September/October 2021 issue. "
1,194
2,021
"Is everything in the world a little bit conscious? | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1032149/panpsychism-conscious-world"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Is everything in the world a little bit conscious? The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be tested? Surprisingly, perhaps it can. By Christof Koch archive page Andrea Daquino Panpsychism is the belief that consciousness is found throughout the universe—not only in people and animals, but also in trees, plants, and bacteria. Panpsychists hold that some aspect of mind is present even in elementary particles. The idea that consciousness is widespread is attractive to many for intellectual and, perhaps, also emotional reasons. But can it be empirically tested? Surprisingly, perhaps it can. That’s because one of the most popular scientific theories of consciousness, integrated information theory (IIT), shares many—though not all—features of panpsychism. As the American philosopher Thomas Nagel has argued, something is conscious if there is “something that it is like to be” that thing in the state that it is in. A human brain in a state of wakefulness feels like something specific. IIT specifies a unique number, a system’s integrated information, labeled by the Greek letter φ (pronounced phi ). If φ is zero, the system does not feel like anything; indeed, the system does not exist as a whole, as it is fully reducible to its constituent components. The larger φ, the more conscious a system is, and the more irreducible. Given an accurate and complete description of a system, IIT predicts both the quantity and the quality of its experience (if any). IIT predicts that because of the structure of the human brain, people have high values of φ, while animals have smaller (but positive) values and classical digital computers have almost none. A person’s value of φ is not constant. It increases during early childhood with the development of the self and may decrease with onset of dementia and other cognitive impairments. φ will fluctuate during sleep, growing larger during dreams and smaller in deep, dreamless states. IIT starts by identifying five true and essential properties of any and every conceivable conscious experience. For example, experiences are definite (exclusion). This means that an experience is not less than it is (experiencing only the sensation of the color blue but not the moving ocean that brought the color to mind), nor is it more than it is (say, experiencing the ocean while also being aware of the canopy of trees behind one’s back). In a second step, IIT derives five associated physical properties that any system—brain, computer, pine tree, sand dune—has to exhibit in order to feel like something. A “mechanism” in IIT is anything that has a causal role in a system; this could be a logical gate in a computer or a neuron in the brain. IIT says that consciousness arises only in systems of mechanisms that have a particular structure. To simplify somewhat, that structure must be maximally integrated—not accurately describable by breaking it into its constituent parts. It must also have cause-and-effect power upon itself, which is to say the current state of a given mechanism must constrain the future states of not only that particular mechanism, but the system as a whole. Given a precise physical description of a system, the theory provides a way to calculate the φ of that system. 
The technical details of how this is done are complicated , but the upshot is that one can, in principle, objectively measure the φ of a system so long as one has such a precise description of it. (We can compute the φ of computers because, having built them, we understand them precisely. Computing the φ of a human brain is still an estimate.) Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Systems can be evaluated at different levels—one could measure the φ of a sugar-cube-size piece of my brain, or of my brain as a whole, or of me and you together. Similarly, one could measure the φ of a silicon atom, of a particular circuit on a microchip, or of an assemblage of microchips that make up a supercomputer. Consciousness, according to the theory, exists for systems for which φ is at a maximum. It exists for all such systems, and only for such systems. The φ of my brain is bigger than the φ values of any of its parts, however one sets out to subdivide it. So I am conscious. But the φ of me and you together is less than my φ or your φ, so we are not “jointly” conscious. If, however, a future technology could create a dense communication hub between my brain and your brain, then such brain-bridging would create a single mind, distributed across four cortical hemispheres. Conversely, the φ of a supercomputer is less than the φs of any of the circuits composing it, so a supercomputer—however large and powerful—is not conscious. The theory predicts that even if some deep-learning system could pass the Turing test, it would be a so-called “zombie”—simulating consciousness, but not actually conscious. Like panpsychism, then, IIT considers consciousness an intrinsic, fundamental property of reality that is graded and most likely widespread in the tree of life, since any system with a non-zero amount of integrated information will feel like something. This does not imply that a bee feels obese or makes weekend plans. But a bee can feel a measure of happiness when returning pollen-laden in the sun to its hive. When a bee dies, it ceases to experience anything. Likewise, given the vast complexity of even a single cell, with millions of proteins interacting, it may feel a teeny-tiny bit like something. Debating the nature of consciousness might at first sound like an academic exercise, but it has real and important consequences. Most obviously, it matters to how we think about people in vegetative states. Such patients may groan or otherwise move unprovoked but fail to respond to commands to signal in a purposeful manner by moving their eyes or nodding. Are they conscious minds, trapped in their damaged body, able to perceive but unable to respond? Or are they without consciousness? Evaluating such patients for the presence of consciousness is tricky. IIT proponents have developed a procedure that can test for consciousness in an unresponsive person. First they set up a network of EEG electrodes that can measure electrical activity in the brain. Then they stimulate the brain with a gentle magnetic pulse, and record the echoes of that pulse. They can then calculate a mathematical measure of the complexity of those echoes, called a perturbational complexity index (PCI). In healthy, conscious individuals—or in people who have brain damage but are clearly conscious—the PCI is always above a particular threshold. On the other hand, 100% of the time, if healthy people are asleep, their PCI is below that threshold (0.31). 
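To make that threshold test concrete, here is a minimal sketch of the decision rule only, written in Python and assuming the per-session PCI values have already been produced by the full TMS-EEG pipeline (source modeling, statistical thresholding of the evoked response, and a normalized Lempel-Ziv complexity measure). The function name, the cutoff constant, and the example numbers are illustrative assumptions, not part of the published method.

```python
# A toy decision rule for interpreting repeated PCI measurements, as described
# above. The PCI values themselves are invented; computing real ones requires
# the full TMS-EEG analysis pipeline.

PCI_CUTOFF = 0.31  # empirical cutoff reported for the perturbational complexity index

def pci_verdict(session_pcis, cutoff=PCI_CUTOFF):
    """Summarize several PCI measurements from one patient.

    Supra-threshold values have only been observed in conscious states, so a
    single session above the cutoff is treated as evidence of (possibly covert)
    consciousness, while consistently sub-threshold sessions argue against it.
    """
    best = max(session_pcis)
    if best > cutoff:
        return f"max PCI {best:.2f} > {cutoff}: compatible with consciousness"
    return f"max PCI {best:.2f} <= {cutoff}: no evidence of consciousness"

# Hypothetical patients, each measured over three sessions (numbers made up).
print(pci_verdict([0.18, 0.21, 0.17]))  # e.g., deep anesthesia or dreamless sleep
print(pci_verdict([0.25, 0.42, 0.36]))  # e.g., covert consciousness despite unresponsiveness
```

The comparison itself is trivial; the substance lies in the empirical claim above that values over the cutoff have so far been recorded only in conscious brains.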
So it is reasonable to take PCI as a proxy for the presence of a conscious mind. If the PCI of someone in a persistent vegetative state is always measured to be below this threshold, we can with confidence say that this person is not covertly conscious. This method is being investigated in a number of clinical centers across the US and Europe. Other tests seek to validate the predictions that IIT makes about the location and timing of the footprints of sensory consciousness in the brains of humans, nonhuman primates, and mice. Unlike panpsychism, the startling claims of IIT can be empirically tested. If they hold up, science may have found a way to cut through a knot that has puzzled philosophers for as long as philosophy has existed. Christof Koch is the chief scientist of the MindScope program at the Allen Institute for Brain Science in Seattle. This story was part of our September/October 2021 issue. "
1,195
2,021
"Our brains exist in a state of “controlled hallucination” | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1032121/brains-controlled-hallucination"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Our brains exist in a state of “controlled hallucination” Three new books lay bare the weirdness of how our brains process the world around us. By Matthew Hutson archive page Andrea Daquino When you and I look at the same object we assume that we’ll both see the same color. Whatever our identities or ideologies, we believe our realities meet at the most basic level of perception. But in 2015, a viral internet phenomenon tore this assumption asunder. The incident was known simply as “The Dress.” For the uninitiated: a photograph of a dress appeared on the internet, and people disagreed about its color. Some saw it as white and gold; others saw it as blue and black. For a time, it was all anyone online could talk about. Eventually, vision scientists figured out what was happening. It wasn’t our computer screens or our eyes. It was the mental calculations that brains make when we see. Some people unconsciously inferred that the dress was in direct light and mentally subtracted yellow from the image, so they saw blue and black stripes. Others saw it as being in shadow, where bluish light dominates. Their brains mentally subtracted blue from the image, and came up with a white and gold dress. Not only does thinking filter reality; it constructs it, inferring an outside world from ambiguous input. In Being You , Anil Seth, a neuroscientist at the University of Sussex, relates his explanation for how the “inner universe of subjective experience relates to, and can be explained in terms of, biological and physical processes unfolding in brains and bodies.” He contends that “experiences of being you , or of being me, emerge from the way the brain predicts and controls the internal state of the body.” Prediction has come into vogue in academic circles in recent years. Seth and the philosopher Andy Clark, a colleague at Sussex, refer to predictions made by the brain as “controlled hallucinations.” The idea is that the brain is always constructing models of the world to explain and predict incoming information; it updates these models when prediction and the experience we get from our sensory inputs diverge. “Chairs aren’t red,” Seth writes, “just as they aren’t ugly or old-fashioned or avant-garde … When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain. It corresponds to the content of a set of perceptual predictions about the ways in which a specific kind of surface reflects light.” Seth is not particularly interested in redness, or even in color more generally. Rather his larger claim is that this same process applies to all of perception: “The entirety of perceptual experience is a neuronal fantasy that remains yoked to the world through a continuous making and remaking of perceptual best guesses, of controlled hallucinations. You could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality.” Cognitive scientists often rely on atypical examples to gain understanding of what’s really happening. Seth takes the reader through a fun litany of optical illusions and demonstrations, some quite familiar and others less so. Squares that are in fact the same shade appear to be different; spirals printed on paper appear to spontaneously rotate; an obscure image turns out to be a woman kissing a horse; a face shows up in a bathroom sink. 
Re-creating the mind’s psychedelic powers in silicon, an artificial-intelligence-powered virtual-reality setup that he and his colleagues created produces a Hunter Thompson–esque menagerie of animal parts emerging piecemeal from other objects in a square on the Sussex University campus. This series of examples, in Seth’s telling, “chips away at the beguiling but unhelpful intuition that consciousness is one thing—one big scary mystery in search of one big scary solution.” Seth’s perspective might be unsettling to those who prefer to believe that things are as they seem to be: “Experiences of free will are perceptions. The flow of time is a perception.” Seth is on comparatively solid ground when he describes how the brain shapes experience, what philosophers call the “easy” problems of consciousness. They’re easy only in comparison to the “hard” problem: why subjective experience exists at all as a feature of the universe. Here he treads awkwardly, introducing the “real” problem, which is to “explain, predict, and control the phenomenological properties of conscious experience.” It’s not clear how the real problem differs from the easy problems, but somehow, he says, tackling it will get us some way toward resolving the hard problem. Now that would be a neat trick. Where Seth relates, for the most part, the experiences of people with typical brains wrestling with atypical stimuli, in Coming to Our Senses , Susan Barry, an emeritus professor of neurobiology at Mount Holyoke college, tells the stories of two people who acquired new senses later in life than is usual. Liam McCoy, who had been nearly blind since he was an infant, was able to see almost clearly after a series of operations when he was 15 years old. Zohra Damji was profoundly deaf until she was given a cochlear implant at the unusually late age of 12. As Barry explains, Damji’s surgeon “told her aunt that, had he known the length and degree of Zohra’s deafness, he would not have performed the operation.” Barry’s compassionate, nuanced, and observant exposition is informed by her own experience: At age forty-eight, I experienced a dramatic improvement in my vision, a change that repeatedly brought me moments of childlike glee. Cross-eyed from early infancy, I had seen the world primarily through one eye. Then, in mid-life, I learned, through a program of vision therapy, to use my eyes together. With each glance, everything I saw took on a new look. I could see the volume and 3D shape of the empty space between things. Tree branches reached out toward me; light fixtures floated. A visit to the produce section of the supermarket, with all its colors and 3D shapes, could send me into a sort of ecstasy. Barry was overwhelmed with joy at her new capacities, which she describes as “seeing in a new way.” She takes pains to point out how different this is from “seeing for the first time.” A person who has grown up with eyesight can grasp a scene in a single glance. “But where we perceive a three-dimensional landscape full of objects and people, a newly sighted adult sees a hodgepodge of lines and patches of colors appearing on one flat plane.” As McCoy described his experience of walking up and down stairs to Barry: The upstairs are large alternating bars of light and dark and the downstairs are a series of small lines. My main focus is to balance and step IN BETWEEN lines, never on one … Of course going downstairs you step in between every line but upstairs you skip every other bar. 
All the while, when I move, the stairs are skewing and changing. Even a sidewalk was tricky, at first, to navigate. He had to judge whether a line “indicated the junction between flat sidewalk blocks, a crack in the cement, the outline of a stick, a shadow cast by an upright pole, or the presence of a sidewalk step,” Barry explains. “Should he step up, down, or over the line, or should he ignore it entirely?” As McCoy says, the complexity of his perceptual confusion probably cannot be fully explained in terms that sighted people are used to. The same, of course, is true of hearing. Raw audio can be hard to untangle. Barry describes her own ability to listen to the radio while working, effortlessly distinguishing the background sounds in the room from her own typing and from the flute and violin music coming over the radio. “Like object recognition, sound recognition depends upon communication between lower and higher sensory areas in the brain … This neural attention to frequency helps with sound source recognition. Drop a spoon on a tiled kitchen floor, and you know immediately whether the spoon is metal or wood by the high- or low-frequency sound waves it produces upon impact.” Most people acquire such capacities in infancy. Damji didn’t. She would often ask others what she was hearing, but had an easier time learning to distinguish sounds that she made herself. She was surprised by how noisy eating potato chips was, telling Barry: “To me, potato chips were always such a delicate thing, the way they were so lightweight, and so fragile that you could break them easily, and I expected them to be soft-sounding. But the amount of noise they make when you crunch them was something out of place. So loud.” As Barry recounts, at first Damji was frightened by all sounds, “because they were meaningless.” But as she grew accustomed to her new capabilities, Damji found that “a sound is not a noise anymore but more like a story or an event.” The sound of laughter came to her as a complete surprise, and she told Barry it was her favorite. As Barry writes, “Although we may be hardly conscious of background sounds, we are also dependent upon them for our emotional well-being.” One strength of the book is in the depth of her connection with both McCoy and Damji. She spent years speaking with them and corresponding as they progressed through their careers: McCoy is now an ophthalmology researcher at Washington University in St. Louis, while Damji is a doctor. From the details of how they learned to see and hear, Barry concludes, convincingly, that “since the world and everything in it is constantly changing, it’s surprising that we can recognize anything at all.” In What Makes Us Smart , Samuel Gershman, a psychology professor at Harvard, says that there are “two fundamental principles governing the organization of human intelligence.” Gershman’s book is not particularly accessible; it lacks connective tissue and is peppered with equations that are incompletely explained. He writes that intelligence is governed by “inductive bias,” meaning we prefer certain hypotheses before making observations, and “approximation bias,” which means we take mental shortcuts when faced with limited resources. Gershman uses these ideas to explain everything from visual illusions to conspiracy theories to the development of language, asserting that what looks dumb is often “smart.” “The brain is evolution’s solution to the twin problems of limited data and limited computation,” he writes. 
He portrays the mind as a raucous committee of modules that somehow helps us fumble our way through the day. "Our mind consists of multiple systems for learning and decision making that only exchange limited amounts of information with one another," he writes. If he's correct, it's impossible for even the most introspective and insightful among us to fully grasp what's going on inside our own head. As Damji wrote in a letter to Barry: When I had no choice but to learn Swahili in medical school in order to be able to talk to the patients—that is when I realized how much potential we have—especially when we are pushed out of our comfort zone. The brain learns it somehow. Matthew Hutson is a contributing writer at The New Yorker and a freelance science and tech writer. This story was part of our September/October 2021 issue. "
1,196
2,021
"The hunt for hidden signs of consciousness in unreachable patients | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1031776/the-hunt-for-hidden-signs-of-consciousness-in-unreachable-patients"
"Featured Topics Newsletters Events Podcasts The hunt for hidden signs of consciousness in unreachable patients Experts may not agree on what consciousness is or isn’t. But that hasn’t stopped Marcello Massimini from peering into the minds of those with profound brain injuries to determine if anyone is still inside—and how to proceed with treatment. Russ Juskalian by Russ Juskalian archive page At first glance, there’s nothing remarkable about the uninspired, low-rise hospital on the west side of Milan, affectionately known as “Gnocchi.” But two floors up, on an isolated wing of the Don Carlo Gnocchi IRCCS Centro S. Maria Nascente , an uncommunicative man with a severe brain injury is hooked up to a technology suite that researchers here believe can tell them if he’s conscious. The man sits in what resembles a motorized dentist’s chair, his head cocked backwards, a blue surgical mask covering his mouth and nose. A white mesh cap dotted with 60 electrodes, each connected to a two-meter-long cable, is held in place by a strap beneath his chin. Hovering above him, an infrared array positioned on an articulating arm bounces signals off sensors attached to the man’s temples to produce a moving, MRI-constructed overlay of his brain on a nearby monitor. A researcher watching the monitor then presses a white plastic oval to the man’s skull and aims electromagnetic pulses at Tic Tac–size areas of his brain. Each pulse makes an audible click. Three heavy cables, each about as thick as a garden hose, coil out from behind the device to a quarter-million-dollar machine controlling the output. On the other side of the room, Marcello Massimini, a blue-eyed, curly-haired neuroscientist, and Angela Comanducci, the patient’s neurologist, watch on a laptop as complicated blue squiggles representing brain waves fill the screen in close to real time. What the scientists see in them is the faintest sign of a liminal, maybe dreamlike, consciousness. Back in the lab, a computer will assign those brain-wave recordings a number from 0 to 1—the so-called perturbational complexity index, or PCI. This single number, according to Massimini and his colleagues, is a crude measure of a type of complexity that reveals whether a person is conscious. The researchers have even calculated a cutoff of 0.31, which, according to a 2016 study of the technology in healthy and brain-injured subjects, “discriminated between unconscious and conscious conditions with 100% sensitivity and 100% specificity.” In other words, it works well—really well. More unsettling is that when the researchers calculated PCI from a group of patients with unresponsive wakefulness syndrome (UWS, a condition previously known as a “vegetative state”), they found that around one in five had a PCI value within the consciousness distribution. “Even if [such a] patient is completely unresponsive, no sign whatsoever of consciousness,” Massimini told me, “you can say with confidence that this patient is nonetheless conscious.” Such a breakthrough represents the most accurate consciousness meter ever seen in medicine (even if it is still crude, rudimentary, and unrefined). The medical implications are wide reaching. Estimates suggest there are up to 390,000 people around the world with prolonged disorders of consciousness. Some of them, unresponsive, may be treated as though nobody is in there—while they experience the world awake, alone, and unable to reach out from their bodily prison as long as they live. 
Massimini is confident that PCI can help identify those people. In July 2021, when I visited him in Milan, Massimini was collaborating with other researchers in Milan, Boston, Los Angeles, and beyond. In the meantime, PCI measurements are already being used at Gnocchi to help guide diagnosis and determine the potential for partial recovery. The solution PCI was born of the search to overcome nearly a century of obstacles standing in the way of measuring consciousness. Since 1924, when Hans Berger invented electroencephalography (EEG), scientists have tried to access the electrical responses that our brains use to communicate, hoping to see, predict, and measure what is going on behind the 6.5-millimeter-thick protection of our skulls. Berger’s invention detected changes in spikes of voltage produced by our neurons—converting those signals into the seismograph-like squiggles popularized as “brain waves.” Standard EEG patterns include fast alpha waves, oscillating about 10 times a second and common in consciousness, and slow delta waves, oscillating about once per second and common in nondreaming sleep or under anesthesia. But passively listening to the brain with EEG is an imperfect way to determine consciousness, because exceptions are lurking everywhere. The anesthetic ketamine can excite the brain, resulting in alternating alpha and delta waves. Some types of coma patients show fast oscillations while unconscious. And people under the influence of the drug atropine or during a seizure pattern called status epilepticus report being conscious while displaying the slow brain waves typical of unconsciousness. An even bigger issue is that a patient’s brain activity itself—the result of short attention span, drowsiness, voluntary or involuntary movement, visual distractions, or even a lack of desire to follow instructions—can cause passive EEG to skew and react in ways that render its messages a mess. The case for PCI is that it claims to be an objective measure of consciousness—a relatively straightforward yes or no. What differentiates it from regular EEG, according to Massimini, is that while the older technology only measures ongoing brain activity, PCI measures the brain’s capacity to sustain complex internal interactions. You can do this, he says, if you give the brain a knock and then follow how that perturbation filters and reverberates and is acted on as it courses through the fantastically complex architecture of 86 billion neurons and their 100 trillion connections in the human brain. That knock or zap is delivered via trans-cranial magnetic stimulation (TMS), which has been around in modern form since the 1980s: a wand is held up against the head to shoot an electromagnetic impulse into the brain. When it’s used to target the motor cortex, TMS can provoke involuntary twitching of the hand; when it targets the visual cortex, it can induce lightning-like visuals in the mind’s eye. To generate a PCI reading, Massimini uses TMS on the cerebral cortex. Then he uses EEG to measure what happens. It is the quality of the post- zap signal that leads to a score. What Massimini looks for in this perturbed EEG is a special kind of complexity that is organized, but not too organized. The conscious mind produces neither the perfectly synchronized ripples of a stone lobbed into an imaginary pond nor the perfectly scrambled noise of an analog TV’s between-channel snow. 
The template of consciousness is more like an intricate chaos—a unique pattern among an almost infinite number of possibilities, with brain waves appearing similar in some areas and profoundly different in others. Onscreen in the hospital, a high PCI looks like a series of squiggles that start off alike yet differentiate from one another as they move across the geography of the brain. A low PCI is even easier to see: either you get the same long, slow wave everywhere, or you get a wave in one part of the brain and silence everywhere else. For years, Massimini and others could literally watch consciousness being recorded onscreen yet were stumped by how to quantify it. They had clues for how to proceed, since the search for PCI was built on the foundation of integrated information theory (IIT), a controversial model of consciousness proposed by Giulio Tononi, a professor of psychiatry at the University of Wisconsin School of Medicine (see page 82). IIT claims that a conscious brain has a high level of integration (its various parts influence one another) alongside a high level of differentiation (the parts produce diverse signals). Massimini was trying to find a proxy for this complexity that could actually be calculated in the lab, but the goal was elusive. The “lucky strike,” as he recalls it, came from a bored Brazilian physicist named Adenauer Casali whose wife worked down the hall. Massimini offered Casali space in his office, where the physicist passed the time reading Dante and other Italian greats. One day the two started talking, and Massimini mentioned the problem. “He’s in my lab, sitting on the chair,” recalls Massimini. “We start talking: ‘We’re doing this and that, and we have this problem, by the way—maybe you can add something?’” Indeed, the solution was obvious to Casali. All Massimini needed to do was take the TMS-EEG recordings and compress the data using the same algorithm a computer uses to compress files to the ZIP format. A low-complexity signal would end up being tiny because it would contain so little unique data. A high-complexity signal indicating a conscious mind would be large. Casali was credited as a first author on the paper introducing the quantification of PCI, and the procedure itself remains known as zap-ZIP. Doubters It’s a difficult thing to pursue something like PCI when experts still can’t agree on what consciousness is and isn’t. Tononi, who at times sounds like a mystic, explained the nature of consciousness to me with an example from everyday life. “You are lying in bed and asleep, a dreamless sleep, and then you wake up and suddenly there is something rather than nothing,” he told me. “That something is consciousness—having an experience.” For most of history, detecting that something wasn’t all that difficult. If you asked someone a question and got a reasonable answer, that person was probably conscious. “That’s still the gold standard,” says Massimini. “[Massimini has] shown empirically that when the brain networks are shut down by anesthesia or sleep or brain injury, you have complexity patterns that are different from those seen when someone is awake.” But the increasing use of mechanical ventilation in the 1950s and 1960s helped create a significant populations of people with long-term disorders of consciousness for the first time. Today there are those who can be kept alive even though we have zero evidence of anyone being in there. 
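The zap-ZIP idea can be illustrated with a few lines of Python on synthetic data. The sketch below is only a toy, not the published PCI algorithm: it fabricates three binarized "response" matrices of the kinds described above (channels that each do their own thing, the same slow wave everywhere, and a brief local response followed by silence) and compares how small each becomes under DEFLATE compression. The channel count, sample count, and signals are all made-up assumptions.

```python
# Toy illustration of "compress it like a ZIP file" as a complexity score.
# A differentiated response compresses poorly (stays large); a globally
# stereotyped slow wave or a mostly silent response compresses to almost nothing.
import zlib
import numpy as np

rng = np.random.default_rng(42)
n_channels, n_samples = 60, 300  # roughly: 60 electrodes, 300 post-pulse time points

def compressed_size(binary_matrix):
    """Pack a 0/1 activity matrix into bytes and report its DEFLATE-compressed size."""
    bits = np.packbits(binary_matrix.astype(np.uint8))
    return len(zlib.compress(bits.tobytes(), 9))

# 1) Differentiated response: every channel follows its own on/off pattern.
differentiated = rng.integers(0, 2, size=(n_channels, n_samples))

# 2) Stereotyped response: the same long, slow wave on every channel.
slow_wave = (np.sin(np.linspace(0, 4 * np.pi, n_samples)) > 0).astype(np.uint8)
stereotyped = np.tile(slow_wave, (n_channels, 1))

# 3) A brief response on a few channels, silence everywhere else.
local_then_silent = np.zeros((n_channels, n_samples), dtype=np.uint8)
local_then_silent[:5, :40] = rng.integers(0, 2, size=(5, 40))

for name, response in [("differentiated", differentiated),
                       ("same slow wave everywhere", stereotyped),
                       ("local response, then silence", local_then_silent)]:
    print(f"{name:>30}: {compressed_size(response):5d} bytes after compression")
```

One caveat: purely random noise also compresses poorly, which is part of why the real method perturbs the brain and scores the statistically thresholded evoked response, and why the published index normalizes the complexity estimate so it does not simply grow with the amount of data recorded.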
And there are those like the gray-haired man at Gnocchi who show potential hints of consciousness, like eyes that track movement, but have no behavioral way to communicate or to prove their internal existence. Beyond is a whole spectrum of difficult-to-distinguish states. Tononi’s something is a condition we can all immediately identify in ourselves yet find difficult to know about in other people unless they tell us. Related Story That makes any measure of consciousness controversial, let alone one whose theoretical foundation is IIT. While some scientists have called IIT the best theory of consciousness put forward to date, not everyone is a fan. When I wrote Michael Graziano, a neuroscientist at Princeton, about his opinion on IIT and PCI, his response was unequivocal. “IIT is pseudoscience,” he wrote. But, he continued, even phrenology—the idea, now firmly established as nonsense, that the shape of people’s heads can tell you about their personality—helped push science in the 1800s toward the idea that different parts of the brain had different functions, and that the cerebral cortex was worth some attention. “That change in perspective led to most of the major discoveries in brain science for a century,” he acknowledged, so PCI might still be worth something. Emery Brown, a neuroscientist and anesthesiologist who is the director of the Harvard-MIT Program in Health Science and Technology, is reserving judgment, waiting for more evidence to come in. He’s wary of letting the “theory drive the analysis.” Yet Brown admires Massimini for doing experiments, carefully analyzing data, and publishing results for anyone to see. “What I like about it, when I hear Marcello talk about it, is that he is being a total empiricist,” Brown told me. “He’s shown empirically that when the brain networks are shut down by anesthesia or sleep or brain injury, you have complexity patterns that are different from those seen when someone is awake.” And that empiricism makes a compelling case when PCI values are computed in actual human beings. Beautiful consistency The power of Massimini’s approach is perhaps best represented in a beautifully consistent chart from years of testing the technology. On the chart, PCI values computed from people known to have been conscious or not are recorded as dots separated by a dashed line at the threshold of 0.31. In every single case, the maximum PCI scores recorded in nondreaming sleep, or under the influence of one of three different anesthetic drugs, are below the line. And for the same people, every single one of the maximum scores while awake, experiencing the dreaming sleep of REM, or under the influence of ketamine (which at anesthetic doses induces a dreamlike state) is above the line. So are nearly all the maximum scores for patients with locked-in syndrome and who had experienced strokes, who at the time of the study were able to prove their consciousness by communicating. Notably, 36 out of 38 patients in a minimally conscious state showed high complexity, demonstrating the unprecedented sensitivity of PCI as an objective marker of consciousness. But nine of 43 patients previously considered totally without consciousness also scored above the line. This raises difficult questions. With no other way to prove their consciousness, and no way to communicate, those patients represent either PCI’s failure or its horrifying promise. 
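Restated as proportions, the chart's headline counts look like this. The 0.31 cutoff and all counts below come from the figures quoted above; nothing else is estimated.

```python
# The counts behind the chart described above, restated as proportions.
threshold = 0.31

mcs_above = 36 / 38          # minimally conscious patients scoring above the cutoff
unresponsive_above = 9 / 43  # patients considered unconscious who nonetheless score above it

print(f"Minimally conscious patients above {threshold}: {mcs_above:.0%}")      # ~95%
print(f"'Unresponsive' patients above {threshold}: {unresponsive_above:.0%}")  # ~21%
```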
Their zap-ZIP responses were similar in quality to those of people with minimal consciousness, as well as conscious people when awake, dreaming, or dosed with ketamine. And in fact, half a year after testing, six of these patients improved to the point that they were classified as minimally conscious. Someone, it seems, was in there after all. In recent years, researchers in Massimini’s group have had the opportunity to stimulate neurons and record brain activity from electrodes temporarily inserted into the brains of patients having surgery for epilepsy. These measurements revealed an interesting mechanism by which PCI may collapse after brain injury , leading to loss of consciousness. Neuronal circuits that are physically spared by the lesion may enter a sleep-like mode, leaving the whole brain unable to generate complex patterns of interactions. “Such intrusion of sleep-like neuronal activity may be only temporary in some patients, who will eventually regain consciousness, but may persist in others who remain blocked in a state of low complexity, corresponding to a prolonged vegetative state,” says Massimini. And that, he thinks, could provide a rationale for developing novel treatments to reawaken brain circuits and restore consciousness. PCI could be refined in the form of other ways to perturb the brain, such as focused ultrasound or targeted laser light. Or the technology could be improved through better spatial-temporal resolution, or even automated scanning and computational calculations of where complexity is maximized in a damaged brain. Massimini is clear that in its current form, PCI can’t say much about the quality or degree of consciousness—just whether it is there or not. And he sees the 0.31 threshold as a clinical measure of a blurry condition—it’s not the case that at 0.30 there’s nothing at all and at 0.32 consciousness appears in full form. You can have a high PCI score, he says, “and it doesn’t even make a difference whether you’re dreaming or awake.” Obviously part of the picture is missing. Breaking through But Angela Comanducci, a clinical neurophysiologist who passed through Massimini’s lab during her training and now oversees the 13-bed wing at Gnocchi that’s devoted to disorders of consciousness, has already observed the clinical power of PCI firsthand. In June 2020, a 21-year-old woman was brought to the ward two months after sustaining a traumatic brain injury from being beaten. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” Comanducci told me. The situation was so dire that the family of the patient had been told to expect she would remain in an irreversible vegetative state. Related Story But when Comanducci and her staff hooked the woman up to the bulky TMS-EEG apparatus used to measure her PCI, they were startled by what they saw. “Within seconds, I could see on the screen she was in there,” said Comanducci. The PCI they calculated later that day was high—reflecting high-complexity EEG response to TMS stimulation—and compatible with a minimally conscious state. Over the next weeks they manually moved the patient’s fingers, arms, and legs, trying to reboot her brain the way you might start an old airplane by spinning its propeller. They spoke to her as if she was listening, trying to trigger a response—a sigh, perhaps, or the tiniest vertical movement of her eyes. 
And they administered a drug called amantadine, hoping to awaken parts of the brain they suspected might be undamaged yet in a state something like a protective sleep. “I told my rehabilitation staff, ‘Now you must be detectives,’” recalled Comanducci. “‘Search everywhere and find her!’” About a month later, they found her. With a millimeter wiggle of a single finger, the woman opened a fragile portal of communication to the world outside. With practice, she learned to move more fingers, carving out a system with which she could answer simple questions. Russ Juskalian is a freelance writer and photographer whose work has appeared in Discover, Smithsonian, and the New York Times. This story was part of our September/October 2021 issue. "
1,197
2,021
"The quest to learn if our brain's mutations affect mental health | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1031729/brain-mutation-genes-mental-health"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The quest to learn if our brain’s mutations affect mental health For years scientists have tried to find a gene for conditions like schizophrenia, Alzheimer’s and autism. But the real source could lie in a much more complex genetic puzzle. By Roxanne Khamsi archive page Tony Luong When Mike McConnell decided what he wanted to spend his career working on, he was 29, inspired to begin his PhD—and flat broke. He’d learned from his biology classes that immune cells in the body constantly rearrange their own DNA: it’s what allows them to protect us by making receptors in the right shapes to bind to invasive pathogens. As he wrapped up a master’s degree in immunology in Virginia in the late 1990s, he’d obsess about it over beers with his roommates. “Suddenly this idea kind of clicked,” McConnell recalls. If gene rearrangement helped the immune system function, where else could it happen? What about the brain? “Wouldn’t it be neat if neurons did something like that too?” he thought. At the time, most scientists assumed that cells in the normal nervous system had identical genomes. But McConnell looked through the scientific literature and found he wasn’t the only one hot on the trail of this question: a neuroscientist named Jerold Chun at the University of California, San Diego, was already working on it. He wrote to Chun and persuaded him to let him join his lab on the West Coast. There was just one problem: McConnell couldn’t afford to get there. He was “a starving graduate student already,” with no cash to fix his navy 1966 Mustang—and as the first person in his family to go to college, he didn’t have access to many resources. “I didn’t have anybody who was going to drop some moving expenses in my lap or any of those sorts of things,” he explains. Chun gave him $1,000 to repair the broken car and get himself across the country so that he could start testing his hypothesis. Using special dyes to stain the chromosomes of neurons from mouse embryos and adult mice, McConnell hoped to find that the neurons had undergone the same type of genetic rearrangement seen in immune cells, yielding diversity rather than the perfect copies most researchers would have expected. Instead, though, he kept finding brain cells that had the wrong number of chromosomes. This was a surprise. When cells divide, they replicate their DNA for their daughter cells. Sometimes copies of genes are accidentally added or lost, which—unlike the reshuffling within chromosomes that’s beneficial in the immune system—was thought to be a hugely damaging mistake. It didn’t make sense that neurons could survive such a giant change in their genetic material. But McConnell kept finding aberrant neurons with extra or missing chromosomes. Finally he had to reconsider scientific assumptions. “We took the crazy idea seriously,” he says. A postdoctoral fellow in the lab named Stevens Rehen had expertise in culturing the neurons for study, which made it possible to parse the data. The UCSD team’s experiments, published in 2001, showed that the central nervous systems of developing mouse embryos did not contain perfect genetic copies. Instead, the researchers suggested, about a third of the neurons from each mouse embryo, on average, had lost a chromosome or gained an extra one. The result was what’s known as a “genetic mosaic.” While many of those cells didn’t survive, some made it into the brains of adult mice. 
McConnell, Chun, and their coauthors wondered what such a genetic mosaic might mean. Perhaps in humans it could be a contributing factor to neurological disorders, or even psychiatric disease. In any case, it was an early clue that the conventional notion of genetically identical brain cells was wrong. At the time, scientists seeking to understand the biology of mental illness were mainly looking for genetic mutations that had occurred near the moment of conception and thus were reflected in all of a person’s cells. Tantalizing clues had emerged that a single gene might be responsible for certain conditions. In 1970, for example, a Scottish teen with erratic behavior was found to have a broken gene region—and it turned out that his relatives with mental illness showed the same anomaly. It took three decades to isolate the error, which researchers named DISC1 (for “disrupted-in-schizophrenia”). Despite some 1,000 published research papers, the question of whether DISC1—or any other single gene—is involved in schizophrenia remains much debated. A handful of other genes have also been scrutinized as possible culprits, and one study of the whole human genome pointed to more than 120 different places where mutations seemed to heighten the risk of the disease. But after this extensive search for a “schizophrenia gene,” no single gene or mutation studied so far seems to exert a big enough influence to be seen as a definitive cause—not even DISC1. In fact, scientists have struggled in their search for specific genes behind most brain disorders, including autism and Alzheimer’s disease. Unlike problems with some other parts of our body, “the vast majority of brain disorder presentations are not linked to an identifiable gene,” says Chun, who is now at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California. But the UCSD study suggested a different path. What if it wasn’t a single faulty gene—or even a series of genes—that always caused cognitive issues? What if it could be the genetic differences between cells? The explanation had seemed far-fetched, but more researchers have begun to take it seriously. Scientists already knew that the 85 billion to 100 billion neurons in your brain work to some extent in concert—but what they want to know is whether there is a risk when some of those cells might be singing a different genetic tune. Ditching the dogma McConnell, now 51, has now spent most of his career trying to answer this question. He seems laid back, at first, with his professorial short beard, square glasses, and slight surfer lilt. But there’s an intensity, too: he looks a little like a younger version of the Hollywood star Liam Neeson, with somber, spirited eyes and a furrowed brow. After earning his PhD, McConnell packed his bags once again and moved to Boston to start a postdoctoral position at Harvard Medical School. But he was restless. He didn’t relish the colder climate and longed to head back to California and revisit the data he’d found there on genetic differences in the brain. “I thought mosaicism was the most interesting thing I could be working on,” he recalls, sweeping the ends of his brown hair behind his ears, “ and one Boston winter made me really miss San Diego.” He started corresponding with Rusty Gage, a neuroscientist at the Salk Institute for Biological Studies in San Diego. Gage was also interested in genetic diversity, but he was best known for pushing against another piece of scientific dogma. 
People had long assumed that adults never made new neurons, but Gage had led a group that published a paper in the late 1990s detailing evidence of newly born cells in a brain region called the hippocampus. The publication—establishing the evidence of what is called adult neurogenesis—gave him a reputation as a maverick who wasn’t afraid to stand behind provocative ideas. Not too long after the UCSD team published its paper about mosaicism in the brain, Gage had struck upon another phenomenon that could explain how genetic diversity arises in the nervous system. It was already known that cells had bits of DNA called long interspersed nuclear elements, or LINEs, which jump around the genome. Gage and his colleagues showed that these could also cause mosaics to emerge. In one experiment, mice engineered to carry human DNA elements known as LINE-1s developed genetically diverse cells in their brains as a result. Just as with his work on neurogenesis, Gage initially encountered skepticism. The idea that LINEs—which many considered to be “junk” DNA—could cause genetic diversity in brain cells ran counter to the prevailing wisdom. “We knew we were going to run into a sawmill,” he recalls. But Gage and his collaborators kept plowing ahead for more evidence. After the rodent study, he and his teammates looked at the human brain. Four years later, they published an analysis of postmortem samples, which found that LINE-1s seemed especially active in human brain tissues. McConnell had been corresponding with Gage about all this, including the chromosome variation he’d found in mouse neurons while working in Chun’s lab. By the start of 2009, he’d secured a fellowship with Gage at the Salk Institute. There, they looked for evidence of the same phenomenon in human neurons, and after just a few years, they found it. As part of the experiment, which appeared in Science in 2013, they used a new technology called single-cell genome sequencing. The technique could isolate and read out the DNA from individual cells; until then, scientists had only been able to analyze extracted genetic material from pooled cell samples. Using the postmortem frontal cortex samples from three healthy individuals, they applied the method to dozens of neurons and established that up to 41% of the cells had either missing or extra gene copies. This variation was “abundant,” they concluded, and it contributed to the mosaic of genetic differences in the brain. Instead of being genetically uniform, it turns out, our brains are rife with genetic changes. “We’re past the story about whether or not it occurs,” Gage says. “These mosaic events are occurring. This is very reminiscent of where I was with the adult neurogenesis. When everybody finally agreed that it occurred, we had to figure out what it did.” Widening the search After publishing data from human brains, McConnell didn’t feel he wanted to go back to studying mice. So when it came time for him to set up his own lab at the University of Virginia, he immediately set out to find human samples. “I spent the first three years as an assistant professor trying to find brains,” he recalls. A couple of years after he landed in Virginia, the mission to understand the constellation of mutations in the brain got an important boost. The National Institute of Mental Health gave $30 million to a consortium including Gage, McConnell, and others so they could keep investigating somatic mosaicism. 
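The single-cell result described above rests on a simple signal: a cell's sequencing coverage over a chromosome scales with how many copies of that chromosome it carries. The toy sketch below illustrates that read-depth logic with simulated counts; it is not the pipeline used in the 2013 Science study, and the chromosome names, bin counts, and diploid-baseline assumption are all invented for the example.

```python
# Toy illustration of the read-depth idea behind single-cell copy-number calls.
import numpy as np

rng = np.random.default_rng(1)

def estimate_copy_number(bin_counts: dict) -> dict:
    """Estimate per-chromosome copy number from one cell's binned read counts."""
    genome_median = np.median(np.concatenate(list(bin_counts.values())))
    reads_per_copy = genome_median / 2.0  # assumes the typical genomic bin is diploid
    return {chrom: round(float(np.median(counts)) / reads_per_copy, 1)
            for chrom, counts in bin_counts.items()}

# Simulated neuron that gained a copy of "chr1" and lost a copy of "chr21".
cell = {
    "chr1":  rng.poisson(150, size=500),  # ~3 copies' worth of reads per bin
    "chr2":  rng.poisson(100, size=500),  # normal diploid coverage
    "chr21": rng.poisson(50, size=500),   # ~1 copy's worth of reads per bin
}
print(estimate_copy_number(cell))  # roughly {'chr1': 3.0, 'chr2': 2.0, 'chr21': 1.0}
```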
(“Somatic,” from the Greek for “body,” refers to mutations that arise during a person’s lifetime, rather than in the sperm or egg cells of the individual’s parents.) The network contained research groups looking at the different effects of genetic mosaics. Gage and McConnell were part of a subset focused on the link with psychiatric disease. They devised a plan to look for different mechanisms for mosaicism using the same set of brain samples. Crucially, they got human samples. Tissue biopsies of postmortem brains from individuals with schizophrenia were shipped from a repository in Baltimore, the Lieber Institute for Brain Development, to each of the three teams. One portion of each was sent to Gage’s group in California to be examined for LINE-1s that might have caused mosaic genetic variation. Another portion was sent to McConnell’s team in Virginia to look for genetic mosaics caused by deleted or duplicated DNA in the genome. The remaining third of each sample went to yet another lab, led by John Moran at the University of Michigan in Ann Arbor, which was investigating whether cells that acquire small DNA sequence errors very early in development might seed the formation of large brain regions with the same mutation. This January, a large group of scientists including members of the consortium published a paper in Nature Neuroscience describing how they used machine learning to analyze data about postmortem brain cells from several people who’d had schizophrenia. The researchers suggested that LINEs begin actively mutating brain DNA early in fetal development—and found instances where LINE-1s had bombarded at least two gene regions linked to neuropsychiatric disorders. McConnell expects these kinds of discoveries to accelerate. He says that big improvements in genetic sequencing in the last few years now allow scientists to detect DNA errors at the individual cell level much more quickly. A couple of years ago it used to take four lab members on McConnell’s team two weeks to individually sequence 300 brain cells. Today, one team member working alone can do single-cell sequencing on 2,000 cells in three days. “It’s been a game-changer,” he says. But finding mutations isn’t the same as establishing a causal link between them and disease. The sporadic and variable nature of mosaic mutations makes definitively connecting them to disease a complicated undertaking. Colleagues have cautioned him against chasing windmills in a quest that McConnell himself describes as “a little bit quixotic.” Uncharted waters The quest to understand how mosaic gene mutations might influence psychiatric disease stretches much further back than the work of scientists such as McConnell. He notes that decades ago “people were finding strange chromosome abnormalities in psychiatric diseases, largely in blood draws.” But if you look to that history, you will see that those investigating the role of mosaic gene patterns in mental health have had false starts. One of the earliest case reports emerged decades ago: in the spring of 1959, a 19-year-old woman in southern England began stripping the paper off the walls of her newly decorated room. A month later, she burned all her clothes and ran away to the seaside town of Brighton.
Her erratic behavior intensified to the point that she was admitted to a psychiatric hospital, where doctors diagnosed her with schizophrenia. They examined her blood and looked for the 46 wound-up bundles of chromosomes inside each cell. What they found surprised them: about a fifth of her cells were missing one of the two X chromosomes that women normally carry. The woman’s doctors were unsure whether her mosaicism was a factor in her psychiatric disorder. There are a handful of other cases of women who, like the British patient, were missing their second X in some cells and who also had schizophrenia. But the link remains pure speculation. While it’s still too early to say how mosaic gene mutations in the brain might influence schizophrenia, there’s a growing list of brain conditions where mosaicism really does seem to have a role. For example, a pivotal 2012 study by Harvard geneticist Christopher Walsh and his colleagues uncovered evidence that somatic mutations were the root cause of some forms of epilepsy. Related Story Perhaps the greatest amount of data on gene mosaics—and therefore the most promising area of development—is being generated from studies of autism. Various research groups, including Walsh’s, have found evidence that as many as 5% of children with autism spectrum disorder have potentially damaging mosaic mutation. More recently, in January, Walsh—along with consortium members like Rusty Gage— published a study uncovering evidence that certain types of mutations arise more commonly in people with autism. They looked at postmortem brain samples from 59 people with autism and 15 neurotypical individuals for comparison, and found that those in the first group had an unexpectedly high number of somatic mutations in the genetic regions called enhancers. These regions help stimulate the production of genes, which led the researchers to speculate that mosaic mutations there might elevate a person’s risk of developing autism. And even though brain cells are not thought to be actively dividing like cells in other tissues, they do seem to develop into more of a genetic mosaic as we age. In 2018, the team led by Walsh analyzed neurons taken from the brains of 15 people four months to 82 years old, as well as nine people with disorders linked to premature aging. They concluded that the somatic changes in DNA that create a mosaic accumulate “ slowly but inexorably with age in the normal human brain. ” A new study from Walsh’s group, still undergoing peer review, suggests that while human neurons begin with hundreds of such mutations in every genome, mutations continue to build at a rate of up to 25 per year for life. On this basis, he and his teammates calculated that neurons in elderly individuals contain somewhere between 1,500 and 2,500 mutations per cell. “We think that this is a key new way of looking at aging and common forms of neurodegeneration like Alzheimer’s disease,” Walsh says. British scientists looking specifically for somatic variants in genes associated with neurodegenerative disorders such as Parkinson’s and Alzheimer’s suggest that the average adult has 100,000 to 1 million brain cells with pathologically mutated genes. The next step is to understand whether and how those mutations actually exert an influence. Identifying the link between mosaics in the brain and various medical conditions isn’t just about explaining how these illnesses arise, though.One of the greatest hopes is that it might help usher in new therapeutic approaches. 
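The aging arithmetic quoted above is simple enough to check directly. The sketch below uses a linear accrual model; the starting value of 300 is an assumption (the article says only "hundreds"), while the rate of up to 25 mutations per year and the 1,500 to 2,500 elderly range are the source's numbers.

```python
# Back-of-the-envelope check on the mutation-accumulation figures quoted above.
def mutations_per_neuron(age_years: float,
                         starting_mutations: int = 300,   # assumed baseline at birth
                         rate_per_year: float = 25.0) -> float:
    """Rough linear model: a baseline of somatic mutations plus steady accrual with age."""
    return starting_mutations + rate_per_year * age_years

for age in (1, 40, 80):
    print(f"age {age:>2}: ~{mutations_per_neuron(age):,.0f} mutations per neuron")
# age 80 lands around 2,300, inside the reported 1,500-2,500 range for elderly neurons.
```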
That’s already happening with one condition, an often untreatable form of epilepsy known as focal cortical dysplasia. The brains of individuals with this disorder have telltale spots of disorganized tissue layers, and patients sometimes undergo surgery to remove these brain areas in the hope of reducing their seizures. A study published in 2018 by researchers at the Korea Advanced Institute of Science and Technology found mosaic mutations in these abnormal brain spots that overstimulated certain cell-signaling pathways. Drugs that curb this overactivity, called mTOR inhibitors, are worth a shot , according to scientists. “I think it’s largely uncharted waters,” says Orrin Devinsky, who is leading a pilot trial for a drug to treat focal cortical dysplasia at the New York University Langone Medical Center. “There’s a few areas where we’ve made real progress … but I think with the larger field the ground has barely been touched.” On the brink Twenty years after he started, Mike McConnell remains as fascinated as ever with the question of how genetic mutations acquired after conception or birth might shape our behavior. “My interest really became: What makes outliers?” he says, with the California tone that he brought back with him to the East Coast. “What makes two identical twins totally different people?” In all that time, a lot has changed. He’s married and settled down, he’s earned awards from the likes of the US National Academy of Medicine, and he’s not a destitute grad student anymore. He recently switched coasts again, moving his lab to the Lieber Institute, which is home to more than 3,000 brains—one of the world’s largest collections. And he thinks we’re on the brink of a breakthrough. Even if the links between mutations and mental conditions are not conclusive, scientists in the field now feel they have amassed a trove of data to show that having genetically different cells can certainly influence our health. “Brain somatic mosaicism has reached proof for autism, epilepsy, and brain overgrowth disorders,” McConnell says. The evidence, meanwhile, continues to accumulate that many people have significantly mosaic brains. One 2018 analysis suggests that around 1 out of every 100 people has deleterious mosaic genetic difference that affects “sizable brain regions.” In other words, they have a section of brain cells that possess a mutation not seen in surrounding cells. However, while there’s increasingly solid evidence that mosaic gene patterns in the brain contribute to epilepsy and autism, there isn’t enough data yet to implicate them in schizophrenia. McConnell has kept the faith that studying human brains will reveal whether some “flavor” of mosaic mutations contributes to that disease too—mutations that could point toward new treatments. “I’m either going to have a eureka moment, or this is just something that happens and there’s not a clear link to disease,” he says. Ever the optimist, he hopes to succeed where others have failed by sorting through the flood of genetic data pouring in about the brain cells he’s analyzing. “If there’s a signal there,” he says, “I think I’m going to see a hint of it in the upcoming year.” Roxanne Khamsi is a science journalist based in Montreal. This story was supported by a reporting grant through the Genetics and Human Agency Journalism Fellowship. 
This story was part of our September/October 2021 issue. "
1,198
2,021
"How technology can let us see and manipulate memories | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1031417/memory-brain-questions"
"Featured Topics Newsletters Events Podcasts How technology can let us see and manipulate memories Optogenetics and advanced imaging have helped neuroscientists understand how memories form and made it possible to manipulate them. by Joshua Sariñana archive page There are 86 billion neurons in the human brain, each with thousands of connections, giving rise to hundreds of trillions of synapses. Synapses—the connection points between neurons—store memories. The overwhelming number of neurons and synapses in our brains makes finding the precise location of a specific memory a formidable scientific challenge. Figuring out how memories form may ultimately help us learn more about ourselves and keep our mental acuity intact. Memory helps shape our identities, and memory impairment may indicate a brain disorder. Alzheimer’s disease robs individuals of their memories by destroying synapses; addiction hijacks the brain’s learning and memory centers; and some mental health conditions, like depression, are associated with memory impairment. In many ways, neuroscience has revealed the nature of memories, but it has also upended the very notion of what memories are. The five questions below speak to how much we’ve learned and what mysteries remain. Can we see memories in the brain? Neuroscientists have observed the basic outline of memories in the brain for decades. However, only recently could they see the enduring physical representation of a memory, which is called a memory engram. An engram is stored within a network of connected neurons, and neurons holding the engram can be made to glow so that they are visible through special microscopes. Today, neuroscientists can manipulate memory engrams by artificially activating their underlying networks and inserting new information. These techniques are also shedding light on how different types of memory work and where each is recorded in the brain. Episodic autobiographical memory deals with what happened, where, and when. It relies on the hippocampus, a seahorse-shaped structure. Procedural memories, supported by the basal ganglia, let us remember how to carry out habitual behaviors like riding a bike. This region malfunctions in those with addiction. Our ability to recall facts, like state capitals, is thanks to semantic memory, which is stored in the cortex. What tools let us see memories? At the end of the 19th century, tabletop microscopes made it possible to identify individual neurons, enabling scientists to draw stunningly detailed representations of the brain. By the mid-20th century, powerful electron microscopes could show synaptic structures just tens of nanometers wide (about the width of a virus particle). At the turn of the 21st century, neuroscientists used two-photon microscopes to watch synapses form in real time while mice learned. Incredible advancements in genetics have also made it possible to swap genes in and out of the brain to link them to memory function. Scientists have used viruses to insert a green fluorescent protein found in jellyfish into mouse brains, causing neurons to light up during learning. They’ve also used an algae protein called channelrhodopsin (ChR2) to artificially activate neurons. The protein is sensitive to blue light, so when it’s inserted into neurons, the neurons can be turned on and off with a blue laser—a technique known as optogenetics. 
With this technology, which was pioneered by researchers at Stanford almost two decades ago, neuroscientists can artificially activate memory engram cells in lab animals. New techniques also make it possible to study how nerve impulses translate outside information to our inner worlds. To watch this process in the brain, neuroscientists use tiny electrodes to record the impulses, which last for just a few milliseconds. Analytical tools such as neural decoding algorithms can then weed out noise to reveal patterns that indicate a memory center in the brain. Open-source software kits allow more neuroscience laboratories to conduct such research. What do these tools tell us about how memories are created and stored? How neurons become part of a memory engram remained a mystery until recently. When neuroscientists looked closer, they were surprised to see that neurons compete with one another to store memories. By inserting genes into the brain to increase or decrease neuron excitability, the researchers learned that the most excited neurons in the area will become part of the engram. These neurons will also actively inhibit their neighbors from becoming part of another engram for a short period of time. This competition likely helps memories form and shows that where memories are allocated in the brain is not random. Related Story In other experiments, researchers found that neural networks hold on to forgotten memories. Mice injected with a cocktail of protein inhibitors develop amnesia, likely forgetting information because their synapses wither away. But the researchers discovered that these memories weren’t forever lost—the neurons still held the information, though without synapses, it couldn’t be retrieved (at least not without optogenetic stimulation). Mice with Alzheimer’s disease showed similar memory loss. Another finding has to do with how dreaming strengthens our memories. Neuroscientists had long thought that as the day’s experiences replayed in the form of nerve impulses during sleep, those memories slowly transferred out of the hippocampus and to the cortex so that the brain could extract information to create rules about the world. They also knew that some rules were synthesized by the cortex more quickly, but existing models couldn’t explain how this happened. Recently, though, researchers have used optogenetic tools in animal studies to show that the hippocampus also works to establish these rapidly forming cortical memories. “The hippocampus helps to rapidly create immature memory engrams in the cortex,” says Takashi Kitamura , an assistant professor at the University of Texas Southwestern Medical Center. “The hippocampus still teaches the cortex, but without optogenetic tools we might not have observed the immature engrams.” Can memories be manipulated? Memories are not as stable as they might feel. By their very nature, they must be amenable to change, or learning would be impossible. Nearly a decade ago, MIT researchers genetically altered mice so that when their neurons were active during learning, this activity turned on the ChR2 gene, which was tethered to a green fluorescent protein. By seeing which neurons fluoresced, neuroscientists could identify which ones were involved in learning. And they could reactivate specific memories by shining light on the ChR2 genes associated with those neurons. With this ability, the MIT researchers inserted a false memory into mouse brains. 
First they placed the mice in a triangular box, which activated specific ChR2 genes and neurons. Then they put the mice in a square box and administered shocks to their feet while shining a light on the ChR2 neurons associated with the first environment. Eventually, the mice associated the memory of the triangle box with the shocks even though they were shocked only while in the square box. “The animals were fearful of an environment that, technically speaking, never had anything ‘bad’ happen in it,” says Steve Ramirez, a coauthor of the study who is now an assistant professor of neuroscience at Boston University. It’s not feasible to use such techniques involving fiber-optic cables and lasers to experiment on the human brain, but the results on the brains of mice suggest how easily memories can be manipulated. Can we see memories outside of the brain? Human memories can be visually reconstructed using brain scanners. In research conducted by Brice Kuhl, who is now an assistant professor of cognitive neuroscience at the University of Oregon, people were given images to view, and their brains were scanned with an MRI machine to measure which regions were active. An algorithm was then trained to guess what the person was viewing and reconstruct an image based on this activity. The algorithm also reconstructed images from participants who were asked to hold one of the images they viewed in their minds. There’s much room for improvement in these reconstructed images, but this work showed that neuroimaging and reconstruction algorithms can indeed show the content of human memories for others to see. Technology has let neuroscientists peer into the brain and see the tiny glowing traces of memory. Yet the discovery that experiences and knowledge can be implanted or externalized has also given memory a different meaning. What does this mean for our sense of who we are? Joshua Sariñana is a neuroscientist, writer, and fine art photographer. This story was part of our September/October 2021 issue.
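The decoding approach behind Kuhl's reconstructions can be caricatured as a pattern classifier trained on brain activity. The sketch below, assuming numpy and scikit-learn, decodes a simulated two-category "viewed image" label from made-up voxel patterns; real studies work with measured fMRI data and far richer reconstruction models, and every number here is invented for illustration.

```python
# Stripped-down version of the decoding idea: learn a mapping from activity patterns
# to what was being viewed. The "voxels", labels, and effect size are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_trials, n_voxels = 200, 50

labels = rng.integers(0, 2, n_trials)              # 0 = category A, 1 = category B (hypothetical)
category_pattern = rng.normal(0.8, 0.1, n_voxels)  # category-specific activity signature
activity = np.outer(labels, category_pattern) + rng.normal(0, 1.0, (n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(activity, labels, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy on held-out trials: {decoder.score(X_test, y_test):.2f}")
```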
"
1,199
2,021
"Is your brain a computer? | MIT Technology Review"
"https://www.technologyreview.com/2021/08/25/1030861/is-human-brain-computer"
"Featured Topics Newsletters Events Podcasts Is your brain a computer? We asked experts for their best arguments in the long-standing debate over whether brains and computers process information the same way. Getty by Dan Falk archive page It’s an analogy that goes back to the dawn of the computer era: ever since we discovered that machines could solve problems by manipulating symbols, we’ve wondered if the brain might work in a similar fashion. Alan Turing, for example, asked what it would take for a machine to “think” ; writing in 1950, he predicted that by the year 2000 “one will be able to speak of machines thinking without expecting to be contradicted.” If machines could think like human brains, it was only natural to wonder if brains might work like machines. Of course, no one would mistake the gooey material inside your brain for the CPU inside your laptop—but beyond the superficial differences, it was suggested, there might be important similarities. Today, all these years later, experts are divided. Although everyone agrees that our biological brains create our conscious minds , they’re split on the question of what role, if any, is played by information processing—the crucial similarity that brains and computers are alleged to share. While the debate may sound a bit academic, it actually has real-world implications: the effort to build machines with human-like intelligence depends at least in part on understanding how our own brains actually work, and how similar—or not—they are to machines. If brains could be shown to function in a way that was radically different from a computer, it would call into question many traditional approaches to AI. The question may also shape our sense of who we are. As long as brains, and the minds they enable, are thought of as unique, humankind might imagine itself to be very special indeed. Seeing our brains as nothing more than sophisticated computational machinery could burst that bubble. We asked the experts to tell us why they think we should—or shouldn’t—think of the brain as being “like a computer.” AGAINST: The brain can’t be a computer because it’s biological. Everyone agrees that the actual stuff inside a brain—“designed” over billions of years by evolution—is very different from what engineers at IBM and Google put inside your laptop or smartphone. For starters, brains are analog. The brain’s billions of neurons behave very differently from the digital switches and logic gates in a digital computer. “We’ve known since the 1920s that neurons don’t just turn on and off,” says biologist Matthew Cobb of the University of Manchester in the UK. “As the stimulus increases, the signal increases,” he says. “The way a neuron behaves when it’s stimulated is different from any computer that we’ve ever built.” Blake Richards, a neuroscientist and computer scientist at McGill University in Montreal, agrees: brains “process everything in parallel, in continuous time” rather than in discrete intervals, he says. In contrast, today’s digital computers employ a very specific design based on the original von Neumann architecture. They work largely by going step by step through a list of instructions encoded in a memory bank, while accessing information stored in discrete memory slots. “None of that has any resemblance to what goes on in your brain,” says Richards. 
(And yet, the brain keeps surprising us: in recent years, some neuroscientists have argued that even individual neurons can perform certain kinds of computations, comparable to what computer scientists call an XOR, or “exclusive or,” function.) FOR: Sure it can! The actual structure is beside the point. But perhaps what brains and computers do is fundamentally the same, even if the architecture is different. “What the brain seems to be doing is quite aptly described as information processing,” says Megan Peters, a cognitive scientist at the University of California, Irvine. “The brain takes spikes [brief bursts of activity that last about a tenth of a second] and sound waves and photons and converts it into neural activity—and that neural activity represents information.” Richards, who agrees with Cobb that brains work very differently from today’s digital computers, nonetheless believes the brain is , in fact, a computer. “A computer, according to the usage of the word in computer science, is just any device which is capable of implementing many different computable functions,” says Richards. By that definition, “the brain is not simply like a computer. It is literally a computer.” Michael Graziano, a neuroscientist at Princeton University, echoes that sentiment. “There’s a more broad concept of what a computer is, as a thing that takes in information and manipulates it and, on that basis, chooses outputs. And a ‘computer’ in this more general conception is what the brain is; that’s what it does.” But Anthony Chemero, a cognitive scientist and philosopher at the University of Cincinnati, objects. “What seems to have happened is that over time, we’ve watered down the idea of ‘computation’ so that it no longer means anything,” he says. “Yes, your brain does stuff, and it helps you know things—but that’s not really computation anymore.” FOR: Traditional computers might not be brain-like, but artificial neural networks are. All of the biggest breakthroughs in artificial intelligence today have involved artificial neural networks , which use “layers” of mathematical processing to assess the information they’re fed. The connections between the layers are assigned weights (roughly, a number that corresponds to the importance of each connection relative to the others—think of how a professor might work out a final grade based on a series of quiz results but assign a greater weight to the final quiz). Those weights are adjusted as the network is exposed to more and more data, until the last layer produces an output. In recent years, neural networks have been able to recognize faces, translate languages , and even mimic human-written text in an uncanny way. Related Story “An artificial neural network is actually basically just an algorithmic-level model of a brain,” says Richards. “It is a way of trying to model the brain without reference to the specific biological details of how the brain works.” Richards points out that this was the explicit goal of neural-network pioneers like Frank Rosenblatt, David Rumelhart, and Geoffrey Hinton : “They were specifically interested in trying to understand the algorithms that the brain uses to implement the functions that brains successfully compute.” Scientists have recently developed neural networks whose workings are said to more closely resemble those of actual human brains. 
One such approach, predictive coding, is based on the premise that the brain is constantly trying to predict what sensory inputs it’s going to receive next; the idea is that “keeping up” with the outside world in this way boosts its chances for survival—something that natural selection would have favored. It’s an idea that resonates with Graziano. “The purpose of having a brain is movement—being able to interact physically with the external world,” he says. “That’s what the brain does; that’s the heart of why you have a brain. It’s to make predictions.” AGAINST: Even if brains work like neural networks, they’re still not information processors. Not everyone thinks neural networks support the notion that our brains are like computers. One problem is that they are inscrutable : when a neural network solves a problem, it may not be at all clear how it solved the problem, making it harder to argue that its method was in any way brain-like. “The artificial neural networks that people like Hinton are working on now are so complicated that even if you try to analyze them to figure out what parts were storing information about what, and what counts as the manipulation of that information, you’re not going to be able to pull that out,” says Chemero. “The more complicated they get, the more intractable they become.” But defenders of the brain-as-computer analogy say that doesn’t matter. “You can’t point to the 1 s and 0 s,” says Graziano. “It’s distributed in a pattern of connectivity that was learned among all those artificial neurons, so it’s hard to ‘talk shop’ about exactly what the information is, where it’s stored, and how it’s encoded—but you know it’s there.” FOR: The brain has to be a computer; the alternative is magic. If you’re committed to the idea that the physical brain creates the mind, then computation is the only viable path, says Richards. “Computation just means physics,” he says. “The only other option is that you’re proposing some kind of magical ‘soul’ or ‘spirit’ or something like that ... There’s literally only two options: either you’re running an algorithm or you’re using magic.” AGAINST: The brain-as-computer metaphor can’t explain how we derive meaning. No matter how sophisticated a neural network may be, the information that flows through it doesn’t actually mean anything, says Romain Brette, a theoretical neuroscientist at the Vision Institute in Paris. A facial-recognition program, for example, might peg a particular face as being mine or yours—but ultimately it’s just tracking correlations between two sets of numbers. “You still need someone to make sense of it, to think, to perceive,” he says. Which doesn’t mean that the brain doesn’t process information—perhaps it does. “Computation is probably very important in the explanation of the mind and intelligence and consciousness,” says Lisa Miracchi, a philosopher at the University of Pennsylvania. Still, she emphasizes that what the brain does and what the mind does are not necessarily the same. And even if the brain is computer-like, the mind may not be: “Mental processes are not computational processes, because they’re inherently meaningful, whereas computational processes are not.” So where does that leave us? 
The question of whether the brain is or is not like a computer appears to depend partly on what we mean by “computer.” But even if the experts could agree on a definition, the question seems unlikely to be resolved anytime soon—perhaps because it is so closely tied to thorny philosophical problems, like the so-called mind-body problem and the puzzle of consciousness. We argue about whether the brain is like a computer because we want to know how minds came to be; we want to understand what allows some arrangements of matter, but not others, not only to exist but to experience. Dan Falk is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time. This story was part of our September/October 2021 issue. "
1,200
2021
"A New Antitrust Case Cuts to the Core of Amazon’s Identity | WIRED"
"https://www.wired.com/story/amazon-antitrust-lawsuit-cuts-to-core-of-identity"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gilad Edelman Business A New Antitrust Case Cuts to the Core of Amazon’s Identity An antitrust lawsuit filed against Amazon on Tuesday directly challenges the company's narrative that customers are always its top priority. Photograph: Getty Images Save this story Save Save this story Save “I founded Amazon 26 years ago with the long-term mission of making it Earth’s most customer-centric company,” Jeff Bezos testified before the House Antitrust Subcommittee last summer. “Not every business takes this customer-first approach, but we do, and it’s our greatest strength.” Bezos’ obsession with customer satisfaction is at the center of Amazon’s self-mythology. Every move the company makes, in this account, is designed with only one goal in mind: making the customer happy. If Amazon has become an economic juggernaut, the king of ecommerce, that’s not because of any unfair practices or sharp elbows; it’s simply because customers love it so much. The antitrust lawsuit filed against Amazon on Tuesday directly challenges that narrative. The suit, brought by Karl Racine, the Washington, DC, attorney general, focuses on Amazon’s use of a so-called most-favored-nation clause in its contracts with third-party sellers, who account for most of the sales volume on Amazon. A most-favored-nation clause requires sellers not to offer their products at a lower price on any other website, even their own. According to the lawsuit, this harms consumers by artificially inflating prices across the entire internet, while preventing other ecommerce sites from competing against Amazon on price. “I filed this antitrust lawsuit to put an end to Amazon’s ability to control prices across the online retail market,” Racine said in a press conference announcing the case. For a long time, Amazon openly did what DC is alleging; its “price parity provision” explicitly restricted third-party sellers from offering lower prices on other sites. It stopped in Europe in 2013, after competition authorities in the UK and Germany began investigating it. In the US, however, the provision lasted longer, until Senator Richard Blumenthal wrote a letter to antitrust agencies in 2018 suggesting Amazon was violating antitrust law. A few months later, in early 2019, Amazon dropped price parity. But that wasn’t the end of the story. The DC lawsuit alleges that Amazon simply substituted a new policy that uses different language to accomplish the same result as the old rule. Amazon’s Marketplace Fair Pricing Policy informs third-party sellers that they can be punished or suspended for a variety of offenses, including “setting a price on a product or service that is significantly higher than recent prices offered on or off Amazon.” This rule can protect consumers when used to prevent price-gouging for scarce products, as happened with face masks in the early days of the pandemic. But it can also be used to inflate prices for items that sellers would prefer to offer more cheaply. The key phrase is “off Amazon. 
” In other words, Amazon reserves the right to cut off sellers if they list their products more cheaply on another website—just as it did under the old price parity provision. According to the final report filed by the House Antitrust Subcommittee last year, based on testimony from third-party sellers, the new policy “has the same effect of blocking sellers from offering lower prices to consumers on other retail sites.” The main form that this price discipline takes, according to sellers who have spoken out against Amazon either publicly or in anonymous testimony, is through manipulating access to the Buy Box—those Add to Cart and Buy Now buttons at the top right of an Amazon product listing. When you go to buy something, there are often many sellers trying to make the sale. Only one can “win the Buy Box,” meaning they’re the one who gets the sale when you click one of those buttons. Because most customers don’t scroll down to see what other sellers are offering a product, winning the Buy Box is crucial for anyone trying to make a living by selling on Amazon. As James Thomson, a former Amazon employee and a partner at Buy Box Experts, a brand consultancy for Amazon sellers, told me in 2019, “If you can’t earn the Buy Box, for all intents and purposes, you’re not going to earn the sale.” Gated Community Gilad Edelman Gadget Lab Podcast WIRED Staff Social Media Gilad Edelman Jason Boyce, another longtime Amazon seller turned consultant, explained to me how this works. He and his partners were excited when the last third-party seller contract they signed with Amazon, to sell sporting goods on the site, didn’t include the price parity provision. “We thought, ‘This is great! We can offer discounts on Walmart, and Sears, and wherever else,’” he said. But then something odd happened. Boyce (who spoke with House investigators as part of the antitrust inquiry) noticed that once his company lowered prices on other sites, sales on Amazon started tanking. “We went to the listing, and the Add to Cart button was gone, the Buy Now button was gone. Instead, there was a gray box labeled ‘See All Buying Options.’ You could still buy the product, but it was an extra click. Now, an extra click on Amazon is an eternity—they’re all about immediate gratification.” Moreover, his company’s ad spending plummeted, which he realized was because Amazon doesn’t show users ads for products without a Buy Box. “So what did we do? We went back and raised our prices everywhere else, and within 24 hours everything came back. Traffic improved, clicks improved, and sales came back.” The upshot, Boyce said, is that sellers can’t lower their prices even when they’re selling on their own site or on other platforms, like Walmart.com, that don’t take as large a cut of sales or require sellers to spend as much on advertising—two costs that have increased in recent years on Amazon. (Amazon search results tend to feature paid promotions at the top, which puts pressure on sellers to pay for ads if they want customers not to have to scroll down to see them. That appears to be a key reason why Amazon has become the third-largest digital advertising company, with more than double the ad revenue of Snap, Twitter, Roku, and Pinterest combined.) 
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg “Because of its size and strength, and because sellers can’t keep their prices low on their own channels, Amazon is literally inflating the entire online economy,” said Boyce. “It’s insane. And any seller who tries to lower their prices is going to get their sales suppressed on Amazon within a week.” Boyce’s experience illustrates something important about most-favored-nation clauses: On their own, they aren’t illegal. The problem comes when they’re used by a company with a dominant share of the market. If a store wants to feature a certain brand on its shelves in exchange for an agreement not to sell more cheaply at a rival chain, the brand can decide whether the deal is worth it. But in the case of Amazon, according to sellers like Boyce, there is no real choice. The DC attorney general’s lawsuit points out that Amazon accounts for somewhere between 50 and 70 percent of the US online retail market, and it notes that “a staggering 74 percent [of consumers] go directly to Amazon when they are ready to buy a specific product.” It accuses Amazon of using its price policy to maintain that monopoly power by preventing rival platforms from using lower prices to eat into its market share. In a statement emailed to reporters, Amazon did not exactly deny that it punishes sellers who offer lower prices elsewhere. Rather, it suggested that this is ultimately good for the consumer. “The DC attorney general has it exactly backward—sellers set their own prices for the products they offer in our store,” the company said. “Amazon takes pride in the fact that we offer low prices across the broadest selection, and like any store we reserve the right not to highlight offers to customers that are not priced competitively. The relief the AG seeks would force Amazon to feature higher prices to customers, oddly going against core objectives of antitrust law.” But this logic relies on a very idiosyncratic definition of “priced competitively.” When someone goes to Amazon to buy something, they want the site to show them the best deal available on Amazon. If Jenny’s Bike Supply has the best deal on Amazon for chain locks, then it’s the best deal, regardless of whether Jenny is also selling the locks for an even better price on eBay. If Amazon makes it harder to buy the lock from Jenny in this scenario, the only thing it accomplishes is forcing customers to settle for the second-best deal. And, of course, it will probably succeed in forcing Jenny to raise prices on eBay. What it won’t do is result in lower prices on Amazon. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg All of which makes the DC lawsuit a narrower and potentially more winnable case than some of the other antitrust litigation that has been brought against tech companies. 
“As long as it’s actually having the same effect as a most-favored-nation, it’s a loser case for Amazon,” said Sally Hubbard, the director of enforcement strategy at the Open Markets Institute, an anti-monopoly think tank. “It’s quite straightforward that the conduct is eliminating competition and causing higher prices.” (On the other hand, the suit is so far only being brought under DC law, rather than the federal antitrust statutes, which could limit its impact.) Hubbard predicted that Amazon would settle, since the pricing requirement isn’t absolutely essential to its business. Going to trial, as Apple is learning in its lawsuit with Epic, would invite a great deal of unwelcome publicity and attention to Amazon’s business practices. That could be more costly than any financial penalty imposed by the courts. Bezos suggested as much in his congressional testimony last year. “Customer trust is hard to win,” he explained, “and easy to lose.” "
1,201
2016
"Why the Final Game Between AlphaGo and Lee Sedol Is Such a Big Deal for Humanity | WIRED"
"https://www.wired.com/2016/03/final-game-alphago-lee-sedol-big-deal-humanity"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Why the Final Game Between AlphaGo and Lee Sedol Is Such a Big Deal for Humanity Geordie Wood Save this story Save Save this story Save SEOUL, SOUTH KOREA --- Go grandmaster Lee Sedol regained a sizeable slice of human pride on Sunday night when he won the latest game in his historic match with an artificially intelligent machine built by Google researchers. But on Tuesday, in the final game of this best-of-five series, he hopes to regain far more. In one sense, the match is already lost. Google's system, known as AlphaGo, won the match's first three games, taking home a million-dollar prize and becoming the first machine to beat a top human at the ancient game of Go , a pastime that's exponentially more complex than chess. Lee Sedol openly apologized to the Korean public and the wider Go community (and, perhaps, humans in general) for losing the match, tapping an undeniable melancholy among those gathered to watch the match inside Seoul's Four Seasons hotel. But he completely reversed the mood in Game Four. Game Five is, in a way, the last frontier. When AlphaGo resigned nearly five hours into the game, the Korean press cheered. They cheered even louder when Lee Sedol walked into the post-game press conference. "Because I lost three matches and then was able to get one single win, this win is so valuable that I wouldn't exchange it for anything in the world," he said through an interpreter, fueling still more cheers. "That's because of the cheers and the encouragement that you all have shown me." But another significant moment arrived at the very end of the press conference, when the Korean unexpectedly turned towards Demis Hassabis and David Silver, two researchers from DeepMind, the London-based Google AI lab that built AlphaGo, with an unexpected question. In Game Four, Lee Sedol had won playing the white stones while AlphaGo played black. In other words, AlphaGo made the first move, and he played second. Until changes to the rules of Go in the early 20th century, playing second was a disadvantage. But now, if you play second, you receive a sizable head start in points, and that disadvantage goes away. "It's even," says Andrew Jackson of the US Go Association, who has been broadcasting an online commentary during the match. This week, however, playing second suited Lee Sedol. His best effort prior to Game Four was in Game Two, when he also played white. And during the press conference following Game Four, the Korean explicitly said that AlphaGo was weaker when it played first and he played second. "It struggled more when it was holding black," he said. For Game Five, under the official rules of the match, the two opponents were set to randomly choose who would play first and who would play second. But then came that moment at the end of the press conference following his victory in Game Four. Lee Sedol turned towards Hassabis and Silver and asked if he could play black in Game Five. To wit, he was asking for the bigger challenge. He was asking for the hurdle he still hasn't cleared. 
"I really do hope I can win with black," he said, "because winning with black is much more valuable." Hassabis and Silver conferred---ever so briefly---and then granted his wish. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Geordie Wood for WIRED There was more applause from the international press. Granted, this applause was led by me. But, well, it was another wonderful moment from the Korean grandmaster. And it lends some added spice to the fifth and final game. This is no dead rubber. It's not just about Lee Sedol clawing back to within one game. It's about Lee Sedol showing that he can beat AlphaGo no matter which stone he holds. If he can do that, the machine's match victory isn't quite so complete. How Google’s AI Viewed the Move No Human Could Understand Go Grandmaster Lee Sedol Grabs Consolation Win Against Google’s AI Google’s AI Takes Historic Match Against Go Champ With Third Straight Win The Sadness and Beauty of Watching Google’s AI Play Go Google’s AI Wins Pivotal Second Game in Match With Go Grandmaster Google’s AI Wins First Game in Historic Match With Go Champion Yes, if Lee Sedol wins, there will be talk of a rematch. But that will by no means favor the Korean. The trick with AlphaGo is that it's powered by machine learning---technologies that allow machines to learn tasks on their own. Google's creation beat European Go champion Fan Hui in a closed-door match this past October. After Hassabis, Silver, and their team continued to retrain the system over the past five months, its skill level rose significantly. Before a rematch, that level would rise yet again. Game Five is, in a way, the last frontier. And it will by no means be easy for Lee Sedol. Yes, he now has the advantage of having watched AlphaGo play two games with the white stones. So he has more experience to draw from. And yes, the pressure to win the whole match is now off, as it was in Game Four. But clearly, AlphaGo is stronger when playing white. Just before Game Three, I asked David Silver if AlphaGo played differently when it played one color as opposed to the other. "I think it's hard to say," he told me. "I would have to defer judgment to a pro player on that." Though he has helped build a machine that plays Go at professional level, he is still an amateur and feels he can't really judge the play of the machine. Well, the best pro player to defer to is Lee Sedol, who clearly thinks that AlphaGo struggles when playing black. And the Korean has chosen the opposite scenario for Game Five. That indeed deserves a cheer. But the cheers will be far louder if he can grab a win from this position of weakness. Whatever happens tonight, the bigger contest is by no means over. Senior Writer X Topics AlphaGo artificial intelligence deep learning DeepMind Enterprise Google Will Knight Steven Levy Steven Levy Paresh Dave Will Knight Khari Johnson Will Knight Peter Guest Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. 
"
1,202
2018
"How to Teach Artificial Intelligence Some Common Sense | WIRED"
"https://www.wired.com/story/how-to-teach-artificial-intelligence-common-sense"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Clive Thompson Business How to Teach Artificial Intelligence Some Common Sense We’ve spent years feeding neural nets vast amounts of data, teaching them to think like human brains. They’re crazy-smart, but they have absolutely no common sense. What if we’ve been doing it all wrong? Beth Holzer Save this story Save Save this story Save Application Games End User Research Big company Sector Research Technology Neural Network Five years ago, the coders at DeepMind, a London-based artificial intelligence company, watched excitedly as an AI taught itself to play a classic arcade game. They’d used the hot technique of the day, deep learning, on a seemingly whimsical task: mastering Breakout , 1 the Atari game in which you bounce a ball at a wall of bricks, trying to make each one vanish. 1 Steve Jobs was working at Atari when he was commissioned to create 1976’s Breakout , a job no other engineer wanted. He roped his friend Steve Wozniak, then at Hewlett-­Packard, into helping him. Deep learning is self-education for machines; you feed an AI huge amounts of data, and eventually it begins to discern patterns all by itself. In this case, the data was the activity on the screen—blocky pixels representing the bricks, the ball, and the player’s paddle. The DeepMind AI, a so-called neural network made up of layered algorithms, wasn’t programmed with any knowledge about how Breakout works, its rules, its goals, or even how to play it. The coders just let the neural net examine the results of each action, each bounce of the ball. Where would it lead? To some very impressive skills, it turns out. During the first few games, the AI flailed around. But after playing a few hundred times, it had begun accurately bouncing the ball. By the 600th game, the neural net was using a more expert move employed by human Breakout players, chipping through an entire column of bricks and setting the ball bouncing merrily along the top of the wall. “That was a big surprise for us,” Demis Hassabis , CEO of DeepMind, said at the time. “The strategy completely emerged from the underlying system.” The AI had shown itself capable of what seemed to be an unusually subtle piece of humanlike thinking, a grasping of the inherent concepts behind Breakout. Because neural nets loosely mirror the structure of the human brain, the theory was that they should mimic, in some respects, our own style of cognition. This moment seemed to serve as proof that the theory was right. December 2018. Subscribe to WIRED. Illustration: Axis of Strength Then, last year, computer scientists at Vicarious , an AI firm in San Francisco, offered an interesting reality check. They took an AI like the one used by DeepMind and trained it on Breakout. It played great. But then they slightly tweaked the layout of the game. They lifted the paddle up higher in one iteration; in another, they added an unbreakable area in the center of the blocks. A human player would be able to quickly adapt to these changes; the neural net couldn’t. The seemingly supersmart AI could play only the exact style of Breakout it had spent hundreds of games mastering. 
It couldn’t handle something new. “We humans are not just pattern recognizers,” Dileep George, a computer scientist who cofounded Vicarious, tells me. “We’re also building models about the things we see. And these are causal models—we understand about cause and effect.” Humans engage in reasoning, making logi­cal inferences about the world around us; we have a store of common-sense knowledge that helps us figure out new situations. When we see a game of Breakout that’s a little different from the one we just played, we realize it’s likely to have mostly the same rules and goals. The neural net, on the other hand, hadn’t understood anything about Breakout. All it could do was follow the pattern. When the pattern changed, it was helpless. The A.I. Issue The A.I. Issue Tom Simonite The A.I. Issue Jessi Hempel The A.I. Issue Shaun Raviv Deep learning is the reigning monarch of AI. In the six years since it exploded into the mainstream, it has become the dominant way to help machines sense and perceive the world around them. It powers Alexa’s speech recognition , Waymo’s self-driving cars , and Google’s on-the-fly translations. Uber is in some respects a giant optimization problem, using machine learning to figure out where riders will need cars. Baidu , the Chinese tech giant, has more than 2,000 engineers cranking away on neural net AI. For years, it seemed as though deep learning would only keep getting better, leading inexorably to a machine with the fluid, supple intelligence of a person. But some heretics argue that deep learning is hitting a wall. They say that, on its own, it’ll never produce generalized intelligence, because truly humanlike intelligence isn’t just pattern recognition. We need to start figuring out how to imbue AI with everyday common sense, the stuff of human smarts. If we don’t, they warn, we’ll keep bumping up against the limits of deep learning, like visual-recognition systems that can be easily fooled by changing a few inputs, making a deep-learning model think a turtle is a gun. But if we succeed, they say, we’ll witness an explosion of safer, more useful devices—health care robots that navigate a cluttered home, fraud detection systems that don’t trip on false positives, medical breakthroughs powered by machines that ponder cause and effect in disease. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But what does true reasoning look like in a machine? And if deep learning can’t get us there, what can? Beth Holzer Gary Marcus is a pensive, bespectacled 48-year-old professor of psychology and neuroscience at New York University, and he’s probably the most famous apostate of orthodox deep learning. Marcus first got interested in artificial intelligence in the 1980s and ’90s, when neural nets were still in their experimental phase, and he’s been making the same argument ever since. “It’s not like I came to this party late and want to pee on it,” Marcus told me when I met him at his apartment near NYU. (We are also personal friends.) “As soon as deep learning erupted, I said ‘This is the wrong direction, guys!’ ” Back then, the strategy behind deep learning was the same as it is today. Say you wanted a machine to teach itself to recognize daisies. 
First you’d code some algorithmic “neurons,” connecting them in layers like a sandwich (when you use several layers, the sandwich gets thicker or deep—hence “deep” learning). You’d show an image of a daisy to the first layer, and its neurons would fire or not fire based on whether the image resembled the examples of daisies it had seen before. The signal would move on to the next layer, where the process would be repeated. Eventually, the layers would winnow down to one final verdict. At first, the neural net is just guessing blindly; it starts life a blank slate, more or less. The key is to establish a useful feedback loop. Every time the AI misses a daisy, that set of neural connections weakens the links that led to an incorrect guess; if it’s successful, it strengthens them. Given enough time and enough daisies, the neural net gets more accurate. It learns to intuit some pattern of daisy-­ness that lets it detect the daisy (and not the sunflower or aster) each time. As the years went on, this core idea—start with a naive network and train by repetition—was improved upon and seemed useful nearly anywhere it was applied. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But Marcus was never convinced. For him, the problem is the blank slate: It assumes that humans build their intelligence purely by observing the world around them, and that machines can too. But Marcus doesn’t think that’s how humans work. He walks the intellectual path laid down by Noam Chomsky, 2 who argued that humans are born wired to learn, programmed to master language and interpret the physical world. 2 In 1975 the psycholo­gist Jean Piaget and the linguist Noam Chomsky met in France for what would prove to be a historic debate. Grossly simplified, Piaget argued that human brains are blank-slate self-­learning machines, and Chomsky that they are endowed with some preprogrammed smarts. For all their supposed braininess, he notes, neural nets don’t appear to work the way human brains do. For starters, they’re much too data-hungry. In most cases, each neural net requires thousands or millions of examples to learn from. Worse, each time you want a neural net to recognize a new type of item, you have to start from scratch. A neural net trained to recognize only canaries isn’t of any use in recognizing, say, birdsong or human speech. “We don’t need massive amounts of data to learn,” Marcus says. His kids didn’t need to see a million cars before they could recognize one. Better yet, they can generalize; when they see a tractor for the first time, they understand that it’s sort of like a car. They can also engage in counterfactuals. Google Translate can map the French equivalent of the English sentence “The glass was pushed, so it fell off the table.” But it doesn’t know what the words mean, so it couldn’t tell you what would happen if the glass weren’t pushed. Humans, Marcus notes, grasp not just the patterns of grammar but the logic behind it. You could give a young child a fake verb like pilk , and she’d likely be able to reason that the past tense would be pilked. She hasn’t seen that word before, of course. She hasn’t been “trained” on it. 
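To make the daisy-classifier feedback loop described above concrete, here is a minimal sketch in Python. It is not DeepMind's, Google's, or anyone else's production code: the synthetic eight-number "images," the 8-to-16-to-1 layer sizes, and the learning rate are all invented for illustration, and a real system would work on raw pixels with far larger networks.

import numpy as np

rng = np.random.default_rng(0)

# Fake data: 200 "images" reduced to 8 numeric features each.
# "Daisies" cluster around +0.5, everything else around -0.5.
X = np.vstack([rng.normal(0.5, 1.0, (100, 8)),
               rng.normal(-0.5, 1.0, (100, 8))])
y = np.concatenate([np.ones(100), np.zeros(100)])

# The layered "sandwich" of neurons: an 8 -> 16 -> 1 stack of weights.
W1 = rng.normal(0, 0.1, (8, 16))
W2 = rng.normal(0, 0.1, (16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer responds based on the layer before it.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2).ravel()          # final verdict: probability of "daisy"

    # Feedback loop: the gradient step nudges connections that led to wrong
    # guesses weaker and connections that led to right guesses stronger.
    err = (p - y) / len(y)
    grad_W2 = h.T @ err[:, None]
    grad_W1 = X.T @ ((err[:, None] @ W2.T) * (1 - h**2))
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

predictions = sigmoid(np.tanh(X @ W1) @ W2).ravel() > 0.5
print(f"training accuracy after 2000 passes: {(predictions == y).mean():.2f}")

The same loop, scaled up by many orders of magnitude, is what "start with a naive network and train by repetition" means in practice.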
She has just intuited some logic about how language works and can apply it to a new situation. “These deep-learning systems don’t know how to integrate abstract knowledge,” says Marcus, who founded a company that created AI to learn with less data (and sold the company to Uber in 2016). Earlier this year, Marcus published a white paper on arXiv , arguing that, without some new approaches, deep learning might never get past its current limitations. What it needs is a boost—rules that supplement or are built in to help it reason about the world. Beth Holzer Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Oren Etzioni is a smiling bear of a guy. A computer scientist who runs the Allen Institute for Artificial Intelligence in Seattle, he greets me in his bright office wearing jeans and a salmon-­colored shirt, ushering me in past a whiteboard scrawled with musings about machine intelligence. (“DEFINE SUCCESS,” “WHAT’S THE TASK?”) Outside, in the sun-drenched main room of the institute, young AI researchers pad around sylphlike, headphones attached, quietly pecking at keyboards. Etzioni and his team are working on the common-sense problem. He defines it in the context of two legendary AI moments—the trouncing of the chess grandmaster Garry Kasparov 3 by IBM’s Deep Blue in 1997 and the equally shocking defeat of the world’s top Go player by DeepMind’s AlphaGo last year. (Google bought DeepMind in 2014.) 3 In 1996, Kasparov—then the best chess player in the world—beat Deep Blue. During a rematch a year later, Kasparov surrendered after 19 moves. He later told a reporter: “I’m a human being. When I see something that is well beyond my understanding, I’m afraid.” “With Deep Blue we had a program that would make a superhuman chess move—while the room was on fire,” Etzioni jokes. “Right? Completely lacking context. Fast-forward 20 years, we’ve got a computer that can make a superhuman Go move—while the room is on fire.” Humans, of course, do not have this limitation. His team plays weekly games of bughouse chess, and if a fire broke out the humans would pull the alarm and run for the doors. Humans, in other words, possess a base of knowledge about the world (fire burns things) mixed with the ability to reason about it (you should try to move away from an out-of-control fire). For AI to truly think like people, we need to teach it the stuff that everyone knows, like physics (balls tossed in the air will fall) or the relative sizes of things (an elephant can’t fit in a bathtub). Until AI possesses these basic concepts, Etzioni figures, it won’t be able to reason. With an infusion of hundreds of millions of dollars from Paul Allen , 4 Etzioni and his team are trying to develop a layer of common-sense reasoning to work with the existing style of neural net. (The Allen Institute is a nonprofit, so everything they discover will be published, for anyone to use.) 4 Microsoft cofounder and philanthropist Paul Allen donated billions to science, climate, and health research, as well as to Seattle causes. He died of complications from cancer on October 15 at age 65. The first problem they face is answering the question, What is common sense? 
Etzioni describes it as all the knowledge about the world that we take for granted but rarely state out loud. He and his colleagues have created a set of benchmark questions that a truly reasoning AI ought to be able to answer: If I put my socks in a drawer, will they be there tomorrow? If I stomp on someone’s toe, will they be mad? One way to get this knowledge is to extract it from people. Etzioni’s lab is paying crowdsourced humans on Amazon Mechanical Turk to help craft common-sense statements. The team then uses various machine-learning techniques—some old-school statistical analyses, some deep-learning neural nets—to draw lessons from those statements. If they do it right, Etzioni believes they can produce reusable Lego bricks of computer reasoning: One set that understands written words, one that grasps physics, and so on. Yejin Choi, one of Etzioni’s leading common-­sense scientists, has led several of these crowdsourced efforts. In one project, she wanted to develop an AI that would understand the intent or emotion implied by a person’s actions or statements. She started by examining thousands of online stories, blogs, and idiom entries in Wiktionary and extracting “phrasal events,” such as the sentence “Jeff punches Roger’s lights out.” Then she’d anonymize each phrase—“Person X punches Person Y’s lights out”—and ask the Turkers to describe the intent of Person X: Why did they do that? When she had gathered 25,000 of these marked-up sentences, she used them to train a machine-learning system to analyze sentences it had never seen before and infer the emotion or intent of the subject. At best, the new system worked only half the time. But when it did, it evinced some very humanlike perception: Given a sentence like “Oren cooked Thanksgiving dinner,” it predicted that Oren was trying to impress his family. “We can also reason about others’ reactions, even if they’re not mentioned,” Choi notes. “So X’s family probably feel impressed and loved.” Another system her team built used Turkers to mark up the psychological states of people in stories; the resulting system could also draw some sharp inferences when given a new situation. It was told, for instance, about a music instructor getting angry at his band’s lousy performance and that “the instructor was furious and threw his chair.” The AI predicted that the musicians would “feel fear afterwards,” even though the story doesn’t explicitly say so. Choi, Etzioni, and their colleagues aren’t abandoning deep learning. Indeed, they regard it as a very useful tool. But they don’t think there is a shortcut to the laborious task of coaxing people to explicitly state the weird, invisible, implied knowledge we all possess. Deep learning is garbage in, garbage out. Merely feeding a neural net tons of news articles isn’t enough, because it wouldn’t pick up on the unstated knowledge, the obvious stuff that writers didn’t bother to mention. As Choi puts it, “People don’t say ‘My house is bigger than me.’ ” To help tackle this problem, she had the Turkers analyze the physical relationships implied by 1,100 common verbs, such as “X threw Y.” That, in turn, allowed for a simple statistical model that could take the sentence “Oren threw the ball” and infer that the ball must be smaller than Oren. 
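As a rough illustration of the kind of lesson that last step can yield, here is a toy version of verb-level size inference. It is not the Allen Institute's actual model; the little verb table below is an invented stand-in for tendencies a real system would estimate statistically from thousands of annotated sentences.

# Toy verb -> size-relation table; a real system would learn these
# tendencies from crowdsourced annotations rather than hand-code them.
SIZE_RELATION = {
    "threw": "subject_larger",    # things you throw tend to be smaller than you
    "carried": "subject_larger",
    "entered": "object_larger",   # things you enter tend to be bigger than you
}

def infer_size(subject, verb, obj):
    relation = SIZE_RELATION.get(verb)
    if relation == "subject_larger":
        return f"{obj} is probably smaller than {subject}"
    if relation == "object_larger":
        return f"{obj} is probably bigger than {subject}"
    return "no size information for this verb"

print(infer_size("Oren", "threw", "the ball"))    # the ball is probably smaller than Oren
print(infer_size("Oren", "entered", "the house")) # the house is probably bigger than Oren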
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Another challenge is visual reasoning. Aniruddha Kembhavi, another of Etzioni’s AI scientists, shows me a virtual robot wandering around an onscreen house. Other Allen Institute scientists built the Sims -like house, filling it with everyday items and realistic physics—kitchen cupboards full of dishes, couches that can be pushed around. Then they designed the robot, which looks like a dark gray garbage canister with arms, and told it to hunt down certain items. After thousands of tasks, the neural net gains a basic grounding in real-life facts. “What this agent has learned is, when you ask it ‘Do I have tomatoes?’ it doesn’t go and open all the cabinets. It prefers to open the fridge,” Kembhavi says. “Or if you say ‘Find me my keys,’ it doesn’t try to pick up the television. It just looks behind the television. It has learned that TVs aren’t usually picked up.” Etzioni and his colleagues hope that these various components—Choi’s language reasoning, the visual thinking, other work they’re doing on getting an AI to grasp textbook science information—can all eventually be combined. But how long will it take, and what will the final products look like? They don’t know. The common-sense systems they’re building still make mistakes, sometimes more than half the time. Choi estimates she’ll need around a million crowdsourced human statements as she trains her various language-parsing AIs. Building common sense, it would seem, is uncommonly hard. There are other pathways to making machines that reason, and they’re even more labor-intensive. For example, you could simply sit down and write out, by hand, all the rules that tell a machine how the world works. This is how Doug Lenat’s Cyc project works. For 34 years, Lenat has employed a team of engineers and philosophers to code 25 million rules of general common sense, like “water is wet” or “most people know the first names of their friends.” This lets Cyc deduce things: “Your shirt is wet, so you were probably in the rain.” The advantage is that Lenat has exquisite control over what goes into Cyc’s database; that isn’t true of crowdsourced knowledge. Brute-force, handcrafted AI has become unfashionable in the world of deep learning. That’s partly because it can be “brittle”: Without the right rules about the world, the AI can get flummoxed. This is why scripted chatbots are so frustrating; if they haven’t been explicitly told how to answer a question, they have no way to reason it out. Cyc is enormously more capable than a chatbot and has been licensed for use in health care systems, financial services, and military projects. But the work is achingly slow, and it’s expensive. Lenat says it has cost around $200 million to develop Cyc. 
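For contrast, here is what hand-coded rules plus a trivial inference loop look like, in the spirit of the wet-shirt example above. This is a deliberately tiny sketch, nothing like Cyc's actual engine or its 25 million rules, and the facts and rule names are invented.

# Each rule says: if all of these facts hold, conclude the new fact.
RULES = [
    ({"shirt_is_wet"}, "person_was_probably_in_rain"),
    ({"person_was_probably_in_rain"}, "person_was_probably_outside"),
]

def forward_chain(facts):
    """Apply the rules repeatedly until no new conclusions can be drawn."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(sorted(forward_chain({"shirt_is_wet"})))
# ['person_was_probably_in_rain', 'person_was_probably_outside', 'shirt_is_wet']

The brittleness described above is visible even at this scale: a situation the rules never anticipated, such as a shirt soaked by a water balloon, simply falls through the system.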
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But a bit of hand coding could be how you replicate some of the built-in knowledge that, according to the Chomskyite view, human brains possess. That’s what Dileep George and the Vicarious researchers did with Breakout. To create an AI that wouldn’t get stumped by changes to the layout of the game, they abandoned deep learning and built a system that included hard-coded basic assumptions. Without too much trouble, George tells me, their AI learned “that there are objects, and there are interactions between objects, and that the motion of one object can be causally explained between the object and something else.” As it played Breakout , the system developed the ability to weigh different courses of action and their likely outcomes. This worked in reverse too. If the AI wanted to break a block in the far left corner of the screen, it reasoned to put the paddle in the far right corner. Crucially, this meant that when Vicarious changed the layout of the game—adding new bricks or raising the paddle—the system compensated. It appeared to have extracted some general understanding about Breakout itself. Granted, there are trade-offs in this type of AI engineering. It’s arguably more painstaking to craft and takes careful planning to figure out precisely what foreordained logic to feed into the system. It’s also hard to strike the right balance of speed and accuracy when designing a new system. George says he looks for the minimum set of data “to put into the model so it can learn quickly.” The fewer assumptions you need, the more efficiently the machine will make decisions. Once you’ve trained a deep-learning model to recognize cats, you can show it a Russian blue it has never seen and it renders the verdict—it’s a cat!—almost instantaneously. Having processed millions of photos, it knows not only what makes a cat a cat but also the fastest way to identify one. In contrast, Vicarious’ style of AI is slower, because it’s actively making logical inferences as it goes. When the Vicarious AI works well, it can learn from much less data. George’s team created an AI to bust captchas , 5 those “I’m not a robot” obstacles online, by recognizing characters in spite of their distorted, warped appearance. 5 Captcha stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” It originated at Carnegie Mellon University in 2000; Yahoo was the first big company to make its use commonplace. Much as with the Breakout system, they endowed their AI with some abilities up front, such as knowledge that helps it discern the likely edges of characters. With that bootstrapping in place, they only needed to train the AI on 260 images before it learned to break captchas with 90.4 percent accuracy. In contrast, a neural net needed to be trained on more than 2.3 million images before it could break a captcha. Others are building common-sense-like structure into neural nets in different ways. Two researchers at DeepMind, for instance, recently created a hybrid system—part deep learning, part more traditional techniques—known as inductive logic programming. The goal was to produce something that could do mathematical reasoning. 
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg They trained it on the children’s game fizz-buzz, in which you count upward from 1, saying “fizz” if a number is divisible by 3 and “buzz” if it is divisible by 5. A regular neural net would be able to do this only for numbers it had seen before; train it up to 100 and it would know that 99 is “fizz” and 100 is “buzz.” But it wouldn’t know what to do with 105. In contrast, the hybrid DeepMind system seemed to understand the rule and went past 100 with no problem. Edward Grefenstette, one of the DeepMind coders who built the hybrid, says, “You can train systems that will generalize in a way that deep-learning networks simply couldn’t on their own.” Beth Holzer Yann LeCun, a deep-learning pioneer and the current head of Facebook’s AI research wing, agrees with many of the new critiques of the field. He acknowledges that it requires too much training data, that it can’t reason, that it doesn’t have common sense. “I’ve been basically saying this over and over again for the past four years,” he reminds me. But he remains steadfast that deep learning, properly crafted, can provide the answer. He disagrees with the Chomskyite vision of human intelligence. He thinks human brains develop the ability to reason solely through interaction, not built-in rules. “If you think about how animals and babies learn, there’s a lot of things that are learned in the first few minutes, hours, days of life that seem to be done so fast that it looks like they are hardwired,” he notes. “But in fact they don’t need to be hardwired, because they can be learned so quickly.” In this view, to figure out the physics of the world, a baby just moves its head around, data-crunches the incoming imagery, and concludes that, hey, depth of field is a thing. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Still, LeCun admits it’s not yet clear which routes will help deep learning get past its humps. It might be “adversarial” neural nets, a relatively new technique in which one neural net tries to fool another neural net with fake data—forcing the second one to develop extremely ­subtle internal representations of pictures, sounds, and other inputs. The advantage here is that you don’t have the “data hungriness” problem. You don’t need to collect millions of data points on which to train the neural nets, because they’re learning by studying each other. (Apocalyptic side note: A similar method is being used to create those profoundly troubling “deepfake” videos in which someone appears to be saying or doing something they are not.) I met LeCun at the offices of Facebook’s AI lab in New York. Mark Zuckerberg recruited him in 2013, with the promise that the lab’s goal would be to push the limits of ambitious AI, not just produce minor tweaks for Facebook’s products. Like an academic lab, LeCun and his researchers publish their work for others to access. 
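The fizz-buzz gap described earlier is easy to see in miniature. The toy below is not DeepMind's hybrid system; it just contrasts a learner reduced to a lookup of the labels it was trained on (1 through 100) with an explicit rule that extrapolates past them.

def fizzbuzz_rule(n):
    """The explicit rule: divisible by 3 -> fizz, by 5 -> buzz, both -> fizzbuzz."""
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

# A pure pattern-matcher, boiled down to its essence: answers it has already seen.
memorized = {n: fizzbuzz_rule(n) for n in range(1, 101)}

for n in (99, 100, 105):
    print(n, "memorized:", memorized.get(n, "???"), "| rule:", fizzbuzz_rule(n))
# 105 lies outside the "training set": the lookup has nothing, the rule says fizzbuzz.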
LeCun, who retains the rich accent of his native France and has a Bride of Frankenstein shock of white in his thick mass of dark hair, stood at a whiteboard energetically sketching out theories of possible deep-learning advances. On the facing wall was a set of gorgeous paintings from Stanley Kubrick’s 2001: A Space Odyssey —the main spaceship floating in deep space, the wheel-like ship orbiting Earth. “Oh, yes,” LeCun said, when I pointed them out; they were reprints of artwork Kubrick commissioned for the movie. It was weirdly unsettling to discuss humanlike AI with those images around, because of course HAL 9000 , 6 the humanlike AI in 2001 , turns out to be a highly efficient murderer. 6 HAL was originally supposed to be voiced by Martin Balsam, an actor with a thick Bronx accent. After recording, however, director Stanley Kubrick decided Balsam sounded “too colloquially American.” He was replaced by Canadian actor Douglas Rain. And this pointed to a deeper philosophical question that floats over the whole debate: Is smarter AI even a good idea? Vicarious’ system cracked captcha, but the whole point of captcha is to prevent bots from impersonating humans. Some AI thinkers worry that the ability to talk to humans and understand their psychology could make a rogue AI incredibly dangerous. Nick Bostrom 7 at the University of Oxford has sounded the alarm about the dangers of creating a “superintelligence,” an AI that self-improves and rapidly outstrips humanity, able to outthink and outflank us in every way. (One way he suggests it might amass control is by manipulating people—something for which possessing a “theory of mind” would be quite useful.) 7 In 2003, Bostrom published the now-famous paper-clip warning about superintelligence: “A well-meaning team of programmers [could] make a big mistake in designing its goal system. This could result … in a super­intelligence whose top goal is the manufacturing of paper clips, with the consequence that it starts transforming first all of Earth and then increasing portions of space into paper-clip manufacturing facilities.” Elon Musk is sufficiently convinced of this danger that he has funded OpenAI, an organization dedicated to the notion of safe AI. This future doesn’t keep Etzioni up at night. He’s not worried about AI becoming maliciously superintelligent. “We’re worried about something taking over the world,” he scoffs, “that can’t even on its own decide to play chess again.” It’s not clear how an AI would develop a desire to do so or what that desire would look like in software. Deep learning can conquer chess, but it has no inborn will to play. What does concern him is that current AI is woefully inept. So while we might not be creating HAL with a self-preserving intelligence, an “inept AI attached to deadly weapons can easily kill,” he says. This is partly why Etzioni is so determined to give AI some common sense. Ultimately, he argues, it will make AI safer; the idea that humanity shouldn’t be wholesale slaughtered is, of course, arguably a piece of common-­sense knowledge itself. (Part of the Allen Institute’s mandate is to make AI safer by making it more reasonable.) Related Stories wired25 Tom Simonite Artificial Intelligence Louise Matsakis WIRED Q&A Nicholas Thompson Etzioni notes that the dystopic sci-fi visions of AI are less risky than near-term economic displacement. 
The better AI gets at common sense, the more rapidly it’ll take over jobs that currently are too hard for mere pattern-matching deep learning: drivers, cashiers, managers, analysts of all stripes, and even (alas) journalists. But truly reasoning AI could wreak havoc even beyond the economy. Imagine how good political disinformation bots would be if they could use common-sense knowledge to appear indistinguishably human on Twitter or Facebook or in mass phone calls. Marcus agrees that reasoning AI will have dangers. But the upsides, he says, would be huge. AI that could reason and perceive like humans yet move at the speed of computers could revolutionize science, teasing out causal connections at a pace impossible for us alone. It could follow if-then chains and ponder counterfactuals, running mental experiments the way humans do, except with massive robotic knowledge. “We might finally be able to cure mental illness, for example,” Marcus adds. “AI might be able to understand these complex biological cascades of proteins that are involved in building brains and having them work correctly or not.” Sitting beneath the images from 2001, LeCun makes a bit of a heretical point himself. Sure, making artificial intelligence more humanlike helps AI to navigate our world. But directly replicating human styles of thought? It’s not clear that’d be useful. We already have humans who can think like humans; maybe the value of smart machines is that they are quite alien from us. “They will tend to be more useful if they have capabilities we don’t have,” he tells me. “Then they’ll become an amplifier for intelligence. So to some extent you want them to have a nonhuman form of intelligence ... You want them to be more rational than humans.” In other words, maybe it’s worth keeping artificial intelligence a little bit artificial. Clive Thompson (@pomeranian99) is a columnist for WIRED. This article appears in the December issue. "
1,203
2017
"James Damore’s Google Memo Gets Science All Wrong | WIRED"
"https://www.wired.com/story/the-pernicious-science-of-james-damores-google-memo"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Adam Rogers Science The Actual Science of James Damore’s Google Memo Hotlittlepotato Save this story Save Save this story Save In early August, a Google engineer named James Damore posted a document titled “ Google’s Ideological Echo Chamber ” to an internal online discussion group. His memo was a calm attempt to point out all the ways Google has gone wrong in making gender representation among its employees a corporate priority. And then, on August 5, the memo jumped the fence. Nobody else was calm about it. It wasn’t a screed or a rant, but, judging by his document, Damore clearly feels that some basic truths are getting ignored—silenced, even—by Google’s bosses. So in response, the engineer adopted a methodology at the core of Google’s culture: He went to look at the data. “Google’s Ideological Echo Chamber” wants to be a discussion of ideas about diversity through solid, ineluctable science. The core arguments run to this tune: Men and women have psychological differences that are a result of their underlying biology. Those differences make them differently suited to and interested in the work that is core to Google. Yet Google as a company is trying to create a technical, engineering, and leadership workforce with greater numbers of women than these differences can sustain, and it’s hurting the company. Damore further says that anyone who tries to talk about that paradox gets silenced—which runs counter to Google’s stated goal of valuing and being friendly to difference. And, maybe helping make his point a little, last Monday Google fired him. Damore is now on a media tour , saying he was fired illegally for speaking truth to power. Hashtag Fired4Truth! The problem is, the science in Damore’s memo is still very much in play, and his analysis of its implications is at best politically naive and at worst dangerous. The memo is a species of discourse peculiar to politically polarized times: cherry-picking scientific evidence to support a preexisting point of view. It’s an exercise not in rational argument but in rhetorical point scoring. And a careful walk through the science proves it. Psychology as a field has been trying to figure out the differences between men and women, if any, for more than a century— paging Dr. Freud , as the saying goes. The results of these efforts are ambiguous. And psychologists are still working on it. The science of difference is a mushball, and trying to understand differences among populations only makes it messier. Every cognitive or personality trait will have a wide distribution among a given population—sex, ethnicity, nationality, age, whatever—and those distributions may only vary slightly. Which means huge chunks of the population may overlap. For any given trait, men may be more different from each other than from women, let’s say. That said, Damore’s assertion that men and women think different is actually pretty uncontroversial, and he cites a paper to back it up, from a team led by David Schmitt, a psychologist at Bradley University in Illinois and director of the International Sexuality Description Project. 
The 2008 article, “Why Can’t a Man Be More Like a Woman? Sex Difference in Big Five Personality Traits Across 55 Cultures,” does indeed seem to show that women rate higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg As always, the issue is the extent of the difference (and what causes it—more on that in a bit). Also, as Damore himself notes: Google hires individuals, not populations. Damore argues that greater extraversion and agreeableness, on the whole, would make it harder for women to negotiate and stake out leadership positions in an organization, and that higher neuroticism would naturally lead to fewer women in high-stress jobs. The first-order criticism here is easy: Damore oversells the difference cited in the paper. As Schmitt tells WIRED via email, “These sex differences in neuroticism are not very large, with biological sex perhaps accounting for only 10 percent of the variance.” The other 90 percent, in other words, are the result of individual variation, environment, and upbringing. It is unclear to me that this sex difference would play a role in success within the Google workplace. David Schmitt, Bradley University A larger problem, though, is measuring the differences in the first place. Personality traits are nebulous, qualitative things, and psychologists still have a lot of different—often conflicting or contradictory—ways to measure them. In fact, the social sciences are rife with these kinds of disagreements, what sociologist Duncan Watts has called an “incoherency problem.” Very smart people studying the same things collect related, overlapping data and then say that data proves wildly different hypotheses, or fits into divergent theoretical frameworks. The incoherency problem makes it hard to know what social science is valid in a given situation. The impulse to apply those theories to explain human behavior is as strong as it is misguided. Women as a group score higher on neuroticism in Schmitt’s meta-analysis, sure, but he doesn’t buy that you can predict the population-level effects of that difference. “It is unclear to me that this sex difference would play a role in success within the Google workplace (in particular, not being able to handle stresses of leadership in the workplace. That’s a huge stretch to me),” writes Schmitt. So, yes, that’s the researcher Damore cites disagreeing with Damore. Damore does this over and over again, holding up social science that tries to quantify human variation to support his view of the world. In general, he notes, women prefer to work with people and men prefer to work with things—the implication being that Google is a more thing-oriented workplace, so it just makes sense that fewer women would want to work there. Again, the central assertion here is fairly uncontroversial. “On average—and I emphasize that, on average—men are more interested in thing-oriented occupations and fields, and that difference is actually quite large,” says Richard Lippa , a psychologist at Cal State Fullerton and another of the researchers who Damore cites. 
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But trying to use that data to explain gender disparities in the workplace is irrelevant at best. “I would assume that women in technical positions at Google are more thing-oriented than the average woman,” Lippa says. “But then an interesting question is, are they more thing-oriented than the average male Google employee? I don’t know the answer to that.” Semantics aren’t helping here. Is coding a thing- or people-oriented job? What about when you do it in a corporation with 72,000 people? When you’re managing a team of engineers? When you’re trying to marshal support for your proposed expenditure of person-hours versus someone else’s? Which is more thing-oriented, deep neural networks or database optimization? And maybe the most important question: How useful are psychological studies of the general population when you’re talking about Googlers? Damore essentially forecloses the possibility of changing sex roles and representation at Google—or anywhere, really—by asserting that not only are the differences between men and women significant but that they are at least in part intrinsic. Damore doesn’t assert that biology is the only factor in play, and no scientist does either. But how important biology is to psychology is—again—in heavy dispute. Here’s Damore’s take: “On average, men and women biologically differ in many ways.” Nothing to argue about here. If men and women didn’t differ biologically, it would make sexual reproduction very difficult indeed. Also, men and women differ in height (on average), bone mass (on average), and fat, muscle, and body hair distribution (on average). No one thinks those differences are socially constructed. Damore, though, is saying that differences in cognitive or personality traits—if they exist at all—have both social and biological origins. And those biological origins, he says, are exactly what scientists would predict from an evolutionary perspective. Evolutionary psychology and its forebear, sociobiology, are themselves problematic fields. Two decades ago evo-psych was all the rage. It’s essential argument: Males and females across species have faced different kinds of pressures on their ability to successfully reproduce—the mechanism, simplistically, through which evolution operates. Those pressures lead to different mating strategies for males and females, which in turn show up as biological and psychological differences—distinctions present in men and women today. The problem with that set of logical inferences is that it provides a convenient excuse to paint a veneer of shaky science onto “me Tarzan, you Jane” stereotypes. It’s the scientific equivalent of a lazy stand-up comedian joking about how all men dance like this —the idea that nature hardwires our differences. In fact, evolutionary biologists today race to point out that the nature-versus-nurture dichotomy is outdated. No serious scientist finds it to be a credible model. 
Related Stories Business Nitasha Tiku Business Klint Finley google Ashley Feinberg In 2005, Lawrence Summers, then president of Harvard, suggested publicly that women might not have as much “innate ability” as men to succeed in academic disciplines that require advanced mathematical abilities. In response, psychologists got together to assess more than 100 years of work and present a consensus statement about whether Summers was right. They concluded that a wide range of sociocultural forces contribute to sex difference in STEM achievement and ability, including family, neighborhood, school influences, training experiences, cultural practices, and, yes, some biological factors. When it comes to brain biology in particular, the authors wrote that “experience alters brain structures and functioning, so causal statements about brain differences and success in math and science are circular.” Most researchers today point to data that shows cognitive traits differ slightly on average between the sexes, but they change throughout an individual’s lifetime, influenced by a mix of genetic, epigenetic, and environmental (including social) factors. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg From birth, boys and girls receive different, gender-specific treatment, which can enhance or inhibit any innate differences. That certainly has an effect on the findings of psychology. The gap between girls and boys who say they want to go into the sciences is much more informed by stereotypes —on a survey of half a million people, 70 percent associated math with males—and cultural norms than by intrinsic ability. “From infancy, boys get footballs and girls get dolls, so is it that surprising? We’ve been socializing them. It doesn’t mean there’s anything innate,” says Janet Hyde , director of the Center for Research on Gender and Women at the University of Wisconsin. All these things change as culture changes. In 1990, Hyde published a meta-analysis on sex differences in mathematical performance among high school students and found significant deficits in girls’ abilities. When she did the same analysis in 2008, the difference had disappeared. In the 1980s, “girls in high school didn’t take as many years of math as boys did,” Hyde says. “Today that gap in course taking has closed. Girls take as many classes as boys do, and they’re scoring as well. What we once thought was a serious difference has disappeared.” There are areas where, on average, women excel and, on average, men excel, but everyone gets better with education. Diane Halpern, former president of the American Psychological Association And just as culture moves on, so too does biology. “The brain can change a lot in a matter of weeks,” says Diane Halpern , an author on that post-Summers study and of one of the central textbooks on cognitive sex differences. “That’s why we send children to school. There are areas where, on average, women excel and, on average, men excel, but everyone gets better with education. 
But it means we cannot know the influence of environmental versus biological variables, even at very young ages.” In other words, the science on math and science abilities says differences between sexes depend much more on external factors than sex in and of itself. And those external factors and their results can change over time. This is critical, because most of Damore’s memo seems to be talking about preferences , which is to say, rather than innate skill he means what women would rather be doing versus what men would rather be doing. In fact, one recurring finding in sex difference research is that in cultures seen as more egalitarian, differences in preferences between men and women become more pronounced. With more opportunity, says one hypothesis, men and women are more likely to follow their respective blisses. So when Damore does juke from preferences to abilities , it looks a little sneaky. Here’s what he writes: “I’m simply stating that the distribution of preferences and abilities of men and women may differ in part due to biological causes and that these differences may explain why we don’t have equal representation of women in tech and leadership,” he writes. Making the leap from personality differences to achievement differences would require citing at least some of the well-studied body of work we’ve mentioned here, which Damore ignored. With the next pivot, the memo gets more pernicious. Damore switches—again, subtly—from effects to causes. His interpretation of the science around preference and ability is arguable; on causation, though, he’s even rockier. According to Damore (and a lot of research), the biological factor that connects sex to cognitive abilities and personality traits is prenatal exposure to testosterone. Of all the high-stakes claims in sex-difference research, none is more important or more popular than the idea that hormones in the womb help give people stereotypically masculine or feminine interests. While they’re developing, males get a bigger dose of testosterone. “Among social psychologists there’s a consensus that prenatal testosterone does affect a lot of personality traits, in particular one’s interest in people versus things,” Damore said in an interview last week with Bloomberg’s Emily Chang. He also said it to pro-Trump YouTuber Stefan Molyneux, adding that hormonal exposure “explains a lot of differences in career choice.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Damore is probably wrong about this too. The most consistent findings linking prenatal testosterone to sex-linked behaviors come from about a dozen studies examining toy preferences among girls with a condition known as congenital adrenal hyperplasia, which causes the overproduction of sex hormones, including testosterone. CAH-affected girls tend to be less interested in dolls (substituting for people) and more interested in toys like trucks (things). But children with CAH have other variables. They’re often born with ambiguous genitalia and other grave medical conditions, and therefore have unusual rearing experiences. To get around this socialization issue, researchers from Emory University gave toys to young rhesus monkeys. 
When they saw that females preferred plush dolls and males preferred trucks, they concluded that these tendencies must be hard-wired into each sex. Squint hard at this result, because it presumes that juvenile rhesus monkeys see stuffed animals as monkeylike but “wheeled toys” as thinglike. But why would a monkey see a plush turtle as akin to self? And how would it know what a truck was or was not? Also: The male monkeys played with trucks. The females chose between the two about equally. The logic here walks a twisted path across the floor of the uncanny valley. Still, most hormone researchers agree that these differences are real. But that they’re directly linked to prenatal testosterone? Not so much. And to differences in career choice? “There’s 100 percent no consensus on that,” says Justin Carré, a psychologist at Nipissing University in Ontario. “The human literature on early androgen exposure is really very messy.” Damore needs scientific consensus to make his case—not just because of confirmation bias but because the memo goes on to argue that the left is just as guilty as the right when it comes to science denialism. He equates conservative tendencies to reject climate change and evolution (theories with an overwhelming scientific consensus behind them) with liberal refusals to accept differences in personality traits between the sexes and—in a quiet racist dog whistle—IQ, where the evidence is far, far weaker. Climbing to an even higher altitude, though, we might ask another question about Damore’s appeal to science: So what? Which is to say, what are we to do with not just the conclusions of the memo but also its implications? Damore is hardly the first person to use science to justify social norms or political preferences. Science has, too often in human history, been a tool for literal dehumanization as a rationale for oppression. It happened to people of African descent in America; to the poor of the Victorian era; to women in the years leading up to suffrage; and to Jews, people of nonbinary gender, Roma, people with disabilities, and so on in Nazi Germany. Historians try to wall off those ideas now—eugenics, phrenology, social Darwinism—but each, in its day, was just science. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg With hindsight you can see that those pursuits weren’t science, and you can aim those 20/20 lenses at Damore too. What he’s advocating is scientism—using undercooked research as coverage for answering oppression with a shrug. Science has, too often in human history, been a tool for literal dehumanization as a rationale for oppression. In that context, social science’s incoherency problem becomes disastrous. Throw the most red-state conservative physicist you can find into a room with a pinko-commie physicist and then toss in the latest data from the Large Hadron Collider. Mostly, the physicists will agree on which subatomic particles they can or can’t find. But even if you buy the research on psychological sex differences, the work on their biological or evolutionary basis is far from finished—leaving people free to cherry-pick results ready to mix into a manifesto. Just add outrage. Science must inform policy—social, corporate, whatever. 
The more solid the science, the more it can inform. (Why, hello, climate change data —you are terrifyingly real.) But when it comes to sex differences, Google—or any organization, really—will understandably want to create an environment where people feel secure, safe, and empowered to do their best work. It’s good ethics and good business. That’s what Damore seems to see as an overly politically correct culture that stifles dissent. If he’d poked harder at his own hypothesis—as everyone should when the behavioral sciences produce findings that helpfully reify society’s blunt, dumb guide rails—he would have found questions instead of answers. Interesting questions, for sure, but about as helpful as a Magic 8-Ball if you’re looking not for excuses to keep things as they are but mechanisms to make them better. Damore’s dissent, stripped of its shaky scientism, isn’t a serious conversation about human difference. It’s an attempt to make permanent a power dynamic that shouldn’t exist in the first place. If Google was, for Damore, an echo chamber, that’s because his was the only voice he was really willing to hear. "
1204
2018
"Study Revives Debate About Google's Role in Filter Bubbles | WIRED"
"https://www.wired.com/story/study-revives-debate-about-googles-filter-bubbles"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Nitasha Tiku Business Study Revives Debate About Google's Role in Filter Bubbles HOTLITTLEPOTATO Save this story Save Save this story Save Google says a very small percentage of its search results are personalized, a claim that has helped insulate the company from scrutiny over filter bubbles, especially compared with Facebook and YouTube, a Google subsidiary. But a new study from DuckDuckGo, a Google rival, found that users saw very different results when searching for terms such as “gun control,” “immigration,” and “vaccinations,” even after controlling for time and location. One participant saw a National Rifle Association video at the top of the results page for “gun control,” another saw Wikipedia at the top, while a third got the NRA video but no result from Wikipedia in any of the first 10 links. The study also found that most users saw roughly similar results whether they were logged in to Google, logged out, or searching in private browsing, also known as Incognito mode. If private browsing on Google were truly anonymous, the study’s authors contend, all private browsing results should be the same. DuckDuckGo’s conclusions are far from scientific. Only 87 individuals participated in the test. Each responded to a tweet from DuckDuckGo and sent screenshots of their results. Regardless of why results differed, variation in search results for political topics—particularly during an election year—underscores how users have little visibility into Google’s algorithms and don’t know whether, or how, the information they see is being filtered. DuckDuckGo Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg DuckDuckGo CEO Gabe Weinberg told WIRED that the study took place before President Trump and other Republicans criticized Google for alleged anti-conservative bias, an unsubstantiated and self-serving claim. Weinberg does not believe that Google is altering search results because of political bias. Rather, he says the goal of the study is to draw attention to Google’s overall political influence, whether it is intentional or not. “I think search results are politically biased just by the nature of tailoring them to your past history,” Weinberg says. Google says that if a user is logged out or searching in Incognito mode, it does not personalize results based on a user’s signed-in search history and does not use personal data. In those two modes, however, the results may be contextualized based on the session in that browser window. Google also shared a number of reasons that individuals who perform the same search query may see different results, including timing (for rapidly evolving news topics, it can vary by the second), the location of Google’s data centers, and localization of query results. 
In September, the company told WIRED that only 2 to 2.5 percent of results from searches that are typed into the search box are meaningfully personalized; Google says this happens most often when a search is ambiguous, such as searching for Barcelona as a city or a soccer team. The implication was that users should not worry that it is creating filter bubbles, which can entrench partisan divisiveness and skew access to information. For instance, if a search algorithm reflects personal preferences, a liberal user might see more results about gun reform while a conservative user might see more results about gun rights. (In a similar vein, YouTube’s recommendations algorithm, which rewards engagement, has been known to serve increasingly extreme content to keep people watching, unintentionally radicalizing users in the process.) Not long after Trump tweeted in August that Google’s results were “rigged” against conservatives, the company briefed reporters on changes to its search algorithm that began in December 2016—around the time Google received bad press over misinformation, hate speech, and other problematic content in its search results. A visual aid showed the before and after results for a search on “Did the Holocaust happen.” Prior to the changes, the first result came from Stormfront.org, a hub for neo-Nazis. In a search on Monday, the top result was from the US Holocaust Memorial Museum. Tuesday’s study is a follow-up on a similar test that DuckDuckGo conducted in 2012, looking at Google search results for Obama and Romney. The Wall Street Journal performed its own independent version of the study and found that Google often customized results for users who recently searched for “Obama” but not for users who had recently searched for “Romney.” At the time, Google told the paper the discrepancy was merely the result of the fact that more individuals searched for Obama’s name and then searched for topics, such as Iran, compared with people who searched for Romney’s name and then Iran. “The findings are among the latest examples of how mathematical formulas, rather than human judgments, influence more of the information that people encounter online,” The Journal wrote. "
1205
2017
"Defense Secretary James Mattis Envies Silicon Valley’s AI Ascent | WIRED"
"https://www.wired.com/story/james-mattis-artificial-intelligence-diux"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Defense Secretary James Mattis Envies Silicon Valley’s AI Ascent Secretary of Defense Jim Mattis waves as he walks to his vehicle after speaking at the Defense Innovation Unit Experimental in Mountain View, Aug. 10, 2017. Jeff Chiu/AP Save this story Save Save this story Save Defense Secretary James Mattis has a lot on his mind these days. North Korea , obviously. China's expanding claims on the South China sea. Afghanistan, Iraq, Syria. And, closer to home, the Pentagon lagging behind the tech industry in leveraging artificial intelligence. Mattis admitted to that concern Thursday during the Silicon Valley leg of a West Coast tour that includes visits to Amazon and Google. When WIRED asked Mattis if the US had ambitions to harness recent progress in AI for military purposes like those recently espoused by China, he said his department needed to do more with the technology. “It's got to be better integrated by the Department of Defense, because I see many of the greatest advances out here on the West Coast in private industry,” Mattis said. Mattis, speaking in Mountain View, a stone’s throw from Google’s campus, hopes the tech industry will help the Pentagon catch up. He was visiting the Defense Innovation Unit Experimental, an organization within the DoD started by his predecessor Ashton Carter in 2015 to make it easier for smaller tech companies to partner with the Department of Defense and the military. DIUx has so far sunk $100 million into 45 contracts, including with companies developing small autonomous drones that could explore buildings during military raids, and a tooth-mounted headset and microphone. Mattis said Thursday he wanted to see the organization increase the infusion of tech industry savvy into his department. “There’s no doubt in my mind DIUx will continue to exist; it will grow in its influence on the Department of Defense,” he said. The Pentagon has a long record of researching and deploying artificial intelligence and automation technology. But AI is rapidly progressing, and the most significant developments have come out of the commercial and academic spheres. Over the past five years, leading tech companies and their lavishly funded AI labs have sucked up ideas and talent from universities. They're now in a race to spin up the best new products and experimental projects. Google, for example, has recently used machine learning research to power up its automatic translation and cut data-center cooling bills. Waymo, Alphabet's autonomous-car company, uses AI in developing the technology in its self-driving vehicles. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Making smart use of artificial intelligence looks to be crucial to military advancement and dominance. 
Just last month, China’s State Council released a detailed strategy for artificial intelligence across the economy and in its military. China's strategic interest in AI led DIUx to prepare an internal report this year suggesting scrutiny and restrictions on Chinese investment in Silicon Valley companies. Texas senior senator John Cornyn has proposed legislation that could enable that policy. A recent Harvard report commissioned by the Office of the Director of National Intelligence found that AI-based technologies, like autonomous vehicles, are poised to make advanced militaries much more powerful—and possibly cause a transformation similar in scale to the advent of nuclear weapons. But the US does not have a public, high-level national or defense strategy for artificial intelligence in the same way as China—perhaps owing mostly to differences of political style. On Thursday, Mattis professed confidence that his department would figure out how to do more with AI, without offering specifics. “The bottom line is we’ll get better at integrating advances in AI that are being taken here in the Valley into the US military,” he said. There is another bottom line to consider. The Trump administration’s proposed budget would increase funding for DIUx, which might help fulfill Mattis' dreams of an AI acceleration. It also expands support to Pentagon research agency DARPA, which has many AI-related projects. But the White House’s budget proposal also includes cuts to the National Science Foundation, an agency that has long supported AI research, including work on artificial neural networks, the very technique that now has companies—and nations—suddenly so interested in the field's potential. "
1206
2019
"The Tricky Ethics of Google's Cloud Ambitions | WIRED"
"https://www.wired.com/story/google-needs-grow-cloud-business-carefully"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business The Tricky Ethics of Google's Cloud Ambitions Getty Images Save this story Save Save this story Save Application Cloud computing Ethics Company Alphabet Google Google’s attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company’s cloud computing conference, building on his predecessor’s strategy of emphasizing Google’s strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its—and its customers’—AI projects to ethical reviews. They have caused Google to turn away some business. “There have been things that we have said no to,” says Tracy Frey, director of AI strategy for Google Cloud, although she declines to say what. But this week, the company fueled criticism that those mechanisms can’t be trusted when it fumbled an attempt to introduce outside oversight over its AI development. Google’s ethics reviews tap a range of experts. Frey says product managers, engineers, lawyers, and ethicists assess proposed new services against Google’s AI principles. Some new products announced next week will come with features or limitations added as a result. Last year, that process led Google not to launch a facial recognition service, something rivals Microsoft and Amazon have done. This week, more than 70 AI researchers—including nine who work at Google— signed an open letter calling on Amazon to stop selling the technology to law enforcement. Frey says that tricky decisions over how—or whether—to release AI technology will become more common as the technology advances. In February, San Francisco research institute OpenAI said it would not release new software it created that is capable of generating surprisingly fluent text because it might be used maliciously. The episode was dismissed by some researchers as a stunt, but Frey says it provides a powerful example of the kind of restraint needed as AI technology gets more powerful. “We hope to be able to have that same courageous stance,” she says. Google said last year that it modified research on lip-reading software to minimize the risk of misuse. The technology could help the hard of hearing—or be used to infringe on privacy. Not everyone is convinced that Google itself can be trusted to make ethical decisions about its own technology and business. Google’s AI principles have been criticized as too vague and permissive. Weapons projects are banned, but military work is still allowed. The principles say Google will not pursue “technologies whose purpose contravenes widely accepted principles of international law and human rights,” but the company has been testing a search engine for China that, if launched, would have to perform political censorship. 
Since Google revealed its AI principles, the company has been dogged by questions about how they would be enforced without external oversight. Last week Google announced a panel of eight outsiders it said would help implement the principles. Late Thursday it said that the new Advanced Technology External Advisory Council was being shut down and that the company was “going back to the drawing board.” The U-turn came after thousands of Google employees signed a petition protesting the inclusion of Kay Coles James, president of conservative think tank the Heritage Foundation. She worked on President Trump’s transition team and has spoken against policies aimed at helping trans and LGBTQ people. As the controversy grew, one council member resigned and another, Oxford University philosopher Luciano Floridi, said Google had made a “grave error” in appointing James. Os Keyes, a researcher at the University of Washington who joined hundreds of outsiders in signing the Googlers’ petition protesting James’ inclusion, says the episode suggests Google cares more about currying political favor with conservatives than the impact of AI technology. “The idea of ‘responsible AI’ as practiced by Google is not actually responsible,” Keyes says. “They mean ‘not harmful, unless harm makes money.’” Anything that adds friction to new products or deals could heighten Kurian’s challenge. He took over at Google Cloud last year after the departure of Diane Greene, a storied engineer and executive who led a broad expansion of the unit after joining in 2016. Although Google’s cloud business made progress during Greene’s tenure, Amazon’s and Microsoft’s did too. Oppenheimer estimates that Google has 10 percent of the cloud market, well behind Amazon’s 45 percent and Microsoft’s 17 percent. Google is not the only big company talking more about AI ethics lately. Microsoft has its own internal ethical review process for AI deals and also says it has turned down some AI projects. Frey says such reviews don’t have to slow down a business and that Google’s ethical AI checkups can generate new business because of growing awareness of the risks that come with AI’s power. Google Cloud needs to encourage trust in AI to succeed in the long term, she says. “If that trust is broken at any point we run the risk of not being able to realize the important and valuable effects of AI being infused in enterprises around the world,” Frey says. "
1207
2019
"15 Months of Fresh Hell Inside Facebook | WIRED"
"https://www.wired.com/story/facebook-mark-zuckerberg-15-months-of-fresh-hell"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Nicholas Thompson Fred Vogelstein Business 15 Months of Fresh Hell Inside Facebook Play/Pause Button Pause In early 2018, Mark Zuckerberg set out to fix Facebook. Here's how that turned out. Adam Maida Save this story Save Save this story Save The streets of Davos, Switzerland, were iced over on the night of January 25, 2018, which added a slight element of danger to the prospect of trekking to the Hotel Seehof for George Soros’ annual banquet. The aged financier has a tradition of hosting a dinner at the World Economic Forum, where he regales tycoons, ministers, and journalists with his thoughts about the state of the world. That night he began by warning in his quiet, shaking Hungarian accent about nuclear war and climate change. Then he shifted to his next idea of a global menace: Google and Facebook. “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” he said. “The owners of the platform giants consider themselves the masters of the universe, but in fact they are slaves to preserving their dominant position ... Davos is a good place to announce that their days are numbered.” Across town, a group of senior Facebook executives, including COO Sheryl Sandberg and vice president of global communications Elliot Schrage, had set up a temporary headquarters near the base of the mountain where Thomas Mann put his fictional sanatorium. The world’s biggest companies often establish receiving rooms at the world’s biggest elite confab, but this year Facebook’s pavilion wasn’t the usual scene of airy bonhomie. It was more like a bunker—one that saw a succession of tense meetings with the same tycoons, ministers, and journalists who had nodded along to Soros’ broadside. Over the previous year Facebook’s stock had gone up as usual, but its reputation was rapidly sinking toward junk bond status. The world had learned how Russian intelligence operatives used the platform to manipulate US voters. Genocidal monks in Myanmar and a despot in the Philippines had taken a liking to the platform. Mid-level employees at the company were getting both crankier and more empowered, and critics everywhere were arguing that Facebook’s tools fostered tribalism and outrage. That argument gained credence with every utterance of Donald Trump, who had arrived in Davos that morning, the outrageous tribalist skunk at the globalists’ garden party. May 2019. Subscribe to WIRED. Frank J. Guzzone CEO Mark Zuckerberg had recently pledged to spend 2018 trying to fix Facebook. But even the company’s nascent attempts to reform itself were being scrutinized as a possible declaration of war on the institutions of democracy. Earlier that month Facebook had unveiled a major change to its News Feed rankings to favor what the company called “meaningful social interactions.” News Feed is the core of Facebook—the central stream through which flow baby pictures, press reports, New Age koans, and Russian-­made memes showing Satan endorsing Hillary Clinton. 
The changes would favor interactions between friends, which meant, among other things, that they would disfavor stories published by media companies. The company promised, though, that the blow would be softened somewhat for local news and publications that scored high on a user-driven metric of “trustworthiness.” Davos provided a first chance for many media executives to confront Facebook’s leaders about these changes. And so, one by one, testy publishers and editors trudged down Davos Platz to Facebook’s headquarters throughout the week, ice cleats attached to their boots, seeking clarity. Facebook had become a capricious, godlike force in the lives of news organizations; it fed them about a third of their referral traffic while devouring a greater and greater share of the advertising revenue the media industry relies on. And now this. Why? Why would a company beset by fake news stick a knife into real news? And what would Facebook’s algorithm deem trustworthy? Would the media executives even get to see their own scores? Facebook didn’t have ready answers to all of these questions; certainly not ones it wanted to give. The last one in particular—about trustworthiness scores—quickly inspired a heated debate among the company’s executives at Davos and their colleagues in Menlo Park. Some leaders, including Schrage, wanted to tell publishers their scores. It was only fair. Also in agreement was Campbell Brown, the company’s chief liaison with news publishers, whose job description includes absorbing some of the impact when Facebook and the news industry crash into one another. But the engineers and product managers back at home in California said it was folly. Adam Mosseri, then head of News Feed, argued in emails that publishers would game the system if they knew their scores. Plus, they were too unsophisticated to understand the methodology, and the scores would constantly change anyway. To make matters worse, the company didn’t yet have a reliable measure of trustworthiness at hand. Heated emails flew back and forth between Switzerland and Menlo Park. Solutions were proposed and shot down. It was a classic Facebook dilemma. The company’s algorithms embraid choices so complex and interdependent that it’s hard for any human to get a handle on it all. If you explain some of what is happening, people get confused. They also tend to obsess over tiny factors in huge equations. So in this case, as in so many others over the years, Facebook chose opacity. Nothing would be revealed in Davos, and nothing would be revealed afterward. The media execs would walk away unsatisfied. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg After Soros’ speech that Thursday night, those same editors and publishers headed back to their hotels, many to write, edit, or at least read all the news pouring out about the billionaire’s tirade. The words “their days are numbered” appeared in article after article. The next day, Sandberg sent an email to Schrage asking if he knew whether Soros had shorted Facebook’s stock. Far from Davos, meanwhile, Facebook’s product engineers got down to the precise, algorithmic business of implementing Zuckerberg’s vision. 
If you want to promote trustworthy news for billions of people, you first have to specify what is trustworthy and what is news. Facebook was having a hard time with both. To define trustworthiness, the company was testing how people responded to surveys about their impressions of different publishers. To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving “politics, crime, or tragedy.” That particular choice, which meant the algorithm would be less kind to all kinds of other news—from health and science to technology and sports—wasn’t something Facebook execs discussed with media leaders in Davos. And though it went through reviews with senior managers, not everyone at the company knew about it either. When one Facebook executive learned about it recently in a briefing with a lower-­level engineer, they say they “nearly fell on the fucking floor.” The confusing rollout of meaningful social interactions—marked by internal dissent, blistering external criticism, genuine efforts at reform, and foolish mistakes—set the stage for Facebook’s 2018. This is the story of that annus horribilis , based on interviews with 65 current and former employees. It’s ultimately a story about the biggest shifts ever to take place inside the world’s biggest social network. But it’s also about a company trapped by its own pathologies and, perversely, by the inexorable logic of its own recipe for success. Facebook’s powerful network effects have kept advertisers from fleeing, and overall user numbers remain healthy if you include people on Insta­gram, which Facebook owns. But the company’s original culture and mission kept creating a set of brutal debts that came due with regularity over the past 16 months. The company floundered, dissembled, and apologized. Even when it told the truth, people didn’t believe it. Critics appeared on all sides, demanding changes that ranged from the essential to the contradictory to the impossible. As crises multiplied and diverged, even the company’s own solutions began to cannibalize each other. And the most crucial episode in this story—the crisis that cut the deepest—began not long after Davos, when some reporters from The New York Times , The Observer/Guardian , and Britain’s Channel 4 News came calling. They’d learned some troubling things about a shady British company called Cambridge Analytica , and they had some questions. It was, in some ways, an old story. Back in 2014, a young academic at Cambridge University named Aleksandr Kogan built a personality questionnaire app called thisisyourdigitallife. A few hundred thousand people signed up, giving Kogan access not only to their Facebook data but also—because of Facebook’s loose privacy policies at the time—to that of up to 87 million people in their combined friend networks. Rather than simply use all of that data for research purposes, which he had permission to do, Kogan passed the trove on to Cambridge Analytica, a strategic consulting firm that talked a big game about its ability to model and manipulate human behavior for political clients. In December 2015, The Guardian reported that Cambridge Analytica had used this data to help Ted Cruz’s presidential campaign, at which point Facebook demanded the data be deleted. This much Facebook knew in the early months of 2018. The company also knew—because everyone knew—that Cambridge Analytica had gone on to work with the Trump campaign after Ted Cruz dropped out of the race. 
And some people at Facebook worried that the story of their company’s relationship with Cambridge Analytica was not over. One former Facebook communications official remembers being warned by a manager in the summer of 2017 that unresolved elements of the Cambridge Analytica story remained a grave vulnerability. No one at Facebook, however, knew exactly when or where the unexploded ordnance would go off. “The company doesn’t know yet what it doesn’t know yet,” the manager said. (The manager now denies saying so.) Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The company first heard in late February that the Times and The Observer/Guardian had a story coming, but the department in charge of formulating a response was a house divided. In the fall, Facebook had hired a brilliant but fiery veteran of tech industry PR named Rachel Whetstone. She’d come over from Uber to run communications for Facebook’s WhatsApp, Insta­gram, and Messenger. Soon she was traveling with Zuckerberg for public events, joining Sandberg’s senior management meetings, and making decisions—like picking which outside public relations firms to cut or retain—that normally would have rested with those officially in charge of Facebook’s 300-person communications shop. The staff quickly sorted into fans and haters. And so it was that a confused and fractious communications team huddled with management to debate how to respond to the Times and The Observer/Guardian reporters. The standard approach would have been to correct misinformation or errors and spin the company’s side of the story. Facebook ultimately chose another tack. It would front-run the press: dump a bunch of information out in public on the eve of the stories’ publication, hoping to upstage them. It’s a tactic with a short-term benefit but a long-term cost. Investigative journalists are like pit bulls. Kick them once and they’ll never trust you again. Facebook’s decision to take that risk, according to multiple people involved, was a close call. But on the night of Friday, March 16, the company announced it was suspending Cambridge Analytica from its platform. This was a fateful choice. “It’s why the Times hates us,” one senior executive says. Another communications official says, “For the last year, I’ve had to talk to reporters worried that we were going to front-run them. It’s the worst. Whatever the calculus, it wasn’t worth it.” The tactic also didn’t work. The next day the story—focused on a charismatic whistle-­blower with pink hair named Christopher Wylie—exploded in Europe and the United States. Wylie, a former Cambridge Analytica employee, was claiming that the company had not deleted the data it had taken from Facebook and that it may have used that data to swing the American presidential election. 
The first sentence of The Observer/Guardian ’s reporting blared that this was “one of the tech giant’s biggest ever data breaches” and that Cambridge Analytica had used the data “to build a powerful software program to predict and influence choices at the ballot box.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The story was a witch’s brew of Russian operatives, privacy violations, confusing data, and Donald Trump. It touched on nearly all the fraught issues of the moment. Politicians called for regulation; users called for boycotts. In a day, Facebook lost $36 billion in its market cap. Because many of its employees were compensated based on the stock’s performance, the drop did not go unnoticed in Menlo Park. To this emotional story, Facebook had a programmer’s rational response. Nearly every fact in The Observer/Guardian ’s opening paragraph was misleading, its leaders believed. The company hadn’t been breached—an academic had fairly downloaded data with permission and then unfairly handed it off. And the software that Cambridge Analytica built was not powerful, nor could it predict or influence choices at the ballot box. But none of that mattered. When a Facebook executive named Alex Stamos tried on Twitter to argue that the word breach was being misused, he was swatted down. He soon deleted his tweets. His position was right, but who cares? If someone points a gun at you and holds up a sign that says hand’s up, you shouldn’t worry about the apostrophe. The story was the first of many to illuminate one of the central ironies of Facebook’s struggles. The company’s algorithms helped sustain a news ecosystem that prioritizes outrage, and that news ecosystem was learning to direct outrage at Facebook. As the story spread, the company started melting down. Former employees remember scenes of chaos, with exhausted executives slipping in and out of Zuckerberg’s private conference room, known as the Aquarium, and Sandberg’s conference room, whose name, Only Good News, seemed increasingly incongruous. One employee remembers cans and snack wrappers everywhere; the door to the Aquarium would crack open and you could see people with their heads in their hands and feel the warmth from all the body heat. After saying too much before the story ran, the company said too little afterward. Senior managers begged Sandberg and Zuckerberg to publicly confront the issue. Both remained publicly silent. “We had hundreds of reporters flooding our inboxes, and we had nothing to tell them,” says a member of the communications staff at the time. “I remember walking to one of the cafeterias and overhearing other Facebookers say, ‘Why aren’t we saying anything? Why is nothing happening?’ ” According to numerous people who were involved, many factors contributed to Facebook’s baffling decision to stay mute for five days. Executives didn’t want a repeat of Zuckerberg’s ignominious performance after the 2016 election when, mostly off the cuff, he had proclaimed it “a pretty crazy idea” to think fake news had affected the result. And they continued to believe people would figure out that Cambridge Analytica’s data had been useless. 
According to one executive, “You can just buy all this fucking stuff, all this data, from the third-party ad networks that are tracking you all over the planet. You can get way, way, way more privacy-violating data from all these data brokers than you could by stealing it from Facebook.” “Those five days were very, very long,” says Sandberg, who now acknowledges the delay was a mistake. The company became paralyzed, she says, because it didn’t know all the facts; it thought Cambridge Analytica had deleted the data. And it didn’t have a specific problem to fix. The loose privacy policies that allowed Kogan to collect so much data had been tightened years before. “We didn’t know how to respond in a system of imperfect information,” she says. Facebook’s other problem was that it didn’t understand the wealth of antipathy that had built up against it over the previous two years. Its prime decisionmakers had run the same playbook successfully for a decade and a half: Do what they thought was best for the platform’s growth (often at the expense of user privacy), apologize if someone complained, and keep pushing forward. Or, as the old slogan went: Move fast and break things. Now the public thought Facebook had broken Western democracy. This privacy violation—unlike the many others before it—wasn’t one that people would simply get over. Finally, on Wednesday, the company decided Zuckerberg should give a television interview. After snubbing CBS and PBS, the company summoned a CNN reporter who the communications staff trusted to be reasonably kind. The network’s camera crews were treated like potential spies, and one communications official remembers being required to monitor them even when they went to the bathroom. (Facebook now says this was not company protocol.) In the interview itself, Zuckerberg apologized. But he was also specific: There would be audits and much more restrictive rules for anyone wanting access to Facebook data. Facebook would build a tool to let users know if their data had ended up with Cambridge Analytica. And he pledged that Facebook would make sure this kind of debacle never happened again. A flurry of other interviews followed. That Wednesday, WIRED was given a quiet heads-up that we’d get to chat with Zuckerberg in the late afternoon. At about 4:45 pm, his communications chief rang to say he would be calling at 5. In that interview, Zuckerberg apologized again. But he brightened when he turned to one of the topics that, according to people close to him, truly engaged his imagination: using AI to keep humans from polluting Facebook. This was less a response to the Cambridge Analytica scandal than to the backlog of accusations, gathering since 2016, that Facebook had become a cesspool of toxic virality, but it was a problem he actually enjoyed figuring out how to solve. He didn’t think that AI could completely eliminate hate speech or nudity or spam, but it could get close. “My understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount—it needs to be a very small amount,” he told WIRED.
The interviews were just the warmup for Zuckerberg’s next gauntlet: a set of public, televised appearances in April before three congressional committees to answer questions about Cambridge Analytica and months of other scandals. Congresspeople had been calling on him to testify for about a year, and he’d successfully avoided them. Now it was game time, and much of Facebook was terrified about how it would go. As it turned out, most of the lawmakers proved astonishingly uninformed, and the CEO spent most of the day ably swatting back soft pitches. Back home, some Facebook employees stood in their cubicles and cheered. When a plodding Senator Orrin Hatch asked how, exactly, Facebook made money while offering its services for free, Zuckerberg responded confidently, “Senator, we run ads,” a phrase that was soon emblazoned on T-shirts in Menlo Park. The Saturday after the Cambridge Analytica scandal broke, Sandberg told Molly Cutler, a top lawyer at Facebook, to create a crisis response team. Make sure we never have a delay responding to big issues like that again, Sandberg said. She put Cutler’s new desk next to hers, to guarantee Cutler would have no problem convincing division heads to work with her. “I started the role that Monday,” Cutler says. “I never made it back to my old desk. After a couple of weeks someone on the legal team messaged me and said, ‘You want us to pack up your things? It seems like you are not coming back.’ ” Then Sandberg and Zuckerberg began making a huge show of hiring humans to keep watch over the platform. Soon you couldn’t listen to a briefing or meet an executive without being told about the tens of thousands of content moderators who had joined the company. By the end of 2018, about 30,000 people were working on safety and security, which is roughly the number of newsroom employees at all the newspapers in the United States. Of those, about 15,000 are content reviewers, mostly contractors, employed at more than 20 giant review factories around the world. Facebook was also working hard to create clear rules for enforcing its basic policies, effectively writing a constitution for the 1.5 billion daily users of the platform. The instructions for moderating hate speech alone run to more than 200 pages. Moderators must undergo 80 hours of training before they can start. Among other things, they must be fluent in emoji; they study, for example, a document showing that a crown, roses, and dollar signs might mean a pimp is offering up prostitutes. About 100 people across the company meet every other Tuesday to review the policies. A similar group meets every Friday to review content policy enforcement screwups, like when, as happened in early July, the company flagged the Declaration of Independence as hate speech.
The company hired all of these people in no small part because of pressure from its critics. It was also the company’s fate, however, that the same critics discovered that moderating content on Facebook can be a miserable, soul-scorching job. As Casey Newton reported in an investigation for The Verge, the average content moderator in a Facebook contractor’s outpost in Arizona makes $28,000 per year, and many of them say they have developed PTSD-like symptoms due to their work. Others have spent so much time looking through conspiracy theories that they’ve become believers themselves. Ultimately, Facebook knows that the job will have to be done primarily by machines—which is the company’s preference anyway. Machines can browse porn all day without flatlining, and they haven’t learned to unionize yet. And so simultaneously the company mounted a huge effort, led by CTO Mike Schroepfer, to create artificial intelligence systems that can, at scale, identify the content that Facebook wants to zap from its platform, including spam, nudes, hate speech, ISIS propaganda, and videos of children being put in washing machines. An even trickier goal was to identify the stuff that Facebook wants to demote but not eliminate—like misleading clickbait crap. Over the past several years, the core AI team at Facebook has doubled in size annually. Even a basic machine-learning system can pretty reliably identify and block pornography or images of graphic violence. Hate speech is much harder. A sentence can be hateful or prideful depending on who says it. “You not my bitch, then bitch you are done,” could be a death threat, an inspiration, or a lyric from Cardi B. Imagine trying to decode a similarly complex line in Spanish, Mandarin, or Burmese. False news is equally tricky. Facebook doesn’t want lies or bull on the platform. But it knows that truth can be a kaleidoscope. Well-meaning people get things wrong on the internet; malevolent actors sometimes get things right. Schroepfer’s job was to get Facebook’s AI up to snuff on catching even these devilishly ambiguous forms of content. With each category the tools and the success rate vary. But the basic technique is roughly the same: You need a collection of data that has been categorized, and then you need to train the machines on it. For spam and nudity these databases already exist, created by hand in more innocent days when the threats online were fake Viagra and Goatse memes, not Vladimir Putin and Nazis. In the other categories you need to construct the labeled data sets yourself—ideally without hiring an army of humans to do so. One idea Schroepfer discussed enthusiastically with WIRED involved starting off with just a few examples of content identified by humans as hate speech and then using AI to generate similar content and simultaneously label it. Like a scientist bioengineering both rodents and rat terriers, this approach would use software to both create and identify ever-more-complex slurs, insults, and racist crap. Eventually the terriers, specially trained on superpowered rats, could be set loose across all of Facebook. Roughly three years ago, the company had almost no AI for screening content.
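To make that basic recipe concrete—collect examples that humans have labeled, then train a model to generalize from them—here is a deliberately tiny sketch of a supervised text classifier, written in Python with scikit-learn. It is illustrative only: the handful of posts, the labels, and the review threshold are invented for this example and bear no relation to Facebook’s production classifiers, which operate at a vastly different scale.

# Illustrative sketch only: a toy "labeled data in, classifier out" pipeline.
# The posts and labels are invented; 1 means "violates policy," 0 means "benign."
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "win free pills, click this link now",    # labeled as spam by a reviewer
    "limited offer!!! send money to claim",   # labeled as spam
    "see you at the study group tonight",     # labeled as benign
    "happy birthday, hope it's a great one",  # labeled as benign
]
labels = [1, 1, 0, 0]

# Turn each post into word and bigram features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything over a chosen threshold would be routed to review.
score = model.predict_proba(["click now to claim free pills"])[0, 1]
print(score > 0.5)

Schroepfer’s more ambitious idea—using AI to generate new borderline examples and label them automatically—would, in effect, be a way of growing the lists of texts and labels above without paying an army of humans to write them.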
But Facebook quickly found success in classifying spam and posts supporting terror. Now more than 99 percent of content created in those categories is identified before any human on the platform flags it. Sex, as in the rest of human life, is more complicated. The success rate for identifying nudity is 96 percent. Hate speech is even tougher: Facebook finds just 52 percent before users do. These are the kinds of problems that Facebook executives love to talk about. They involve math and logic, and the people who work at the company are some of the most logical you’ll ever meet. But Cambridge Analytica was mostly a privacy scandal. Facebook’s most visible response to it was to amp up content moderation aimed at keeping the platform safe and civil. Yet sometimes the two big values involved—privacy and civility—come into opposition. If you give people ways to keep their data completely secret, you also create secret tunnels where rats can scurry around undetected. In other words, every choice involves a trade-off, and every trade-off means some value has been spurned. And every value that you spurn—particularly when you’re Facebook in 2018—means that a hammer is going to come down on your head. Crises offer opportunities. They force you to make some changes, but they also provide cover for the changes you’ve long wanted to make. And four weeks after Zuckerberg’s testimony before Congress, the company initiated the biggest reshuffle in its history. About a dozen executives shifted chairs. Most important, Chris Cox, longtime head of Facebook’s core product—known internally as the Blue App—would now oversee WhatsApp and Instagram too. Cox was perhaps Zuckerberg’s closest and most trusted confidant, and it seemed like succession planning. Adam Mosseri moved over to run product at Instagram. Instagram, which was founded in 2010 by Kevin Systrom and Mike Krieger, had been acquired by Facebook in 2012 for $1 billion. The price at the time seemed ludicrously high: That much money for a company with 13 employees? Soon the price would seem ludicrously low: A mere billion dollars for the fastest-growing social network in the world? Internally, Facebook at first watched Instagram’s relentless growth with pride. But, according to some, pride turned to suspicion as the pupil’s success matched and then surpassed the professor’s. Systrom’s glowing press coverage didn’t help. In 2014, according to someone directly involved, Zuckerberg ordered that no other executives should sit for magazine profiles without his or Sandberg’s approval. Some people involved remember this as a move to make it harder for rivals to find employees to poach; others remember it as a direct effort to contain Systrom. Top executives at Facebook also believed that Instagram’s growth was cannibalizing the Blue App. In 2017, Cox’s team showed data to senior executives suggesting that people were sharing less inside the Blue App in part because of Instagram. To some people, this sounded like they were simply presenting a problem to solve. Others were stunned and took it as a sign that management at Facebook cared more about the product they had birthed than one they had adopted.
Most of Instagram—and some of Facebook too—hated the idea that the growth of the photo-sharing app could be seen, in any way, as trouble. Yes, people were using the Blue App less and Instagram more. But that didn’t mean Instagram was poaching users. Maybe people leaving the Blue App would have spent their time on Snapchat or watching Netflix or mowing their lawns. And if Instagram was growing quickly, maybe it was because the product was good? Instagram had its problems—bullying, shaming, FOMO, propaganda, corrupt micro-influencers—but its internal architecture had helped it avoid some of the demons that haunted the industry. Posts are hard to reshare, which slows virality. External links are harder to embed, which keeps the fake-news providers away. Minimalist design also minimized problems. For years, Systrom and Krieger took pride in keeping Instagram free of hamburgers: icons made of three horizontal lines in the corner of a screen that open a menu. Facebook has hamburgers, and other menus, all over the place. Systrom and Krieger had also seemingly anticipated the techlash ahead of their colleagues up the road in Menlo Park. Even before Trump’s election, Instagram had made fighting toxic comments its top priority, and it had rolled out an AI filtering system in June 2017. By the spring of 2018, the company was working on a product to alert users that “you’re all caught up” when they’d seen all the new posts in their feed. In other words, “put your damn phone down and talk to your friends.” That may be a counterintuitive way to grow, but earning goodwill does help over the long run. And sacrificing growth for other goals wasn’t Facebook’s style at all. By the time the Cambridge Analytica scandal hit, Systrom and Krieger, according to people familiar with their thinking, were already worried that Zuckerberg was souring on them. They had been allowed to run their company reasonably independently for six years, but now Zuckerberg was exerting more control and making more requests. When conversations about the reorganization began, the Instagram founders pushed to bring in Mosseri. They liked him, and they viewed him as the most trustworthy member of Zuckerberg’s inner circle. He had a design background and a mathematical mind. They were losing autonomy, so they might as well get the most trusted emissary from the mothership. Or as Lyndon Johnson said about J. Edgar Hoover, “It’s probably better to have him inside the tent pissing out than outside the tent pissing in.” Meanwhile, the founders of WhatsApp, Brian Acton and Jan Koum, had moved outside of Facebook’s tent and commenced fire. Zuckerberg had bought the encrypted messaging platform in 2014 for $19 billion, but the cultures had never entirely meshed. The two sides couldn’t agree on how to make money—WhatsApp’s end-to-end encryption wasn’t originally designed to support targeted ads—and they had other differences as well.
WhatsApp insisted on having its own conference rooms, and, in the perfect metaphor for the two companies’ diverging attitudes over privacy, WhatsApp employees had special bathroom stalls designed with doors that went down to the floor, unlike the standard ones used by the rest of Facebook. Eventually the battles became too much for Acton and Koum, who had also come to believe that Facebook no longer intended to leave them alone. Acton quit and started funding a competing messaging platform called Signal. During the Cambridge Analytica scandal, he tweeted, “It is time. #deletefacebook.” Soon afterward, Koum, who held a seat on Facebook’s board, announced that he too was quitting, to play more Ultimate Frisbee and work on his collection of air-cooled Porsches. The departure of the WhatsApp founders created a brief spasm of bad press. But now Acton and Koum were gone, Mosseri was in place, and Cox was running all three messaging platforms. And that meant Facebook could truly pursue its most ambitious and important idea of 2018: bringing all those platforms together into something new. By the late spring, news organizations—even as they jockeyed for scoops about the latest meltdown in Menlo Park—were starting to buckle under the pain caused by Facebook’s algorithmic changes. Back in May of 2017, according to Parse.ly, Facebook drove about 40 percent of all outside traffic to news publishers. A year later it was down to 25 percent. Publishers that weren’t in the category “politics, crime, or tragedy” were hit much harder. At WIRED, the month after an image of a bruised Zuckerberg appeared on the cover, the numbers were even more stark. One day, traffic from Facebook suddenly dropped by 90 percent, and for four weeks it stayed there. After protestations, emails, and a raised eyebrow or two about the coincidence, Facebook finally got to the bottom of it. An ad run by a liquor advertiser, targeted at WIRED readers, had been mistakenly categorized as engagement bait by the platform. In response, the algorithm had let all the air out of WIRED’s tires. The publication could post whatever it wanted, but few would read it. Once the error was identified, traffic soared back. It was a reminder that journalists are just sharecroppers on Facebook’s giant farm. And sometimes conditions on the farm can change without warning. Inside Facebook, of course, it was not surprising that traffic to publishers went down after the pivot to “meaningful social interactions.” That outcome was the point. It meant people would be spending more time on posts created by their friends and family, the genuinely unique content that Facebook offers. According to multiple Facebook employees, a handful of executives considered it a small plus, too, that the news industry was feeling a little pain after all its negative coverage. The company denies this—“no one at Facebook is rooting against the news industry,” says Anne Kornblut, the company’s director of news partnerships—but, in any case, by early May the pain seemed to have become perhaps excessive. A number of stories appeared in the press about the damage done by the algorithmic changes. And so Sheryl Sandberg, who colleagues say often responds with agitation to negative news stories, sent an email on May 7 calling a meeting of her top lieutenants.
That kicked off a wide-ranging conversation that ensued over the next two months. The key question was whether the company should introduce new factors into its algorithm to help serious publications. The product team working on news wanted Facebook to increase the amount of public content—things shared by news organizations, businesses, celebrities—allowed in News Feed. They also wanted the company to provide stronger boosts to publishers deemed trustworthy, and they suggested the company hire a large team of human curators to elevate the highest-quality news inside of News Feed. The company discussed setting up a new section on the app entirely for news and directed a team to quietly work on developing it; one of the team’s ambitions was to try to build a competitor to Apple News. Some of the company’s most senior execs, notably Chris Cox, agreed that Facebook needed to give serious publishers a leg up. Others pushed back, especially Joel Kaplan, a former deputy chief of staff to George W. Bush who was now Facebook’s vice president of global public policy. Supporting high-quality outlets would inevitably make it look like the platform was supporting liberals, which could lead to trouble in Washington, a town run mainly by conservatives. Breitbart and the Daily Caller, Kaplan argued, deserved protections too. At the end of the climactic meeting, on July 9, Zuckerberg sided with Kaplan and announced that he was tabling the decision about adding ways to boost publishers, effectively killing the plan. To one person involved in the meeting, it seemed like a sign of shifting power. Cox had lost and Kaplan had won. Either way, Facebook’s overall traffic to news organizations continued to plummet. That same evening, Donald Trump announced that he had a new pick for the Supreme Court: Brett Kavanaugh. As the choice was announced, Joel Kaplan stood in the background at the White House, smiling. Kaplan and Kavanaugh had become friends in the Bush White House, and their families had become intertwined. They had taken part in each other’s weddings; their wives were best friends; their kids rode bikes together. No one at Facebook seemed to really notice or care, and a tweet pointing out Kaplan’s attendance was retweeted a mere 13 times. Meanwhile, the dynamics inside the communications department had gotten even worse. Elliot Schrage had announced that he was going to leave his post as VP of global communications. So the company had begun looking for his replacement; it focused on interviewing candidates from the political world, including Denis McDonough and Lisa Monaco, former senior officials in the Obama administration. But Rachel Whetstone also declared that she wanted the job. At least two other executives said they would quit if she got it.
The need for leadership in communications only became more apparent on July 11, when John Hegeman, the new head of News Feed, was asked in an interview why the company didn’t ban Alex Jones’ InfoWars from the platform. The honest answer would probably have been to just admit that Facebook gives a rather wide berth to the far right because it’s so worried about being called liberal. Hegeman, though, went with the following: “We created Facebook to be a place where different people can have a voice. And different publishers have very different points of view.” This, predictably, didn’t go over well with the segments of the news media that actually try to tell the truth and that have never, as Alex Jones has done, reported that the children massacred at Sandy Hook were actors. Public fury ensued. Most of Facebook didn’t want to respond. But Whetstone decided it was worth a try. She took to the @facebook account—which one executive involved in the decision called “a big fucking marshmallow we shouldn’t ever use like this”—and started tweeting at the company’s critics. “Sorry you feel that way,” she typed to one, and explained that, instead of banning pages that peddle false information, Facebook demotes them. The tweet was very quickly ratioed, a Twitter term of art for a statement that no one likes and that receives more comments than retweets. Whetstone, as @facebook, also declared that just as many pages on the left pump out misinformation as on the right. That tweet got badly ratioed too. Five days later, Zuckerberg sat down for an interview with Kara Swisher, the influential editor of Recode. Whetstone was in charge of prep. Before Zuckerberg headed to the microphone, Whetstone supplied him with a list of rough talking points, including one that inexplicably violated the first rule of American civic discourse: Don’t invoke the Holocaust while trying to make a nuanced point. About 20 minutes into the interview, while ambling through his answer to a question about Alex Jones, Zuckerberg declared, “I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down, because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong.” Sometimes, Zuckerberg added, he himself makes errors in public statements. The comment was absurd: People who deny that the Holocaust happened generally aren’t just slipping up in the midst of a good-faith intellectual disagreement. They’re spreading anti-Semitic hate—intentionally. Soon the company announced that it had taken a closer look at Jones’ activity on the platform and had finally chosen to ban him. His past sins, Facebook decided, had crossed into the domain of standards violations.
Eventually another candidate for the top PR job was brought into the headquarters in Menlo Park: Nick Clegg, former deputy prime minister of the UK. Perhaps in an effort to disguise himself—or perhaps because he had decided to go aggressively Silicon Valley casual—he showed up in jeans, sneakers, and an untucked shirt. His interviews must have gone better than his disguise, though, as he was hired over the luminaries from Washington. “What makes him incredibly well qualified,” said Caryn Marooney, the company’s VP of communications, “is that he helped run a country.” At the end of July, Facebook was scheduled to report its quarterly earnings in a call to investors. The numbers were not going to be good; Facebook’s user base had grown more slowly than ever, and revenue growth was taking a huge hit from the company’s investments in hardening the platform against abuse. But in advance of the call, the company’s leaders were nursing an additional concern: how to put Instagram in its place. According to someone who saw the relevant communications, Zuckerberg and his closest lieutenants were debating via email whether to say, essentially, that Instagram owed its spectacular growth not primarily to its founders and vision but to its relationship with Facebook. Zuckerberg wanted to include a line to this effect in his script for the call. Whetstone counseled him not to, or at least to temper it with praise for Instagram’s founding team. In the end, Zuckerberg’s script declared, “We believe Instagram has been able to use Facebook’s infrastructure to grow more than twice as quickly as it would have on its own. A big congratulations to the Instagram team—and to all the teams across our company that have contributed to this success.” After the call—with its payload of bad news about growth and investment—Facebook’s stock dropped by nearly 20 percent. But Zuckerberg didn’t forget about Instagram. A few days later he asked his head of growth, Javier Olivan, to draw up a list of all the ways Facebook supported Instagram: running ads for it on the Blue App; including link-backs when someone posted a photo on Instagram and then cross-published it in Facebook News Feed; allowing Instagram to access a new user’s Facebook connections in order to recommend people to follow. Once he had the list, Zuckerberg conveyed to Instagram’s leaders that he was pulling away the supports. Facebook had given Instagram servers, health insurance, and the best engineers in the world. Now Instagram was just being asked to give a little back—and to help seal off the vents that were allowing people to leak away from the Blue App. Systrom soon posted a memo to his entire staff explaining Zuckerberg’s decision to turn off supports for traffic to Instagram. He disagreed with the move, but he was committed to the changes and was telling his staff that they had to go along. The memo “was like a flame going up inside the company,” a former senior manager says. The document also enraged Facebook, which was terrified it would leak. Systrom soon departed on paternity leave. The tensions didn’t let up.
In the middle of August, Facebook prototyped a location-tracking service inside of Instagram, the kind of privacy intrusion that Instagram’s management team had long resisted. In August, a hamburger menu appeared. “It felt very personal,” says a senior Instagram employee who spent the month implementing the changes. It felt particularly wrong, the employee says, because Facebook is a data-driven company, and the data strongly suggested that Instagram’s growth was good for everyone. Friends of Systrom and Krieger say the strife was wearing on the founders too. According to someone who heard the conversation, Systrom openly wondered whether Zuckerberg was treating him the way Donald Trump was treating Jeff Sessions: making life miserable in hopes that he’d quit without having to be fired. Instagram’s managers also believed that Facebook was being miserly about their budget. In past years they had been able to almost double their number of engineers. In the summer of 2018 they were told that their growth rate would drop to less than half of that. When it was time for Systrom to return from paternity leave, the two founders decided to make the leave permanent. They made the decision quickly, but it was far from impulsive. According to someone familiar with their thinking, their unhappiness with Facebook stemmed from tensions that had brewed over many years and had boiled over in the past six months. And so, on a Monday morning, Systrom and Krieger went into Chris Cox’s office and told him the news. Systrom and Krieger then notified their team about the decision. Somehow the information reached Mike Isaac, a reporter at The New York Times, before it reached the communications teams for either Facebook or Instagram. The story appeared online a few hours later, as Instagram’s head of communications was on a flight circling above New York City. After the announcement, Systrom and Krieger decided to play nice. Soon there was a lovely photograph of the two founders smiling next to Mosseri, the obvious choice to replace them. And then they headed off into the unknown to take time off, decompress, and figure out what comes next. Systrom and Krieger told friends they both wanted to get back into coding after so many years away from it. If you need a new job, it’s good to learn how to code. Just a few days after Systrom and Krieger quit, Joel Kaplan roared into the news. His dear friend Brett Kavanaugh was now not just a conservative appellate judge with Federalist Society views on Roe v. Wade; he had become an alleged sexual assailant, purported gang rapist, and national symbol of toxic masculinity to somewhere between 49 and 51 percent of the country. As the charges multiplied, Kaplan’s wife, Laura Cox Kaplan, became one of the most prominent women defending him: She appeared on Fox News and asked, “What does it mean for men in the future? It’s very serious and very troubling.” She also spoke at an #IStandWithBrett press conference that was livestreamed on Breitbart.
On September 27, Kavanaugh appeared before the Senate Judiciary Committee after four hours of wrenching recollections by his primary accuser, Christine Blasey Ford. Laura Cox Kaplan sat right behind him as the hearing descended into rage and recrimination. Joel Kaplan sat one row back, stoic and thoughtful, directly in view of the cameras broadcasting the scene to the world. Kaplan isn’t widely known outside of Facebook. But he’s not anonymous, and he wasn’t wearing a fake mustache. As Kavanaugh testified, journalists started tweeting a screenshot of the tableau. At a meeting in Menlo Park, executives passed around a phone showing one of these tweets and stared, mouths agape. None of them knew Kaplan was going to be there. The man who was supposed to smooth over Facebook’s political dramas had inserted the company right into the middle of one. Kaplan had long been friends with Sandberg; they’d even dated as undergraduates at Harvard. But despite rumors to the contrary, he had told neither her nor Zuckerberg that he would be at the hearing, much less that he would be sitting in the gallery of supporters behind the star witness. “He’s too smart to do that,” one executive who works with him says. “That way, Joel gets to go. Facebook gets to remind people that it employs Republicans. Sheryl gets to be shocked. And Mark gets to denounce it.” If that was the plan, it worked to perfection. Soon Facebook’s internal message boards were lighting up with employees mortified at what Kaplan had done. Management’s initial response was limp and lame: A communications officer told the staff that Kaplan attended the hearing as part of a planned day off in his personal capacity. That wasn’t a good move. Someone visited the human resources portal and noted that he hadn’t filed to take the day off. In some ways, the world’s largest social network is stronger than ever, with record revenue of $55.8 billion in 2018. But Facebook has also never been more threatened. Here are some dangers that could knock it down. — US Antitrust Regulation In March, Democratic presidential candidate Elizabeth Warren proposed severing Instagram and WhatsApp from Facebook, joining the growing chorus of people who want to chop the company down to size. Even US attorney general William Barr has hinted at probing tech’s “huge behemoths.” But for now, antitrust talk remains talk—much of it posted to Facebook. — Federal Privacy Crackdowns Facebook and the Federal Trade Commission are negotiating a settlement over whether the company’s conduct, including with Cambridge Analytica, violated a 2011 consent decree regarding user privacy. According to The New York Times, federal prosecutors have also begun a criminal investigation into Facebook’s data-sharing deals with other technology companies. — European Regulators While America debates whether to take aim at Facebook, Europe swings axes. In 2018, the EU’s General Data Protection Regulation forced Facebook to allow users to access and delete more of their data. Then this February, Germany ordered the company to stop harvesting web-browsing data without users’ consent, effectively outlawing much of the company’s ad business.
— User Exodus Although a fifth of the globe uses Facebook every day, the number of adult users in the US has largely stagnated. The decline is even more precipitous among teenagers. (Granted, many of them are switching to Instagram.) But network effects are powerful things: People swarmed to Facebook because everyone else was there; they might also swarm for the exits. The hearings were on a Thursday. A week and a day later, Facebook called an all-hands to discuss what had happened. The giant cafeteria in Facebook’s headquarters was cleared to create space for a town hall. Hundreds of chairs were arranged with three aisles to accommodate people with questions and comments. Most of them were from women who came forward to recount their own experiences of sexual assault, harassment, and abuse. Zuckerberg, Sandberg, and other members of management were standing on the right side of the stage, facing the audience and the moderator. Whenever a question was asked of one of them, they would stand up and take the mic. Kaplan appeared via video conference looking, according to one viewer, like a hostage trying to smile while his captors stood just offscreen. Another participant described him as “looking like someone had just shot his dog in the face.” This participant added, “I don’t think there was a single male participant, except for Zuckerberg looking down and sad onstage and Kaplan looking dumbfounded on the screen.” Employees who watched expressed different emotions. Some felt empowered and moved by the voices of women in a company where top management is overwhelmingly male. Another said, “My eyes rolled to the back of my head” watching people make specific personnel demands of Zuckerberg, including that Kaplan undergo sensitivity training. For much of the staff, it was cathartic. Facebook was finally reckoning, in a way, with the #MeToo movement and the profound bias toward men in Silicon Valley. For others it all seemed ludicrous, narcissistic, and emblematic of the liberal, politically correct bubble that the company occupies. A guy had sat in silence to support his best friend who had been nominated to the Supreme Court; as a consequence, he needed to be publicly flogged? In the days after the hearings, Facebook organized small group discussions, led by managers, in which 10 or so people got together to discuss the issue. There were tears, grievances, emotions, debate. “It was a really bizarre confluence of a lot of issues that were popped in the zit that was the SCOTUS hearing,” one participant says. Kaplan, though, seemed to have moved on. The day after his appearance on the conference call, he hosted a party to celebrate Kavanaugh’s lifetime appointment. Some colleagues were aghast. According to one who had taken his side during the town hall, this was a step too far. That was “just spiking the football,” they said. Sandberg was more forgiving. “It’s his house,” she told WIRED. “That is a very different decision than sitting at a public hearing.” In a year during which Facebook made endless errors, Kaplan’s insertion of the company into a political maelstrom seemed like one of the clumsiest. But in retrospect, Facebook executives aren’t sure that Kaplan did lasting harm. His blunder opened up a series of useful conversations in a workplace that had long focused more on coding than inclusion. Also, according to another executive, the episode and the press that followed surely helped appease the company’s would-be regulators. 
It’s useful to remind the Republicans who run most of Washington that Facebook isn’t staffed entirely by snowflakes and libs. That summer and early fall weren’t kind to the team at Facebook charged with managing the company’s relationship with the news industry. At least two product managers on the team quit, telling colleagues they had done so because of the company’s cavalier attitude toward the media. In August, a jet-lagged Campbell Brown gave a presentation to publishers in Australia in which she declared that they could either work together to create new digital business models or not. If they didn’t, well, she’d be unfortunately holding hands with their dying business, like in a hospice. Her off-the-record comments were put on the record by The Australian, a publication owned by Rupert Murdoch, a canny and persistent antagonist of Facebook. In September, however, the news team managed to convince Zuckerberg to start administering ice water to the parched executives of the news industry. That month, Tom Alison, one of the team’s leaders, circulated a document to most of Facebook’s senior managers; it began by proclaiming that, on news, “we lack clear strategy and alignment.” Then, at a meeting of the company’s leaders, Alison made a series of recommendations, including that Facebook should expand its definition of news—and its algorithmic boosts—beyond just the category of “politics, crime, or tragedy.” Stories about politics were bound to do well in the Trump era, no matter how Facebook tweaked its algorithm. But the company could tell that the changes it had introduced at the beginning of the year hadn’t had the intended effect of slowing the political venom pulsing through the platform. In fact, by giving a slight tailwind to politics, tragedy, and crime, Facebook had helped build a news ecosystem that resembled the front pages of a tempestuous tabloid. Or, for that matter, the front page of FoxNews.com. That fall, Fox was netting more engagement on Facebook than any other English-language publisher; its list of most-shared stories was a goulash of politics, crime, and tragedy. (The network’s three most-shared posts that month were an article alleging that China was burning bibles, another about a Bill Clinton rape accuser, and a third that featured Laura Cox Kaplan and #IStandWithBrett.)

(Chart: Politics, Crime, or Tragedy? In early 2018, Facebook’s algorithm started demoting posts shared by businesses and publishers. But because of an obscure choice by Facebook engineers, stories involving “politics, crime, or tragedy” were shielded somewhat from the blow—which had a big effect on the news ecosystem inside the social network. Source: Parse.ly)

That September meeting was a moment when Facebook decided to start paying indulgences to make up for some of its sins against journalism. It decided to put hundreds of millions of dollars toward supporting local news, the sector of the industry most disrupted by Silicon Valley; Brown would lead the effort, which would involve helping to find sustainable new business models for journalism.
Alison proposed that the company move ahead with the plan hatched in June to create an entirely new section on the Facebook app for news. And, crucially, the company committed to developing new classifiers that would expand the definition of news beyond “politics, crime, or tragedy.” Zuckerberg didn’t sign off on everything all at once. But people left the room feeling like he had subscribed. Facebook had spent much of the year holding the media industry upside down by the feet. Now Facebook was setting it down and handing it a wad of cash. As Facebook veered from crisis to crisis, something else was starting to happen: The tools the company had built were beginning to work. The three biggest initiatives for the year had been integrating WhatsApp, Instagram, and the Blue App into a more seamless entity; eliminating toxic content; and refocusing News Feed on meaningful social interactions. The company was making progress on all fronts. The apps were becoming a family, partly through divorce and arranged marriage but a family nonetheless. Toxic content was indeed disappearing from the platform. In September, economists at Stanford and New York University revealed research estimating that user interactions with fake news on the platform had declined by 65 percent from their peak in December 2016 to the summer of 2018. On Twitter, meanwhile, the number had climbed. There wasn’t much time, however, for anyone to absorb the good news. Right after the Kavanaugh hearings, the company announced that, for the first time, it had been badly breached. In an Ocean’s 11–style heist, hackers had figured out an ingenious way to take control of user accounts through a quirk in a feature that makes it easier for people to play Happy Birthday videos for their friends. The breach was both serious and absurd, and it pointed to a deep problem with Facebook. By adding so many features to boost engagement, it had created vectors for intrusion. One virtue of simple products is that they are simpler to defend. Given the sheer number of people who accused Facebook of breaking democracy in 2016, the company approached the November 2018 US midterm elections with trepidation. It worried that the tools of the platform made it easier for candidates to suppress votes than get them out. And it knew that Russian operatives were studying AI as closely as the engineers on Mike Schroepfer’s team. So in preparation for Brazil’s October 28 presidential election and the US midterms nine days later, the company created what it called “election war rooms”—a term despised by at least some of the actual combat veterans at the company. The rooms were partly a media prop, but still, three dozen people worked nearly around the clock inside of them to minimize false news and other integrity issues across the platform. Ultimately the elections passed with little incident, perhaps because Facebook did a good job, perhaps because a US Cyber Command operation temporarily knocked Russia’s primary troll farm offline. Facebook got a boost of good press from the effort, but the company in 2018 was like a football team that follows every hard-fought victory with a butt fumble and a 30-point loss.
In mid-November, The New York Times published an impressively reported stem-winder about trouble at the company. The most damning revelation was that Facebook had hired an opposition research firm called Definers to investigate, among other things, whether George Soros was funding groups critical of the company. Definers was also directly connected to a dubious news operation whose stories were often picked up by Breitbart. After the story broke, Zuckerberg plausibly declared that he knew nothing about Definers. Sandberg, less plausibly, did the same. Numerous people inside the company were convinced that she entirely understood what Definers did, though she strongly maintains that she did not. Meanwhile, Schrage, who had announced his resignation but never actually left, decided to take the fall. He declared that the Definers project was his fault; it was his communications department that had hired the firm, he said. But several Facebook employees who spoke with WIRED believe that Schrage’s assumption of responsibility was just a way to gain favor with Sandberg. Inside Facebook, people were furious at Sandberg, believing she had asked them to dissemble on her behalf with her Definers denials. Sandberg, like everyone, is human. She’s brilliant, inspirational, and more organized than Marie Kondo. Once, on a cross-country plane ride back from a conference, a former Facebook executive watched her quietly spend five hours sending thank-you notes to everyone she’d met at the event—while everyone else was chatting and drinking. But Sandberg also has a temper, an ego, and a detailed memory for subordinates she thinks have made mistakes. For years, no one had a negative word to say about her. She was a highly successful feminist icon, the best-selling author of Lean In, running operations at one of the most powerful companies in the world. And she had done so under immense personal strain since her husband died in 2015. But resentment had been building for years, and after the Definers mess the dam collapsed. She was pummeled in the Times, in The Washington Post, on Breitbart, and in WIRED. Former employees who had refrained from criticizing her in interviews conducted with WIRED in 2017 relayed anecdotes about her intimidation tactics and penchant for retribution in 2018. She was slammed after a speech in Munich. She even got dinged by Michelle Obama, who told a sold-out crowd at the Barclays Center in Brooklyn on December 1, “It’s not always enough to lean in, because that shit doesn’t work all the time.” Everywhere, in fact, it was becoming harder to be a Facebook employee. Attrition increased from 2017, though Facebook says it was still below the industry norm, and people stopped broadcasting their place of employment. The company’s head of cybersecurity policy was swatted in his Palo Alto home. “When I joined Facebook in 2016, my mom was so proud of me, and I could walk around with my Facebook backpack all over the world and people would stop and say, ‘It’s so cool that you worked for Facebook.’ That’s not the case anymore,” a former product manager says.
“It made it hard to go home for Thanksgiving.” By the holidays in 2018, Facebook was beginning to seem like Monty Python’s Black Knight: hacked down to a torso hopping on one leg but still filled with confidence. The Alex Jones, Holocaust, Kaplan, hack, and Definers scandals had all happened in four months. The heads of WhatsApp and Instagram had quit. The stock price was at its lowest level in nearly two years. In the middle of that, Facebook chose to launch a video chat service called Portal. Reviewers thought it was great, except for the fact that Facebook had designed it, which made them fear it was essentially a spycam for people’s houses. Even internal tests at Facebook had shown that people responded to a description of the product better when they didn’t know who had made it. Two weeks later, the Black Knight lost his other leg. A British member of parliament named Damian Collins had obtained hundreds of pages of internal Facebook emails from 2012 through 2015. Ironically, his committee had gotten them from a sleazy company that helped people search for photos of Facebook users in bikinis. But one of Facebook’s superpowers in 2018 was the ability to turn any critic, no matter how absurd, into a media hero. And so, without much warning, Collins released them to the world. The emails, many of them between Zuckerberg and top executives, lent a brutally concrete validation to the idea that Facebook promoted growth at the expense of almost any other value. In one message from 2015, an employee acknowledged that collecting the call logs of Android users is a “pretty high-risk thing to do from a PR perspective.” He said he could imagine the news stories about Facebook invading people’s private lives “in ever more terrifying ways.” But, he added, “it appears that the growth team will charge ahead and do it.” (It did.) Perhaps the most telling email is a message from a then executive named Sam Lessin to Zuckerberg that epitomizes Facebook’s penchant for self-justification. The company, Lessin wrote, could be ruthless and committed to social good at the same time, because they are essentially the same thing: “Our mission is to make the world more open and connected and the only way we can do that is with the best people and the best infrastructure, which requires that we make a lot of money / be very profitable.” The message also highlighted another of the company’s original sins: its assertion that if you just give people better tools for sharing, the world will be a better place. That’s just false. Sometimes Facebook makes the world more open and connected; sometimes it makes it more closed and disaffected. Despots and demagogues have proven to be just as adept at using Facebook as democrats and dreamers. Like the communications innovations before it—the printing press, the telephone, the internet itself—Facebook is a revolutionary tool. But human nature has stayed the same. Perhaps the oddest single day in Facebook’s recent history came on January 30, 2019.
A story had just appeared on TechCrunch reporting yet another apparent sin against privacy: For two years, Facebook had been conducting market research with an app that paid you in return for sucking private data from your phone. Facebook could read your social media posts, your emoji sexts, and your browser history. Your soul, or at least whatever part of it you put into your phone, was worth up to $20 a month. Other big tech companies do research of this sort as well. But the program sounded creepy, particularly with the revelation that people as young as 13 could join with a parent’s permission. Worse, Facebook seemed to have deployed the app while wearing a ski mask and gloves to hide its fingerprints. Apple had banned such research apps from its main App Store, but Facebook had fashioned a workaround: Apple allows companies to develop their own in-house iPhone apps for use solely by employees—for booking conference rooms, testing beta versions of products, and the like. Facebook used one of these internal apps to disseminate its market research tool to the public. Apple cares a lot about privacy, and it cares that you know it cares about privacy. It also likes to ensure that people honor its rules. So shortly after the story was published, Apple responded by shutting down all of Facebook’s in-house iPhone apps. By the middle of that Wednesday afternoon, parts of Facebook’s campus stopped functioning. Applications that enabled employees to book meetings, see cafeteria menus, and catch the right shuttle bus flickered out. Employees around the world suddenly couldn’t communicate via messenger with each other on their phones. The mood internally shifted between outraged and amused—with employees joking that they had missed their meetings because of Tim Cook. Facebook’s cavalier approach to privacy had now poltergeisted itself on the company’s own lunch menus. But then something else happened. A few hours after Facebook’s engineers wandered back from their mystery meals, Facebook held an earnings call. Profits, after a months-long slump, had hit a new record. The number of daily users in Canada and the US, after stagnating for three quarters, had risen slightly. The stock surged, and suddenly all seemed well in the world. Inside a conference room called Relativity, Zuckerberg smiled and told research analysts about all the company’s success. At the same table sat Caryn Marooney, the company’s head of communications. “It felt like the old Mark,” she said. “This sense of ‘We’re going to fix a lot of things and build a lot of things.’ ” Employees couldn’t get their shuttle bus schedules, but within 24 hours the company was worth about $50 billion more than it had been worth the day before. Less than a week after the boffo earnings call, the company gathered for another all-hands. The heads of security and ads spoke about their work and the pride they take in it. Nick Clegg told everyone that they had to start seeing themselves the way the world sees them, not the way they would like to be perceived.
It seemed to observers as though management actually had its act together after a long time of looking like a man in lead boots trying to cross a lightly frozen lake. “It was a combination of realistic and optimistic that we hadn’t gotten right in two years,” one executive says. Soon it was back to bedlam, though. Shortly after the all-hands, a parliamentary committee in the UK published a report calling the company a bunch of “digital gangsters.” A German regulatory authority cracked down on a significant portion of the company’s ad business. And news broke that the FTC in Washington was negotiating with the company and reportedly considering a multibillion-­dollar fine due in part to Cambridge Analytica. Later, Democratic presidential hopeful Elizabeth Warren published a proposal to break Facebook apart. She promoted her idea with ads on Facebook, using a modified version of the company’s logo—an act specifically banned by Facebook’s terms of service. Naturally, the company spotted the violation and took the ads down. Warren quickly denounced the move as censorship, even as Facebook restored the ads. It was the perfect Facebook moment for a new year. By enforcing its own rules, the company had created an outrage cycle about Facebook—inside of a larger outrage cycle about Facebook. This January, George Soros gave another speech on a freezing night in Davos. This time he described a different menace to the world: China. The most populous country on earth, he said, is building AI systems that could become tools for totalitarian control. “For open societies,” he said, “ they pose a mortal threat. ” He described the world as in the midst of a cold war. Afterward, one of the authors of this article asked him which side Facebook and Google are on. “Facebook and the others are on the side of their own profits,” the financier answered. The response epitomized one of the most common critiques of the company now: Everything it does is based on its own interests and enrichment. The massive efforts at reform are cynical and deceptive. Yes, the company’s privacy settings are much clearer now than a year ago, and certain advertisers can no longer target users based on their age, gender, or race, but those changes were made at gunpoint. The company’s AI filters help, sure, but they exist to placate advertisers who don’t want their detergent ads next to jihadist videos. The company says it has abandoned “ Move fast and break things ” as its motto, but the guest Wi-Fi password at headquarters remains “M0vefast.” Sandberg and Zuckerberg continue to apologize, but the apologies seem practiced and insincere. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg At a deeper level, critics note that Facebook continues to pay for its original sin of ignoring privacy and fixating on growth. And then there’s the existential question of whether the company’s business model is even compatible with its stated mission: The idea of Facebook is to bring people together, but the business model only works by slicing and dicing users into small groups for the sake of ad targeting. Is it possible to have those two things work simultaneously? To its credit, though, Facebook has addressed some of its deepest issues. 
For years, smart critics have bemoaned the perverse incentives created by Facebook’s annual bonus program, which pays people in large part based on the company hitting growth targets. In February, that policy was changed. Everyone is now given bonuses based on how well the company achieves its goals on a metric of social good. Another deep critique is that Facebook simply sped up the flow of information to a point where society couldn’t handle it. Now the company has started to slow it down. The company’s fake-news fighters focus on information that’s going viral. WhatsApp has been reengineered to limit the number of people with whom any message can be shared. And internally, according to several employees, people communicate better than they did a year ago. The world might not be getting more open and connected, but at least Facebook’s internal operations are. “It’s going to take real time to go backwards,” Sheryl Sandberg told WIRED, “and figure out everything that could have happened.” In early March, Zuckerberg announced that Facebook would, from then on, follow an entirely different philosophy. He published a 3,200-word treatise explaining that the company that had spent more than a decade playing fast and loose with privacy would now prioritize it. Messages would be encrypted end to end. Servers would not be located in authoritarian countries. And much of this would happen with a further integration of Facebook, WhatsApp, and Insta­gram. Rather than WhatsApp becoming more like Facebook, it sounded like Facebook was going to become more like WhatsApp. When asked by WIRED how hard it would be to reorganize the company around the new vision, Zuckerberg said, “You have no idea how hard it is.” Just how hard it was became clear the next week. As Facebook knows well, every choice involves a trade-off, and every trade-off involves a cost. The decision to prioritize encryption and interoperability meant, in some ways, a decision to deprioritize safety and civility. According to people involved in the decision, Chris Cox, long Zuckerberg’s most trusted lieutenant, disagreed with the direction. The company was finally figuring out how to combat hate speech and false news; it was breaking bread with the media after years of hostility. Now Facebook was setting itself up to both solve and create all kinds of new problems. And so in the middle of March, Cox announced that he was leaving. A few hours after the news broke, a shooter in New Zealand livestreamed on Facebook his murderous attack on a mosque. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Sandberg says that much of her job these days involves harm prevention; she’s also overseeing the various audits and investigations of the company’s missteps. “It’s going to take real time to go backwards,” she told WIRED, “and figure out everything that could have happened.” Zuckerberg, meanwhile, remains obsessed with moving forward. In a note to his followers to start the year, he said one of his goals was to host a series of conversations about technology: “I’m going to put myself out there more.” The first such event, a conversation with the internet law scholar Jonathan Zittrain, took place at Harvard Law School in late winter. 
Near the end of their exchange, Zittrain asked Zuckerberg what Facebook might look like 10 or so years from now. The CEO mused about developing a device that would allow humans to type by thinking. It sounded incredibly cool at first. But by the time he was done, it sounded like he was describing a tool that would allow Facebook to read people’s minds. Zittrain cut in dryly: “The Fifth Amendment implications are staggering.” Zuckerberg suddenly appeared to understand that perhaps mind-reading technology is the last thing the CEO of Facebook should be talking about right now. “Presumably this would be something someone would choose to use,” he said, before adding, “I don’t know how we got onto this.” Updated 5-8-2019, 2 pm EDT: This story was updated to clarify that the British publication that reported on the relationship between Facebook and Cambridge Analytica is The Observer/Guardian. Nicholas Thompson (@nxthompson) is WIRED’s editor in chief. Fred Vogelstein (@fvogelstein) is a contributing editor at the magazine. This article appears in the May issue. Subscribe now. Let us know what you think about this article. Submit a letter to the editor at [email protected]. "
1208
2019
"Elizabeth Warren Fires a Warning Shot at Big Tech | WIRED"
"https://www.wired.com/story/elizabeth-warren-break-up-amazon-facebook-google"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Issie Lapowsky Business Elizabeth Warren Fires a Warning Shot at Big Tech Andrew Harrer/Bloomberg/Getty Images Save this story Save Save this story Save The tech industry is already under siege by the press, the public, and regulators around the world. But on Friday, Democratic presidential candidate Elizabeth Warren lobbed a bomb onto that battlefield, designed to crack the fortresses that have formed around tech monopolies like Google, Facebook, and Amazon. In a Medium post , the US senator from Massachusetts laid out her presidential platform for breaking up these big tech companies, unwinding their past mergers, and preventing giant platforms like Amazon from also selling their own products on those platforms, potentially stifling competition. "As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation, and substitute their own financial interests for the broader interests of the American people," Warren wrote in the post. "To restore the balance of power in our democracy, to promote competition, and to ensure that the next generation of technology innovation is as vibrant as the last, it’s time to break up our biggest tech companies." Several of Warren's fellow candidates, including US senators Amy Klobuchar (D-Minnesota) and Bernie Sanders (Vermont), have recently spoken out about tech monopolies and mergers. But Warren's stance is by far the boldest articulation of how the country might go about dismantling the businesses that have insinuated themselves into every part of our lives. It's also the clearest sign yet that Big Tech is in big trouble going into the 2020 primaries. "This is a pace-setter," says Matt Stoller, a fellow at the anti-monopoly think tank Open Markets Institute, who applauded Warren's proposal. "This is going to be a real party debate. If you don't have a plank on tech platforms, it will be very notable." Warren's plan envisions a new category of company called a "platform utility." This would include companies "that offer to the public an online marketplace, an exchange, or a platform for connecting third parties." That includes, of course, Facebook, Google, and Amazon. Any platform utility that makes at least $25 billion in annual revenue would be prohibited from simultaneously owning and participating on that platform. It would also have to commit to "meet a standard of fair, reasonable, and nondiscriminatory dealing with users," though it's still unclear how that would be defined. This means, for instance, that Amazon's private-label product division, called Amazon Basics, would have to be spun off into its own company or be prohibited from selling on Amazon's marketplace. Google's ad exchange and Google Search would also have to be split up under such a policy. Companies that make less than $25 billion a year wouldn't have to split up, but would still be monitored for fairness and nondiscrimination. 
Warren also wants to unwind what she calls "anti-competitive mergers," specifically naming Facebook's acquisitions of Instagram and WhatsApp, Amazon's acquisitions of Whole Foods and Zappos, and Google's acquisitions of Waze, Nest, and DoubleClick. Though it wasn't mentioned in the post, Warren's campaign also confirmed to WIRED that Google's acquisition of YouTube would be reviewed, and that YouTube could be considered a platform utility in its own right. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Finally, Warren seeks to prevent these so-called "platform utilities" from sharing data with third parties. That would simultaneously shift Facebook and Google's position as the center of the data economy and also go a long way toward protecting user privacy. Several tech advocacy groups jumped to condemn Warren's proposal as anti-consumer. "Consumers now benefit greatly from having one Amazon, one Google, and one Facebook," Rob Atkinson, president of the Information Technology and Innovation Foundation, said in a statement. "The goal of competition policy should be to enhance consumer welfare, not penalize companies for earning market share and operating at scale—yet that is exactly what the Warren proposal would do." Ed Black, president and CEO of the Computer and Communications Industry Association, said that while he agrees competition enforcement is important, "this unwarranted and extreme proposal, which focuses on a highly admired and highly performing sector, is misaligned with progressive values, many of which are shared within the tech industry." Others were more reticent. The Internet Association, which represents Facebook, Google, and Amazon, declined to comment on the plans. According to Frank Pasquale, a law professor at the University of Maryland and coauthor of the book The Black Box Society: The Secret Algorithms That Control Money and Information , these reactions from the industry ignore a key principle of market dynamics. "The more competitors that have a chance on proprietary marketplaces, the better off consumers are in terms of quality, variety, and price," he says. "I don't think you can say whatever Big Tech wants is best for consumers." Pasquale says the country's regulators need to reassess the definition of consumer welfare, which guides antitrust decisions in the United States. It's what has traditionally led to the assumption that lower prices are always better for consumers. But, Pasquale argues, there are other aspects of consumer welfare to consider. "The newer forms of antitrust coming out in Europe, particularly with respect to German authorities, say that privacy is a social value," Pasquale says. Across the country, the past few years have seen a growing understanding that the tech industry's interests and the interests of the public aren't always aligned. Warren's declaration of war with tech monopolies says as much about her as it does about the state of Silicon Valley's reputation. Warren has been one of Congress' most vocal tech critics for years, having delivered impassioned speeches on breaking up big tech in 2016 and 2017. Given that context, she can hardly be charged with opportunism. And yet, the timing also seems fortuitous. 
For all the talk of reining in Wall Street that took place in the 2016 Democratic primaries, the tech industry's unchecked power was scarcely mentioned. Now, just three years later, it's hard to escape. That may make it feel less risky for Democrats to take on an industry that has disproportionately swung left with both its campaign donations and its votes. This week, Senator Klobuchar told The Washington Post that the United States has "a major monopoly problem," and that the biggest one is in the tech sector. Even Senator Cory Booker, who as mayor of Newark worked closely with Facebook CEO Mark Zuckerberg and received substantial backing from tech industry employees in 2018, recently spoke at an event about corporate monopolies, saying, "It’s no coincidence that after the most sustained period of merger activity in American corporate history, entrepreneurship has reached a 40-year low." Warren's proposal is undoubtedly the most aggressive, but it's clear she won't be the only candidate in the race pushing for a national discussion on these issues. That includes President Trump, who has accused Facebook and Google of being biased against conservatives and is presently engaged in a mud-slinging battle with Amazon CEO Jeff Bezos (or as the president recently called him, "Jeff Bozo"). If tech giants think the years since the 2016 election have been rough on them, the path to 2020 is about to get a whole lot bumpier. "
1209
2017
"Trump's Muslim Ban Leaves Detainees Stranded and Visa Holders Uncertain | WIRED"
"https://www.wired.com/2017/01/trumps-refugee-ban-direct-assault-civil-liberties"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Issie Lapowsky Andy Greenberg Culture Trump's Ban Leaves Refugees in Civil Liberties Limbo Protestors rally during a demonstration against the Muslim immigration ban at the John F. Kennedy International Airport in New York, January 28, 2017. Stephanie Keith/Getty Images Save this story Save Save this story Save At Terminal Four of New York’s John F. Kennedy airport Saturday afternoon, hundreds of protesters chanted in the snow for hours, pushed out of the chaotic lobby by police. Inside the terminal, volunteer lawyers and immigrant advocates scoured the crowd, asking one family after another if they were waiting for a loved one or relative who had failed to appear. Deeper inside the airport’s bureaucratic bowels, 11 foreign travelers waited in the legal no-man’s-land of the Customs and Border Protection office. They were separated from one another and detained for over 16 hours, according to the brief updates CBP provided immigrant rights groups, and had no access to lawyers. They’d been forbidden from using their electronic devices, legal advocates feared, even as their phones and social media accounts were searched for evidence of wrongdoing. Even their names remained largely unknown. “We’re not getting any information on who they are,” said Murad Awawdeh, an activist with the New York Immigration coalition who was on the scene. “But we’re trying to continue the pressure here today to make sure these people get out. And we want to make sure that Donald Trump sees that people won’t stand for this. This is outrageous. How are we closing our borders to the most vulnerable people in the world?” Witness day one of President Donald Trump’s Muslim ban. Over at JFK’s terminal one, a 26-year-old man had spent most of the day trying, and failing, to speak with his aunt, a Yemeni citizen whose flight from Jeddah, Saudi Arabia had landed at 11:30 that morning. CBP had detained her upon arrival. Mohammed, who declined to give his last name, said that his aunt is 68 years old, and suffers from both diabetes and high blood pressure. She had traveled to the US to move in with her son, a citizen, and had a family visa. Now, Mohammed says, CBP plans to deport her at 8pm Saturday night, and won’t let him see her, talk to her, or share information about her medical condition. She’s had no access to a lawyer, and Mohammed worries for her well-being. “It’s insane,” Mohammed said. “She’s not coming here to do anything bad. She’s sick. We just wanted her to come live the American dream.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg She was just one of the unknown number of travelers subject to the sweeping, yet vaguely worded, executive order that institutes “extreme vetting” processes for refugees from seven Muslim countries. 
It bars refugees from Iran, Iraq, Libya, Somalia, Sudan, and Yemen from entering the United States for the next 120 days. It also creates a 90-day ban on new visas for immigrants from those countries, and bans Syrian refugees indefinitely. But while it purports to primarily impact newcomers, the first hours of its implementation revealed the executive order to be much broader than that. It impacts both “immigrants and nonimmigrants,” meaning it includes even green-card holders who have been living in the United States for decades, but who may currently be traveling abroad. According to the White House press pool report, one senior official confirmed that green-card holders who are currently outside the United States need a case-by-case waiver to return. The Wall Street Journal has also confirmed that even US citizens with dual citizenship in any of these countries will also be barred entry. Overnight, reports from immigrants around the world began flowing in, according to Mana Yegani, a Houston-based lawyer who represents Iranian immigrants. She and other lawyers in her network began compiling them in a Google Doc created by the American Immigration Bar Association. There were reports from Amsterdam and Frankfurt, where airlines were refusing to board anyone with an Iranian passport on US-bound flights. In Istanbul, a family of nine, including eight visa holders and one green card holder, was escorted off their plane. “Up until 8:30 pm people were coming through with visas and green cards, no problem,” Yegani says. But once the text of the executive order was released, she says, “We saw disaster occur.” Reports of legal visa holders being detained stunned civil liberty and immigrant rights groups alike. Saturday morning, the American Civil Liberties Union filed a petition for writ of habeas corpus on behalf of Iraqi refugees Hameed Khalid Darweesh and Haider Sameer Abdulkhaleq Alshawi, who were among the JFK detainees. It argues that because they and other asylum seekers are on US soil, the government is required under the Immigration and Nationality Act to at least grant them an asylum hearing. “We don’t think there’s a legitimate argument that the executive order can override the asylum laws,” says Lee Gelernt, a lawyer with the ACLU. He acknowledges the situation may be different for people who are not currently in the United States. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The ACLU’s suit also does not address the constitutionality of the executive order itself. It would be rendered moot, in other words, if the Trump administration were to order all the current detainees released. But Gelernt says, “The ACLU is prepared to go to court to challenge the exec order more generally.” On Monday, the Council on American-Islamic Relations will also announce its own constitutional challenge against the order. The nature of the immigration system makes this a difficult legal battle to fight, particularly for those already in CBP custody. “It’s all shrouded in secrecy. You’re detained for hours and questioned outside the presence of an attorney,” says Jackie Esposito, an immigration attorney and one of the JFK protest’s organizers. 
“It’s one of the major flaws of our immigration system that there’s no due process.” “This is a system that has trafficked in a lack of transparency and human devastation,” says Daniel Altschuler, director of civic engagement and research for the immigrant rights group Make the Road New York, who attended the JFK rally. “The actions of the president yesterday will only make that worse.” The ACLU will focus on the executive order’s disproportionate impact on Muslims over Christians. The White House maintains this is different from the Muslim ban President Trump promised on the campaign trail, but in practice, it amounts to precisely that, Gelernt says. “It’s favoring one religion over another,” he says. “That’s antithetical to the basic principles of the constitution.” It also runs counter to a 1965 law which states that “no person shall receive any preference or priority or be discriminated against in the issuance of an immigrant visa because of the person’s race, sex, nationality, place of birth, or place of residence.” Meanwhile, the full impact remains to be seen, but some prominent oppositional voices have responded quickly. Google CEO Sundar Pichai urged foreign employees to return to the US as soon as possible in a memo obtained by Bloomberg. “It’s painful to see the personal cost of this executive order on our colleagues,” Pichai wrote. An estimated 187 Google employees could be denied re-entry to the US. Apple CEO Tim Cook said in an email to employees that it is “not a policy we support.” Others, including Reza Zadeh, a Stanford professor and CEO of the machine learning startup Matroid, took to Twitter to express concern over their own futures in this country. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg And while Republican politicians have remained largely silent, many Democrats have offered full-throated censure. “This is ill-conceived, misguided policy, and it undermines the collaboration we need to have with Muslim countries in fighting ISIL,” says New York Congresswoman Nydia Velazquez 1, who was working to negotiate the release of detainees at JFK. “But it’s also arbitrary, with no clear guidelines, and it’s creating confusion for not only the families, but for the [customs] agents.” Saturday marked the beginning of what will no doubt be a long battle over the constitutionality of the order and the civil rights of each individual who’s been refused entry. But as his fellow detainees waited inside, no doubt worried and wondering about their fates, Darweesh, the first refugee to be released, was steadfast in his belief that America is still welcome place for people like him. “Thank you to the people who came to support me,” he said. “This is the soul of America, this is the land of freedom.” Update: A federal judge for the Eastern District Court of New York issued a temporary stay on Trump’s executive order tonight, according to Dale Ho , Director of ACLU’s Voting Rights Project. The stay, which came in response to a habeas corpus petition filed by the ACLU, means that anyone already in custody in the US under the order cannot be deported. 
Correction on 1/28/17 at 9:30 pm ET: An earlier version of this story misspelled Congresswoman Nydia Velazquez’s name. "
1210
2016
"The Tech Exec Who Wants the Cloud to Be Google's Moneymaker | WIRED"
"https://www.wired.com/2016/03/tech-exec-wants-cloud-googles-moneymaker"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business The Tech Exec Who Wants the Cloud to Be Google's Moneymaker Google Save this story Save Save this story Save Google really wants to be a cloud company. But as the Internet giant best known for search begins a two-day conference in San Francisco dedicated to its cloud computing business, it's faced with the reality that Amazon got there first. Despite running what is likely the world's largest and most advanced computer network ---the global network of data centers and machines that underpins every one of the company's myriad online services--- Google was rather slow in allowing other businesses to build and run their own software and services atop this online empire. That's called cloud computing, and for Amazon, it's now a $9.6 billion-a-year business. 'It's all about us taking our expertise and our capabilities and then going and understanding what the possibilities are.' Diane Greene, Google According to Morgan Stanley, Google's cloud business pulls in closer to $500 million a year, a fraction of the Amazon empire. The potential market is still enormous---tech research firm Forrester predicts the cloud market will exceed $191 billion by 2020---but there's an added rub. Google isn't a company that's geared towards selling stuff to other businesses. Larry Page and Sergey Brin built Google on free services used by the world's consumers and, yes, ads that appear on those free services. Nonetheless, some at Google believe the cloud could be an even bigger business for the company than online ads. So, earlier this year, hoping to change the culture of its cloud computing group, Google brought in a ringer: Diane Greene. In 1998, the same year that Page and Brin founded Google, Greene and her husband, Stanford University professor Mendel Rosenblum, founded a company called VMware. With Greene serving as the company's CEO, VMware offered a technology called virtualization, which allowed businesses to run many virtual computers on a single physical machine. This sparked a revolution inside the computer data center: Businesses could get far more out of each machine---not to mention all the electricity needed to power their machines. And eventually, at places like Amazon, virtualization also gave rise to cloud computing. With its cloud service, Amazon is merely offering virtual machines to the world at large, via the Internet. In 2008, amid ongoing disagreements between Greene and the VMware board, the company fired her. And in the years since, VMware largely missed out on the cloud revolution, focusing more on software that businesses could use inside their own data centers rather than pushing toward public cloud services that businesses could use without setting up their own hardware. It was yet another example of that old Silicon Valley conundrum: the innovator's dilemma. Should VMware venture wholeheartedly into a new, unproven business that could potentially cannibalize its old, proven business? For VMware, the answer was "No." But in 2012, Greene resurfaced on the Google board. 
And after taking over not only the Google cloud group but the group that runs Google Apps, a set of pre-built online office applications, she's trying to take Google to a place where VMware should have gone. For Google, the trick will lie in combining the best of Google with the best of VMware, a company that very much knew how to sell stuff to big businesses. Greene assumed her new role at Google only about 90 days ago, but she has already started to transform the company, combining the engineering and the sales staff inside the cloud computing group while working to pump new money and new ideas into the operation. As Bloomberg Businessweek reported yesterday, Greene and company plan to erect 12 new cloud data centers over the next 18 months. And to further expand its cloud, Google will draw on the work of BeBop, a startup that Greene and Rosenblum were building before she took control of the Google cloud group. In her new role, Greene will work closely with Urs Hölzle, the former University of California, Santa Barbara, professor who joined Google in 1999 as employee number eight and oversaw the creation of the company's massive data center network. In fact, the two have worked together for quite a while in a less official capacity, discussing the future of the Google cloud while walking their dogs on weekends. They share the same enormous vision. Now, we'll see if they can achieve it. Today's conference will serve as a kind of rechristening of Google's cloud business now that Greene has taken the reins. We spoke to Greene prior to the conference as she explained the path to this new Google and described where she hopes it will go in the future. (The conversation has been edited for clarity). From the outside, this seems like a natural next step for you after what you built at VMware? Do you see it that way? Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Well, it's not anything I ever anticipated. It sort of surprised me that I found myself doing this. But it is extremely natural. At VMware, we did the virtualization, which is now used extensively across the cloud. Virtualization layers---and containers ---are what facilitated the cloud originally. And it certainly would have been a natural progression at VMware had I stayed there. After leaving VMware, how did you wind up on the Google board? I met Larry and Sergey when they were still at Stanford. I live on the Stanford campus. My husband is a professor there, so I knew them and I thought the world of them, even when they were graduate students. (Google board member and Stanford president) John Hennessey I know through Stanford. Paul Otellini , the former CEO of Intel, I worked with while I was still with VMware. I knew Ram Shriram through the tech industry. I basically knew all these people (on the Google board), and they reached out to me. Then, once I joined the Google board, I started getting involved on the cloud side, working with Urs Hölzle---fairly lightly at first. How did you end up taking control of the cloud group? I was actually helping them look for someone to run the organization for a few years. Larry Page had suggested that I run it a while ago, and I thought: 'Well, that's a nice gesture. Thank you.' 
But Urs and I got so that we were talking a lot while walking our dogs together on the weekend, and he kept bringing it up. It's such an exciting space. It's such an enormous opportunity. Google's assets---Google's technology and technologists---are, I think, unparalleled. So I said: 'Okay, I'm in.' I was also committed to working on my startup, and as Urs and I talked, we started seeing a pretty strong strategic fit with Google. We discussed it with Sundar (Pichai, the new Google CEO), and we decided it all made sense. BeBop was in stealth mode when you joined the Google cloud group and Google acquired the startup. Can you now say what BeBop aims to do and how it plays into Google's ambitions? Our hand was kind of forced by the acquisition. I hadn't planned to talk about BeBop for a while. It's a pretty ambitious project. But basically, we're developing ways of quickly building very rich enterprise applications, which is what you want to do in the cloud. We've invested very heavily in design---human-centered design, how humans look at this. Eventually, we were able to build a real center for design excellence at BeBop, which I found really exciting. You have to recognize how important it is---and how powerful it is---to a have a good user experience. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Can you get more specific? This is where many companies aim to go. Amazon is adding all sorts of pre-built applications to its cloud. Companies like Workday are redesigning offices applications for the cloud age. I'm not wild about getting into too many details. I like to under-promise and over-deliver. Few would deny that Google's technology underpinning the Google cloud is well ahead of the competition. But at the same time, few would argue that Google is ahead of rivals when it comes to actually selling stuff to big businesses. Google is an engineering company. And it's an advertising company. It's geared towards building all sorts of new consumer tech and then driving revenue through advertising. But to find success with the Google cloud, you must do something very different. Do you have to change the culture of the company? Do you have make it more like VMware? I do---to some extent. VMware had an extraordinarily strong engineering culture. But I used to joke with the engineers: 'We could let all of you go and still pull in revenue for a good two years.' And then I would joke with the salespeople: 'We could let all the engineers go, but then, after two years, you'd be out of a job.' You need both. At VMware, the engineers worked very closely with our customers, closely with the field, and it was exciting for everyone. So, one of things I did here at Google when I arrived was to combine sales, marketing, engineering, and product. This is powerful. Engineers love having an impact on what customers can do, and by bringing (the Google cloud staff) together in this way, we can work towards that, feeding off each other and moving more quickly. But can't it also be hard to combine two disparate organizations in this way? 
It helps to build roadmaps---roadmaps that include new inventions from the engineers, that have features our customers need, that have general horizontal improvements, including security and performance. When you put all that together---and prioritize it---it's a wonderful way of bringing everyone together. People love the clarity of it. It's about getting very high level about what we're trying to do. You have to keep going up a level until you're all aligned. Then you can come back down and implement the details. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Urs has said that the Google cloud can generate more revenue for the company than ads. That's an easy thing to say. But it's an extraordinarily large goal. How will you actually reach that? We're investing heavily in the business side, so we can go out and support our customers. The cloud is really at the beginning. As big as it already is, it's in the very early stages. Once you get everything in the cloud, what it enables for a company is unbelievable. You can start applying machine learning and intelligence to everything you do. I don't think we even know where this is going to go. It's all about us taking our expertise and our capabilities and then going and understanding what the possibilities are for other companies, and this requires investment. We're very serious about that. VMware was a very different company from Google. It was focused on business computing from the beginning. But as Google cloud guru Eric Brewer has pointed out , the two companies were cut from at least some of the same cloth. They were created at around the same time, just as the country's great research labs were dying, and many of the top minds at these labs, including those run by DEC, moved into Google and VMware. "At the time of the bubble burst in 2001, when everyone was downsizing, including DEC, the main two high-tech companies that were hiring were Google and VMware," Brewer once told me. "Because of the crazy lopsidedness of that supply and demand, both companies hired many truly great people and both have done well in part because of that factor." Do you see this phenomenon from the inside? Absolutely. In the early days, we had parties with Google. Then we really started competing with them for all the top people out of the DEC Western Research Lab. We got some of them, and Google got some of them. And then weren't having so many parties together. But we were very similar culturally. It was all about engineering excellence. Google went in a consumer direction, and VMware was system infrastructure. But there are a lot of parallels and similarities. And now, at least symbolically, you are bringing the two companies back together? In a certain way. I'm really enjoying it. I almost wish I had done it a few years earlier. But now is a great time. 
"
1211
2018
"To an AI, Every Eye Tells a Story | WIRED"
"https://www.wired.com/story/wired25-sundar-pichai-r-kim-artificial-intelligence-vision"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Anthony Lydgate Science To an AI, Every Eye Tells a Story Sundar Pichai Michelle Groskopf Save this story Save Save this story Save Company Alphabet Google End User Consumer Sector Health care Sundar Pichai , CEO of Google R. Kim , Chief medical officer of Aravind Eye Hospital October 2018. Subscribe to WIRED. Plunkett + Kuhr Designers Ten cents won’t get you much in the American health care system—maybe a Band-Aid, if your HMO is feeling generous—but in parts of India, where nearly a quarter of the world’s blind population lives, it will cover the cost of a vision screening. Across the state of Tamil Nadu, the Aravind Eye Care System has set up a network of rural teleconsultation centers, each one supervised by a trained technician. When a patient comes in, the technician performs a basic workup, snaps photos of the inside of the eye, and sends a digital report to one of Aravind’s doctors, who calls in a diagnosis and a course of treatment. According to R. Kim, chief medical officer at Aravind’s hospital in Madurai, nearly 2,000 patients take advantage of these services every day. Yet he foresees an even breezier ophthalmic future, one powered by artificial intelligence : “You put a coin in a vending machine at the airport or the railway station, it takes pictures, and within a few seconds it tells you ‘Hey, you have this problem in your eye.’ ” AI can already screen for diabetic retinopathy by analyzing a retinal scan; in the future, the technology could be used to predict the risk for heart disease—and even dementia. Courtesy of R. Kim Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Four years ago, a joint team of researchers from Google and Aravind began work on an automated tool for detecting diabetic retinopathy, one of the leading causes of blindness worldwide. (India is home to 74 million people with diabetes.) First, they trained an algorithm to recognize the signs of the disease—distinctive spots and bleeding in the retina, the light-sensing tissue at the back of the eye. Then they began feeding it new data from Aravind’s centers. When supplied with a patient’s retinal photos, the algorithm can spit out a diagnosis in a matter of seconds. For now, Aravind’s doctors still check its work, but soon—once it receives regulatory approval—the AI will go solo. Is Kim worried about losing his job to automation? “Not really,” he says. The easier screenings get, the more patients will get screened. “I have a feeling that we’ll be put to more work when AI comes into play, because it’s going to detect many more problems,” Kim says. Similar tools could soon spot glaucoma and other vision-killing conditions. Silicon Valley trope that is every bit as absurd as people think : “The ubiquity of kale. It really is everywhere. 
I don’t believe people who say they like it.” Although Aravind’s rural vision centers are modestly equipped, they still require specialized gear. A retinal camera will set you back thousands of dollars, and it’s not something you’d want to lug into, say, a pop-up clinic or a refugee camp. But there may be a cheaper, more portable solution. Two years ago, a group of researchers published an article in the Indian Journal of Ophthalmology describing a remarkably effective DIY retinal camera. The ingredients: a smartphone with a plastic cover, a relatively inexpensive condensing lens, and about a dollar’s worth of PVC pipe, sandpaper, and electrical tape. Long before state-of-the-art ophthalmic vending machines begin dotting the world’s airports and railway stations, a setup like this could enable vision screenings on the fly. You take a picture, upload it to the cloud, and get your diagnosis in moments. The career choice I didn’t pursue : “Architecture. I still enjoy playing with Legos!” Eyes have been called many things by many people—the interpreters of the mind (Cicero), the lamps of the body (Saint Matthew), the windows to the soul (anyone with a keyboard). In strictly neurological terms, though, your retinas are extensions of your central nervous system. They’re rooted in the brain, and they have all sorts of stories to tell about what’s going on beneath your skull. Earlier this year, for instance, Google debuted an algorithm that can identify a person’s sex and smoking status and predict the five-year risk of a heart attack, all on the basis of retinal imagery. (The same AI can also “infer ethnicity.”) As Kim notes, what makes these results so exciting is that the algorithm picked up on problems that the people who trained it couldn’t. “This is not something the human eye can see at this point,” he says. “There’s something beyond that the machine is seeing.” Medical researchers are actively studying the retina as an early-warning system for dementia, multiple sclerosis, Parkinson’s, Alzheimer’s, and even schizophrenia. To understand the body, look to the eye. This article appears in the October issue. Subscribe now.
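The screening loop the article describes — a model trained on labeled retinal photographs that returns a diagnosis for each new image in seconds — can be illustrated with a minimal inference sketch. This is not Google's or Aravind's actual system; the network choice, label set, checkpoint file, and image path below are hypothetical placeholders under that stated assumption.

# Minimal sketch of retinal-image screening inference (hypothetical model and paths).
# Assumes a ResNet-18 that was fine-tuned elsewhere on labeled fundus photographs.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["no_retinopathy", "referable_retinopathy"]  # hypothetical label set

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.load_state_dict(torch.load("retina_classifier.pt", map_location="cpu"))  # hypothetical checkpoint
model.eval()

def screen(image_path: str) -> str:
    """Return a label and confidence for a single fundus photograph."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    idx = int(probs.argmax())
    return f"{LABELS[idx]} ({probs[idx].item():.1%} confidence)"

print(screen("patient_0001_fundus.jpg"))  # hypothetical image path

The point of the sketch is only the shape of the pipeline — image in, preprocessing, one forward pass, a label and confidence out — the step that a technician would otherwise route to a remote doctor.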
"
1212
2018
"When Bots Teach Themselves to Cheat | WIRED"
"https://www.wired.com/story/when-bots-teach-themselves-to-cheat"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business When Bots Teach Themselves to Cheat Dalbert B. Vilarino Save this story Save Save this story Save Application Ethics Safety Games Company Alphabet Sector IT Research Once upon a time, a bot deep in a game of tic-tac-toe figured out that making improbable moves caused its bot opponent to crash. Smart. Also sassy. Moments when experimental bots go rogue—some would call it cheating—are not typically celebrated in scientific papers or press releases. Most AI researchers strive to avoid them, but a select few document and study these bugs in the hopes of revealing the roots of algorithmic impishness. “We don’t want to wait until these things start to appear in the real world,” says Victoria Krakovna, a research scientist at Alphabet's DeepMind unit. Krakovna is the keeper of a crowdsourced list of AI bugs. To date, it includes more than three dozen incidents of algorithms finding loopholes in their programs or hacking their environments. The specimens collected by Krakovna and fellow bug hunters point to a communication problem between humans and machines: Given a clear goal, an algorithm can master complex tasks, such as beating a world champion at Go. But even with logical parameters, it turns out that mathematical optimization empowers bots to develop shortcuts humans didn’t think to deem off-­limits. Teach a learning algorithm to fish, and it might just drain the lake. Gaming simulations are fertile ground for bug hunting. Earlier this year, researchers at the University of Freiburg in Germany challenged a bot to score big in the Atari game Qbert. Instead of playing through the levels like a sweaty-palmed human, it invented a complicated move to trigger a flaw in the game, unlocking a shower of ill-gotten points. “Today’s algorithms do what you say, not what you meant,” says Catherine Olsson, a researcher at Google who has contributed to Krakovna’s list and keeps her own private zoo of AI bugs. These examples may be cute, but here’s the thing: As AI systems become more powerful and pervasive, hacks could materialize on bigger stages with more consequential results. If a neural network managing an electric grid were told to save energy—DeepMind has considered just such an idea—it could cause a blackout. “Seeing these systems be creative and do things you never thought of, you recognize their power and danger,” says Jeff Clune, a researcher at Uber’s AI lab. A recent paper that Clune coauthored, which lists 27 examples of algorithms doing unintended things, suggests future engineers will have to collaborate with, not command, their creations. “Your job is to coach the system,” he says. Embracing flashes of artificial creativity may be the solution to containing them. Infanticide : In a survival simulation, one AI species evolved to subsist on a diet of its own children. Space War : Algorithms exploited flaws in the rules of the galactic videogame Elite Dangerous to invent powerful new weapons. Body Hacking : A four-legged virtual robot was challenged to walk smoothly by balancing a ball on its back. 
Instead, it trapped the ball in a leg joint, then lurched along as before. Goldilocks Electronics : Software evolved circuits to interpret electrical signals, but the design only worked at the temperature of the lab where the study took place. Optical Illusion : Humans teaching a gripper to grasp a ball accidentally trained it to exploit the camera angle so that it appeared successful—even when not touching the ball. This article appears in the August issue. Subscribe now. "
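The pattern running through these examples — an optimizer that satisfies the literal objective while defeating its intent — is easy to reproduce in miniature. The toy game, its scoring bug, and the random-search loop below are invented purely for illustration; they are not drawn from Krakovna's list or any study cited above.

# Toy illustration of specification gaming: an optimizer maximizes the stated
# score and converges on a bug instead of playing the game as intended.
# The game, the bug, and the search procedure are all hypothetical.
import random

def play(actions):
    """Score a sequence of actions in a tiny buggy game.

    Intended play: action 1 walks toward the exit, worth 10 points on arrival.
    Bug: repeating action 3 three times in a row pays out 100 points each time,
    the kind of loophole no one meant to allow.
    """
    score, position, streak = 0, 0, 0
    for a in actions:
        if a == 1:                      # intended behavior: advance toward the exit
            position += 1
            if position == 10:
                score += 10             # modest reward for finishing the level
        streak = streak + 1 if a == 3 else 0
        if streak == 3:                 # the unintended exploit
            score += 100
            streak = 0
    return score

def random_search(episodes=5000, horizon=20, seed=0):
    """Hill-climb over action sequences, keeping whatever scores highest."""
    rng = random.Random(seed)
    best = [rng.choice([1, 2, 3]) for _ in range(horizon)]
    for _ in range(episodes):
        candidate = best[:]
        candidate[rng.randrange(horizon)] = rng.choice([1, 2, 3])
        if play(candidate) >= play(best):
            best = candidate
    return best

policy = random_search()
print("discovered policy:", policy)
print("score:", play(policy))   # the search settles on spamming action 3

Run as-is, the search abandons the walk-to-the-exit strategy almost immediately, because the loophole pays ten times more than finishing the level — the same dynamic, in miniature, as the Qbert exploit described above.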