id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (stringlengths 7–208) | url (stringlengths 20–263) | text (stringlengths 852–324k) |
---|---|---|---|---|
1513 | 2012 |
"Google Puts Its Virtual Brain Technology to Work | MIT Technology Review"
|
"https://www.technologyreview.com/2012/10/05/18401/google-puts-its-virtual-brain-technology-to-work"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google Puts Its Virtual Brain Technology to Work By Tom Simonite archive page This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “ Self-Taught Software ”). That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit.
Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind—and the network is said to have learned something.
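A minimal sketch of the mechanism described above: exposure to data nudges the connection weights of a simulated neuron until its responses track the training signal. The single-neuron setup, data, and learning rate are illustrative only, not Google’s system.

```python
import numpy as np

# A single artificial "neuron": output = sigmoid(w . x + b)
rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the neuron should learn to fire on inputs whose values are large.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.7, 0.9]])
y = np.array([0.0, 1.0, 0.0, 1.0])

for step in range(1000):
    pred = sigmoid(X @ w + b)
    err = pred - y                     # how wrong the current connections are
    w -= 0.5 * (X.T @ err) / len(y)    # nudging the connection weights...
    b -= 0.5 * err.mean()              # ...is the "learning" the article describes

print(np.round(sigmoid(X @ w + b), 2))  # outputs now track the 0/1 training signal
```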
Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face detection. Google’s engineers have found ways to put more computing power behind the approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations.
The company’s neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.
Google is now using these neural networks to recognize speech more accurately, a technology increasingly important to Google’s smartphone operating system, Android, as well as the search app it makes available for Apple devices (see “ Google’s Answer to Siri Thinks Ahead ”). “We got between 20 and 25 percent improvement in terms of words that are wrong,” says Vincent Vanhoucke , a leader of Google’s speech-recognition efforts. “That means that many more people will have a perfect experience without errors.” The neural net is so far only working on U.S. English, and Vanhoucke says similar improvements should be possible when it is introduced for other dialects and languages.
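The “20 and 25 percent improvement in terms of words that are wrong” is a relative reduction in word error rate, not an absolute one. A quick illustration with hypothetical figures (Google did not publish absolute error rates here):

```python
# Hypothetical numbers for illustration only.
baseline_wer = 0.16            # e.g. 16 words wrong per 100 words recognized
relative_improvement = 0.23    # midpoint of the quoted 20-25% range

new_wer = baseline_wer * (1 - relative_improvement)
print(f"{baseline_wer:.0%} -> {new_wer:.1%} of words wrong")  # 16% -> 12.3%
```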
Other Google products will likely improve over time with help from the new learning software. The company’s image search tools, for example, could become better able to understand what’s in a photo without relying on surrounding text. And Google’s self-driving cars (see “ Look, No Hands ”) and mobile computer built into a pair of glasses (see “ You Will Want Google’s Goggles ”) could benefit from software better able to make sense of more real-world data.
The new technology grabbed headlines back in June of this year, when Google engineers published results of an experiment that threw 10 million images grabbed from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause.
“Most people keep their model in a single machine, but we wanted to experiment with very large neural networks,” says Jeff Dean , an engineer helping lead the research at Google. “If you scale up both the size of the model and the amount of data you train it with, you can learn finer distinctions or more complex features.” The neural networks that come out of that process are more flexible. “These models can typically take a lot more context,” says Dean, giving an example from the world of speech recognition. If, for example, Google’s system thought it heard someone say “I’m going to eat a lychee,” but the last word was slightly muffled, it could confirm its hunch based on past experience of phrases because “lychee” is a fruit and is used in the same context as “apple” or “orange.” Dean says his team is also testing models that understand both images and text together. “You give it ‘porpoise’ and it gives you pictures of porpoises,” he says. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.” A next step could be to have the same model learn the sounds of words as well. Being able to relate different forms of data like that could lead to speech recognition that gathers extra clues from video, for example, and it could boost the capabilities of Google’s self-driving cars by helping them understand their surroundings by combining the many streams of data they collect, from laser scans of nearby obstacles to information from the car’s engine.
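Dean’s “lychee” example rests on words that appear in similar contexts ending up with similar learned representations. A toy sketch of that idea, with hand-made vectors standing in for what a real model would learn from data (the numbers and candidate words are invented):

```python
import numpy as np

# Hand-made 3-d vectors; words used in similar contexts (eat, apple, orange, lychee)
# sit close together, while an unrelated word (leash) does not.
emb = {
    "eat":    np.array([0.70, 0.20, 0.10]),
    "apple":  np.array([0.90, 0.10, 0.00]),
    "orange": np.array([0.80, 0.20, 0.10]),
    "lychee": np.array([0.85, 0.15, 0.05]),
    "leash":  np.array([0.10, 0.90, 0.30]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

context_vec = emb["eat"]   # the clearly heard part of "I'm going to eat a ..."

# Two acoustically plausible readings of the muffled final word:
for candidate in ["lychee", "leash"]:
    print(candidate, round(cosine(emb[candidate], context_vec), 2))
# "lychee" fits the context better, so the recognizer prefers it.
```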
Google’s work on making neural networks brings us a small step closer to one of the ultimate goals of AI—creating software that can match animal or perhaps even human intelligence, says Yoshua Bengio , a professor at the University of Montreal who works on similar machine-learning techniques. “This is the route toward making more general artificial intelligence—there’s no way you will get an intelligent machine if it can’t take in a large volume of knowledge about the world,” he says.
In fact, the workings of Google’s neural networks operate in similar ways to what neuroscientists know about the visual cortex in mammals, the part of the brain that processes visual information, says Bengio. “It turns out that the feature learning networks being used [by Google] are similar to the methods used by the brain that are able to discover objects that exist.” However, he is quick to add that even Google’s neural networks are much smaller than the brain, and that they can’t perform many things necessary to intelligence, such as reasoning with information collected from the outside world.
Dean is also careful not to imply that the limited intelligences he’s building are close to matching any biological brain. But he can’t resist pointing out that if you pick the right contest, Google’s neural networks have humans beat.
“We are seeing better than human-level performance in some visual tasks,” he says, giving the example of labeling house numbers that appear in photos taken by Google’s Street View car, a job that used to be farmed out to many humans.
“They’re starting to use neural nets to decide whether a patch [in an image] is a house number or not,” says Dean, and they turn out to perform better than humans. It’s a small victory—but one that highlights how far artificial neural nets are behind the ones in your head. “It’s probably that it’s not very exciting, and a computer never gets tired,” says Dean. It takes real intelligence to get bored.
"
|
1514 | 2018 |
"DeepMind’s AI will accelerate drug discovery by predicting how proteins fold | MIT Technology Review"
|
"https://www.technologyreview.com/2018/12/03/138830/deepminds-ai-system-will-accelerate-drug-discovery-by-predicting-how-proteins"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts DeepMind’s AI will accelerate drug discovery by predicting how proteins fold By Karen Hao archive page Google DeepMind has developed a tool to predict the structure of proteins from their genetic sequence, marking a noteworthy example of using AI in the process of scientific discovery.
How it works: The system, called AlphaFold, models the complex folding patterns of long chains of amino acids, based on their chemical interactions, to form the three-dimensional shape of a protein. This is known as the “ protein folding problem ,” which has challenged scientists for decades.
Why it matters: The shape of a protein dictates its function in the body, so being able to predict a protein’s structure allows scientists to synthesize new protein-based drugs to treat diseases or new enzymes to break down pollutants in our environment.
Training data: The DeepMind team trained deep neural networks to predict the distances between pairs of amino acids and the angles between their chemical bonds, using the massive amounts of data available from genomic sequencing. The resulting system generates highly accurate protein structures, exceeding previous prediction techniques, the team says.
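A heavily simplified sketch of that setup: a small network maps features for each pair of amino acids to a predicted distance between them. The feature size, layer widths, and random (untrained) weights below are placeholders, not DeepMind’s architecture, and AlphaFold also predicts bond angles and trains on known structures.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 8     # toy protein length (number of amino acids)
F = 16    # invented per-pair feature size (e.g. derived from sequence data)
pair_features = rng.normal(size=(L, L, F))

# A toy two-layer network mapping each residue pair's features to a distance;
# the real system is far deeper and is trained on experimentally solved structures.
W1, W2 = rng.normal(size=(F, 32)), rng.normal(size=(32, 1))

def predict_distances(feats):
    h = np.maximum(feats @ W1, 0.0)      # ReLU hidden layer
    d = np.squeeze(h @ W2, axis=-1)      # one raw score per residue pair
    d = np.abs(d)                        # distances are non-negative
    return (d + d.T) / 2                 # distance i->j must equal j->i

dist_map = predict_distances(pair_features)
print(dist_map.shape)                    # (8, 8) pairwise distance map
```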
The bigger picture: DeepMind isn’t the only one working to accelerate scientific discovery with machine learning. Many other companies and researchers have sought to develop algorithms for discovering new drugs and new materials.
"
|
1515 | 2020 |
"How lockdown is changing shopping for good | MIT Technology Review"
|
"https://www.technologyreview.com/2020/05/25/1002168/retail-robots-save-local-store-business-lockdown-pandemic-coronavirus-economic-crisis"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How lockdown is changing shopping for good By Will Douglas Heaven archive page Pablo Blazquez Dominguez/Getty In a warehouse in Secaucus, New Jersey, a handful of people stand around the base of a white box as big as a house. Every few seconds a plastic bin emerges from an opening in its sleek walls. Someone reaches in and grabs an item of lingerie or swimwear, and then the bin is gone again—whisked back inside the box to be restacked among 33,000 others arranged in row upon row of floor-to-ceiling towers.
On top of the box, 73 robots crisscross the grid like giant bees tending a honeycomb. Working together, they move the bins around nonstop, accessing specific items and delivering them to the people on the outside. On a busy day, these robots churn through 20,000 online orders, 80% of which are placed via smartphones.
A growing number of retailers are turning to this kind of automation to out-compete their rivals.
Robots keep costs down and make order fulfillment quicker and more accurate. Now, given a series of lockdowns that could go on for months or even years , this kind of small-scale automation could be key if retailers are to survive. This is true not only for smaller firms looking to keep up but also for big, established players, who are seeing their business model shift by the week. The way we shop is changing: the future of retail automation is smaller, closer to home, and more flexible.
Built last year, this automated fulfillment center is Adore Me’s first physical store. Adore Me, a medium-size online retailer founded in 2011 to compete against established brands like Victoria’s Secret, previously relied on third-party logistics firms to manage its stock picking and deliveries. But thanks to stacking tech developed by AutoStore, one of a wave of companies that have sprung up to help smaller firms automate, it now operates its own warehouse.
The technology that Adore Me is now using is not new. Automated fulfillment centers pioneered by behemoths like Amazon and Ocado Technology are vast places, with thousands of robots shunting millions of bins across spaces the size of several soccer fields. But in the last few years the tech has become more distributed as the online shopping market has matured. As robotic warehousing systems become more compact and more modular, more retailers are choosing to install their own, tailored to their business needs and available space. Instead of filling several city blocks, the new generation of systems can be installed in a supermarket stockroom.
This shift toward smaller-scale automation distributed across multiple locations has come as the retail sector is in danger of collapsing. According to the US Department of Commerce, retail sales in the US fell by 16.4% last month—the worst drop since reporting began in 1992. The previous record—a drop of 8.3%—was set in March. With shoppers stuck at home, retailers are suffering across the board. Many physical stores have been shuttered.
Spike in demand It’s not all bad news, however. Others are seeing their online business explode and finding it hard to meet demand. In the US, e-commerce is up by more than 21% since this time last year. The biggest shift is in groceries. In a letter to grocery industry clients on March 19, consultants McKinsey noted that some were seeing spikes as high as 700%. Instead of making weekly visits to a supermarket, many consumers are now buying food online. Businesses with fast and efficient ways to fulfill online orders will win out.
To keep up, some retailers are scrambling to change how their now-empty stores are used. Instead of displaying items for passing customers, spaces are being turned into storerooms and delivery depots for businesses that have moved entirely online.
“It’s as if e-commerce jumped ahead five years,” says Vince Martinelli, head of product and marketing at RightHand Robotics, a US firm that has installed robotic arms for picking items from bins in around a dozen retail warehouses in the US, Europe, and Japan.
One response to the spike in demand is to hire tens of thousands of temporary staff, as Amazon has done. But people are expensive. “We’ve had a real jolt to the system, and you cannot solve it in the long run just by throwing people at it,” says Martinelli. The other response is to accelerate the rollout of technologies to meet it.
Stores have been weighing the pros and cons of investing in more automation for years, he says. Increasingly, it’s no longer a choice. “Automation is one thing you need to survive,” says Scott Gravelle, CEO of Attabotics, a Canadian company that makes robotic fulfillment systems small enough to fit inside an average-size store.
Increased use of robotics is one part of this survival strategy. Not surprisingly, companies building robots or sensors are seeing a spike in interest. Brain Corp, which makes control software for floor-cleaning and stock-moving robots, says it saw usage of its technology increase by 24% in April over the same period last year. Its robots now work a total of 8,000 hours a day, the equivalent of 1,000 employees. Cleaning robots are usually run overnight, but two-thirds of the increased use was during daytime working hours, which Brain Corp thinks reflects the more stringent cleaning demands during the pandemic.
Inertial Sense, which builds smart sensors that allow robots to navigate, says it has had a couple of big orders come in already, and lots of requests for its demo kit. “People are like, ‘Oh my gosh, I’d better get on this,’” says the firm’s CEO, Tom Bennett.
The upshot is that smaller retailers are benefiting from the way the big guns have changed the field in recent years. Many large retailers rely on companies like Ocado Technology, which builds and operates vast out-of-town fulfillment centers for several big UK supermarkets. Those that have long embraced automation in this way seem to have adapted to the crisis well. The 3,000 or so robots in Ocado Technology’s larger warehouses are managed by a central AI, which continually tweaks thousands of parameters to ensure that the whole system runs as smoothly as possible. It might change a picking order here, delay one robot over there so that another can catch up, or suggest a more efficient way to stack items. “A system that complicated is really beyond human control,” says Alex Harvey, head of AI. “We have to use AI to run it optimally.” To help it keep tabs on the warehouse, the AI checks its performance against a virtual simulation of the physical space that mirrors its every movement. When the physical and digital twins fall out of sync, the simulation alerts the AI and its human operators of a potential problem, such as a dropped item or a wonky wheel on one of the robots. This simulation helped the AI recommend a few changes when online demand peaked in the first few weeks of lockdown. As buying habits shift, the grid layout of the bin stacks can be updated overnight. Bins containing items that were bought more frequently were moved to the top, where they could be accessed more quickly.
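A rough sketch of that digital-twin check: the simulation tracks where every bin should be, and any mismatch with what the sensors report flags a potential problem (a dropped item, a wonky wheel) for the AI and its human operators. The data structures and tolerance here are invented.

```python
# Invented data: where the simulation expects each bin vs. where sensors last saw it.
expected = {"bin_001": (4, 7), "bin_002": (9, 2), "bin_003": (0, 5)}
observed = {"bin_001": (4, 7), "bin_002": (9, 3), "bin_003": (0, 5)}

def out_of_sync(expected, observed, tolerance=0):
    """Return bins whose physical position has drifted from the digital twin."""
    alerts = []
    for bin_id, (ex, ey) in expected.items():
        pos = observed.get(bin_id)
        if pos is None or abs(pos[0] - ex) + abs(pos[1] - ey) > tolerance:
            alerts.append(bin_id)
    return alerts

print(out_of_sync(expected, observed))   # ['bin_002'] -> flag for inspection
```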
Recently, Ocado Technology started to replicate its technology for retailers outside the UK. It has deals with Kroger in the US, Sobeys in Canada, Casino in France, Aeon in Japan, and others. “We did a copy-and-paste for them,” says Harvey. Ocado Technology takes on the high cost of installation itself in exchange for a cut of the retailer’s revenue.
Closer to home All these firms—big and small—are now watching our new shopping habits closely. When demand spikes and online customers want purchases delivered as soon as possible, centralized fulfillment stops being so cost effective. Efficient stock picking and shorter delivery routes are key, which gives an advantage to smaller local stores over large out-of-town warehouses.
For example, last year Ocado launched Ocado Zoom, a one-hour delivery service to London, from a smaller warehouse just outside the city. Based on Ocado’s larger installations, Zoom’s stacking system is modular and can be customized to a site. This plus the lower up-front costs will make it easier for smaller stores to adopt automation: they can start small and add capacity as they grow.
In the US, Walmart is another retail giant that is rapidly adapting its model to better suit how we now shop online.
When the pandemic hit, Walmart was in the early stages of offering an express service, which would deliver items ordered online to a customer’s home within two hours. Upgrades to the software that calculated delivery routes and the processes for picking the items in store were rushed through. At the end of March it tested the service in a store in Phoenix, Arizona. On April 16 it rolled it out to 100 stores across the US. The company is now expanding it to more than 2,000.
But express delivery only works if the ordered items are shipped from a location close to the customer. Luckily, Walmart was already experimenting with going small-scale, by shipping items directly from stores rather than from its massive out-of-town warehouses. At the end of 2019, it had rolled out the technology to 130 stores. The software, which tracked every purchase across Walmart’s thousands of stores and kept a millisecond-by-millisecond record of stock, crunched through millions of variables (including availability, speed of delivery, and cost to Walmart) to identify which of those stores was the best choice for fulfilling a local online order. At first Walmart was not seeing much demand for the service, but of course that soon changed. When its large fulfillment centers began to struggle, the company ramped up its ship-from-store service to 2,400 stores in just two weeks.
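A toy version of that store-selection step: score each candidate store on availability, delivery speed, and cost, then fulfill from the best one. Walmart’s real system weighs millions of variables; the fields, weights, and numbers below are hypothetical.

```python
# Hypothetical candidate stores for one online order.
stores = [
    {"id": "phoenix_12", "in_stock": True,  "delivery_hours": 2,  "cost": 6.50},
    {"id": "tucson_03",  "in_stock": True,  "delivery_hours": 18, "cost": 4.10},
    {"id": "mesa_07",    "in_stock": False, "delivery_hours": 3,  "cost": 5.00},
]

def score(store, w_speed=1.0, w_cost=0.5):
    """Lower is better; out-of-stock stores are excluded outright."""
    if not store["in_stock"]:
        return float("inf")
    return w_speed * store["delivery_hours"] + w_cost * store["cost"]

best = min(stores, key=score)
print(best["id"])   # phoenix_12: fast and in stock outweighs its higher cost
```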
Companies like Attabotics are helping smaller names mimic the tactics of the big firms. Its micro-fulfillment system lets small retailers turn a stockroom in the back of their store, or the shop floor itself if it’s closed to customers, into an AutoStore-style order processing machine. It’s a better use of real estate, says Gravelle.
Where AutoStore uses robots the size of washing machines that move across the top of stacks of bins, Attabotics makes a system in which smaller bots burrow up, down, and through a densely packed warren. The whole thing takes up around 6 to 8% of the space that a store would fill if its items were out on display, says Gravelle. Attabotics uses machine learning to determine where stock should be stored, on the basis of what items typically go together in customers’ orders, and the system is adjusted in real time as purchasing behavior changes. It also provides a common set of parts, which can be pieced together in various configurations to fit the shape and size of a room. Attabotics says it runs the smallest (350 square feet, or 33 square meters) and the largest (61,000 square feet) robotic fulfillment centers in the US, including warehouses for the department store chain Nordstrom. “You could have lots of bins and one robot, or a few bins and lots of robots,” says Gravelle.
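A minimal sketch of slotting by co-purchase: count which items show up together in past orders and keep the most frequent pairs in neighbouring bins. The order history and the placement rule below are invented, not Attabotics’ model.

```python
from collections import Counter
from itertools import combinations

# Invented order history: each order is a set of item IDs.
orders = [
    {"socks", "running_shoes"},
    {"socks", "running_shoes", "water_bottle"},
    {"phone_case", "charger"},
    {"socks", "water_bottle"},
]

pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

# Items ordered together most often get adjacent storage slots.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
# ('running_shoes', 'socks') 2 -> store these two in neighbouring bins
```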
Even when stores reopen and people return to work, retail will not go back to normal. Stores and warehouses will have to enforce social distancing. Martinelli of RightHand Robotics thinks that could lead to even more automation. “If fewer people are allowed in a building, humans become of higher value,” he says. “You don’t want to waste a human on a mundane task if you can automate it.” For example, in most automated fulfillment centers, humans still pick items from bins that robots put in front of them. Unsurprisingly, Martinelli thinks this is a task better suited to the kind of robot his company makes. Ocado Technology has also been testing a robot picking arm that could help with social distancing in the post-covid-19 factory.
Retail therapy Of course, none of this was on the horizon when Adore Me set up its new warehouse. The company invested heavily in automation to support an aggressive international growth strategy. Its robots allow it to process four times as many orders as it could before. Now those robots are helping it keep up during the pandemic, when many people are apparently comfort-buying pajamas.
The efficiencies speak for themselves, says Steven Keith Platt, director of the Platt Retail Institute in Boston, which studies robots in the retail industry: “This is a massive impetus for companies to ramp up investment in automation.” Bennett, CEO of Inertial Sense, agrees. “Retail is the place where economics is going to drive long-term adoption,” he says. “This has become a boardroom issue faster than I've ever seen anything.” But he cautions that automation is not a plug-and-play solution for everyone. Companies looking to invest in automation may have to work around legacy processes and in-house technology.
Even without these hurdles, switching to automation takes time, unless you already have a platform to build on. Millions of dollars’ worth of machinery needs to be ordered, manufactured, and tested. The effect won’t be instant, but when it comes it’ll be here to stay, says Martinelli: “In 2021 or 2022 you’re going to see the impact of what the last month or two has kicked off.” Businesses that were on the fence at the start of the year have seen their priorities change. Many are now eyeing long months or years of uncertainty ahead. More than a few will commit to an investment that did not look like an immediate need until a few weeks ago.
“There’s always a lot of aspirational talk about the future and what companies would like to do,” says Gravelle. Suddenly there are fewer reasons to put off those plans: “Now they have to do it.”
"
|
1516 | 2020 |
"Our weird behavior during the pandemic is messing with AI models | MIT Technology Review"
|
"https://www.technologyreview.com/2020/05/11/1001563/covid-pandemic-broken-ai-machine-learning-amazon-retail-fraud-humans-in-the-loop"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Our weird behavior during the pandemic is messing with AI models By Will Douglas Heaven archive page Getty In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too —and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.
When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in this simple graph.
It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the items peaked first in Italy, followed by Spain, France, Canada, and the US. The UK and Germany lag slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been seen across retail supply chains.
But they have also affected artificial intelligence , causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.
How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.
What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline.
Machine-learning models are designed to respond to changes. But most are also fragile; they perform badly when input data differs too much from the data they were trained on. It is a mistake to assume you can set up an AI system and walk away, says Rajeev Sharma, global vice president at Pactera Edge: “AI is a living, breathing engine.” Sharma has been talking to several companies struggling with wayward AI. One company that supplies sauces and condiments to retailers in India needed help fixing its automated inventory management system when bulk orders broke its predictive algorithms. The system's sales forecasts that the company relied on to reorder stock no longer matched up with what was actually selling. “It was never trained on a spike like this, so the system was out of whack,” says Sharma.
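The failure described here, inputs drifting far from anything in the training data, can at least be detected automatically so a human steps in before forecasts are trusted. A bare-bones sketch with invented numbers and an arbitrary threshold; production monitoring would use more robust statistics.

```python
import numpy as np

# Daily order volumes the forecasting model was trained on (invented numbers).
training_orders = np.array([510, 495, 530, 505, 520, 498, 515])

# The same product line after panic buying starts.
recent_orders = np.array([780, 1450, 2210, 1980])

def drift_score(train, recent):
    """How many training standard deviations the recent mean sits from normal."""
    return abs(recent.mean() - train.mean()) / train.std()

score = drift_score(training_orders, recent_orders)
if score > 3.0:   # arbitrary alert threshold
    print(f"Input drift detected (z={score:.1f}): send forecasts for human review")
```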
Another firm uses an AI to assess the sentiment of news articles and provides daily investment recommendations based on the results. But with the news at the moment being gloomier than usual, the advice is going to be very skewed, says Sharma. And a large streaming firm that has had a sudden influx of content-hungry subscribers is also having problems with its recommendation algorithms, he says. The company uses machine learning to suggest relevant and personalized content to viewers so that they keep coming back. But the sudden change in subscriber data was making its system's recommendations less accurate.
Many of these problems with models arise because more businesses are buying machine-learning systems but lack the in-house know-how needed to maintain them. Retraining a model can require expert human intervention.
The current crisis has also shown that things can get worse than the fairly vanilla worst-case scenarios included in training sets. Sharma thinks more AIs should be trained not just on the ups and downs of the last few years, but also on freak events like the Great Depression of the 1930s, the Black Monday stock market crash in 1987, and the 2007-2008 financial crisis. “A pandemic like this is a perfect trigger to build better machine-learning models,” he says.
Even so, you can’t prepare for everything. In general, if a machine-learning system doesn’t see what it’s expecting to see, then you will have problems, says David Excell, founder of Featurespace, a behavior analytics company that uses AI to detect credit card fraud. Perhaps surprisingly, Featurespace has not seen its AI hit too badly. People are still buying things on Amazon and subscribing to Netflix the way they were before, but they are not buying big-ticket items or spending in new places, which are the behaviors that can raise suspicions. “People’s spending behavior is a contraction of their old habits,” says Excell.
The firm’s engineers only had to step in to adjust for a surge in people buying garden equipment and power tools, says Excell. These are the kinds of mid-price anomalous purchases that fraud-detection algorithms might pick up on. “I think there is certainly more oversight,” says Excell. “The world has changed, and the data has changed.”
Getting the tone right London-based Phrasee is another AI company that is being hands-on. It uses natural-language processing and machine learning to generate email marketing copy or Facebook ads on behalf of its clients. Making sure that it gets the tone right is part of its job. Its AI works by generating lots of possible phrases and then running them through a neural network that picks the best ones. But because natural-language generation can go very wrong, Phrasee always has humans check what goes into and comes out of its AI.
When covid-19 hit, Phrasee realized that more sensitivity than usual might be required and started filtering out additional language. The company has banned specific phrases, such as “going viral,” and doesn’t allow language that refers to discouraged activities, such as “party wear.” It has even culled emojis that may be read as too happy or too alarming. And it has also dropped terms that may stoke anxiety, such as “OMG,” “be prepared,” “stock up,” and “brace yourself.” “People don’t want marketing to make them feel anxious and fearful—you know, like, this deal is about to run out, pressure pressure pressure,” says Parry Malm, the firm’s CEO.
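A tiny sketch of that kind of post-generation filter. The banned list simply reuses the phrases quoted in this article; Phrasee’s actual rules, and its human review step, go well beyond this.

```python
BANNED = ["going viral", "party wear", "omg", "be prepared", "stock up", "brace yourself"]

def passes_filter(candidate: str) -> bool:
    text = candidate.lower()
    return not any(phrase in text for phrase in BANNED)

candidates = [
    "Stock up now: our spring sale is going viral!",
    "A quiet weekend treat: 20% off loungewear",
]
print([c for c in candidates if passes_filter(c)])
# Only the second line survives; humans still review what the AI produces.
```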
As a microcosm for the retail industry as a whole, however, you can’t beat Amazon. It’s also where some of the most subtle behind-the-scenes adjustments are being made. As Amazon and the 2.5 million third-party sellers it supports struggle to meet demand, it is making tiny tweaks to its algorithms to help spread the load.
Most Amazon sellers rely on Amazon to fulfill their orders. Sellers store their items in an Amazon warehouse and Amazon takes care of all the logistics, delivering to people’s homes and handling returns. It then promotes sellers whose orders it fulfills itself. For example, if you search for a specific item, such as a Nintendo Switch, the result that appears at the top, next to the prominent “Add to Basket” button, is more likely to be from a vendor that uses Amazon’s logistics than one that doesn’t.
But in the last few weeks Amazon has flipped that around, says Cline. To ease demand on its own warehouses, its algorithms now appear more likely to promote sellers that handle their own deliveries.
Volatile markets This kind of adjustment would be hard to do without manual intervention. “The situation is so volatile,” says Cline. “You're trying to optimize for toilet paper last week, and this week everyone wants to buy puzzles or gym equipment.” The tweaks Amazon makes to its algorithms then have knock-on effects on the algorithms that sellers use to decide what to spend on online advertising. Every time a web page with ads loads, a super-fast auction takes place where automated bidders decide between themselves who gets to fill each ad box. The amount these algorithms decide to spend for an ad depends on a myriad of variables, but ultimately the decision is based on an estimate of how much you, the eyeballs on the page, are worth to them. There are lots of ways to predict customer behavior, including not only data about your past purchases but also the pigeonhole that ad companies have placed you in on the basis of your online activity.
But now one of the best predictors of whether someone who clicks an ad will buy your product is how long you say it will take to deliver it, says Cline. So Nozzle is talking to customers about adjusting their algorithms to take this into account. For example, if you think you can’t deliver faster than a competitor, it might not be worth trying to outbid them in an ad auction. On the other hand, if you know your competitor has run out of stock, then you can go in cheap, gambling that they won’t bid.
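A simplified sketch of the bid logic Cline describes: pay less for the ad slot when your delivery promise is worse than a competitor’s, and bid cheaply when the competitor is out of stock. This is a hypothetical rule of thumb, not Nozzle’s algorithm, and every parameter is invented.

```python
def adjust_bid(base_bid, my_delivery_days, rival_delivery_days, rival_in_stock=True):
    """Hypothetical bid adjustment for one ad auction."""
    if not rival_in_stock:
        return base_bid * 0.5   # rival can't convert the click, so win the slot cheaply
    if my_delivery_days > rival_delivery_days:
        # Slower delivery means a lower chance of conversion, so pay less per impression.
        return base_bid * rival_delivery_days / my_delivery_days
    return base_bid

print(adjust_bid(1.20, my_delivery_days=5, rival_delivery_days=2))                        # 0.48
print(adjust_bid(1.20, my_delivery_days=5, rival_delivery_days=2, rival_in_stock=False))  # 0.60
```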
All of this is possible only with a dedicated team keeping tabs on things, says Cline. He thinks the current situation is an eye-opener for a lot of people who assumed all automated systems could run themselves. “You need a data science team who can connect what’s going on in the world to what’s going on the algorithms,” he says. “An algorithm would never pick some of this stuff up.” With everything connected , the impact of a pandemic has been felt far and wide, touching mechanisms that in more typical times remain hidden. If we are looking for a silver lining, then now is a time to take stock of those newly exposed systems and ask how they might be designed better, made more resilient. If machines are to be trusted, we need to watch over them.
"
|
1517 | 2020 |
"Chinese hackers and others are exploiting coronavirus fears for cyber espionage | MIT Technology Review"
|
"https://www.technologyreview.com/2020/03/12/916670/chinese-hackers-and-others-are-exploiting-coronavirus-fears-for-cyberespionage"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Chinese hackers and others are exploiting coronavirus fears for cyber espionage By Patrick Howell O'Neill archive page Photo by Andras Vas on Unsplash Government-sponsored and criminal hackers from around the world are taking advantage of the ongoing coronavirus pandemic to spy on adversaries, according to multiple cybersecurity threat intelligence companies.
Hacking groups aligned with the Chinese and Russian governments, among others, have been sending out malicious email attachments about the virus in recent weeks.
Governments Two hacking groups aligned with the Chinese government targeted Vietnam, the Philippines, Taiwan, and Mongolia, the cybersecurity firms FireEye and Check Point reported today. The hackers are sending email attachments with genuine health information about coronavirus but laced with malware such as Sogu and Cobalt Strike, according to Ben Read, a senior intelligence analyst at FireEye.
“The lures were legitimate statements by political leaders or authentic advice for those worried about the disease, likely taken from public sources,” Read explained.
A Russian group known as TEMP.Armageddon sent spear-phishing emails to Ukrainian targets. Spear-phishing is a tactic hackers use to send specifically crafted malicious links that trick targets into clicking, allowing them to be unknowingly infected.
FireEye analysts also suspect a recent such attack against a South Korean target is the work of North Korean hackers. Like China, South Korea has been hit especially hard by the outbreak. The phishing email had the Korean language title “Coronavirus Correspondence.” “You expect to get information from government sources, so it’s most likely that you will open and execute documents to see what it says,” said Lotem Finkelstein, head of threat intelligence at Check Point. “It makes it very useful to trigger an attack. The coronavirus outbreak serves threat actors very well, especially those that rely on phishing attacks to ignite attacks.”
Criminals In addition to ongoing activity by government-sponsored hackers, cybercriminals are taking advantage of the chaos of current events. Hackers have previously used anxiety surrounding Ebola, Zika, and SARS to make money.
“We’ve seen financially motivated actors using coronavirus-themed phishing in many campaigns, with dramatic month-over-month volume increases from January through to today,” FireEye said in a statement. “We expect continued use of coronavirus-themed lures by both opportunistic and targeted financially motivated attackers due to the global relevance of the theme.” Targets “have heightened interest in news and developments related to the virus, potentially making them more susceptible to social engineering that tricks them into clicking on malicious links,” researchers at the cyberintelligence firm RiskIQ assessed.
Although it’s relatively simple, phishing—sending a link or file meant to infect anyone who clicks—is the most common and successful type of attack year after year. Hackers looking to take advantage of the coronavirus have targeted both individuals and businesses with fake emails claiming to be from trusted organizations like the Centers for Disease Control (CDC) and the World Health Organization.
The phishing emails promise everything from information on cures to medical equipment. In reality, they aim to deliver malware or steal passwords in a bid to cash in on chaos.
Hackers are looking all over the globe for targets, but some have zeroed in on the worst-hit countries. Italy, which has so far seen the worst rash of illnesses outside Asia, has been targeted by a phishing campaign against businesses. Fake emails, which pretend to be from the World Health Organization, promise precautionary measures Italians can take in the form of a Microsoft Word document, but it will download a banking Trojan called Trickbot aimed at stealing vast sums of money.
Although the email sender claims to be with the WHO, the sender’s domain doesn’t match the WHO’s who.int website.
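That kind of mismatch can be checked programmatically, which is part of the advice at the end of this piece about sticking to trusted sources. A minimal sketch, using a fabricated sender address for illustration:

```python
def sender_domain(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def matches(domain: str, expected: str) -> bool:
    # Exact match or a genuine subdomain of the expected organization.
    return domain == expected or domain.endswith("." + expected)

suspicious = "alerts@who-update.example.com"   # fabricated address for illustration
print(matches(sender_domain(suspicious), "who.int"))   # False -> treat with suspicion
```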
Japan, another country dealing with a sizeable outbreak, has also seen targeted hacking campaigns pretending to offer coronavirus information from health authorities.
“Attackers are also subverting internal businesses’ credibility in their attacks,” researchers from the cyber firm Proofpoint wrote.
“We have seen a campaign that uses a Coronavirus-themed email that is designed to look like an internal email from the company’s president to all employees ... This email is extremely well-crafted and lists the business’ president’s correct name.”
Your best bet Online dashboards have become the de facto standard for how much of the world is tracking the spread of this illness. Malicious dashboards are circulating that prompt you to download an application carrying AZORult malware for Windows, which steals personal and financial data, cryptocurrency, and anything else of value from an infected machine.
It’s not the first time hackers have used headline news and high emotion to try to trick victims, and it won’t be the last.
The best defense is to keep your tech up to date, don’t download software or click links from unknown people, and stick with authoritative sources for news on important topics.
"
|
1518 | 2020 |
"Reinforcement-learning AIs are vulnerable to a new kind of attack | MIT Technology Review"
|
"https://www.technologyreview.com/s/615299/reinforcement-learning-adversarial-attack-gaming-ai-deepmind-alphazero-selfdriving-cars"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Reinforcement-learning AIs are vulnerable to a new kind of attack By Will Douglas Heaven archive page MIT TECHNOLOGY REVIEW / ADAM GLEAVE The soccer bot lines up to take a shot at the goal. But instead of getting ready to block it, the goalkeeper drops to ground and wiggles its legs. Confused, the striker does a weird little sideways dance, stamping its feet and waving one arm, and then falls over. 1-0 to the goalie.
It’s not a tactic you’ll see used by the pros, but it shows that an artificial intelligence trained via deep reinforcement learning —the technique behind cutting-edge game-playing AIs like AlphaZero and the OpenAI Five—is more vulnerable to attack than previously thought. And that could have serious consequences.
In the last few years researchers have found many ways to break AIs trained using labeled data, known as supervised learning. Tiny tweaks to an AI’s input—such as changing a few pixels in an image—can completely flummox it, making it identify a picture of a sloth as a race car, for example. These so-called adversarial attacks have no sure fix.
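Those “tiny tweaks” follow a standard recipe in the research literature: nudge each input value in whichever direction most increases the model’s error. A toy version against a linear classifier (the model and data are invented; image classifiers are attacked the same way, pixel by pixel):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1   # a toy linear "classifier" over 8 input features

x = rng.normal(size=8)                       # an example input
label = 1.0 if x @ w + b > 0 else -1.0       # the class the model currently assigns

# Fast-gradient-style attack: step each feature against the assigned class.
epsilon = 0.3
x_adv = x - epsilon * label * np.sign(w)

print("clean score:   ", round(float(x @ w + b), 2))
print("attacked score:", round(float(x_adv @ w + b), 2))
# A small, structured perturbation pushes the score toward the opposite class.
```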
Compared with supervised learning, reinforcement learning is a relatively new technique and has been studied less. But it turns out that it is also vulnerable to doctored input. Reinforcement learning teaches an AI how to behave in different situations by giving it rewards for doing the right thing. Eventually the AI learns a plan for action, known as a policy. Policies allow AIs to play games, drive cars, or run automated trading systems.
In 2017, Sandy Huang, who is now at DeepMind, and her colleagues looked at an AI trained via reinforcement learning to play the classic video game Pong. They showed that adding a single rogue pixel to frames of video input would reliably make it lose.
Now Adam Gleave at the University of California, Berkeley, has taken adversarial attacks to another level.
Gleave is not too worried about most of the examples we have seen so far. “I'm a bit skeptical of them being a threat,” he says. “The idea that an attacker is going to break our machine-learning system by adding a small amount of noise doesn't seem realistic.” But instead of fooling an AI into seeing something that isn’t really there, you can change how things around it act. In other words, an AI trained using reinforcement learning can be tricked by weird behavior. Gleave and his colleagues call this an adversarial policy. It’s a previously unrecognized threat model, says Gleave.
Losing control In some ways, adversarial policies are more worrying than attacks on supervised learning models, because reinforcement learning policies govern an AI’s overall behavior. If a driverless car misclassifies input from its camera, it could fall back on other sensors, for example. But sabotage the car’s control system—governed by a reinforcement learning algorithm—and it could lead to disaster. “If policies were to be deployed without solving these problems, it could be very serious,” says Gleave. Driverless cars could go haywire if confronted with an arm-waving pedestrian.
Gleave and his colleagues used reinforcement learning to train stick-figure bots to play a handful of two-player games, including kicking a ball at a goal, racing across a line, and sumo wrestling. The bots were aware of the position and movement of their limbs and those of their opponents.
They then trained a second set of bots to find ways to exploit the first, and this second group quickly discovered adversarial policies. The team found that the adversaries learned to beat their victims reliably after training for less than 3% of the time it took the victims to learn to play the games in the first place.
The adversaries learned to win not by becoming better players but by performing actions that broke their opponents’ policies. In the soccer game and the running game, the adversary sometimes never even stands up. This makes the victim collapse into a contorted heap or wriggle around in circles. What’s more, the victims actually performed far better when they were “masked” and unable to see their adversary at all.
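A miniature of that recipe: freeze the victim’s policy, then let a second learner search for whatever behavior exploits it. This toy swaps the simulated two-player games for rock-paper-scissors and uses a bandit-style value estimate in place of deep reinforcement learning; the victim’s bias and all numbers are invented.

```python
import random

random.seed(0)
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def frozen_victim():
    # The pre-trained "victim" policy stays fixed during the attack; this stand-in
    # has an exploitable bias toward rock.
    return random.choices(ACTIONS, weights=[0.6, 0.2, 0.2])[0]

# Train the adversary with a simple per-action value estimate (epsilon-greedy).
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for episode in range(5000):
    adv = random.choice(ACTIONS) if random.random() < 0.1 else max(values, key=values.get)
    vic = frozen_victim()
    reward = 1.0 if BEATS[adv] == vic else (-1.0 if BEATS[vic] == adv else 0.0)
    counts[adv] += 1
    values[adv] += (reward - values[adv]) / counts[adv]   # incremental average reward

print(max(values, key=values.get))   # "paper": the learned response that exploits the victim
```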
The research, to be presented at the International Conference on Learning Representations in Addis Ababa, Ethiopia, in April, shows that policies that appear robust can hide serious flaws. “In deep reinforcement learning we're not really evaluating policies in a comprehensive enough fashion,” says Gleave. A supervised learning model, trained to classify images, say, is tested on a different data set from the one it was trained on to ensure that it has not simply memorized a particular bunch of images. But with reinforcement learning, models are typically trained and tested in the same environment. That means that you can never be sure how well the model will cope with new situations.
The good news is that adversarial policies may be easier to defend against than other adversarial attacks. When Gleave fine-tuned the victims to take into account the weird behavior of their adversaries, the adversaries were forced to try more familiar tricks, such as tripping their opponents up. That’s still dirty play but doesn’t exploit a glitch in the system. After all, human players do it all the time.
by Will Douglas Heaven
"
|
1,519 | 2,020 |
"What AI still can’t do | MIT Technology Review"
|
"https://www.technologyreview.com/s/615189/what-ai-still-cant-do"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts What AI still can’t do By Brian Bergstein archive page Saiman Chow In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.
Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.” These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
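Catastrophic forgetting is easy to reproduce in miniature: fit a simple classifier to one task, keep updating it only on a second task, and watch its accuracy on the first task collapse. The synthetic data, the linear model, and the deliberately conflicting task design below are all hypothetical choices made to keep the effect visible in a few lines.

```python
# Toy illustration of catastrophic forgetting with scikit-learn.
# Task A and task B put the same two labels in conflicting regions of the
# input space, so sequential training on B overwrites what was learned on A.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(center0, center1, n=500):
    X = np.vstack([rng.normal(center0, 0.5, size=(n, 2)),
                   rng.normal(center1, 0.5, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

XA, yA = make_task([-2, 0], [2, 0])   # task A
XB, yB = make_task([2, 0], [-2, 0])   # task B: the labels' regions are swapped

clf = SGDClassifier(random_state=0)
clf.partial_fit(XA, yA, classes=[0, 1])
print("accuracy on A after learning A:", clf.score(XA, yA))   # high

for _ in range(20):          # keep training on task B only
    clf.partial_fit(XB, yB)

print("accuracy on B after learning B:", clf.score(XB, yB))   # high
print("accuracy on A after learning B:", clf.score(XA, yA))   # collapses
```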
Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem.
His idea is to infuse artificial-intelligence research with insights from the relatively new science of causality, a field shaped to a huge extent by Judea Pearl, a Turing Award–winning scholar who considers Bareinboim his protégé.
As Bareinboim and Pearl describe it, AI’s ability to spot correlations—e.g., that clouds make rain more likely—is merely the simplest level of causal reasoning. It’s good enough to have driven the boom in the AI technique known as deep learning over the past decade. Given a great deal of data about familiar situations, this method can lead to very good predictions. A computer can calculate the probability that a patient with certain symptoms has a certain disease, because it has learned just how often thousands or even millions of other people with the same symptoms had that disease.
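At its core, that kind of prediction is a conditional frequency: out of all the past patients with the same symptoms, how many had the disease? A tiny worked example, with made-up records and a made-up flu label, looks like this:

```python
# Hypothetical patient records, purely for illustration -- in practice the
# counts would come from thousands or millions of cases, not five.
records = [
    {"fever": True,  "cough": True,  "flu": True},
    {"fever": True,  "cough": True,  "flu": True},
    {"fever": True,  "cough": False, "flu": False},
    {"fever": False, "cough": True,  "flu": False},
    {"fever": True,  "cough": True,  "flu": False},
]

# Estimate P(flu | fever and cough) from how often it held in matching cases.
matching = [r for r in records if r["fever"] and r["cough"]]
p_flu = sum(r["flu"] for r in matching) / len(matching)
print(f"P(flu | fever and cough) ~= {p_flu:.2f}")   # 2 of 3 matching records
```

The model never asks whether the symptoms cause the disease or merely travel with it; it only counts how often they co-occur.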
But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.
Today’s AI has only a limited ability to infer what will result from a given action. In reinforcement learning, a technique that has allowed machines to master games like chess and Go, a system uses extensive trial and error to discern which moves will essentially cause it to win. But this approach doesn’t work in messier settings in the real world. It doesn’t even leave a machine with a general understanding of how it might play other games.
An even higher level of causal thinking would be the ability to reason about why things happened and ask “what if” questions. A patient dies while in a clinical trial; was it the fault of the experimental medicine or something else? School test scores are falling; what policy changes would most improve them? This kind of reasoning is far beyond the current capability of artificial intelligence.
Performing miracles The dream of endowing computers with causal reasoning drew Bareinboim from Brazil to the United States in 2008, after he completed a master’s in computer science at the Federal University of Rio de Janeiro. He jumped at an opportunity to study under Judea Pearl, a computer scientist and statistician at UCLA. Pearl, 83, is a giant— the giant—of causal inference, and his career helps illustrate why it’s hard to create AI that understands causality.
Even well-trained scientists are apt to misinterpret correlations as signs of causation—or to err in the opposite direction, hesitating to call out causation even when it’s justified. In the 1950s, for example, a few prominent statisticians muddied the waters around whether tobacco caused cancer. They argued that without an experiment randomly assigning people to be smokers or nonsmokers, no one could rule out the possibility that some unknown—stress, perhaps, or some gene—caused people both to smoke and to get lung cancer.
Eventually, the fact that smoking causes cancer was definitively established, but it needn’t have taken so long. Since then, Pearl and other statisticians have devised a mathematical approach to identifying what facts would be required to support a causal claim. Pearl’s method shows that, given the prevalence of smoking and lung cancer, an independent factor causing both would be extremely unlikely.
Conversely, Pearl’s formulas also help identify when correlations can’t be used to determine causation. Bernhard Schölkopf, who researches causal AI techniques as a director at Germany’s Max Planck Institute for Intelligent Systems, points out that you can predict a country’s birth rate if you know its population of storks. That isn’t because storks deliver babies or because babies attract storks, but probably because economic development leads to more babies and more storks. Pearl has helped give statisticians and computer scientists ways of attacking such problems, Schölkopf says.
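The stork example can be reproduced in a few lines: simulate a hidden common cause (economic development) that drives both the stork population and the birth rate, observe a strong correlation between the two, then watch it vanish once the confounder is adjusted for. All coefficients and sample sizes below are arbitrary.

```python
# A made-up simulation of the stork/birth-rate confounding example.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

development = rng.normal(size=n)                    # hidden common cause
storks = 2.0 * development + rng.normal(size=n)     # development -> more storks
births = 1.5 * development + rng.normal(size=n)     # development -> more births

print("correlation(storks, births):",
      round(np.corrcoef(storks, births)[0, 1], 2))  # strong, despite no causal link

def residualize(y, x):
    """Remove the linear effect of x from y (ordinary least squares)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Adjusting for the confounder makes the spurious correlation disappear.
print("correlation after adjusting for development:",
      round(np.corrcoef(residualize(storks, development),
                        residualize(births, development))[0, 1], 2))  # ~0
```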
Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.
In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.
One of his systems, which is still in beta, can help scientists determine whether they have sufficient data to answer a causal question. Richard McElreath, an anthropologist at the Max Planck Institute for Evolutionary Anthropology, is using the software to guide research into why humans go through menopause (we are the only apes that do).
The hypothesis is that the decline of fertility in older women benefited early human societies because women who put more effort into caring for grandchildren ultimately had more descendants. But what evidence might exist today to support the claim that children do better with grandparents around? Anthropologists can’t just compare the educational or medical outcomes of children who have lived with grandparents and those who haven’t. There are what statisticians call confounding factors: grandmothers might be likelier to live with grandchildren who need the most help. Bareinboim’s software can help McElreath discern which studies about kids who grew up with their grandparents are least riddled with confounding factors and could be valuable in answering his causal query. “It’s a huge step forward,” McElreath says.
The last mile Bareinboim talks fast and often gestures with two hands in the air, as if he’s trying to balance two sides of a mental equation. It was halfway through the semester when I visited him at Columbia in October, but it seemed as if he had barely moved into his office—hardly anything on the walls, no books on the shelves, only a sleek Mac computer and a whiteboard so dense with equations and diagrams that it looked like a detail from a cartoon about a mad professor.
He shrugged off the provisional state of the room, saying he had been very busy giving talks about both sides of the causal revolution. Bareinboim believes work like his offers the opportunity not just to incorporate causal thinking into machines, but also to improve it in humans.
Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.” He argues that anyone asking “what if”—medical researchers setting up clinical trials, social scientists developing pilot programs, even web publishers preparing A/B tests—should start not merely by gathering data but by using Pearl’s causal logic and software like Bareinboim’s to determine whether the available data could possibly answer a causal hypothesis. Eventually, he envisions this leading to “automated scientist” software: a human could dream up a causal question to go after, and the software would combine causal inference theory with machine-learning techniques to rule out experiments that wouldn’t answer the question. That might save scientists from a huge number of costly dead ends.
Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.” That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on. Maybe some public policy that has been shown to work only in Texas could be made to work in California if a few causally relevant factors were better appreciated. Scientists would no longer be “doing experiments in the darkness,” Bareinboim said.
He also doesn’t think it’s that far off: “This is the last mile before the victory.” What if? Finishing that mile will probably require techniques that are just beginning to be developed. For example, Yoshua Bengio, a computer scientist at the University of Montreal who shared the 2018 Turing Award for his work on deep learning, is trying to get neural networks—the software at the heart of deep learning—to do “meta-learning” and notice the causes of things.
As things stand now, if you wanted a neural network to detect when people are dancing, you’d show it many, many images of dancers. If you wanted it to identify when people are running, you’d show it many, many images of runners. The system would learn to distinguish runners from dancers by identifying features that tend to be different in the images, such as the positions of a person’s hands and arms. But Bengio points out that fundamental knowledge about the world can be gleaned by analyzing the things that are similar or “invariant” across data sets. Maybe a neural network could learn that movements of the legs physically cause both running and dancing. Maybe after seeing these examples and many others that show people only a few feet off the ground, a machine would eventually understand something about gravity and how it limits human movement. Over time, with enough meta-learning about variables that are consistent across data sets, a computer could gain causal knowledge that would be reusable in many domains.
For his part, Pearl says AI can’t be truly intelligent until it has a rich understanding of cause and effect. Although causal reasoning wouldn’t be sufficient for an artificial general intelligence, it’s necessary, he says, because it would enable the introspection that is at the core of cognition. “What if” questions “are the building blocks of science, of moral attitudes, of free will, of consciousness,” Pearl told me.
You can’t draw Pearl into predicting how long it will take for computers to get powerful causal reasoning abilities. “I am not a futurist,” he says. But in any case, he thinks the first move should be to develop machine-learning tools that combine data with available scientific knowledge: “We have a lot of knowledge that resides in the human skull which is not utilized.” Brian Bergstein, a former editor at MIT Technology Review, is deputy opinion editor at the Boston Globe.
by Brian Bergstein. This story was part of our March/April 2020 issue.
"
|
1,520 | 2,018 |
"Your next doctor’s appointment might be with an AI | MIT Technology Review"
|
"https://www.technologyreview.com/s/612267/your-next-doctors-appointment-might-be-with-an-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Your next doctor’s appointment might be with an AI A new wave of chatbots are replacing physicians and providing frontline medical advice—but are they as good as the real thing? By Will Douglas Heaven archive page Illustration of medical equipment and ipad “My stomach is killing me!” “I’m sorry to hear that,” says a female voice. “Are you happy to answer a few questions?” And so the consultation begins. Where’s the pain? How bad is it? Does it come and go? There’s some deliberation before you get an opinion. “This sounds like dyspepsia to me. Dyspepsia is doctor-speak for indigestion.” Doctor-speak, maybe, but it’s not a doctor speaking. The female voice belongs to Babylon, part of a wave of new AI apps designed to relieve your doctor of needless paperwork and office visits—and reduce the time you have to wait for medical advice. If you’re feeling unwell, instead of calling a doctor, you use your phone to chat with an AI.
The idea is to make seeking advice about a medical condition as simple as Googling your symptoms, but with many more benefits. Unlike self-diagnosis online, these apps lead you through a clinical-grade triage process—they’ll tell you if your symptoms need urgent attention or if you can treat yourself with bed rest and ibuprofen instead. The tech is built on a grab bag of AI techniques: language processing to allow users to describe their symptoms in a casual way, expert systems to mine huge medical databases, machine learning to string together correlations between symptom and condition.
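The sketch below shows only the rough shape of such a triage flow: crude keyword matching stands in for the language processing, and a couple of hand-written rules stand in for the expert system. The symptom lists and advice strings are invented for illustration and have nothing to do with Babylon's actual models.

```python
# A hypothetical, heavily simplified symptom-triage flow.
RED_FLAGS = {"chest pain", "shortness of breath", "coughing blood"}
SELF_CARE = {"indigestion", "mild headache", "sore throat"}

def extract_symptoms(utterance):
    """Toy 'language processing': spot known symptom phrases in free text."""
    text = utterance.lower()
    return {phrase for phrase in RED_FLAGS | SELF_CARE if phrase in text}

def triage(utterance):
    """Toy 'expert system': map extracted symptoms to a triage level."""
    symptoms = extract_symptoms(utterance)
    if symptoms & RED_FLAGS:
        return "Seek urgent care now."
    if symptoms & SELF_CARE:
        return "This can usually be self-treated; see a doctor if it persists."
    return "I'm not sure -- please speak to a human clinician."

print(triage("My stomach is killing me, probably indigestion"))
print(triage("I get chest pain when I climb the stairs"))
```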
Babylon Health, a London-based digital-first health-care provider, has a mission statement it likes to share in a big, bold font: to put an accessible and affordable health service in the hands of every person on earth. The best way to do this, says the company’s founder, Ali Parsa, is to stop people from needing to see a doctor.
When in doubt, the apps will always recommend seeking a second, human opinion. But by placing themselves between us and medical professionals, they shift the front line of health care. When the Babylon Health app started giving advice on ways to self-treat, half the company’s patients stopped asking for an appointment, realizing they didn’t need one.
Babylon is not the only app of its kind—others include Ada, Your.MD, and Dr. AI. But Babylon is the front-runner because it’s been integrated with the UK’s National Health Service (NHS), showing how such tech could change the way health services are run and paid for. Last year Babylon started a trial with a hospital trust in London in which calls to the NHS’s non-emergency 111 advice line are handled partly by Babylon’s AI. Callers are asked if they want to wait for a human to pick up or download the Babylon-powered “NHS Online: 111” app instead.
Around 40,000 people have already opted for the app. Between late January and early October 2017, 40% of those who used the app were directed to self-treatment options rather than a doctor—around three times the proportion of people who spoke to a human operator. But both the AI and the humans staffing the phone line told the same proportion of people to seek emergency care (21%).
Now Babylon has also co-launched the UK’s first digital doctor’s practice, called GP at Hand. People in London can register with the service as they would with their local doctor. But instead of waiting for an appointment slot and taking time off work to see a physician in person, patients can either chat with the app or talk to a GP at Hand doctor on a video link. And in many cases the call isn’t needed. The human doctor becomes your last resort rather than your first.
GP at Hand has proved popular; some 50,000 people registered in the first few months, among them Matt Hancock, the UK health minister. Babylon now wants to expand across the UK. The service is also available in Rwanda, where 20% of the adult population has already signed up, according to Mobasher Butt, a doctor and a member of Babylon’s founding team. And it’s setting up services in Canada, with plans to do the same in the US, the Middle East, and China.
Your doctor is overloaded For 70 years, the NHS has provided free medical care to anyone who needs it, paid for by UK taxpayers. But it is showing signs of strain. Two generations ago there were 50 million Britons, and their average life expectancy was not much over 60 years. There are now 66 million, and most can expect to live into their 80s. That stretches the resources of a system that has never been flush with cash.
On average, people in the UK see a doctor six times a year, twice as often as a decade ago. From 2011 to 2015, the average GP clinic’s patient list grew by 10% and its number of contacts with patients (by phone or in person) grew by 15.4%, according to a survey by the King’s Fund. In a survey by the British Medical Association in 2016, 84% of general practitioners said they found their workload either “unmanageable” or “excessive,” with “a direct impact on the quality” of care they gave their patients.
In turn, people often have to wait days to get a non-urgent consultation. Many show up at hospital emergency departments instead, adding even more strain to the system. “We have the perception that it’s older people who turn up [at the emergency room],” says Lee Dentith, CEO and founder of the Now Healthcare Group, a health-tech company based in Manchester, UK. “But it’s not. It’s the 18- to 35-year-olds who are unwilling to wait a week for an appointment.” Population and life expectancy will continue to grow. By 2040, it is estimated, the UK will have more than 70 million people, one in four of whom will be over 65. Most other rich countries are also getting older.
At the same time, the next few decades will see more people living with long-term illnesses such as diabetes and heart disease. And better treatment for diseases like cancer means millions more people will be living with or recovering from them.
Of course, the UK is not alone. Whether because of prohibitive costs in the US or the lack of medical professionals in Rwanda, “all health systems around the world are stretched,” says Butt. “There’s not enough clinical resources. There’s not enough money.” Which is where companies like Babylon come in. A chatbot can act as a gatekeeper to overworked doctors. Freeing up even more of the doctor’s time, the AI can also handle paperwork and prescriptions, and even monitor care at home.
A chatbot can also direct people to the right provider. “A GP is not always the best person to see,” says Naureen Bhatti, a general practitioner in East London. “A nurse might be better at dressing a wound, and a pharmacist might be better for advice about a repeat prescription. Anything that helps unload a very overloaded system, allowing doctors to do what they are best at, is always welcome.” Sometimes AI is just better Bhatti remembers how upset lots of doctors were when patients first started bringing in printouts from their own web searches. “How dare they try and diagnose themselves! Don’t think you can negate my six years at medical school with your one hour on the internet.” But she likes to see it from the patients’ perspective: “Well, don’t think you can negate my six years of living with this illness with your one-hour lecture at medical school.” When a patient does meet a doctor face to face, the AI can still help by suggesting diagnoses and possible treatments. This is useful even when a doctor is highly skilled, says Butt, and it’s “really critical” in poorer countries with a shortage of competent doctors.
AI can also help spot serious conditions early. “By the time most diseases are diagnosed, a £10 problem has become a £1,000 one,” says Parsa. “We wait until we break down before going to a doctor.” Catching a disease early slashes the cost of treating it.
These apps first hit the market as private health services. Now they are starting to integrate with national health-care providers and insurers. For example, Ada users can share their chatbot sessions with their NHS doctor, and the company is now working with a handful of GP practices to enable the chatbot to refer them to the doctor. Another app, Now Patient, provides video consultations with your existing doctor, and it also acts as an AI pharmacist. Users can buy their drugs from the Now Healthcare Group’s drug-delivery service. It’s a kind of Amazon for medicines.
“This is a service that patients really want, that they didn’t previously have, and that is now being provided to them through the NHS 365 days a year, 24 hours a day, for free,” Butt says of Babylon. “And the brilliant thing is it doesn’t cost the NHS a single penny more to deliver that.” Not only will the AI in these apps get smarter; it will get to know its users better. “We’re building in the ability for patients to manage their health not only when they’re sick, but also when they’re not sick,” says Butt. The apps will become constant companions for millions of us, advising us and coaxing us through everyday health choices.
Death by chatbot? Not everyone is happy about all this. For a start, there are safety concerns. Parsa compares what Babylon does with your medical data to what Facebook does with your social activities—amassing information, building links, drawing on what it knows about you to prompt some action. Suggesting you make a new friend won’t kill you if it’s a bad recommendation, but the stakes are a lot higher for a medical app.
According to Babylon, its chatbot can identify medical conditions as well as human doctors do, and give treatment advice that’s safer. In a study posted online in June and coauthored with researchers at Imperial College London, Stanford University, and the Northeastern Medical Group, Babylon put its AI through a version of the final exam of the Royal College of General Practitioners (RCGP), which British GPs must pass in order to practice unsupervised. Babylon’s AI scored 81%, 9% higher than the average grade achieved by UK medical students.
The RCGP was quick to distance itself from Babylon’s hype, however. “The potential of technology to support doctors to deliver the best possible patient care is fantastic, but at the end of the day, computers are computers, and GPs are highly trained medical professionals: the two can’t be compared and the former may support but will never replace the latter,” said RCGP vice chair Martin Marshall in a statement. “No app or algorithm will be able to do what a GP does.” Others level far more serious charges, suggesting that Babylon has focused on making its service accessible and affordable at the expense of patients’ safety. One Twitter user with the handle DrMurphy11 (he’s an NHS consultant who told me he needs to remain anonymous because of the corporate culture there) has coined the hashtag #DeathByChatbot. In videos showing interactions with the app, DrMurphy11 suggests that Babylon’s AI misses obvious diagnoses and fails to ask the right questions. “I have no concerns about health tech or AI in general,” he says. “No doctor wants to make mistakes, and any system that helps minimize the risk of harm from human error will be welcomed.” But he’s worried that companies are misleading doctors and the public with marketing claims that vastly oversell their current tech.
Babylon has also met with criticism in Rwanda, where it runs the Babyl service, for not taking local epidemiology into account. In an interview with the BBC, Rwanda’s minister of health claimed that the Babyl app included no questions about malaria, for example (although Babylon disputes this).
Still, while Babylon may not be as good as a real doctor (and such apps are always careful to recommend you see a real doctor when in doubt), playing it too safe would defeat the purpose. “We wanted to re-create the same pragmatic approach that a clinician takes,” says Butt. “If we just had a group of nonclinical people building the service, they might have gone for something that was 100 percent safe, but that could mean you send everyone to hospital, which is not what a real doctor or nurse would do.” Another fear is that digital-first services will create a two-tiered health-care system. For example, GP at Hand advises people with serious medical issues to think twice about signing up to a practice that offers mostly remote access to doctors. That might seem prudent, but it has led to accusations that GP at Hand is effectively cherry-picking younger patients with less complex—and less expensive—health-care needs. Since British GP practices get per-patient funding from the NHS, cherry-picking would mean the rest of the health-care system is left to do more with less.
For some GPs, this isn’t acceptable. “We take everybody,” says Bhatti. But Oliver Michelson, a spokesperson for the NHS, accepts that GP at Hand has to issue some form of caveat—it can’t realistically welcome everyone. “They are not denying people access but saying that if you’re going to need to come into your GP regularly, a digital-first service may not be the best place to be,” he says.
And Butt insists that they exclude nobody. “The service is available to everyone,” he says; it just may not suit some people, such as those with severe learning difficulties or visual impairments, who would struggle with the app.
People still come in handy For Bhatti, having a local doctor who knows you is a crucial part of the health system. “Knowing your doctor saves lives,” she says. “Doctors will pick up things because there’s continuity.” She thinks this is just as much an issue for doctors as for patients. “How do we make this a job people want to do?” she says. “I don’t think people working flexibly, consulting from their kitchen, is why people come to medicine. They come to meet patients.” Not even Butt envisions chatbots replacing human doctors entirely. “Care is not just about diagnosing or prescribing medicine,” he says. “It’s about knowing your patient is going to be able to cope with the chemotherapy you’re proposing for them, knowing that their family will be able to offer them the support that they’re going to need for the next few months. Currently there is no software that’s going to be able to replace that.” by Will Douglas Heaven. This story was part of our November/December 2018 issue.
"
|
1,521 | 2,016 |
"Five Lessons from AlphaGo’s Historic Victory | MIT Technology Review"
|
"https://www.technologyreview.com/s/601072/five-lessons-from-alphagos-historic-victory"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Five Lessons from AlphaGo’s Historic Victory By Will Knight archive page AlphaGo handily beat 18-time world Go champion Lee Sedol 4-1, and in doing so taught us several interesting lessons about where AI research is today, and where it is headed.
There’s life in old AI approaches One fascinating thing about AlphaGo is the unusual way it was designed. The software combined deep learning—the hottest AI technique out there today—with a much older, and far less fashionable, approach. Deep learning involves using very large simulated neural networks, and usually it eschews logic or symbol manipulation of the kind pioneered by the likes of Marvin Minsky and John McCarthy. But AlphaGo combines deep learning with something called tree search, a technique invented by one of Minsky’s contemporaries and colleagues, Claude Shannon. Perhaps, then, we will increasingly see connectionist and symbolic AI coming together in the future.
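That division of labor can be sketched on a far smaller game than Go. In the toy below, a Nim-style game (take one to three stones; whoever takes the last stone wins) is explored with a negamax tree search, while two hand-written heuristics stand in for the neural networks: one ranks which moves to expand, the other scores positions at the search frontier. Everything here is a stand-in for illustration; AlphaGo itself used Monte Carlo tree search guided by deep policy and value networks trained on human games and self-play.

```python
# Toy "network-guided" tree search on Nim: take 1-3 stones, last stone wins.
from functools import lru_cache

MOVES = (1, 2, 3)

def policy_prior(stones):
    """Stand-in for a policy network: rank the candidate moves to search first.
    (Heuristic: prefer leaving the opponent a multiple of four stones.)"""
    return sorted((m for m in MOVES if m <= stones), key=lambda m: (stones - m) % 4)

def value_estimate(stones):
    """Stand-in for a value network: rough score of a frontier position,
    from the perspective of the player about to move."""
    return 1.0 if stones % 4 != 0 else -1.0

@lru_cache(maxsize=None)
def search(stones, depth):
    """Negamax search that only expands the top-ranked moves and falls back
    on the value estimate when the depth budget runs out."""
    if stones == 0:
        return -1.0                      # previous player took the last stone
    if depth == 0:
        return value_estimate(stones)
    return max(-search(stones - m, depth - 1) for m in policy_prior(stones)[:2])

stones = 21
best_move = max(policy_prior(stones), key=lambda m: -search(stones - m, 4))
print(f"with {stones} stones left, the search recommends taking {best_move}")  # 1
```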
Polanyi’s paradox isn’t a problem The game of Go, in which players try to surround and capture each other’s pieces across a large board, is a neat example of Polanyi’s famous paradox: “We know more than we can tell.” Unlike with chess, there aren’t straightforward guidelines for playing the game or measuring progress, which is one reason why Go has historically been so difficult for computers to play. Machine learning, where a computer isn’t programmed (in the conventional sense) but rather generates its own algorithm for learning from examples, offers a way for computers to navigate Polanyi’s paradox. Plenty of things we do, like driving a car or recognizing a face, are similar.
Some economists have highlighted this as an important point. And, as an article in the New York Times shows , some even see AlphaGo’s triumph as compelling evidence that computers will take over more tasks (and jobs) as machine learning is used ever more widely.
AlphaGo isn’t really AI Not so fast, though. Amazing as AlphaGo is, it’s still a long way from truly intelligent. As AI expert and robotics entrepreneur Jean-Christophe Baillie points out , real intelligence will require not just more sophisticated learning but things like embodiment and the ability to communicate. Indeed, driving a car on a busy city street or interacting with someone you recognize is a lot more complex than we might realize. So while machine learning might let computers take on more tasks, it’s going to be a long time before they can replace everything people do.
AlphaGo is pretty inefficient Compared with a human, AlphaGo learns quickly, consuming data on previous games and playing against itself at silicon speed. But it’s much less efficient than a person at learning, in that it requires far more examples of Go games in order to pick up effective techniques. This is one of the key problems with deep learning, and many people are trying to solve it by finding ways to learn either from new kinds of data or from less data altogether.
Commercialization isn’t obvious The skills demonstrated by AlphaGo—subtle pattern recognition, planning, and decision making—are obviously important. But it’s less obvious how they might be turned into a commercially viable product. Demis Hassabis, the founder of Google DeepMind, has said that the techniques developed for AlphaGo could be used to build a personal assistant that learns its master’s preferences and habits more effectively. But human language is a lot more complex than a board game , and a lot harder to learn from. In other words, it might be tricky to apply AlphaGo’s specific skill set in the messy real world.
(Read more: New York Times, IEEE Spectrum, Nature, “The Missing Link of Artificial Intelligence,” “Can this Man Make AI More Human?”) by Will Knight
"
|
1,522 | 2,015 |
"Google DeepMind Teaches Artificial Intelligence Machines to Read | MIT Technology Review"
|
"https://www.technologyreview.com/s/538616/google-deepmind-teaches-artificial-intelligence-machines-to-read"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google DeepMind Teaches Artificial Intelligence Machines to Read By Emerging Technology from the arXiv archive page A revolution in artificial intelligence is currently sweeping through computer science. The technique is called deep learning and it’s affecting everything from facial and voice to fashion and economics.
But one area that has not yet benefitted is natural language processing—the ability to read a document and then answer questions about it. That’s partly because deep learning machines must first learn their trade from vast databases that are carefully annotated for the purpose. However, these simply do not exist in sufficient size to be useful.
Today, that changes thanks to the work of Karl Moritz Hermann at Google DeepMind in London and a few pals. These guys say the special way that the Daily Mail and CNN write online news articles allows them to be used in this way. And the sheer volume of articles available online creates, for the first time, a database that computers can use to learn from and then answer related questions about. In other words, DeepMind is using Daily Mail and CNN articles to teach computers to read.
The deep learning revolution has come about largely because of two breakthroughs. The first is related to neural networks, where computer scientists have developed new techniques to train networks with many layers, a task that has been tricky because of the number of parameters that must be fine-tuned. The new techniques essentially produce “ready-made” nets that are ready to learn.
But a neural network is of little use without a database to learn from. Such a database has to be carefully annotated so that the machine has a gold standard to learn from. For example, for face recognition, the training database must contain pictures in which faces and their positions in the frame are clearly identified. And so that the images cover as many facial arrangements as possible, the databases have to be huge.
That’s recently become possible thanks to crowdsourcing services like Amazon’s Mechanical Turk. Various teams have created this kind of gold standard database by showing people pictures and asking them to draw bounding boxes around the faces they contain.
But creating a similarly annotated database for the written word is much harder. Sure, it’s possible to extract sentences that contain important points. But these aren’t much help because any machine algorithm quickly learns to hunt through the text for the same phrase, a trivial task for a computer.
Instead, the annotation must describe the content of the text without appearing within it. To understand the link, a learning algorithm must then look beyond the mere occurrence of words and phrases to their grammatical links and causal relationships.
Creating such a database is easier said than done. Computer scientists have generated small versions by hand but these are too tiny to be of much use to a neural network. And there seems little possibility of creating larger ones by hand because humans are generally poor at annotating text accurately, unless they are specialist editors.
Enter the Daily Mail website, MailOnline, and CNN online. These sites display news stories with the main points of the story displayed as bullet points that are written independently of the text. “Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents,” say Hermann and co.
That immediately suggests a way of creating an annotated database: take the news articles as the texts and the bullet point summaries as the annotation.
The DeepMind team goes further, however. They point out that it is still possible to work out the answer to many queries using simple word search approaches.
They give the following example of a type of problem known as a Cloze query, that machine learning algorithms are often used to solve. Here, the goal is to identify X in these modified headlines from the Daily Mail: a) The hi-tech bra that helps you beat breast X; b) Could Saccharin help beat X ?; c) Can fish oils help fight prostate X ? Hermann and co point out that a simple type of data mining algorithm called an ngram search could easily find the answer by looking for words that appear most often next to all these phrases. The answer, of course, is the word “cancer.” To foil this type of solution, Hermann and co anonymize the dataset by replacing the actors in sentences with a generic description. An example of some original text from the Daily Mail is this: The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.” An anonymized version of this text would be the following: The ent381 producer allegedly struck by ent212 will not press charges against the “ ent153 ” host, his lawyer said friday.
ent212 , who hosted one of the most - watched television shows in the world, was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “ to an unprovoked physical and verbal attack .” In this way it is possible to convert the following Cloze-type query to identify X from “ Producer X will not press charges against Jeremy Clarkson, his lawyer says ” to “ Producer X will not press charges against ent212 , his lawyer says.
” And the required answer changes from “Oisin Tymon” to “ ent212.
” In that way, the anonymized actor can be identified only with some understanding of the grammatical links and causal relationships between the entities in the story.
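A toy version of that anonymization step, with a hand-written entity list standing in for the entity detection and coreference machinery the real pipeline relied on, might look like this:

```python
# Hypothetical sketch of entity anonymization for a Cloze-style query.
import re

ENTITIES = ["Jeremy Clarkson", "Top Gear", "BBC", "Oisin Tymon"]

def anonymize(text, entities):
    """Replace each named entity with a stable ent### marker."""
    mapping = {name: f"ent{100 + i}" for i, name in enumerate(entities)}
    for name, marker in mapping.items():
        text = re.sub(re.escape(name), marker, text)
    return text, mapping

article = ("The BBC producer allegedly struck by Jeremy Clarkson will not "
           "press charges against the Top Gear host.")
summary = "Producer Oisin Tymon will not press charges against Jeremy Clarkson."

anon_article, mapping = anonymize(article, ENTITIES)
anon_summary, _ = anonymize(summary, ENTITIES)

# The query masks one entity in the summary; the answer is its marker,
# which can only be recovered by reading the anonymized article.
answer = mapping["Oisin Tymon"]
query = anon_summary.replace(answer, "X", 1)

print(anon_article)
print("Q:", query)
print("A:", answer)
```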
The resulting database is vast, consisting of 110,000 articles from CNN and 218,000 articles from the Daily Mail website.
Having created this kind of database for the first time, Hermann and co can’t resist using it to put several machine learning techniques through their paces. They compare conventional natural language processing techniques, such as measuring the distance between combinations of words, and more modern neural network approaches.
The results clearly show how powerful neural nets have become. Hermann and co say the best neural nets can answer 60 percent of the queries put to them. They suggest that these machines can answer all queries that are structured in a simple way and struggle only with queries that have more complex grammatical structures.
There are some caveats of course. The most obvious is that articles from the Daily Mail and CNN have a very specific underlying structure that differs from other nonjournalistic forms of writing. Just how this underlying structure influences the results isn’t clear.
Neither is it clear how these machines compare to human capabilities, something that would be straightforward to find out using services like Mechanical Turk. That would put in context DeepMind’s claim, implied in the title of its paper, that these machines are learning to comprehend what they read.
Nevertheless, this is interesting work that sets the scene for some fascinating developments in the near future. Machine reading is coming; the only question is how quickly.
Ref: arxiv.org/abs/1506.03340 : Teaching Machines to Read and Comprehend by Emerging Technology from the arXiv
"
|
1,523 | 2,018 |
"The GANfather: The man who’s given machines the gift of imagination | MIT Technology Review"
|
"https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The GANfather: The man who’s given machines the gift of imagination By Martin Giles archive page Christie Hemm Klok One night in 2014, Ian Goodfellow went drinking to celebrate with a fellow doctoral student who had just graduated. At Les 3 Brasseurs (The Three Brewers), a favorite Montreal watering hole, some friends asked for his help with a thorny project they were working on: a computer that could create photos by itself.
Researchers were already using neural networks, algorithms loosely modeled on the web of neurons in the human brain, as “generative” models to create plausible new data of their own. But the results were often not very good: images of a computer-generated face tended to be blurry or have errors like missing ears. The plan Goodfellow’s friends were proposing was to use a complex statistical analysis of the elements that make up a photograph to help machines come up with images by themselves. This would have required a massive amount of number-crunching, and Goodfellow told them it simply wasn’t going to work.
But as he pondered the problem over his beer, he hit on an idea. What if you pitted two neural networks against each other? His friends were skeptical, so once he got home, where his girlfriend was already fast asleep, he decided to give it a try. Goodfellow coded into the early hours and then tested his software. It worked the first time.
What he invented that night is now called a GAN, or “generative adversarial network.” The technique has sparked huge excitement in the field of machine learning and turned its creator into an AI celebrity.
In the last few years, AI researchers have made impressive progress using a technique called deep learning. Supply a deep-learning system with enough images and it learns to, say, recognize a pedestrian who’s about to cross a road. This approach has made possible things like self-driving cars and the conversational technology that powers Alexa, Siri, and other virtual assistants.
But while deep-learning AIs can learn to recognize things, they have not been good at creating them. The goal of GANs is to give machines something akin to an imagination.
Doing so wouldn’t merely enable them to draw pretty pictures or compose music; it would make them less reliant on humans to instruct them about the world and the way it works. Today, AI programmers often need to tell a machine exactly what’s in the training data it’s being fed—which of a million pictures contain a pedestrian crossing a road, and which don’t. This is not only costly and labor-intensive; it limits how well the system deals with even slight departures from what it was trained on. In the future, computers will get much better at feasting on raw data and working out what they need to learn from it without being told.
That will mark a big leap forward in what’s known in AI as “unsupervised learning.” A self-driving car could teach itself about many different road conditions without leaving the garage. A robot could anticipate the obstacles it might encounter in a busy warehouse without needing to be taken around it.
Our ability to imagine and reflect on many different scenarios is part of what makes us human. And when future historians of technology look back, they’re likely to see GANs as a big step toward creating machines with a human-like consciousness. Yann LeCun, Facebook’s chief AI scientist, has called GANs “the coolest idea in deep learning in the last 20 years.” Another AI luminary, Andrew Ng, the former chief scientist of China’s Baidu, says GANs represent “a significant and fundamental advance” that’s inspired a growing global community of researchers.
The GANfather, Part II: AI fight club Goodfellow is now a research scientist on the Google Brain team, at the company’s headquarters in Mountain View, California. When I met him there recently, he still seemed surprised by his superstar status, calling it “a little surreal.” Perhaps no less surprising is that, having made his discovery, he now spends much of his time working against those who wish to use it for evil ends.
The magic of GANs lies in the rivalry between the two neural nets. It mimics the back-and-forth between a picture forger and an art detective who repeatedly try to outwit one another. Both networks are trained on the same data set. The first one, known as the generator, is charged with producing artificial outputs, such as photos or handwriting, that are as realistic as possible. The second, known as the discriminator, compares these with genuine images from the original data set and tries to determine which are real and which are fake. On the basis of those results, the generator adjusts its parameters for creating new images. And so it goes, until the discriminator can no longer tell what’s genuine and what’s bogus.
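To make that adversarial back-and-forth concrete, here is a minimal sketch of one GAN training step in PyTorch. The network sizes, optimizer settings, and the assumption of flattened 28x28 images scaled to the range -1 to 1 are illustrative choices, not details from Goodfellow’s original code.

import torch
import torch.nn as nn

# Minimal GAN training step (illustrative sizes; images flattened to 784 values in [-1, 1]).
latent_dim, image_dim = 64, 28 * 28
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # 1) Teach the discriminator to label real images 1 and generated images 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Teach the generator to make the discriminator call its fakes real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

Training alternates these two updates over many batches until, as described above, the discriminator can no longer reliably separate genuine images from generated ones.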
In one widely publicized example last year, researchers at Nvidia, a chip company heavily invested in AI, trained a GAN to generate pictures of imaginary celebrities by studying real ones. Not all the fake stars it produced were perfect, but some were impressively realistic. Unlike other machine-learning approaches that require tens of thousands of training images, GANs can become proficient with a few hundred.
This power of imagination is still limited. Once it’s been trained on a lot of dog photos, a GAN can generate a convincing fake image of a dog that has, say, a different pattern of spots; but it can’t conceive of an entirely new animal. The quality of the original training data also has a big influence on the results. In one telling example, a GAN began producing pictures of cats with random letters integrated into the images. Because the training data contained cat memes from the internet, the machine had taught itself that words were part of what it meant to be a cat.
GANs are also temperamental, says Pedro Domingos, a machine-learning researcher at the University of Washington. If the discriminator is too easy to fool, the generator’s output won’t look realistic. And calibrating the two dueling neural nets can be difficult, which explains why GANs sometimes spit out bizarre stuff such as animals with two heads.
Still, the challenges haven’t deterred researchers. Since Goodfellow and a few others published the first study on his discovery, in 2014, hundreds of GAN-related papers have been written. One fan of the technology has even created a web page called the “GAN zoo,” dedicated to keeping track of the various versions of the technique that have been developed.
The most obvious immediate applications are in areas that involve a lot of imagery, such as video games and fashion: what, for instance, might a game character look like running through the rain? But looking ahead, Goodfellow thinks GANs will drive more significant advances. “There are a lot of areas of science and engineering where we need to optimize something,” he says, citing examples such as medicines that need to be more effective or batteries that must get more efficient. “That’s going to be the next big wave.” In high-energy physics, scientists use powerful computers to simulate the likely interactions of hundreds of subatomic particles in machines like the Large Hadron Collider at CERN in Switzerland. These simulations are slow and require massive computing power. Researchers at Yale University and Lawrence Berkeley National Laboratory have developed a GAN that, after training on existing simulation data, learns to generate pretty accurate predictions of how a particular particle will behave, and does it much faster.
Medical research is another promising field. Privacy concerns mean researchers sometimes can’t get enough real patient data to, say, analyze why a drug didn’t work. GANs can help solve this problem by generating fake records that are almost as good as the real thing, says Casey Greene of the University of Pennsylvania. This data could be shared more widely, helping to advance research, while the real records are tightly protected.
The GANfather, Part III: Bad fellows
There is a darker side, however. A machine designed to create realistic fakes is a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections. AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of politicians. GANs didn’t create this problem, but they’ll make it worse.
Hany Farid, who studies digital forensics at Dartmouth College, is working on better ways to spot fake videos, such as detecting slight changes in the color of faces caused by inhaling and exhaling that GANs find hard to mimic precisely. But he warns that GANs will adapt in turn. “We’re fundamentally in a weak position,” says Farid.
This cat-and-mouse game will play out in cybersecurity, too. Researchers are already highlighting the risk of “black box” attacks, in which GANs are used to figure out the machine-learning models that many security programs rely on to spot malware. Having divined how a defender’s algorithm works, an attacker can evade it and insert rogue code. The same approach could also be used to dodge spam filters and other defenses.
Goodfellow is well aware of the dangers. Now heading a team at Google that’s focused on making machine learning more secure, he warns that the AI community must learn the lesson of previous waves of innovation, in which technologists treated security and privacy as an afterthought. By the time they woke up to the risks, the bad guys had a significant lead. “Clearly, we’re already beyond the start,” he says, “but hopefully we can make significant advances in security before we’re too far in.” Nonetheless, he doesn’t think there will be a purely technological solution to fakery. Instead, he believes, we’ll have to rely on societal ones, such as teaching kids critical thinking by getting them to take things like speech and debating classes. “In speech and debate you’re competing against another student,” he says, “and you’re thinking about how to craft misleading claims, or how to craft correct claims that are very persuasive.” He may well be right, but his conclusion that technology can’t cure the fake-news problem is not one many will want to hear.
by Martin Giles. This story was part of our March/April 2018 issue.
"
|
1,524 | 2,020 |
"The UK exam debacle reminds us that algorithms can’t fix broken systems | MIT Technology Review"
|
"https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The UK exam debacle reminds us that algorithms can’t fix broken systems By Karen Hao archive page Ms Tech | AP, Getty When the UK first set out to find an alternative to school leaving qualifications, the premise seemed perfectly reasonable. Covid-19 had derailed any opportunity for students to take the exams in person, but the government still wanted a way to assess them for university admission decisions.
Chief among its concerns was an issue of fairness. Teachers had already made predictions of their students’ exam scores, but previous studies had shown that these could be biased on the basis of age, gender, and ethnicity. After a series of expert panels and consultations , Ofqual, the Office of Qualifications and Examinations Regulation, turned to an algorithm. From there, things went horribly wrong.
Nearly 40% of students ended up receiving exam scores downgraded from their teachers’ predictions, threatening to cost them their university spots.
Analysis of the algorithm also revealed that it had disproportionately hurt students from working-class and disadvantaged communities and inflated the scores of students from private schools. On August 16, hundreds chanted “ Fuck the algorithm ” in front of the UK's Department of Education building in London to protest the results. By the next day, Ofqual had reversed its decision.
Students will now be awarded either their teacher’s predicted scores or the algorithm’s—whichever is higher.
The debacle feels like a textbook example of algorithmic discrimination.
Those who have since dissected the algorithm have pointed out how predictable it was that things would go awry; it was trained, in part, not just on each student’s past academic performance but also on the past entrance-exam performance of the student’s school. The approach could only have led to punishment of outstanding outliers in favor of a consistent average.
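To see why, consider a deliberately oversimplified toy model (this is an illustration of the failure mode, not Ofqual’s actual algorithm): if this year’s grades are forced to fit each school’s historical grade distribution, a standout student at a historically low-scoring school cannot receive a grade the school has never produced.

# Toy illustration only; not Ofqual's actual model.
# Rank this year's students by their teacher-predicted grade, then hand out
# the school's historical grades in that order.
def moderate(teacher_predictions, school_historical_grades):
    ranked = sorted(teacher_predictions, key=lambda s: s["predicted"], reverse=True)
    reused = sorted(school_historical_grades, reverse=True)
    return [{"name": s["name"], "awarded": g} for s, g in zip(ranked, reused)]

students = [{"name": "outstanding outlier", "predicted": 95},
            {"name": "typical student", "predicted": 70}]
history = [72, 55]   # the school's past results never went above 72
print(moderate(students, history))
# [{'name': 'outstanding outlier', 'awarded': 72}, {'name': 'typical student', 'awarded': 55}]

However the real model weighted its inputs, any scheme that pins individual results to a school’s past distribution will compress exceptional students toward that school’s average.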
But the root of the problem runs deeper than bad data or poor algorithmic design. The more fundamental errors were made before Ofqual even chose to pursue an algorithm. At bottom, the regulator lost sight of the ultimate goal: to help students transition into university during anxiety-ridden times. In this unprecedented situation, the exam system should have been completely rethought.
“There was just a spectacular failure of imagination,” says Hye Jung Han, a researcher at Human Rights Watch in the US, who focuses on children’s rights and technology. “They just didn’t question the very premise of so many of their processes even when they should have.” At a basic level, Ofqual faced two potential objectives after exams were canceled. The first was to avoid grade inflation and standardize the scores; the second was to assess students as accurately as possible in a way useful for university admissions. Under a directive from the secretary of state, it prioritized the first goal. “I think really that’s the moment that was the problem,” says Hannah Fry, a senior lecturer at University College London and author of Hello World: How to Be Human in the Age of the Machine.
“They were optimizing for the wrong thing. Then it basically doesn’t matter what the algorithm is—it was never going to be perfect.” The objective completely shaped the way Ofqual went about pursuing the problem. The need for standardization overruled everything else. The regulator then logically chose one of the best standardization tools, a statistical model, for predicting a distribution of entrance-exam scores for 2020 that would match the distribution from 2019.
Had Ofqual chosen the other objective, things would have gone quite differently. It likely would have scrapped the algorithm and worked with universities to change how the exam grades are weighted in their admissions processes. “If they just looked one step past their immediate problem and looked at what are the purpose of grades—to go to university, to be able to get jobs—they could have flexibly worked with universities and with workplaces to say, ‘Hey, this year grades are going to look different, which means that any important decisions that traditionally were made based off of grades also need to be flexible and need to be changed,’” says Han.
In fixating on the perceived fairness of an algorithmic solution, Ofqual blinded itself to the glaring inequities of the overall system. “There’s an inherent unfairness in defining the problem to predict student grades as if a pandemic hadn’t happened,” Han says. “It actually ignores what we already know, which is that the pandemic exposed all of these digital divides in education.” Ofqual’s failures are not unique.
In a report published last week by the Oxford Internet Institute, researchers found that one of the most common traps organizations fall into when implementing algorithms is the belief that they will fix really complex structural issues. These projects “lend themselves to a kind of magical thinking,” says Gina Neff, an associate professor at the institute, who coauthored the report. “Somehow the algorithm will simply wash away any teacher bias, wash away any attempt at cheating or gaming the system.” But the truth is, algorithms cannot fix broken systems. They inherit the flaws of the systems in which they’re placed. In this case, the students and their futures ultimately bore the brunt of the harm. “I think it’s the first time that an entire nation has felt the injustice of an algorithm simultaneously,” says Fry.
Fry, Neff, and Han all worry that this won’t be the end of algorithmic gaffes. Despite the new public awareness of the problems, designing and implementing fair and beneficial algorithms is frankly really hard.
Nonetheless, they urge organizations to make the most of the lessons learned from this experience. First, return to the objective and critically think about whether it’s the right one. Second, evaluate the structural issues that need to be fixed in order to achieve the objective. (“When the government cancelled the exam in March, that should have been the signal to come up with another strategy to allow a much larger ecology of decision makers to fairly assess student performance,” Neff says.) Finally, pick a solution that’s easy to understand, implement, and contest, especially in times of uncertainty. In this case, says Fry, that means forgoing the algorithm in favor of teacher-predicted scores: “I’m not saying that’s perfect,” she says, “but it’s at least a simple and transparent system.”
"
|
1,525 | 2,020 |
"This startup is using AI to give workers a “productivity score” | MIT Technology Review"
|
"https://www.technologyreview.com/2020/06/04/1002671/startup-ai-workers-productivity-score-bias-machine-learning-business-covid"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts This startup is using AI to give workers a “productivity score” By Will Douglas Heaven archive page Kate Sade / Unsplash In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.
Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.” Now, one firm wants to take things even further. It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining—and those who are not.
How you feel about this will depend on how you view the covenant between employer and employee. Is it okay to be spied on by people because they pay you? Do you owe it to your employer to be as productive as possible, above all else? Critics argue that workplace surveillance undermines trust and damages morale. Workers’ rights groups say that such systems should only be installed after consulting employees. “It can create a massive power imbalance between workers and the management,” says Cori Crider, a UK-based lawyer and cofounder of Foxglove, a nonprofit legal firm that works to stop governments and big companies from misusing technology. “And the workers have less ability to hold management to account.” Whatever your views, this kind of software is here to stay—in part because remote work is normalizing it. “I think workplace monitoring is going to become mainstream,” says Tommy Weir, CEO of Enaible, the startup based in Boston that is developing the new monitoring software. “In the next six to 12 months it will become so pervasive it disappears.” Weir thinks most tools on the market don’t go far enough. “Imagine you’re managing somebody and you could stand and watch them all day long, and give them recommendations on how to do their job better,” says Weir. “That’s what we’re trying to do. That’s what we’ve built.” Weir founded Enaible in 2018 after coaching CEOs for 20 years. The firm already provides its software to several large organizations around the world, including the Dubai customs agency and Omnicom Media Group, a multinational marketing and corporate communications company. But Weir claims to also be in late-stage talks with Delta Airlines and CVS Health, a US health-care and pharmacy chain ranked #5 on the Fortune 500 list. Neither company would comment on whether or when they were preparing to deploy the system.
Weir says he has been getting four times as many inquiries since the pandemic closed down offices. “I’ve never seen anything like it,” he says.
Why the sudden uptick in interest? “Bosses have been seeking to wring every last drop of productivity and labor out of their workers since before computers,” says Crider. “But the granularity of the surveillance now available is like nothing we’ve ever seen.” It’s no surprise that this level of detail is attractive to employers, especially those looking to keep tabs on a newly remote workforce. But Enaible’s software, which it calls the AI Productivity Platform, goes beyond tracking things like email, Slack, Zoom, or web searches. None of that shows a full picture of what a worker is doing, says Weir—it’s just checking if you are working or not.
Once set up, the software runs in the background all the time, monitoring whatever data trail a company can provide for each of its employees. Using an algorithm called Trigger-Task-Time, the system learns the typical workflow for different workers: what triggers, such as an email or a phone call, lead to what tasks and how long those tasks take to complete.
Once it has learned a typical pattern of behavior for an employee, the software gives that person a “productivity score” between 0 and 100. The AI is agnostic to tasks, says Weir. In theory, workers across a company can still be compared by their scores even if they do different jobs. A productivity score also reflects how your work increases or decreases the productivity of other people on your team. There are obvious limitations to this approach. The system works best with employees who do a lot of repetitive tasks in places like call centers or customer service departments rather than those in more complex or creative roles.
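Enaible has not published how Trigger-Task-Time or its scoring works, so the following is a purely hypothetical sketch of how a trigger-to-task log could be reduced to a number from 0 to 100; every field, baseline, and formula here is an assumption for illustration, not the company’s method.

# Purely hypothetical sketch; Enaible has not published its scoring method.
# Assume a log of (trigger, task, seconds_taken) events and per-task team baselines.
from statistics import median

def productivity_score(events, baselines):
    ratios = []
    for trigger, task, seconds in events:
        baseline = baselines.get((trigger, task))
        if baseline:
            # Faster than the baseline gives a ratio above 1, capped at 2x.
            ratios.append(min(baseline / seconds, 2.0))
    if not ratios:
        return None
    return round(min(median(ratios), 2.0) / 2.0 * 100)

events = [("email", "quote_request", 300), ("call", "refund", 500)]
baselines = {("email", "quote_request"): 360, ("call", "refund"): 500}
print(productivity_score(events, baselines))   # 55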
But the idea is that managers can use these scores to see how their employees are getting on, rewarding them if they get quicker at doing their job or checking in with them if performance slips. To help them, Enaible’s software also includes an algorithm called Leadership Recommender, which identifies specific points in an employee’s workflow that could be made more efficient.
For some tasks, that might mean cutting the human out of the loop and automating it. In one example, the tool suggested that automating a 40-second quality-checking task that was performed by customer service workers 186,000 times a year would save them 5,200 hours. This meant that the human employees could devote more attention to more valuable work, improving customer-service response times, suggests Weir.
Business as usual But talk of cost cutting and time saving has long been double-speak for laying off staff. As the economy slumps, Enaible is promoting its software as a way for companies to identify the employees who must be retained—“those that are making a big difference in fulfilling company objectives and driving profits”—and keep them motivated and focused as they work from home.
The flipside, of course, is that the software can also be used by managers to choose whom to fire. “Companies will lay people off—they always have,” says Weir. “But you can be objective in how you do that, or subjective.” Crider sees it differently. “The thing that’s so insidious about these systems is that there’s a veneer of objectivity about them,” she says. “It’s a number, it’s on a computer—how could there be anything suspect? But you don’t have to scratch the surface very hard to see that behind the vast majority of these systems are values about what is to be prioritized.” Machine-learning algorithms also encode hidden bias in the data they are trained on. Such bias is even harder to expose when it’s buried inside an automated system. If these algorithms are used to assess an employee’s performance, it can be hard to appeal an unfair review or dismissal.
In a pitch deck, Enaible claims that the Dubai customs agency is now rolling out its software across the whole organization, with the goal of $75 million in “payroll savings” over the coming two years. “We’ve essentially decoupled our growth rate from our payroll,” the agency’s director general is quoted as saying. Omnicom Media Group is also happy with how Enaible helps it get more out of its employees. “Our global team needs tools that can move the needle when it comes to building our internal capacity without adding to our head count,” says CEO Nadim Samara. In other words, squeezing more out of existing employees.
Crider insists there are better ways to encourage people to work. “What you’re seeing is an effort to turn a human into a machine before the machine replaces them,” she says. “You’ve got to create an environment in which people feel trusted to do their job. You don’t get that by surveilling them.”
"
|
1,526 | 2,020 |
"A hybrid AI model lets it reason about the world’s physics like a child | MIT Technology Review"
|
"https://www.technologyreview.com/2020/03/06/905479/ai-neuro-symbolic-system-reasons-like-child-deepmind-ibm-mit"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts A hybrid AI model lets it reason about the world’s physics like a child By Karen Hao archive page A red rubber ball hits a blue rubber cylinder that continues on to hit a metal cylinder.
Courtesy of MIT-IBM Watson AI Lab A new data set reveals just how bad AI is at reasoning—and suggests that a new hybrid approach might be the best way forward.
Questions, questions: Known as CLEVRER, the data set consists of 20,000 short synthetic video clips and more than 300,000 question and answer pairings that reason about the events in the videos. Each video shows a simple world of toy objects that collide with one another following simulated physics. In one, a red rubber ball hits a blue rubber cylinder, which continues on to hit a metal cylinder.
The questions fall into four categories: descriptive (e.g., “What shape is the object that collides with the cyan cylinder?”), explanatory (“What is responsible for the gray cylinder’s collision with the cube?”), predictive (“Which event will happen next?”), and counterfactual (“Without the gray object, which event will not happen?”). The questions mirror many of the concepts that children learn early on as they explore their surroundings. But the latter three categories, which specifically require causal reasoning to answer, often stump deep-learning systems.
Fail: The data set, created by researchers at Harvard, DeepMind, and the MIT-IBM Watson AI Lab, is meant to help evaluate how well AI systems can reason. When the researchers tested several state-of-the-art computer vision and natural language models with the data set, they found that all of them did well on the descriptive questions but poorly on the others.
Mixing the old and the new: The team then tried a new AI system that combines both deep learning and symbolic logic. Symbolic systems used to be all the rage before they were eclipsed by machine learning in the late 1980s. But both approaches have their strengths: deep learning excels at scalability and pattern recognition; symbolic systems are better at abstraction and reasoning.
The composite system, known as a neuro-symbolic model , leverages both: it uses a neural network to recognize the colors, shapes, and materials of the objects and a symbolic system to understand the physics of their movements and the causal relationships between them. It outperformed existing models across all categories of questions.
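A highly simplified sketch of that division of labor follows; it is not the researchers’ actual code, and the perception step is stubbed out as a hand-written event trace. The point is only that once a neural network has reduced the video to symbols, ordinary rule-based code can walk the causal chain.

# Simplified sketch of the neuro-symbolic split; not the authors' actual model.
# Step 1 (neural, stubbed here): a vision network would turn video frames into
# a symbolic trace of objects and timestamped collision events.
trace = {
    "objects": [{"id": 0, "color": "red", "shape": "ball", "material": "rubber"},
                {"id": 1, "color": "blue", "shape": "cylinder", "material": "rubber"},
                {"id": 2, "color": "gray", "shape": "cylinder", "material": "metal"}],
    "collisions": [{"a": 0, "b": 1, "t": 12}, {"a": 1, "b": 2, "t": 30}],
}

# Step 2 (symbolic): answer an explanatory question such as "what led to the
# metal cylinder's collision?" by walking backwards through the event chain.
def causes_of(obj_id, trace):
    causes, frontier = [], {obj_id}
    for event in sorted(trace["collisions"], key=lambda e: e["t"], reverse=True):
        if event["a"] in frontier or event["b"] in frontier:
            causes.append(event)
            frontier |= {event["a"], event["b"]}
    return list(reversed(causes))

print(causes_of(2, trace))   # both collisions, in chronological order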
Why it matters: As children, we learn to observe the world around us, infer why things happened and make predictions about what will happen next. These predictions help us make better decisions, navigate our environments, and stay safe. Replicating that kind of causal understanding in machines will similarly equip them to interact with the world in a more intelligent way.
"
|
1,527 | 2,019 |
"An AI conference once known for blowout parties is finally growing up | MIT Technology Review"
|
"https://www.technologyreview.com/2019/12/13/131579/ai-conference-neurips-power-responsibility"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts An AI conference once known for blowout parties is finally growing up By Karen Hao archive page NeurIPS 2019 Karen Hao/MIT Technology Review Only two years ago, so I’m told, one of the hottest AI research conferences of the year was more giant party than academic exchange. In a fight for the best talent, companies handed out endless free swag and threw massive, blowout events , including one featuring Flo Rida, hosted by Intel. The attendees (mostly men in their early 20s and 30s), flush with huge salaries and the giddiness of being highly coveted, drank free booze and bumped the night away.
I never witnessed this version of NeurIPS, short for the Neural Information Processing Systems conference. I came for my first time last year, after the excess had reached its peak. Externally, the community was coming under increasing scrutiny as the upset of the 2016 US presidential election drove people to question the influence of algorithms in society. Internally, reports of sexual harassment, anti-Semitism, racism, and ageism were also driving conference-goers to question whether they should continue to attend.
So when I arrived in 2018, a diversity and inclusion committee had been appointed, and the long-standing abbreviation NIPS had been updated. Still, this year’s proceedings feel different from the last. The parties are smaller, the talks are more socially minded, and the conversations happening in between seem more aware of the ethical challenges that the field needs to address.
As the role of AI has expanded dramatically, along with the more troubling aspects of its impact, the community, it seems, has finally begun to reflect on its power and the responsibilities that come with it. As one attendee put it to me: “It feels like this community is growing up.” This change manifested in some concrete ways. Many of the technical sessions were more focused on addressing real-world, human-centric challenges rather than theoretical ones. Entire poster tracks were centered on better methods for protecting user privacy, ensuring fairness, and reducing the amount of energy it can take to run and train state-of-the-art models. Day-long workshops, scheduled to happen today and tomorrow, have titles like “Tackling Climate Change with Machine Learning” and “Fairness in Machine Learning for Health.” Additionally, many of the invited speakers directly addressed the social and ethical challenges facing the field—topics once dismissed as not core to the practice of machine learning. Their talks were also well received by attendees, signaling a new openness to engage with these issues. At the opening event, for example, cognitive psychologist and #metoo figurehead Celeste Kidd gave a rousing speech exhorting the tech industry to take responsibility for how its technologies shape people’s beliefs and debunking myths around sexual harassment. She received a standing ovation. In an opening talk at the Queer in AI symposium, Stanford researcher Ria Kalluri also challenged others to think more about how their machine-learning models could shift the power in society from those who have it to those who don’t. Her talk was widely circulated online.
Much of this isn’t coincidental. Through the work of the diversity and inclusion committee, the conference saw the most diverse participation in its history. Close to half the main-stage speakers were women, and a similar share were from minority groups; 20% of the over 13,000 attendees were also women, up from 18% last year. There were seven community-organized groups for supporting minority researchers, which is a record. These included Black in AI, Queer in AI, and Disability in AI, and they held parallel proceedings in the same space as NeurIPS to facilitate mingling of people and ideas.
When we involve more people from diverse backgrounds in AI, Kidd told me, we naturally talk more about how AI is shaping society, for good or for bad. “They come from a less privileged place and are more acutely aware of things like bias and injustice and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations,” she said. Kalluri echoed the sentiment. The intentional efforts to diversify the community, she said, are forcing it to “confront the questions of how power works in this field.” Despite the progress, however, many emphasized that the work is just getting started. Having 20% women is still appalling, and this year, as in past years, there continued to be Herculean challenges in securing visas for international researchers, particularly from Africa.
“Historically, this field has been pretty narrowed in on a particular demographic of the population, and the research that comes out reflects the values of those people,” says Katherine Heller, an assistant professor at Duke University and co-chair of the diversity committee. “What we want in the long run is a more inclusive place to shape what the future direction of AI is like. There’s still a far way to go.” Yes, there’s still a long way to go. But on Monday, as people lined up to thank Kidd for her talk one by one, I let myself feel hopeful.
"
|
1,528 | 2,019 |
"Military artificial intelligence can be easily and dangerously fooled | MIT Technology Review"
|
"https://www.technologyreview.com/2019/10/21/132277/military-artificial-intelligence-can-be-easily-and-dangerously-fooled"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Military artificial intelligence can be easily and dangerously fooled AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready? By Will Knight archive page A turtle and a rifle getty images Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets—a Tesla electric car.
The team, from the security lab of the Chinese tech giant Tencent, demonstrated several ways to fool the AI algorithms on Tesla’s car. By subtly altering the data fed to the car’s sensors, the researchers were able to bamboozle and bewilder the artificial intelligence that runs the vehicle.
In one case, a TV screen contained a hidden pattern that tricked the windshield wipers into activating. In another, lane markings on the road were ever-so-slightly modified to confuse the autonomous driving system so that it drove over them and into the lane for oncoming traffic.
Tesla’s algorithms are normally brilliant at spotting drops of rain on a windshield or following the lines on the road, but they work in a way that’s fundamentally different from human perception. That makes such “deep learning” algorithms, which are rapidly sweeping through different industries for applications such as facial recognition and cancer diagnosis, surprisingly easy to fool if you find their weak points.
Leading a Tesla astray might not seem like a strategic threat to the United States. But what if similar techniques were used to fool attack drones, or software that analyzes satellite images, into seeing things that aren’t there—or not seeing things that are?
Artificial intelligence-gathering
Around the world, AI is already seen as the next big military advantage.
Early this year, the US announced a grand strategy for harnessing artificial intelligence in many areas of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry. The Department of Defense’s proposed $718 billion budget for 2020 allocates $927 million for AI and machine learning. Existing projects include the rather mundane (testing whether AI can predict when tanks and trucks need maintenance) as well as things on the leading edge of weapons technology (swarms of drones).
The Pentagon’s AI push is partly driven by fear of the way rivals might use the technology. Last year Jim Mattis, then the secretary of defense, sent a memo to President Donald Trump warning that the US is already falling behind when it comes to AI. His worry is understandable.
In July 2017, China articulated its AI strategy, declaring that “the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security.” And a few months later, Vladimir Putin of Russia ominously declared: “Whoever becomes the leader in [the AI] sphere will become the ruler of the world.” The ambition to build the smartest, and deadliest, weapons is understandable, but as the Tesla hack shows, an enemy that knows how an AI algorithm works could render it useless or even turn it against its owners. The secret to winning the AI wars might rest not in making the most impressive weapons but in mastering the disquieting treachery of the software.
Battle bots
On a bright and sunny day last summer in Washington, DC, Michael Kanaan was sitting in the Pentagon’s cafeteria, eating a sandwich and marveling over a powerful new set of machine-learning algorithms.
A few weeks earlier, Kanaan had watched a video game in which five AI algorithms worked together to very nearly outmaneuver, outgun, and outwit five humans in a contest that involved controlling forces, encampments, and resources across a complex, sprawling battlefield. The brow beneath Kanaan’s cropped blond hair was furrowed as he described the action, though. It was one of the most impressive demonstrations of AI strategy he’d ever seen, an unexpected development akin to AI advances in chess, Atari, and other games.
The war game had taken place within Dota 2, a popular sci-fi video game that is incredibly challenging for computers. Teams must defend their territory while attacking their opponents’ encampments in an environment that is more complex and deceptive than any board game. Players can see only a small part of the whole picture, and it can take about half an hour to determine if a strategy is a winning one.
The AI combatants were developed not by the military but by OpenAI, a company created by Silicon Valley bigwigs including Elon Musk and Sam Altman to do fundamental AI research. The company’s algorithmic warriors, known as the OpenAI Five, worked out their own winning strategies through relentless practice, and by responding with moves that proved most advantageous.
It is exactly the type of software that intrigues Kanaan, one of the people tasked with using artificial intelligence to modernize the US military. To him, it shows what the military stands to gain by enlisting the help of the world’s best AI researchers. But whether they are willing is increasingly in question.
Kanaan was the Air Force lead on Project Maven, a military initiative aimed at using AI to automate the identification of objects in aerial imagery. Google was a contractor on Maven, and when other Google employees found that out, in 2018, the company decided to abandon the project. It subsequently devised an AI code of conduct saying Google would not use its AI to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Workers at some other big tech companies followed by demanding that their employers eschew military contracts. Many prominent AI researchers have backed an effort to initiate a global ban on developing fully autonomous weapons.
To Kanaan, however, it would be a big problem if the military couldn’t work with researchers like those who developed the OpenAI Five. Even more disturbing is the prospect of an adversary gaining access to such cutting-edge technology. “The code is just out there for anyone to use,” he said. He added: “war is far more complex than some video game.”
The AI surge
Kanaan is generally very bullish about AI, partly because he knows firsthand how useful it stands to be for troops. Six years ago, as an Air Force intelligence officer in Afghanistan, he was responsible for deploying a new kind of intelligence-gathering tool: a hyperspectral imager. The instrument can spot objects that are normally hidden from view, like tanks draped in camouflage or emissions from an improvised bomb-making factory. Kanaan says the system helped US troops remove many thousands of pounds of explosives from the battlefield. Even so, it was often impractical for analysts to process the vast amounts of data collected by the imager. “We spent too much time looking at the data and not enough time making decisions,” he says. “Sometimes it took so long that you wondered if you could’ve saved more lives.” A solution could lie in a breakthrough in computer vision by a team led by Geoffrey Hinton at the University of Toronto. It showed that an algorithm inspired by a many-layered neural network could recognize objects in images with unprecedented skill when given enough data and computer power.
Training a neural network involves feeding in data, like the pixels in an image, and continuously altering the connections in the network, using mathematical techniques, so that the output gets closer to a particular outcome, like identifying the object in the image. Over time, these deep-learning networks learn to recognize the patterns of pixels that make up houses or people. Advances in deep learning have sparked the current AI boom; the technology underpins Tesla’s autonomous systems and OpenAI’s algorithms.
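In code, the loop the article describes is short; here is a minimal, illustrative sketch in PyTorch (the layer sizes and learning rate are arbitrary assumptions).

import torch
import torch.nn as nn

# One supervised training step: compare outputs to labels, then nudge the weights.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how far the outputs are from the right answers
    loss.backward()                         # how each connection should change to do better
    optimizer.step()                        # adjust the connections a little in that direction
    return loss.item()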
Kanaan immediately recognized the potential of deep learning for processing the various types of images and sensor data that are essential to military operations. He and others in the Air Force soon began lobbying their superiors to invest in the technology. Their efforts have contributed to the Pentagon’s big AI push.
But shortly after deep learning burst onto the scene, researchers found that the very properties that make it so powerful are also an Achilles’ heel.
Just as it’s possible to calculate how to tweak a network’s parameters so that it classifies an object correctly, it is possible to calculate how minimal changes to the input image can cause the network to misclassify it. In such “adversarial examples,” just a few pixels in the image are altered, leaving it looking just the same to a person but very different to an AI algorithm. The problem can arise anywhere deep learning might be used—for example, in guiding autonomous vehicles, planning missions, or detecting network intrusions.
Amid the buildup in military uses of AI, these mysterious vulnerabilities in the software have been getting far less attention.
Moving targets
One remarkable object serves to illustrate the power of adversarial machine learning. It’s a model turtle.
To you or me it looks normal, but to a drone or a robot running a particular deep-learning vision algorithm, it seems to be … a rifle. In a separate project, the researchers had used 2D images so that an AI vision system made available through Google’s cloud would mistake it for just about anything. (Google has since updated the algorithm so that it isn’t fooled.) The turtle was created not by some nation-state adversary, but by four guys at MIT. One of them is Anish Athalye, a lanky and very polite young man who works on computer security in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In a video on Athalye’s laptop of the turtles being tested (some of the models were stolen at a conference, he says), it is rotated through 360 degrees and flipped upside down. The algorithm detects the same thing over and over: “rifle,” “rifle,” “rifle.” The earliest adversarial examples were brittle and prone to failure, but Athalye and his friends believed they could design a version robust enough to work on a 3D-printed object. This involved modeling a 3D rendering of objects and developing an algorithm to create the turtle, an adversarial example that would work at different angles and distances. Put more simply, they developed an algorithm to create something that would reliably fool a machine-learning model.
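The published name for the MIT group’s trick is Expectation Over Transformation. Very roughly, and simplified here to 2D image transforms rather than their full 3D rendering pipeline, it optimizes a perturbation against many random views at once; the random_transform function below is an assumed stand-in for those rotations, rescalings, and lighting changes, not part of any library.

import torch
import torch.nn.functional as F

def eot_attack(model, image, target_label, random_transform, steps=200, lr=0.01):
    # Rough sketch of Expectation Over Transformation, simplified to 2D:
    # optimize a small perturbation so the model's prediction, averaged over
    # random views of the object, becomes the attacker's chosen target class.
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = 0.0
        for _ in range(10):                              # sample several random views
            view = random_transform(image + delta)       # assumed helper: rotate/scale/relight
            loss = loss + F.cross_entropy(model(view), target_label)
        loss.backward()
        optimizer.step()
        delta.data.clamp_(-0.1, 0.1)                     # keep the change visually subtle
    return (image + delta).detach()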
The military applications are obvious. Using adversarial algorithmic camouflage, tanks or planes might hide from AI-equipped satellites and drones. AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets. Information fed into intelligence algorithms might be poisoned to disguise a terrorist threat or set a trap for troops in the real world.
Athalye is surprised by how little concern over adversarial machine learning he has encountered. “I’ve talked to a bunch of people in industry, and I asked them if they are worried about adversarial examples. The answer is, almost across the board, no,” he says, as companies are focused on getting their AI systems to work as their top priority.
Fortunately, the Pentagon is starting to take notice. This August, the Defense Advanced Research Projects Agency (DARPA) announced several big AI research projects. Among them is GARD, a program focused on adversarial machine learning. Hava Siegelmann, a professor at the University of Massachusetts, Amherst, and the program manager for GARD, says these attacks could be devastating in military situations because people cannot identify them. “It’s like we’re blind,” she says. “That’s what makes it really very dangerous.” The challenges presented by adversarial machine learning also explain why the Pentagon is so keen to work with companies like Google and Amazon as well as academic institutions like MIT. The technology is evolving fast, and the latest advances are taking hold in labs run by Silicon Valley companies and top universities, not conventional defense contractors.
Crucially, they’re also happening outside the US, particularly in China. “I do think that a different world is coming,” says Kanaan, the Air Force AI expert. “And it’s one we have to combat with AI.” The backlash against military use of AI is understandable, but it may miss the bigger picture. Even as people worry about intelligent killer robots, perhaps a bigger near-term risk is an algorithmic fog of war—one that even the smartest machines cannot peer through.
Will Knight was until recently senior editor for AI at MIT Technology Review, and now works at Wired.
This story was part of our November/December 2019 issue.
"
|
1,529 | 2,019 |
"How malevolent machine learning could derail AI | MIT Technology Review"
|
"https://www.technologyreview.com/2019/03/25/1216/emtech-digital-dawn-song-adversarial-machine-learning"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How malevolent machine learning could derail AI By Will Knight archive page Jeremy Portje Artificial intelligence won’t revolutionize anything if hackers can mess with it.
That’s the warning from Dawn Song, a professor at UC Berkeley who specializes in studying the security risks involved with AI and machine learning.
Speaking at EmTech Digital, an event in San Francisco produced by MIT Technology Review, Song warned that new techniques for probing and manipulating machine-learning systems—known in the field as “adversarial machine learning” methods—could cause big problems for anyone looking to harness the power of AI in business.
Song said adversarial machine learning could be used to attack just about any system built on the technology.
“It’s a big problem,” she told the audience. “We need to come together to fix it.” Adversarial machine learning involves experimentally feeding input into an algorithm to reveal the information it has been trained on, or distorting input in a way that causes the system to misbehave. By inputting lots of images into a computer vision algorithm, for example, it is possible to reverse-engineer its functioning and ensure certain kinds of outputs, including incorrect ones.
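To see how small these distortions can be, consider a stripped-down sketch of the classic fast-gradient-sign trick. The toy "image," the logistic-regression "model," and every number below are invented for illustration; real attacks target deep networks, but the mechanics are the same: nudge each input feature slightly in whichever direction increases the model's error.

```python
# Hypothetical illustration: a fast-gradient-sign style attack on a toy
# logistic-regression classifier. All values here are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 64 pixel intensities; toy model: fixed weights and bias.
x = rng.uniform(0.0, 1.0, size=64)   # clean input
w = rng.normal(size=64)              # stand-in for a trained model's weights
b = 0.1

def predict(x):
    """Probability the toy model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the logistic loss for true label y=1 with respect to the input:
# d(loss)/dx = (p - y) * w.
y = 1.0
grad = (predict(x) - y) * w

# FGSM step: move each pixel a tiny amount in the sign of the gradient,
# which pushes the model toward a wrong answer while barely changing the input.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
print(f"max pixel change:       {np.max(np.abs(x_adv - x)):.3f}")
```

On a deep network the gradient comes from backpropagation rather than a closed-form expression, but the perturbation step itself is the same.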
Song presented several examples of adversarial-learning trickery that her research group has explored.
One project, conducted in collaboration with Google, involved probing machine-learning algorithms trained to generate automatic responses from e-mail messages (in this case the Enron e-mail data set). The effort showed that by creating the right messages, it is possible to have the machine model spit out sensitive data such as credit card numbers. The findings were used by Google to prevent Smart Compose, the tool that auto-generates text in Gmail, from being exploited.
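Song did not spell out the probing recipe itself, but the general shape of such an extraction attack can be sketched with a deliberately tiny stand-in: train a model on text that contains a planted "secret," then rank candidate completions by how likely the model finds them. The corpus, the planted number, and the bigram "language model" below are hypothetical simplifications of what a real attack on a neural text model would involve.

```python
# Hypothetical sketch of a training-data extraction probe: a toy bigram
# "language model" is trained on text containing a planted secret, and
# candidate secrets are ranked by the likelihood the model assigns them.
from collections import defaultdict
import math

corpus = ("please charge my card 4012 8888 8888 1881 thanks " * 20
          + "meeting moved to thursday afternoon see you there " * 200)

# Count word bigrams; add-one smoothing keeps unseen pairs from scoring zero.
words = corpus.split()
bigrams = defaultdict(lambda: defaultdict(int))
for a, b in zip(words, words[1:]):
    bigrams[a][b] += 1
vocab = set(words)

def log_likelihood(text):
    toks = text.split()
    score = 0.0
    for a, b in zip(toks, toks[1:]):
        counts = bigrams[a]
        score += math.log((counts[b] + 1) / (sum(counts.values()) + len(vocab)))
    return score

# An attacker queries the model with guesses and keeps the most likely one;
# the memorized string floats to the top.
guesses = [
    "charge my card 4012 8888 8888 1881",   # the planted secret
    "charge my card 4012 1111 2222 3333",
    "charge my card 5500 0000 0000 0004",
]
for g in sorted(guesses, key=log_likelihood, reverse=True):
    print(f"{log_likelihood(g):10.2f}  {g}")
```

The point is not the toy model but the attack surface: anything a model has memorized can, in principle, be ranked to the top by a patient prober.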
Another project involved modifying road signs with a few innocuous-looking stickers to fool the computer vision systems used in many vehicles. In a video demo, Song showed how the car could be tricked into thinking that a stop sign actually says the speed limit is 45 miles per hour. This could be a huge problem for an automated driving system that relies on such information.
Adversarial machine learning is an area of growing interest for machine-learning researchers. Over the past couple of years, other research groups have shown how online machine-learning APIs can be probed and exploited to devise ways to deceive them or to reveal sensitive information.
Unsurprisingly, adversarial machine learning is also of huge interest to the defense community. With a growing number of military systems—including sensing and weapons systems—harnessing machine learning, there is huge potential for these techniques to be used both defensively and offensively.
This year, the Pentagon’s research arm, DARPA, launched a major project called Guaranteeing AI Robustness against Deception (GARD), aimed at studying adversarial machine learning.
Hava Siegelmann, director of the GARD program, told MIT Technology Review recently that the goal of this project was to develop AI models that are robust in the face of a wide range of adversarial attacks, rather than simply able to defend against specific ones.
"
|
1,530 | 2,019 |
"This robot can probably beat you at Jenga—thanks to its understanding of the world | MIT Technology Review"
|
"https://www.technologyreview.com/2019/01/30/137650/this-robot-can-probably-beat-you-at-jengathanks-to-its-understanding-of-the-world"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts This robot can probably beat you at Jenga—thanks to its understanding of the world By Will Knight archive page SCIENCE ROBOTICS / COVER IMAGE: JOHN HOPKINS UNIVERSITY, WILL KIRK / HOMEWOOD PHOTOGRAPHY Science Robotics / Cover image: John Hopkins University, WILL KIRK / HOMEWOOD PHOTOGRAPHY Despite dazzling advances in AI, robots are still horribly ham-fisted.
Increasingly, researchers and companies are turning to machine learning to make them more adaptive and dexterous. This typically means feeding the robot a video of what’s in front of it and asking it to work out how it should move in order to manipulate that object. For instance, researchers at OpenAI, a nonprofit in San Francisco, taught a robotic hand to manipulate a child’s block in this way.
But humans, of course, use more than just their eyes to learn how to handle objects. Vision is combined with a sense of touch—and we learn, early on, that objects positioned unstably will probably fall over.
That is what inspired a new robot, developed by Nima Fazeli and his colleagues at MIT, that has been given a fundamental understanding of the real world’s physics—and a usable sense of touch.
It proved how nimble-fingered it is by mastering Jenga, a game that involves removing blocks from a precariously assembled tower, ideally without causing it to topple over. The robot also displayed a kind of ingenuity that is crucial for human players: judging which block it can remove without making the tower fall down.
The research draws from several key ideas developed by Josh Tenenbaum , in the Department of Brain and Cognitive Sciences at MIT, and his research on human cognition. This includes the idea that humans develop an intuitive understanding of physics from an early age, and that probability is key to reasoning about the world. This differs from a lot of AI research today, which revolves around feeding as much data as possible to very large, or “deep,” neural networks.
The robot, equipped with force sensors as well as cameras, learns to play Jenga by poking and prodding blocks and using visual and tactile feedback to train a physics model of the world.
Then, when faced with a new tower of blocks, it used the model to infer, probabilistically, which block it should try to poke out of the tower next. You can see how good it was in the video above.
By combining vision, touch, and this model of real-world physics, the robot can learn to play Jenga more efficiently than would be possible otherwise. The intuitive physics model also lets the robot understand quickly that a block hanging over an edge will most probably fall. In testing, the approach outperformed conventional machine-learning methods. The research is published today in the journal Science Robotics.
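The model described in the paper is far richer than anything that fits in a few lines, but the probabilistic flavor of the decision can be sketched: estimate, from a noisy force reading on each candidate block, the probability that pulling it will topple the tower, then pick the block with the lowest estimated risk. The force values, sensor noise, and threshold below are invented for illustration and are not the authors' numbers.

```python
# Hypothetical sketch of probabilistic block selection: given noisy force
# readings for each candidate block, estimate the chance that pulling it
# topples the tower and pick the block with the lowest estimated risk.
import numpy as np

rng = np.random.default_rng(1)

# Invented resistance forces (N) measured while gently probing each block;
# higher resistance here stands in for "this block is load-bearing".
measured_force = np.array([0.4, 2.6, 0.9, 3.1, 0.6])
sensor_noise = 0.3        # assumed std. dev. of the force sensor
topple_threshold = 1.5    # assumed force above which extraction is risky

def topple_probability(force, n_samples=10_000):
    """Monte Carlo estimate of P(true force > threshold) given a noisy reading."""
    samples = rng.normal(force, sensor_noise, size=n_samples)
    return float(np.mean(samples > topple_threshold))

risks = np.array([topple_probability(f) for f in measured_force])
best = int(np.argmin(risks))
for i, (f, r) in enumerate(zip(measured_force, risks)):
    print(f"block {i}: force={f:.1f} N, P(topple) ~ {r:.2f}")
print(f"-> try block {best} next")
```

In the real system the risk estimate comes from a learned physics model conditioned on both vision and touch, but the decision rule, acting where the inferred probability of failure is lowest, is the same idea.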
This more humanlike learning technique could help make factory and warehouse robots far more capable. If that fails, they could at least challenge you to a fun party game.
"
|
1,531 | 2,017 |
"Zuckerberg: Facebook's security investments will 'significantly impact' profitability | VentureBeat"
|
"https://venturebeat.com/2017/11/01/zuckerberg-facebooks-security-investments-will-significantly-impact-profitability"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zuckerberg: Facebook’s security investments will ‘significantly impact’ profitability Share on Facebook Share on X Share on LinkedIn Facebook CEO Mark Zuckerberg on stage at the company's F8 developer conference in San Francisco, Calif., in October 2015.
Facebook today revealed that, for the first time in the company’s history, it has surpassed $10 billion in quarterly revenue.
But on a nearly hour-long earnings call between company executives and financial analysts, the focus was primarily on security to prevent Russian meddling like the kind seen during the 2016 U.S. presidential election.
500 million daily active users on Instagram or $10 billion in revenue doesn’t matter, Zuckerberg said in a prepared statement at the start of the call, “if our services are used in a way that doesn’t bring people closer together or the foundation of our society is undermined by foreign interference.” “In many places, we’re doubling or more our engineering efforts focused on security and we’re also building new AI to detect bad content and actors just like we’ve done with terrorist propaganda,” he said. “I am dead serious about this, and the reason I’m talking about this on our earnings calls is that I’ve directed our teams to invest so much in security on top of our other investments we’re making that it will significantly impact our profitability going forward, and I wanted our investors to hear that directly from me. I believe this will make our society stronger, and in doing so will be good for all of us over the long term, but I want to be clear about what our priorities are.” The focus of investments in security, Zuckerberg said, “goes beyond elections” and includes efforts to combat hate speech, bullying, and fake news. The changes underway at Facebook are taking place as members of Congress propose new legislation to regulate political advertising online. Zuckerberg called the issue a national security threat and said “it’s part of our responsibility to society overall.” Facebook has committed to doubling the headcount of its safety and security workforce that does things like review ads or flag hate speech — from 10,000 to 20,000 in 2018. Due to investments in infrastructure for growth and spending to bolster security, Facebook CFO Dave Wehner said capital expenditures in 2018 are forecast to double from $7 billion to $14 billion. He added that some of those additional 10,000 people may not be Facebook employees but rather partner businesses.
Wehner also talked today about advancements made in Q3 to find duplicate and inauthentic Facebook accounts. Duplicate accounts make up roughly 10 percent of Facebook’s 2 billion monthly active users (MAU), up from a previously stated 6 percent, and inauthentic accounts make up 2 to 3 percent of global MAUs.
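Those percentages translate into very large absolute numbers. A quick back-of-the-envelope calculation, using the round 2 billion MAU figure from the call, puts the totals in the tens to hundreds of millions of accounts:

```python
# Back-of-the-envelope scale of the figures Wehner cited (2 billion MAU).
mau = 2_000_000_000
duplicates = 0.10 * mau                                   # ~10% duplicate accounts
inauthentic_low, inauthentic_high = 0.02 * mau, 0.03 * mau  # 2-3% inauthentic

print(f"duplicate accounts:   ~{duplicates / 1e6:.0f} million")
print(f"inauthentic accounts: ~{inauthentic_low / 1e6:.0f}-{inauthentic_high / 1e6:.0f} million")
```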
Last week, Facebook announced that in 2018 the company will roll out a tool for tracking who paid for ads, whether those ads are of a political nature or not. Political ads will come with a label, and for federal elections like Congressional or presidential races, users will be able to see how much money was spent, an archive of previous ads, and demographics a Facebook page attempted to reach. Machine learning will be used to track down political advertisers and require them to verify their identity.
COO Sheryl Sandberg took time during the call to highlight efforts to get rid of malicious content beyond political advertisements.
“Because the interference on our platform went beyond ads, we’re also increasing transparency around organic content from pages. We’re looking at ways to provide more information about who’s behind a political or issue-based Facebook page. We believe this will make it harder for deceptive pages to gain large followings and make it easier for us to identify malicious activity,” Sandberg said.
Zuckerberg also fielded questions about whether security measures could impact engagement on Facebook and whether existing AI solutions made by Facebook can be used to mitigate increasing security costs.
“You’re definitely right that a lot of the AI research that we do is applicable to multiple areas,” Zuckerberg said in response to a question about the use of existing AI. “But we still need to build those tools, so it takes a lot of engineering investment, and we will be prioritizing that, in some cases by adding people to teams and in other cases by trading off and doing more security work instead of product work we might have done. But this is really important and this is our priority.” The earnings call and comments by Zuckerberg, Sandberg, and Wehner coincided with the second day of testimony before a Senate Judiciary Subcommittee about Russian meddling in U.S. election results. Testimony was offered by Facebook chief counsel Colin Stretch, alongside legal representatives from Twitter and Google.
A month ago, we learned that 10 million U.S. users saw Russia-linked ads on Facebook. Two days ago, it was revealed that Russia-linked posts likely reached 126 million people.
Concern over election meddling isn’t limited to the United States. The tool for political advertising info will roll out first in Canada, where national elections are scheduled to take place next year. And Facebook continues to field questions about ads to manipulate German voters, following elections there in September.
"
|
1,532 | 2,016 |
"Google's Hand-fed AI Now Gives Answers, Not Just Search Results | WIRED"
|
"https://www.wired.com/2016/11/googles-search-engine-can-now-answer-questions-human-help"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google's Hand-Fed AI Now Gives Answers, Not Just Search Results Krisztian Bocsi/Bloomberg/Getty Images Save this story Save Save this story Save Ask the Google search app "What is the fastest bird on Earth?," and it will tell you.
"Peregrine falcon," the phone says. "According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour." That's the right answer, but it doesn't come from some master database inside Google. When you ask the question, Google's search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you're looking for. It doesn't mention those other four birds. And it responds in similar fashion if you ask, say, "How many days are there in Hanukkah?" or "How long is Totem ?" The search engine knows that Totem is a Cirque de Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.
Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google's search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they've learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot---the information you're looking for.
These "sentence compression algorithms" just went live on the desktop incarnation of the search engine. They handle a task that's pretty simple for humans but has traditionally been quite difficult for machines. They show how deep learning is advancing the art of natural language understanding, the ability to understand and respond to natural human speech. "You need to use neural networks---or at least that is the only way we have found to do it," Google research product manager David Orr says of the company's sentence compression work. "We have to use all of the most advanced technology we have." Not to mention a whole lot of people with advanced degrees. Google trains these neural networks using data handcrafted by a massive team of PhD linguists it calls Pygmalion. In effect, Google's machines learn how to extract relevant answers from long strings of text by watching humans do it---over and over again. These painstaking efforts show both the power and the limitations of deep learning. To train artificially intelligent systems like this, you need lots and lots of data that's been sifted by human intelligence. That kind of data doesn't come easy---or cheap. And the need for it isn't going away anytime soon.
To train Google's artificial Q&A brain, Orr and company also use old news stories, where machines start to see how headlines serve as short summaries of the longer articles that follow. But for now, the company still needs its team of PhD linguists. They not only demonstrate sentence compression, but actually label parts of speech in ways that help neural nets understand how human language works. Spanning about 100 PhD linguists across the globe, the Pygmalion team produces what Orr calls "the gold data," while the news stories are the "silver." The silver data is still useful, because there's so much of it. But the gold data is essential. Linne Ha, who oversees Pygmalion, says the team will continue to grow in the years to come.
This kind of human-assisted AI is called "supervised learning," and today, it's just how neural networks operate. Sometimes, companies can crowdsource this work---or it just happens organically. People across the internet have already tagged millions of cats in cat photos, for instance, so that makes it easy to train a neural net that recognizes cats. But in other cases, researchers have no choice but to label the data on their own.
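Stripped to its essentials, supervised learning is just fitting a model to pairs of inputs and human-supplied labels. The snippet below is a deliberately miniature stand-in for what a pipeline like Pygmalion feeds its models: a handful of invented sentences hand-labeled as crisp answers or filler, and a bag-of-words classifier (scikit-learn here, chosen only for brevity) that learns to imitate those judgments.

```python
# Hypothetical miniature of supervised learning on hand-labeled text:
# humans mark which sentences read like concise answers (1) vs. filler (0),
# and a simple classifier learns to imitate those judgments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "gold" data: (sentence, label) pairs a human annotator produced.
gold = [
    ("The peregrine falcon reaches speeds of 389 km/h.", 1),
    ("Hanukkah lasts for eight days.", 1),
    ("Totem runs about two and a half hours.", 1),
    ("Click here to read more fun facts about birds.", 0),
    ("Many people have wondered about this over the years.", 0),
    ("This article was updated to correct a typo.", 0),
]
texts, labels = zip(*gold)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# The trained model generalizes (imperfectly, given six examples) to new text.
new = ["The cheetah can run at about 100 km/h.",
       "Sign up for our newsletter for more stories."]
print(clf.predict(vectorizer.transform(new)))
```

The gap between this toy and Google's system is mostly one of scale: far more labels, richer linguistic features, and much bigger models, which is exactly why the gold data is so expensive to produce.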
To train systems like this, you need lots of data exquisitely sifted by human intelligence.
Chris Nicholson, the founder of a deep learning startup called Skymind, says that in the long term, this kind of hand-labeling doesn't scale. "It's not the future," he says. "It's incredibly boring work. I can't think of anything I would less want to do with my PhD." The limitations are even more apparent when you consider that the system won't really work unless Google employs linguists across all languages.
Right now, Orr says, the team spans between 20 and 30 languages. But the hope is that companies like Google can eventually move to a more automated form of AI called "unsupervised learning." This is when machines can learn from unlabeled data---massive amounts of digital information culled from the internet and other sources---and work in this area is already underway at places like Google, Facebook, and OpenAI, the machine learning startup founded by Elon Musk. But that is still a long ways off. Today, AI still needs a Pygmalion.
"
|
1,533 | 2,023 |
"Google Assistant | Technology | The Guardian"
|
"https://www.theguardian.com/technology/google-assistant"
|
"Account overview Billing Profile Emails & marketing Data privacy Settings Help Comments & replies Sign out switch to the US edition switch to the UK edition switch to the Australia edition switch to the International edition switch to the Europe edition current edition: The Guardian - Back to home News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness Opinion The Guardian view Columnists Letters Opinion videos Cartoons Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf Culture Film Books Music Art & design TV & radio Stage Classical Games Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money What term do you want to search? Search with google Support us Print subscriptions US edition switch to the UK edition switch to the Australia edition switch to the International edition switch to the Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing News Opinion Sport Culture Lifestyle Show More US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness Google Assistant ChatGPT update will give it a voice and allow users to interact using images The move will bring the artificial intelligence chatbot closer to popular voice assistants such as Apple’s Siri and Amazon’s Alexa Published: 25 Sep 2023 ChatGPT update will give it a voice and allow users to interact using images Voice assistants could ‘hinder children’s social and cognitive development’ Researchers suggest devices such as Alexa could have a long-term impact on empathy, compassion and critical thinking skills Published: 28 Sep 2022 Voice assistants could ‘hinder children’s social and cognitive development’ Google IO: Pixel 6a, Pixel Watch and Android 13 unveiled New Pro earbuds and upcoming Pixel 7 phones, tablet and software shown off during virtual event Published: 11 May 2022 Google IO: Pixel 6a, Pixel Watch and Android 13 unveiled Alex Bellos's Monday puzzle Can you solve it? The viral maths video that will have you in stitches Are you smarter than Google Assistant? Published: 24 Jan 2022 Can you solve it? The viral maths video that will have you in stitches Radio industry calls for government protection from smart assistants Broadcasters fear Alexa, Siri and Google Assistant may gradually sideline UK content Published: 21 Oct 2021 Radio industry calls for government protection from smart assistants 5 out of 5 stars.
Sonos Roam review: the portable speaker you’ll want to use at home too 5 out of 5 stars.
Published: 13 Apr 2021 Sonos Roam review: the portable speaker you’ll want to use at home too 4 out of 5 stars.
Google Nest Hub (2nd gen) review: wearable-free sleep tracking smart display 4 out of 5 stars.
Published: 8 Apr 2021 Google Nest Hub (2nd gen) review: wearable-free sleep tracking smart display Google’s new smart display tracks your sleep using radar Second-gen Nest Hub avoids the user needing to wear a bracelet or headband and acts as smart alarm Published: 16 Mar 2021 Google’s new smart display tracks your sleep using radar Sonos Roam: cheaper, multi-room portable smart speaker launched Smaller, lighter and water-resistant device has Bluetooth and wifi, aimed at home and outdoor use Published: 9 Mar 2021 Sonos Roam: cheaper, multi-room portable smart speaker launched Older Australians give smart tech a thumbs up for music and video chat – but not vacuuming A federal government-funded project looked at how devices might help keep people in their homes longer Published: 27 Feb 2021 Older Australians give smart tech a thumbs up for music and video chat – but not vacuuming Google suffers global outage with Gmail, YouTube and majority of services affected Error was due to lack of storage space in authentication tools causing system to crash Published: 14 Dec 2020 Google suffers global outage with Gmail, YouTube and majority of services affected Alexa, Siri... Elsa? Children drive boom in smart speakers Coronavirus has accelerated the use of voice assistants, but there are concerns about unregulated online ‘playgrounds’ Published: 18 Oct 2020 Alexa, Siri... Elsa? Children drive boom in smart speakers 5 out of 5 stars.
Google Nest Audio review: smart speaker gets music upgrade 5 out of 5 stars.
Louder, more bass and better sound for Google Assistant speaker made of recycled plastic bottles Published: 12 Oct 2020 Google Nest Audio review: smart speaker gets music upgrade Sam's smart buys The best smart speakers for all budgets Whether you want good sound, the cheapest or an alarm clock replacement, here are the options Published: 1 Aug 2020 The best smart speakers for all budgets 4 out of 5 stars.
Pixel Buds review: Google's competent AirPods alternative 4 out of 5 stars.
Good sound, battery life, case and design, with instant translation and different silicone tip with open-air-like fit Published: 20 Jul 2020 Pixel Buds review: Google's competent AirPods alternative Sonos launches new Arc soundbar with Dolby Atmos Wireless speaker firm revamps top TV audio line, plus Sonos 5 speakers and gen 3 Sub Published: 6 May 2020 Sonos launches new Arc soundbar with Dolby Atmos How to stop your smart home spying on you Everything in your smart home, from the lightbulbs to the thermostat, could be recording you or collecting data about you. What can you do to curb this intrusion? Published: 8 Mar 2020 How to stop your smart home spying on you 5 out of 5 stars.
Google Nest Mini review: better bass and recycled plastic 5 out of 5 stars.
Upgrade keeps what is good and improves sound for Google’s smallest, cheapest smart speaker Published: 16 Jan 2020 Google Nest Mini review: better bass and recycled plastic 5 out of 5 stars.
Google Nest Hub Max review: bigger, better and smarter display 5 out of 5 stars.
Camera with local AI for face recognition allows proactive display of personalised information Published: 6 Nov 2019 Google Nest Hub Max review: bigger, better and smarter display 3 out of 5 stars.
Google Pixel 4 review: a good phone ruined by poor battery life 3 out of 5 stars.
Brilliant camera, slick features and small size mean nothing when the phone won’t even last a day Published: 31 Oct 2019 Google Pixel 4 review: a good phone ruined by poor battery life About 43 results for Google Assistant
"
|
1,534 | 2,019 |
"AI Weekly: Trump’s American AI Initiative lacks substance | VentureBeat"
|
"https://venturebeat.com/2019/02/15/ai-weekly-trumps-american-ai-initiative-lacks-substance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Trump’s American AI Initiative lacks substance Share on Facebook Share on X Share on LinkedIn U.S. President Donald J. Trump delivers remarks on "Apprenticeship and Workforce of Tomorrow" initiatives and signs an executive order at the White House in Washington, D.C. June 15, 2017.
It’s been an eventful week in tech. Amazon announced it would abandon plans to open one of its two HQ2 locations in New York City, and the company also acquired Wi-Fi mesh network startup Eero for an undisclosed sum — a hint at Amazon’s future smart home ambitions. The California Department of Motor Vehicles released reports from companies currently testing self-driving cars — like Apple, Alphabet’s Waymo, and GM Cruise. Google pledged to spend $13 billion on U.S. datacenters and offices in 24 states this year, and driverless truck startup TuSimple raised $95 million at a $1 billion valuation, joining the ranks of Aurora and Nuro as one of the best-funded companies in the autonomous vehicle industry.
Nearly lost in the shuffle was President Trump’s signing on Monday of an executive order establishing a program — the American AI Initiative — that formalizes several of the proposals made last spring during the White House’s summit on AI. Specifically, it tasks federal agencies with devoting more resources to AI research, training, and promotion and instructs the White House Office of Science and Technology Policy (OSTP), the National Institute of Standards and Technology (NIST), and other departments to draft standards guiding the development of “reliable, robust, trustworthy, secure, portable, and interoperable AI systems” and create AI fellowships and apprenticeships. In the future, these agencies will be required to make a good faith effort to provide data, computing resources, and models to AI researchers whenever possible and to “prioritize AI” investments in their budgets.
Unfortunately, it’s a case of too little, too late.
The U.S. joins many other countries that have launched national AI guidelines and policies — Australia, Canada, China, Denmark, Finland, France, Germany, India, Japan, Kenya, Malaysia, Mexico, New Zealand, Poland, Russia, Singapore, South Korea, Sweden, Taiwan, Tunisia, the United Arab Emirates, and the U.K. Most have already outstripped — or come close to outstripping — the U.S. with respect to the amount of funding they’ve set aside for AI research.
Canada’s Pan-Canadian Artificial Intelligence Strategy, for instance, is a five-year $94 million (CAD $125 million) plan to invest in AI research and talent, complementing the government’s investments of nearly $173 million (CAD $230 million) and $45 million (CAD $230 million) in Scale.AI, a business-led consortium that expects to create close to 16,000 jobs. The EU Commission, for its part, has committed to increasing investment in AI from $565 million (€500 million) in 2017 to $1.69 billion (€1.5 billion) by the end of 2020. France recently took the wraps off a $1.69 billion (€1.5 billion) initiative aimed at transforming the country into a “global leader” in AI research and training. And South Korea last spring unveiled a multiyear, $1.95 billion (KRW 2.2 trillion) effort to strengthen its R&D in AI, with the goal of establishing six AI-focused graduate schools by 2022 and training 5,000 AI specialists.
China’s AI plan is perhaps the most ambitious: In two policy documents, “A Next Generation Artificial Intelligence Development Plan” and “Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry,” the Chinese government laid out its roadmap for cultivating an AI industry worth roughly $147 billion by 2030. The country has already built a $2.1 billion technology park for AI research in Beijing.
Money isn’t the be-all, end-all in AI policy, of course — ethics matter, as evidenced by Amazon, Microsoft, and Google calling for guidelines governing the use and development of such AI technologies as facial recognition. (All three companies are members of the Partnership for AI, an industry consortium that includes groups sometimes opposed to AI products — like Amnesty International and the American Civil Liberties Union.) Unfortunately, the American AI Initiative falls short in this regard, too.
“I’m skeptical that the passing mention of these protections will result in any serious efforts to build in appropriate legal, ethical, and policy safeguards to ensure that AI systems are deployed responsibly,” Professor Kate Crawford, codirector and cofounder of the AI Now Institute at New York University, told IEEE Spectrum this week. “We are concerned with the focus on industry at the expense of a broader democratic process and an evidence-led approach to AI policy.” In contrast to the U.S., the EU Commission has crafted a set of AI guidelines to address ethical issues such as fairness, safety, and transparency and has established the European AI Alliance, a forum for discussion of “all aspects” of AI development and its impacts. Additionally, the body has tasked the High-Level Group on Artificial Intelligence — which acts as the steering group for the European AI Alliance — with drafting ethics guidelines.
France also plans to develop its own AI regulations and ethics policies, as does Germany. (The latter last year formed a 38-person commission to investigate how machine learning and algorithmic decision-making might disrupt society, and it plans to release a report with recommendations by 2020.) Meanwhile, Singapore’s Advisory Council on the Ethical Use of AI and Data and the U.K.’s Centre for Data Ethics and Innovation have been tasked with developing a common set of AI ethics standards and frameworks for their respective governments.
The American AI Initiative also fails to address issues of multinational collaboration and immigration. That doesn’t come as a surprise, given the Trump administration’s poor track record on both fronts.
The Trump administration in February 2018 heightened vetting of H1-B visas, which has led to increased restrictions and rejections of visas and to preferential treatment for holders of advanced degrees from U.S. institutions. (It also revoked a guideline that designated “computer programmer” as a protected occupation under the H-1B program.) And the administration delayed implementation of the international entrepreneur rule, or startup visa, which would have allowed foreign entrepreneurs to stay in the U.S. to start a business.
Partly as a result of that and other policy decisions, the number of overseas graduate students in the U.S. fell 5.5 percent in 2017 (from 2016), according to the National Science Foundation. That’s discouraging for organizations like the Allen Institute for Artificial Intelligence in Seattle, which recently revealed in a Wired editorial that nearly two-thirds of its research scientists hail from countries like Egypt, Germany, India, Iran, Israel, Japan, Korea, Norway, the U.K., Taiwan, and Vietnam.
The brain drain has been dramatic. Europe currently leads the world in scholarly output related to AI, according to a report by Elsevier, and China is expected to overtake the EU within the next four years, if current trends continue. India is currently third behind the U.S. and China, while Germany and Japan rank fifth and sixth worldwide in AI research paper output.
“In the name of Buying American and Hiring American [sic], this administration is threatening the intellectual heart of our society, and the consequences of these policies will reverberate throughout our economy,” Allen Institute CEO Oren Etzioni wrote this week. “Providing 10,000 new visas for AI specialists, and more for experts in other STEM fields, would revitalize our country’s research ecosystem, empower our country’s innovation economy, and ensure that the United States remains a world superpower in the coming decades.” This is all to say that the American AI Initiative, as currently proposed, appears poised to underdeliver. And absent a firm timeline for implementation, there’s no guarantee that even its modest proposals will come to fruition in the foreseeable future.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.
Thanks for reading, Kyle Wiggers AI Staff Writer P.S. Please enjoy this talk by Andreessen Horowitz operating partner Frank Chen about how AI and automation will augment and enhance humans.
From VB Nvidia’s $2.21 billion in Q4 revenue helps erase previous stock loss Nvidia posted earnings that beat Wall Street’s reduced expectations for fourth fiscal quarter earnings for the period ended January 27.
Read the full story OpenAI let us try its state-of-the-art NLP text generator OpenAI’s new family of language models achieves state-of-the-art performance on a number of NLP tasks — and sometimes generates convincing original text.
Read the full story Amazon Echo owners can now add their own voice apps to the Alexa Skills Store Amazon is making voice apps created by Echo owners with the easy-to-use Skills Blueprint available in the Alexa Skills Store.
Read the full story Xnor.ai debuts edge AI that runs on solar power Xnor.ai today introduced a device capable of running state-of-the-art computer vision algorithms on the edge with a small solar cell and no battery.
Read the full story Driverless truck startup TuSimple raises $95 million at $1 billion valuation China-based autonomous truck startup TuSimple, which has routes in Arizona, raised $95 million in December 2018 at a $1 billion valuation.
Read the full story IBM’s AI takes on world-class debater in argument about preschool IBM’s Project Debater engaged in a human versus machine competition before an audience to argue for and against subsidies for preschool.
Read the full story President Trump will lay out U.S.’ AI plans in an executive order President Trump will sign an executive order establishing the American AI Initiative, which will lay out the U.S.’ plans for AI research and development.
Read the full story Beyond VB Artificial Intelligence Has Found an Unknown ‘Ghost’ Ancestor in The Human Genome Nobody knows who she was, just that she was different: a teenage girl from over 50,000 years ago of such strange uniqueness she looked to be a ‘hybrid’ ancestor to modern humans that scientists had never seen before.
(via Science Alert) Read the full story A.I. Shows Promise Assisting Physicians Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.
(via New York Times) Read the full story NDP announces $100 million to support Alberta’s artificial intelligence sector The NDP government has announced plans to spend $100 million over five years to grow the artificial intelligence (AI) sector in Alberta.
(via Calgary Herald) Read the full story Google and Microsoft warn that AI might do dumb things It’s the not-so-far-away future in San Francisco. One-Wheels and e-scooters litter the road. Your self-driving car has just deposited you at Union Square, and you’ve instructed it to return in an hour, after you’ve purchased the latest it-smartphone, the iPhone Z.
(via Wired) Read the full story
"
|
1,535 | 2,019 |
"President Trump will lay out U.S.' AI plans in an executive order | VentureBeat"
|
"https://venturebeat.com/2019/02/11/president-trump-will-lay-out-u-s-ai-plans-in-an-executive-order"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages President Trump will lay out U.S.’ AI plans in an executive order Share on Facebook Share on X Share on LinkedIn U.S. President Donald J. Trump delivers remarks on "Apprenticeship and Workforce of Tomorrow" initiatives and signs an executive order at the White House in Washington, D.C. June 15, 2017.
According to several outlets briefed over the weekend on the White House’s plans, President Trump will today sign an executive order establishing a program — the American AI Initiative — that’ll task federal agencies with devoting more resources to artificial intelligence (AI) research, training, and promotion. It comes after President Trump promised “investments in the cutting-edge industries of the future” during his State of the Union speech last week, and after companies like Amazon , Microsoft, and Google called for guidelines governing the use and development of AI technologies such as facial recognition.
“AI is something that touches every aspect of people’s lives,” a White House official told Reuters.
“What this initiative attempts to do is to bring all those together under one umbrella and show the promise of this technology for the American people.” There isn’t any funding attached to the executive order, which follows the Trump Administration’s AI summit on the role of AI in May 2018. Rather, it merely directs federal agencies to increase access to government data, hardware infrastructure, and models and calls for better reporting and tracking of spending on AI-related research in areas like health care and transportation.
The initiative is divided into five key pillars: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Research and development: Federal funding agencies will be asked to “prioritize AI” investments. Some have been proactive in this regard — the Defense Department set aside $75 million of its annual budget for a new fund devoted to developing AI technologies, and the Defense Advanced Research Projects Agency (DARPA) says it has committed $2 billion to AI research.
AI infrastructure: Federal data, computing resources, and models will be made available to AI researchers, which might result in partnerships like that between the Veterans Administration and Alphabet, Google’s parent company.
AI governance: The White House Office of Science and Technology Policy, the National Institute of Standards and Technology (NIST), the Department of Transportation, the Food and Drug Administration, and other agencies will be asked to draft standards that guide the development of “reliable, robust, trustworthy, secure, portable, and interoperable AI systems,” like driverless cars and software that can diagnose disease.
Workforce: Government agencies will be asked to create fellowships and apprenticeships that help workers adjust to jobs changed by AI and to train future AI researchers and experts.
International engagement: The administration will collaborate with countries on AI development, but only in a way that’s “consistent” with American “values and interests.” The White House is drafting a memo that’ll lay out details of the program’s implementation, due within six months. And Lynne Parker, who leads work on AI in the White House Office of Science and Technology Policy, is expected to release a complementary national AI research strategy within weeks.
One element not addressed in the White House’s plan is immigration.
The Trump administration in February 2018 heightened vetting of H1-B visas, which now require “detailed documentation” for workers employed at worksites to prove that they’re filling the roles for which they were hired. It’s purportedly designed to cut down on “benching,” a practice in which employers hire entry-level engineers and shuffle them to other divisions, but in practice it has led to increased restrictions and rejections of visas and to preferential treatment for holders of advanced degrees from U.S. institutions.
The administration also delayed implementation of the international entrepreneur rule, or startup visa, which would have allowed foreign entrepreneurs to stay in the U.S. to start businesses. Partly as a result of that and other policy decisions, the number of overseas graduate students in the U.S. fell 5.5 percent in 2017 (from 2016), according to the National Science Foundation.
The U.S. joins a long list of other countries that have launched national AI strategies, including Australia, Canada, China, Denmark, Finland, France, Germany, India, Japan, Kenya, Malaysia, Mexico, New Zealand, Poland, Russia, Singapore, South Korea, Sweden, Tunisia, and the U.K.
Canada’s Pan-Canadian Artificial Intelligence Strategy, which was detailed in its 2017 federal budget, is a five-year $94 million (CAD $125 million) plan to invest in AI research and talent. The European Union, for its part, has committed to increasing investment in AI from $565 million (€500 million) in 2017 to $1.69 billion (€1.5 billion) by the end of 2020, and crafted a set of AI ethics guidelines to address issues such as fairness, safety, and transparency. In March 2018, at the AI for Humanity Summit in Paris, France took the wraps off a $1.69 billion (€1.5 billion) initiative to transform the country into a “global leader” in AI research and training. And South Korea recently unveiled a multiyear, $1.95 billion (KRW 2.2 trillion) investment to strengthen its R&D in AI, with the goal of establishing six AI-focused graduate schools by 2022 and training 5,000 AI specialists.
China’s AI plan is perhaps the most ambitious: In two policy documents, “A Next Generation Artificial Intelligence Development Plan” and “Three-Year Action Plan to Promote the Development of New-Generation Artificial Intelligence Industry,” the Chinese government laid out a roadmap to cultivate an AI industry worth roughly $147 billion by 2030. It’s already seen the creation of a $2.1 billion technology park for AI research in Beijing.
Europe currently leads the world in scholarly output related to AI, according to a report by Elsevier , but China is expected to overtake the EU within the next four years, if current trends continue. India is currently third behind the U.S. and China, while Germany and Japan rank fifth and sixth worldwide in AI research paper output.
In an op-ed published today in the Financial Times , MIT President L. Rafael Reif argued for sustained federal investment and a “broad strategic effort across society” in dealing with AI. “Technology belongs to all of us,” he said. “We must all be alert to the risks posed by AI, but this is no time to be afraid. Those nations … which act now to help shape the future of AI will help shape the future for us all.” Updated 8:42 Pacific: As expected, President Trump signed an executive order today establishing the American AI Initiative.
"
|
1,536 | 2,018 |
"Facial recognition has to be regulated to protect the public, says AI report | MIT Technology Review"
|
"https://www.technologyreview.com/s/612552/facial-recognition-has-to-be-regulated-to-protect-the-public-says-ai-report"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Facial recognition has to be regulated to protect the public, says AI report By Will Knight archive page Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.
Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.
A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.
The speed at which facial recognition has grown comes down to the rapid development of a type of machine learning known as deep learning.
Deep learning uses large tangles of computations—very roughly analogous to the wiring in a biological brain—to recognize patterns in data. It is now able to carry out pattern recognition with jaw-dropping accuracy.
The tasks that deep learning excels at include identifying objects, or indeed individual faces, in even poor-quality images and video. Companies have rushed to adopt such tools.
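To make that idea of layered pattern recognition a little more concrete, here is a purely illustrative Python sketch: a two-layer network reduced to its arithmetic, with random placeholder weights rather than anything learned. Real face-recognition systems stack many more layers and train them on millions of labeled images.

```python
# Schematic only: the layered arithmetic at the heart of deep learning,
# with random placeholder weights (nothing here has been trained).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def tiny_network(pixels, w1, b1, w2, b2):
    """Map a flattened image to a single 'is this a face?' score."""
    hidden = relu(pixels @ w1 + b1)       # layer 1: detect simple patterns
    score = hidden @ w2 + b2              # layer 2: combine them into a decision
    return 1.0 / (1.0 + np.exp(-score))   # squash to a probability

image = rng.random(32 * 32)                              # a fake 32x32 grayscale image
w1, b1 = rng.normal(size=(32 * 32, 64)), np.zeros(64)    # placeholder weights
w2, b2 = rng.normal(size=64), 0.0

print(f"face probability (untrained, so meaningless): {tiny_network(image, w1, b1, w2, b2):.3f}")
```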
The report calls for the US government to take general steps to improve the regulation of this rapidly moving technology amid much debate over the privacy implications. “The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes,” it says.
The report suggests, for instance, extending the power of existing government bodies in order to regulate AI issues, including use of facial recognition: “Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards.” It also calls for stronger consumer protections against misleading claims regarding AI; urges companies to waive trade-secret claims when the accountability of AI systems is at stake (when algorithms are being used to make critical decisions, for example); and asks that they govern themselves more responsibly when it comes to the use of AI.
And the document suggests that the public should be warned when facial-recognition systems are being used to track them, and that they should have the right to reject the use of such technology.
Implementing such recommendations could prove challenging, however: the toothpaste is already out of the tube. Facial recognition is being adopted and deployed incredibly quickly. It’s used to unlock Apple’s latest iPhones and enable payments, while Facebook scans millions of photos every day to identify specific users. And just this week, Delta Air Lines announced a new face-scanning check-in system at Atlanta’s airport. The US Secret Service is also developing a facial-recognition security system for the White House, according to a document highlighted by the ACLU. “The role of AI in widespread surveillance has expanded immensely in the U.S., China, and many other countries worldwide,” the report says.
In fact, the technology has been adopted on an even grander scale in China. This often involves collaborations between private AI companies and government agencies. Police forces have used AI to identify criminals, and numerous reports suggest it is being used to track dissidents.
Even if it is not being used in ethically dubious ways, the technology also comes with some in-built issues. For example, some facial-recognition systems have been shown to encode bias. The ACLU researchers demonstrated that a tool offered through Amazon’s cloud program is more likely to misidentify minorities as criminals.
The report also warns about the use of emotion tracking in face-scanning and voice detection systems. Tracking emotion this way is relatively unproven, yet it is being used in potentially discriminatory ways—for example, to track the attention of students.
“It’s time to regulate facial recognition and affect recognition,” says Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “Claiming to ‘see’ into people’s interior states is neither scientific nor ethical.”
"
|
1,537 | 2,017 |
"Why America’s Biggest Bank Digs Anonymous Cryptocurrency | MIT Technology Review"
|
"https://www.technologyreview.com/s/609481/why-americas-biggest-bank-digs-anonymous-cryptocurrency"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Why America’s Biggest Bank Digs Anonymous Cryptocurrency By Mike Orcutt archive page Z.cash Whatever JPMorgan Chase CEO Jamie Dimon meant last month when he called Bitcoin a “fraud,” he sure doesn’t seem to view its blockchain in the same light.
His bank is at the forefront of the effort to adapt the technology for use in the financial industry. Even more surprising, though, is JPMorgan’s collaboration with the people behind a digital currency that’s like Bitcoin except completely anonymous.
It would be understandable if Dimon and other bank CEOs viewed Bitcoin and its cryptocurrency siblings as a threat. After all, Bitcoin, launched during the height of the Great Recession, shows it’s possible to use software and thousands of computers connected via the Internet—instead of a bank—to facilitate the peer-to-peer exchange of money. The computers in the network maintain a secure ledger of every transaction, called a blockchain, and use it to prevent counterfeiting (see “ What Bitcoin Is, and Why It Matters ”).
No matter what bank executives think about Bitcoin’s currency, though, they see in its blockchain a revolutionary platform that could lead to more secure, reliable, and cost-effective systems for managing all kinds of financial transactions. It’s early, though, and most of the work on so-called “enterprise blockchains” is experimental. What’s not yet clear is the degree to which financial institutions can adapt the technology to their own needs without sacrificing its advantages—particularly its decentralized nature, which is key to protecting the information in the ledger from getting corrupted.
Privacy is a particularly complicated challenge. Contrary to widespread perception, cryptocurrencies like Bitcoin and Ethereum are not anonymous. Users are represented on the public ledger by a string of characters called an address. Someone who manages to connect your identity to your address can see every transaction you’ve ever made (see “ Criminals Thought Bitcoin Was a Perfect Hiding Place, But They Thought Wrong ”).
That model doesn’t work for financial institutions, says Amber Baldet , blockchain lead at JPMorgan. Not only are they bound by anti-money-laundering laws to know exactly who their customers are, but their customers want to transact confidentially. Shifting from Bitcoin’s privacy model to one in which participants are known but their transactions are confidential—while maintaining the benefits of a blockchain—is a “nontrivial” endeavor, says Baldet.
Fortunately for Baldet’s team, this thorny problem is similar to another one that already appears to be solved: cryptocurrency that’s as private as cash. In May, JPMorgan announced that it would team with the developers of Zcash , a year-old cryptocurrency whose Bitcoin-derived software gives users the option to “shield” their transactions from public view. Last month, the bank revealed that it had integrated Zcash’s privacy technology into Quorum, its open-source, Ethereum-derived, permissioned blockchain platform.
Zcash relies on an emerging cryptographic protocol called a zero-knowledge proof. One of several techniques that make it possible for cryptocurrency users to hide their transactions, zero-knowledge proofs are generating a huge amount of excitement in the blockchain world, largely because of the mind-bending power they can give a user : the ability to prove something about yourself to someone else without having to reveal any additional information.
In the case of Zcash, users can use this method to prove that they have sufficient funds to make a valid transaction. In an enterprise system like JPMorgan’s Quorum, customers could use it to do things like prove they are accredited investors.
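To give a flavor of the underlying idea, the sketch below implements a classic Schnorr-style proof of knowledge using only Python's standard library: the prover convinces a verifier that it knows a secret exponent without ever revealing it. This is a toy with demo-sized parameters, not the zk-SNARK construction Zcash and Quorum actually use, but the principle of proving something without showing it is the same one that lets a shielded transaction be validated without exposing its contents.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge of a discrete log.
# Demo parameters only; real systems use much larger groups and, in Zcash's
# case, a very different construction (zk-SNARKs).
import hashlib
import secrets

p = 2**127 - 1                       # a known prime, far too small for real use
g = 3                                # fixed base (demo choice)
x = secrets.randbelow(p - 1)         # the prover's secret
y = pow(g, x, p)                     # public value: y = g^x mod p

# Prover: commit to a random nonce, derive a Fiat-Shamir challenge, respond.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)
c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big") % (p - 1)
s = (r + c * x) % (p - 1)

# Verifier: checks g^s == t * y^c (mod p) and never learns x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified: the prover knows x without revealing it")
```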
Zooko Wilcox , the company’s CEO, says Zcash’s ultimate goal is to “provide economic opportunity and financial freedom to every human.” He calls Zcash’s openness—as with Bitcoin, anyone can join the network—a “moral imperative,” and his team is stacked with world-renowned cryptography experts who share his vision (see “ Why People Get Religious about Bitcoin ”).
Why would a tiny startup made up of crypto-idealists team up with America’s biggest bank, the very kind of centralized authority that Bitcoin was designed to circumvent? Wilcox and his colleagues seem, above all else, devoted to advancing zero-knowledge technology—whether that be in open blockchain systems like Zcash, Bitcoin, and Ethereum or in the private networks that financial institutions are building. In a statement last month touting JPMorgan’s integration of zero-knowledge functionality, Wilcox said, “The momentum that is growing behind enterprise blockchain adoption is one of the most exciting trends in technology.” That JPMorgan is interested in the same cutting-edge privacy technology so attractive to cryptocurrency nerds is not surprising at all, says Emin Gün Sirer , a computer scientist at Cornell University. Zero-knowledge proofs are not about skirting the law, he says, but about proving things though selective disclosure. That promises to have plenty of applications in the world JPMorgan inhabits. “The finance industry thrives on privacy,” Gün Sirer says.
"
|
1,538 | 2,019 |
"Bill Gates just backed a chip startup that uses light to turbocharge AI | MIT Technology Review"
|
"https://www.technologyreview.com/s/613668/ai-chips-uses-optical-semiconductor-machine-learning"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Bill Gates just backed a chip startup that uses light to turbocharge AI By Martin Giles archive page Photo of a chip LIGHTWAVE LAB @ PRINCETON UNIVERSITY Advances in computing, from speedier processors to cheaper data storage, helped ignite the new AI era. Now demand for even faster, more energy-efficient AI models is driving a wave of innovation in semiconductors.
Luminous Computing, which recently raised $9 million of seed funding from prominent investors including Bill Gates and Uber CEO Dara Khosrowshahi, has an ambitious plan to accelerate AI with a new chip. While conventional semiconductors use electrons to help carry out the demanding mathematical calculations that power AI models , Luminous is using light instead.
Many industries are trying to pack an increasing amount of AI into their machines, including makers of autonomous cars and drones. But widely used electrical chips like central processing units aren’t ideal for those tasks because they use a lot of power and may not be able to process data fast enough.
These limitations can cause lags and delays—annoying if you’re waiting for some machine-learning results for a research paper, but far more serious if you’re relying on an AI algorithm to guide a car down a busy street.
The bottleneck is getting worse: a study by research institute OpenAI says the amount of computing power needed to train the largest AI models is doubling every three and a half months.
Luminous’s CEO and cofounder, Marcus Gomez, notes that in spite of all of the hype around AI, the limitations of the underlying hardware are frustrating progress. “Silicon Valley promised us this AI-driven Star Trek reality years ago,” he says, “but we’re still waiting for it to arrive.” More powerful AI chips could boost everything from machine-learning models that assist doctors with medical diagnoses to new kinds of AI-driven apps that can run on a smartphone.
Optical solution Luminous sees light as the answer. It uses lasers to beam light through tiny structures on its chip, known as waveguides. By using different colors of light to move multiple pieces of data through waveguides at the same time, it can outstrip the data-carrying capabilities of conventional electrical chips.
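As a rough, hypothetical illustration of why multiplexing colors matters (the numbers below are invented for the example, not Luminous's specifications), aggregate bandwidth scales with the number of wavelengths a single waveguide carries:

```python
# Back-of-envelope: throughput of one waveguide carrying several wavelengths.
# Both figures are hypothetical, chosen only to show the scaling.
wavelengths = 16            # distinct "colors" multiplexed in one waveguide (assumed)
per_channel_gbps = 25       # data rate carried on each wavelength (assumed)

aggregate_gbps = wavelengths * per_channel_gbps
print(f"one wavelength: {per_channel_gbps} Gb/s")
print(f"{wavelengths} wavelengths in the same waveguide: {aggregate_gbps} Gb/s")
```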
The ability to transport very large amounts of information swiftly means optical processors are ideally suited to handling the vast number of computations that drive AI models. They also require far less power than electrical ones.
Mitchell Nahmias, another cofounder of Luminous and its chief technology officer, says its current prototype is three orders of magnitude more energy efficient than other state-of-the-art AI chips. The startup’s processor, an early prototype of which is pictured at the top of this story, is based on years of research conducted by Nahmias and other academics at Princeton University.
Still, Luminous faces stiff competition.
Startups like Lightelligence and Lightmatter—spot the branding theme here—are also working on optical chips to accelerate AI. And semiconductor behemoths like Intel are stepping up research in the field , which could lead to them launching new optical processors.
Dirk Englund, an MIT professor who’s also a technical advisor to Lightmatter, thinks Luminous may find it challenging to manage the various devices required to manipulate light when it starts to ramp up production of its chips. Optical chips need everything from lasers to electro-optic modulators for controlling light to make them work, which is a big reason they haven’t yet caught on widely.
AI breakthroughs Gates and other backers are betting that Gomez, Nahmias, and Michael Gao, Luminous’s other cofounder, can overcome this and other hurdles. They are also betting the companies that break through the computing bottleneck will be the ones to help unleash the true potential of AI.
Ali Partovi of Neo, a venture fund that invested in Luminous’s seed round, points out that even things like voice assistants on smartphones are still frustratingly prone to glitches because the devices lack enough AI computing power. “Just imagine a world,” says Partovi, “in which Siri really worked well all of the time.”
"
|
1,539 | 2,018 |
"Google just gave control over data center cooling to an AI | MIT Technology Review"
|
"https://www.technologyreview.com/s/611902/google-just-gave-control-over-data-center-cooling-to-an-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Google just gave control over data center cooling to an AI By Will Knight archive page A Google datacenter in Council Bluffs, Iowa.
Google revealed today that it has given control of cooling several of its leviathan data centers to an AI algorithm.
Over the past couple of years, Google has been testing an algorithm that learns how best to adjust cooling systems—fans, ventilation, and other equipment—in order to lower power consumption. This system previously made recommendations to data center managers, who would decide whether or not to implement them, leading to energy savings of around 40 percent in those cooling systems.
Now, Google says, it has effectively handed control to the algorithm, which is managing cooling at several of its data centers all by itself.
“It’s the first time that an autonomous industrial control system will be deployed at this scale, to the best of our knowledge,” says Mustafa Suleyman , head of applied AI at DeepMind, the London-based artificial-intelligence company Google acquired in 2014.
The project demonstrates the potential for artificial intelligence to manage infrastructure—and shows how advanced AI systems can work in collaboration with humans. Although the algorithm runs independently, a person manages it and can intervene if it seems to be doing something too risky.
The algorithm exploits a technique known as reinforcement learning, which learns through trial and error. The same approach led to AlphaGo, the DeepMind program which vanquished human players of the board game Go (see “ 10 Breakthrough Technologies: Reinforcement Learning ”).
DeepMind fed its new algorithm information gathered from Google data centers and let it determine what cooling configurations would reduce energy consumption. The project could generate millions of dollars in energy savings and may help the company lower its carbon emissions, says Joe Kava, vice president of data centers for Google.
Kava says managers trusted the earlier system and had few concerns about delegating greater control to an AI. Still, the new system has safety controls to prevent it from doing anything that has an adverse effect on cooling. A data center manager can watch the system in action, see what the algorithm's confidence level is about the changes it wants to make, and intervene if it seems to be doing something untoward.
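As a loose illustration of that trial-and-error idea, and emphatically not DeepMind's production system, the toy Q-learning loop below learns a fan setting for a crudely simulated server room; a large penalty stands in for the safety constraints described above.

```python
# Toy tabular Q-learning for a one-knob "cooling" problem. Purely illustrative:
# the real system works with far richer models, sensors, and safety layers.
import random

random.seed(0)
fan_levels = [0, 1, 2, 3]                 # the state is simply the current fan speed
actions = (-1, 0, +1)                     # turn the fan down, hold, or turn it up
q = {(s, a): 0.0 for s in fan_levels for a in actions}

def step(level, action):
    level = min(max(level + action, 0), 3)
    temp = 30 - 4 * level                 # crude physics: more fan, cooler room
    energy = 2 * level                    # crude physics: more fan, more power
    reward = -energy - (100 if temp > 26 else 0)   # big penalty if the room runs hot
    return level, reward

level = 0
for _ in range(5000):
    if random.random() < 0.1:             # occasionally explore at random
        action = random.choice(actions)
    else:                                 # otherwise exploit what has been learned
        action = max(actions, key=lambda a: q[(level, a)])
    new_level, reward = step(level, action)
    best_next = max(q[(new_level, a)] for a in actions)
    q[(level, action)] += 0.1 * (reward + 0.9 * best_next - q[(level, action)])
    level = new_level

# Learned policy: push toward the lowest fan level that still keeps the room cool.
print({s: max(actions, key=lambda a: q[(s, a)]) for s in fan_levels})
```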
Energy consumption by data centers has become a pressing issue for the tech industry. A 2016 report from researchers at the US Department of Energy’s Lawrence Berkeley National Laboratory found that US data centers consumed about 70 billion kilowatt-hours in 2014—about 1.8 percent of total national electricity use.
But efforts to improve energy efficiency have been significant. The same report found that efficiency gains are almost canceling out increases in energy use by new data centers, although the total is expected to reach around 73 billion kilowatt-hours by 2020.
“Use of machine learning is an important development,” says Jonathan Koomey , one of the world’s leading experts on data center energy usage. But he adds that cooling accounts for a relatively small amount of a center’s energy use, around 10 percent.
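Putting the article's own figures together gives a sense of scale; the arithmetic below is a rough back-of-envelope, not a measured result.

```python
# Rough back-of-envelope using the figures quoted in this article.
us_datacenter_kwh = 70e9      # US data-center consumption in 2014 (report cited above)
cooling_share = 0.10          # Koomey: cooling is roughly 10% of a center's energy
cooling_savings = 0.40        # earlier Google/DeepMind result: ~40% less cooling energy

max_saving_kwh = us_datacenter_kwh * cooling_share * cooling_savings
print(f"cooling energy: ~{us_datacenter_kwh * cooling_share / 1e9:.0f} billion kWh")
print(f"upper bound if every US data center saw 40% savings: ~{max_saving_kwh / 1e9:.1f} billion kWh")
print(f"that is ~{max_saving_kwh / us_datacenter_kwh:.0%} of total data-center use")
```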
Koomey thinks using machine learning to optimize the behavior of the power-hungry computer chips inside data centers could prove even more significant. “I’m eager to see Google and other big players apply such tools to optimizing their computing loads," he says. "The possibilities on the compute side are tenfold bigger than for cooling.”
"
|
1,540 | 2,018 |
"China and the US are bracing for an AI showdown—in the cloud | MIT Technology Review"
|
"https://www.technologyreview.com/s/610140/china-and-the-us-are-bracing-for-an-ai-showdownin-the-cloud"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts China and the US are bracing for an AI showdown—in the cloud By Will Knight archive page Yaopai Chinese and American tech giants are preparing for a showdown that may shape the future of artificial intelligence.
China’s cloud providers, Alibaba, Tencent, and Baidu, are getting ready to do battle with US giants Amazon, Google, and Microsoft to deliver AI online. As Chinese companies seek to expand their reach, they may increasingly aim their cloud services at US companies and developers, and vice versa.
Speaking at EmTech China, an event held by MIT Technology Review in Beijing, Jian Wang, president of Alibaba’s technology committee and a senior figure at the company, predicted that cloud AI would become a major trend. “I’m convinced that AI or machine learning will be the major consumer of applications [in the cloud],” Wang said through a translator. “It will offer many scenarios.” At the same event, Animashree Anandkumar, principal scientist at Amazon Web Services, touted the AI capabilities of her company, the largest cloud provider in the US, which already offers a number of AI services via the cloud. In an effort to build awareness among machine-learning practitioners, Amazon has also developed its own deep-learning framework, called MXNet.
Deep learning is the most powerful machine-learning approach available for tasks like image classification, voice recognition, and translation.
“Currently, running experiments on AI requires huge computer resources,” Anandkumar said. “The cloud is a way to democratize AI because anyone can access that compute power.” Anandkumar also dropped a hint of Amazon’s international ambitions for its AI services. “The other aspect of democratizing AI is globalizing it,” she said. “How do we enable everybody to innovate locally, and have equally good support across languages and cultures?” Cloud computing companies are all racing to deploy increasingly sophisticated services featuring machine learning and AI. At stake is the opportunity to become the dominant player in what promises to be the next big computing paradigm.
Google, for example, recently demonstrated an innovation designed to make it much easier to make use of the most powerful machine-learning algorithms (see “ Google’s self-training AI turns coders into AI masters ”).
The Chinese government, meanwhile, is pushing its industry with massive investment (see “ China’s AI awakening ”).
Whoever emerges as the dominant players could shape the kinds of AI services that become widely adopted. A number of China’s tech companies are focused on face recognition, for example.
The expansion of AI services in the cloud could have other ramifications as well. Wang of Alibaba predicted, for instance, that the rush to deliver and tap into cloud AI would use huge amounts of energy. “It will consume a lot of computing resources, which may have not been seen before in history,” he said.
"
|
1,541 | 2,017 |
"2017: The Year AI Floated into the Cloud | MIT Technology Review"
|
"https://www.technologyreview.com/s/609646/2017-the-year-ai-floated-into-the-cloud"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts 2017: The Year AI Floated into the Cloud By Jackie Snow archive page Ms. Tech Cloud computing is already a huge business, and competition is stiff. But this year, tech firms opened a new front in the battle to win users over in the cloud: the large-scale introduction of cloud-based AI.
For small and medium-size companies, building AI-capable systems at scale can be prohibitively expensive, largely because training algorithms takes a lot of computing power. Enter the likes of Amazon, Microsoft, and Google, each of which has vast stores of computing power and a big stake in the $40 billion cloud computing industry. For them, adding AI is simply a matter of keeping up with customers, who increasingly are looking for cost-effective ways of building machine learning into their software.
Amazon, with its AWS cloud service, has been leading the way. At its AWS re:Invent conference in Las Vegas earlier this year, the company showed off AWS Cloud9, an integrated development environment (IDE) that plugs directly into its cloud platform. It also announced a host of new AI tools that can, for example, turn speech in audio files into time-stamped text, translate between seven languages, and track people, activities, and objects in video.
Google lags behind Amazon and Microsoft in overall cloud services but is making a play for more market share with TensorFlow, an open-source framework for building and training machine-learning models. Since its launch, it’s become the AI platform of choice for many developers, and it underpins many new artificial-intelligence projects. The company has created its own chips, too, called Tensor Processing Units (TPUs), which are designed to run TensorFlow workloads efficiently and cut down on energy needs.
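For readers who have not seen it, this is the kind of model-building TensorFlow is used for; the tiny network and random data below are placeholders rather than a realistic workload.

```python
# Minimal TensorFlow/Keras example: define, train, and query a small classifier.
# The data are random placeholders standing in for a real dataset.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 8).astype("float32")      # 256 fake examples, 8 features each
y = (x.sum(axis=1) > 4).astype("float32")         # a made-up binary label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

print("predicted probability for one example:", float(model.predict(x[:1], verbose=0)[0, 0]))
```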
Of course, Microsoft and Amazon aren’t giving up ground without a fight. In fact, they’re teaming up. The two launched an open-source deep-learning library called Gluon that works a lot like TensorFlow and is meant to make it as easy to build and train neural networks as it is to make an app. Microsoft is also trying out low-power chips to run its Azure cloud servers.
AI in the cloud is about more than just power plays by tech giants, though—it could also be behind the next leap forward in artificial intelligence. Rigetti Computing, a company in California, just used one of its prototype quantum chips to run a machine-learning algorithm on its cloud platform. The technology is so new that even experts are unsure what it is capable of. But one thing’s for sure: there will be a lot of learning done in the cloud in 2018.
"
|
1,542 | 2,017 |
"Unsupervised sentiment neuron"
|
"https://openai.com/research/unsupervised-sentiment-neuron"
|
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Illustration: Ludwig Pettersson Research Unsupervised sentiment neuron April 6, 2017 More resources Read paper View code Unsupervised learning , Representation learning , Language , Generative models , Milestone , Publication , Release A linear model using this representation achieves state-of-the-art sentiment analysis accuracy on a small but extensively-studied dataset, the Stanford Sentiment Treebank (we get 91.8% accuracy versus the previous best of 90.2%), and can match the performance of previous supervised systems using 30-100x fewer labeled examples. Our representation also contains a distinct “ sentiment neuron ” which contains almost all of the sentiment signal.
Our system beats other approaches on Stanford Sentiment Treebank while using dramatically less data.
We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment. We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs.
Methodology We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs, with our model processing 12,500 characters per second.
These 4,096 units (which are just a vector of floats) can be regarded as a feature vector representing the string read by the model. After training the mLSTM, we turned the model into a sentiment classifier by taking a linear combination of these units, learning the weights of the combination via the available supervised data.
Sentiment neuron While training the linear model with L1 regularization, we noticed it used surprisingly few of the learned units. Digging in, we realized there actually existed a single “sentiment neuron” that’s highly predictive of the sentiment value.
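A minimal sketch of that probing step, assuming the 4,096-unit feature vectors have already been extracted from the language model (the random features below, with one synthetic "sentiment unit" planted in them, stand in for the mLSTM's real states): fit an L1-penalized linear classifier and inspect which units it actually uses.

```python
# Sketch of the linear probe: fit an L1-regularized classifier on precomputed
# feature vectors and look for a dominant unit. The features here are random
# placeholders, with one synthetic signal-carrying unit planted at index 2388.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_reviews, n_units = 2000, 4096

features = rng.normal(size=(n_reviews, n_units))
labels = rng.integers(0, 2, size=n_reviews)
features[:, 2388] += 3.0 * (2 * labels - 1)   # plant a fake "sentiment unit"

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
probe.fit(features, labels)

weights = probe.coef_.ravel()
nonzero = np.flatnonzero(weights)
top = nonzero[np.argsort(-np.abs(weights[nonzero]))][:5]
print(f"{len(nonzero)} of {n_units} units used by the probe")
print("most influential units:", top.tolist())
```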
Just like with similar models, our model can be used to generate text. Unlike those models, we have a direct dial to control the sentiment of the resulting text: we simply overwrite the value of the sentiment neuron.
Sentiment fixed to positive Sentiment fixed to negative Just what I was looking for. Nice fitted pants, exactly matched seam to color contrast with other pants I own. Highly recommended and also very happy! The package received was blank and has no barcode. A waste of time and money.
This product does what it is supposed to. I always keep three of these in my kitchen just in case ever I need a replacement cord.
Great little item. Hard to put on the crib without some kind of embellishment. My guess is just like the screw kind of attachment I had.
Best hammock ever! Stays in place and holds it’s shape. Comfy (I love the deep neon pictures on it), and looks so cute.
They didn’t fit either. Straight high sticks at the end. On par with other buds I have. Lesson learned to avoid.
Dixie is getting her Doolittle newsletter we’ll see another new one coming out next year. Great stuff. And, here’s the contents - information that we hardly know about or forget.
great product but no seller. couldn’t ascertain a cause. Broken product. I am a prolific consumer of this company all the time.
I love this weapons look . Like I said beautiful !!! I recommend it to all. Would suggest this to many roleplayers, And I stronge to get them for every one I know. A must watch for any man who love Chess! Like the cover, Fits good. . However, an annoying rear piece like garbage should be out of this one. I bought this hoping it would help with a huge pull down my back & the black just doesn’t stay. Scrap off everytime I use it.... Very disappointed.
Sentiment fixed to positive Sentiment fixed to negative I couldn’t figure out the shape at first but it definitely does what it’s meant to do. It’s a great product and I recommend it highly I couldn’t figure out how to use the product. It did not work.At least there was no quality control; this tablet does not work. I would have given it zero stars, but that was not an option.
I couldn’t figure out why this movie had been discontinued! Now I can enjoy it anytime I like. So glad to have found it again.
I couldn’t figure out how to set it up being that there was no warning on the box. I wouldn’t recommend this to anyone.
I couldn’t figure out how to use the video or the book that goes along with it, but it is such a fantastic book on how to put it into practice! I couldn’t figure out how to use the gizmo. What a waste of time and money. Might as well through away this junk.
I couldn’t figure out how to use just one and my favorite running app. I use it all the time. Good quality, You cant beat the price.
I couldn’t figure out how to stop this drivel. At worst, it was going absolutely nowhere, no matter what I did.Needles to say, I skim-read the entire book. Don’t waste your time.
I couldn’t figure out how to attach these balls to my little portable drums, but these fit the bill and were well worth every penny.
I couldn’t figure out how to play it.
Example The diagram below represents the character-by-character value of the sentiment neuron, displaying negative values as red and positive values as green. Note that strongly indicative words like “ best ” or “ horrendous ” cause particularly big shifts in the color.
It’s interesting to note that the system also makes large updates after the completion of sentences and phrases. For example, in “ And about 99.8 percent of that got lost in the film ”, there’s a negative update after “ lost ” and a larger update at the sentence’s end, even though “ in the film ” has no sentiment content on its own.
Unsupervised learning Labeled data are the fuel for today’s machine learning. Collecting data is easy, but scalably labeling that data is hard. It’s only feasible to generate labels for important problems where the reward is worth the effort, like machine translation, speech recognition, or self-driving.
Machine learning researchers have long dreamed of developing unsupervised learning algorithms to learn a good representation of a dataset, which can then be used to solve tasks using only a few labeled examples. Our research implies that simply training large unsupervised next-step-prediction models on large amounts of data may be a good approach to use when creating systems with good representation learning capabilities.
Next steps Our results are a promising step towards general unsupervised representation learning. We found the results by exploring whether we could learn good quality representations as a side effect of language modeling, and scaled up an existing model on a carefully-chosen dataset. Yet the underlying phenomena remain more mysterious than clear.
These results were not as strong for datasets of long documents. We suspect our character-level model struggles to remember information over hundreds to thousands of timesteps. We think it’s worth trying hierarchical models that can adapt the timescales at which they operate. Further scaling up these models may further improve representation fidelity and performance on sentiment analysis and similar tasks.
The model struggles the more the input text diverges from review data. It’s worth verifying that broadening the corpus of text samples results in an equally informative representation that also applies to broader domains.
Our results suggest that there exist settings where very large next-step-prediction models learn excellent unsupervised representations. Training a large neural network to predict the next frame in a large collection of videos may result in unsupervised representations for object, scene, and action classifiers.
Overall, it’s important to understand the properties of models, training regimes, and datasets that reliably lead to such excellent representations.
Authors: Alec Radford, Ilya Sutskever, Rafał Józefowicz, Jack Clark, Greg Brockman
"
|
1,543 | 2,018 |
"What Does a Fair Algorithm Actually Look Like? | WIRED"
|
"https://www.wired.com/story/what-does-a-fair-algorithm-look-like"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Louise Matsakis Business What Does a Fair Algorithm Actually Look Like? There’s no common standard yet on what level of transparency is sufficient for the AI products making decisions about our lives.
In some ways, artificial intelligence acts like a mirror. Machine learning tools are designed to detect patterns, and they often reflect back the same biases we already know exist in our culture. Algorithms can be sexist, racist, and perpetuate other structural inequalities found in society. But unlike humans, algorithms aren’t under any obligation to explain themselves. In fact, even the people who build them aren’t always capable of describing how they work.
That means people are sometimes left unable to grasp why they lost their health care benefits , were declined a loan , rejected from a job , or denied bail—all decisions increasingly made in part by automated systems. Worse, they have no way to determine whether bias played a role.
In response to the problem of AI bias and so-called “ black box ” algorithms, many machine learning experts , technology companies, and governments have called for more fairness, accountability, and transparency in AI. The research arm of the Department of Defense has taken an interest in developing machine learning models that can more easily account for how they make decisions, for example. And companies like Alphabet, IBM, and the auditing firm KPMG are also creating or have already built tools for explaining how their AI products come to conclusions.
"Algorithmic transparency isn’t an end in and of itself" Madeleine Clare Elish, Data & Society But that doesn’t mean everyone agrees on what constitutes a fair explanation. There’s no common standard for what level of transparency is sufficient. Does a bank need to publicly release the computer code behind its loan algorithm to be truly transparent? What percentage of defendants need to understand the explanation given for how a recidivism AI works? “Algorithmic transparency isn’t an end in and of itself,” says Madeleine Clare Elish, a researcher who leads the Intelligence & Autonomy Initiative at Data & Society. “It’s necessary to ask: Transparent to whom and for what purpose? Transparency for the sake of transparency is not enough.” By and large, lawmakers haven’t decided what rights citizens should have when it comes to transparency in algorithmic decision-making. In the US, there are some regulations designed to protect consumers, including the Fair Credit Reporting Act, which requires individuals be notified of the main reason they were denied credit. But there isn’t a broad “right to explanation” for how a machine came to a conclusion about your life. The term appears in the European Union's General Data Protection Regulation (GDPR), a privacy law meant to give users more control over how companies collect and retain their personal data, but only in the non-binding portion. Which means it doesn't really exist in Europe , either, says Sandra Wachter, a lawyer and assistant professor in data ethics and internet regulation at the Oxford Internet Institute.
GDPR’s shortcomings haven’t stopped Wachter from exploring what the right to explanation might look like in the future, though. In an article published in the Harvard Journal of Law & Technology earlier this year, Wachter, along with Brent Mittelstadt and Chris Russell, argue that algorithms should offer people “counterfactual explanations,” or disclose how they came to their decision and provide the smallest change “that can be made to obtain a desirable outcome.” For example, an algorithm that calculates loan approvals should explain not only why you were denied credit, but also what you can do to reverse the decision. It should say that you were denied the loan for having too little in savings, and provide the minimum amount you would need to additionally save to be approved. Offering counterfactual explanations doesn’t require the researchers who designed an algorithm release the code that runs it. That’s because you don’t necessarily need to understand how a machine learning system works to know why it reached a certain decision.
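Under the simplifying assumption of a linear credit-scoring model (the weights and applicant below are invented for illustration), that smallest change can be computed directly:

```python
# Counterfactual for a linear scoring model: how much must one feature change
# to turn a denial into an approval? Weights and applicant values are invented.
features = ["income", "savings", "debt"]
weights = {"income": 0.4, "savings": 0.8, "debt": -0.6}   # hypothetical model
bias = -50.0
threshold = 0.0                        # score >= threshold means "approve"

applicant = {"income": 40.0, "savings": 10.0, "debt": 20.0}
score = sum(weights[f] * applicant[f] for f in features) + bias

if score < threshold:
    # Smallest move along the 'savings' axis that crosses the decision boundary.
    needed = (threshold - score) / weights["savings"]
    print(f"denied (score {score:.1f}); saving {needed:.1f} more would flip the decision")
else:
    print(f"approved (score {score:.1f})")
```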
“The industry fear is that [companies] will have to disclose their code,” says Wachter. “But if you think about the person who is actually affected by [the algorithm’s decision], they probably don’t think about the code. They’re more interested in the particular reasons for the decision.” Counterfactual explanations could potentially be used to help conclude whether a machine learning tool is biased. For example, it would be easy to tell a recidivism algorithm was prejudiced if it indicated factors like a defendant’s race or zip code in explanations. Wachter’s paper has been cited by Google AI researchers and also by what is now called the European Data Protection Board , the EU body that works on GDPR.
A group of computer scientists has developed a variation on Wachter’s counterfactual explanations proposal, which was presented at the International Conference on Machine Learning’s Fairness, Accountability and Transparency conference this summer. They argue that rather than offering explanations, AI should be built to provide "recourse," or the ability for people to feasibly modify the outcome of an algorithmic decision. This would be the difference, for example, between a job application that only recommends you obtain a college degree to get the position, versus one that says you need to change your gender or age.
“No one agrees on what an ‘explanation’ is, and explanations aren’t always useful,” says Berk Ustun, the lead author of the paper and a postdoctoral fellow at Harvard University. Recourse, as they define it, is something researchers can actually test.
As part of their work, Ustun and his colleagues created a toolkit computer scientists and policymakers can use to calculate whether or not a linear algorithm provides recourse. For example, a health care company could see if their AI uses things like marital status or race as deciding factors—things people can’t easily modify. The researchers’ work has already garnered attention from Canadian government officials.
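In the same spirit as that toolkit, though far cruder and with invented weights, a recourse check for a linear model can ask whether changing only the features a person can realistically act on is ever enough to flip a denial:

```python
# Crude recourse check for a linear model: can actionable features alone,
# within plausible bounds, flip a denial? All weights and bounds are invented.
weights = {"savings": 0.8, "income": 0.4, "age": 1.5, "marital_status": 2.0}
bias = -90.0
actionable = {"savings": 100.0, "income": 50.0}   # maximum realistic increase per feature
applicant = {"savings": 10.0, "income": 40.0, "age": 30.0, "marital_status": 0.0}

score = sum(w * applicant[f] for f, w in weights.items()) + bias
best_reachable = score + sum(
    weights[f] * bound for f, bound in actionable.items() if weights[f] > 0
)

if score >= 0:
    print("approved as-is")
elif best_reachable >= 0:
    print(f"denied (score {score:.1f}), but recourse exists through actionable features")
else:
    print(f"denied (score {score:.1f}) with no recourse: only immutable features could flip it")
```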
Simply because an algorithm offers recourse, however, doesn’t mean it’s fair. It’s possible an algorithm offers more achievable recourse to wealthier people, or to younger people, or to men. A woman might need to lose far more weight for a health care AI to offer her a lower premium rate than a man would, for example. Or a loan algorithm might require black applicants have more in savings to be approved than white applicants.
“The goal of creating a more inclusive and elastic society can actually be stymied by algorithms that make it harder for people to gain access to social resources,” says Alex Spangher, a PhD student at Carnegie Mellon University and an author on the paper.
There are other ways for AI to be unfair that explanations or recourse alone wouldn't solve. That’s because providing explanations doesn’t do anything to address which variables automated systems take into consideration in the first place. As a society we still need to decide what data should be allowed for algorithms to use to make inferences. In some cases, discrimination laws may prevent using categories like race or gender, but it's possible that proxies for those same categories are still utilized, like zip codes.
Corporations collect lots of types of data, some of which may strike consumers as invasive or unreasonable. For example, should a furniture retailer be allowed to take into consideration what type of smartphone you have when determining whether you receive a loan? Should Facebook be able to automatically detect when it thinks you’re feeling suicidal? In addition to arguing for a right to explanation, Wachter has also written that we need a “right to reasonable inferences.”
Building a fair algorithm also doesn’t do anything to address a wider system or society that may be unjust. In June, for example, Reuters reported that ICE altered a computer algorithm used since 2013 to recommend whether an immigrant facing deportation should be detained or released while awaiting their court date. The federal agency removed the “release” recommendation entirely—though staff could still override the computer if they chose—which contributed to a surge in the number of detained immigrants. Even if the algorithm had been designed fairly in the first place (and researchers found it wasn’t), that wouldn't have prevented it from being modified.
“The question of ‘What it means for an algorithm to be fair?’ does not have a technical answer alone,” says Elish. “It matters what social processes are in place around that algorithm.”
"
|
1,544 | 2,019 |
"Researchers Want Guardrails to Help Prevent Bias in AI | WIRED"
|
"https://www.wired.com/story/researchers-guardrails-prevent-bias-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Researchers Want Guardrails to Help Prevent Bias in AI Photograph: Daniel Grizelj/Getty Images Save this story Save Save this story Save Artificial intelligence has given us algorithms capable of recognizing faces , diagnosing disease , and of course, crushing computer games.
But even the smartest algorithms can sometimes behave in unexpected and unwanted ways—for example, picking up gender bias from the text or images they are fed.
A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for nonexperts deploying AI, an increasingly common issue as the technology moves out of research labs and into the real world.
The approach is one of several proposed in recent years for curbing the worst tendencies of AI programs. Such safeguards could prove vital as AI is used in more critical situations, and as people become suspicious of AI systems that perpetuate bias or cause accidents.
Last week Apple was rocked by claims that the algorithm behind its credit card offers much lower credit limits to women than men of the same financial means. The company was unable to prove that the algorithm had not inadvertently picked up some form of bias from training data. Just the idea that the Apple Card might be biased was enough to turn customers against it.
Similar backlashes could derail adoption of AI in areas like health care, education, and government. “People are looking at how AI systems are being deployed and they're seeing they are not always being fair or safe,” says Emma Brunskill , an assistant professor at Stanford and one of the researchers behind the new approach. “We're worried right now that people may lose faith in some forms of AI, and therefore the potential benefits of AI might not be realized.” Examples abound of AI systems behaving badly. Last year, Amazon was forced to ditch a hiring algorithm that was found to be gender biased; Google was left red-faced after the autocomplete algorithm for its search bar was found to produce racial and sexual slurs. In September, a canonical image database was shown to generate all sorts of inappropriate labels for images of people.
Machine-learning experts often design their algorithms to guard against certain unintended consequences. But that’s not as easy for nonexperts who might use a machine-learning algorithm off the shelf. It’s further complicated by the fact that there are many ways to define “fairness” mathematically or algorithmically.
The new approach proposes building an algorithm so that, when it is deployed, there are boundaries on the results it can produce. “We need to make sure that it's easy to use a machine-learning algorithm responsibly, to avoid unsafe or unfair behavior,” says Philip Thomas , an assistant professor at the University of Massachusetts Amherst who also worked on the project.
The researchers demonstrate the method on several machine-learning techniques and a couple of hypothetical problems in a paper published in the journal Science Thursday.
First, they show how it could be used in a simple algorithm that predicts college students' GPAs from entrance exam results—a common practice that can result in gender bias, because women tend to do better in school than their entrance exam scores would suggest. In the new algorithm, a user can limit how much the algorithm may overestimate and underestimate student GPAs for male and female students on average.
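As a rough illustration of how such a boundary can be checked, the sketch below trains a GPA predictor and then runs a simple safety test on held-out data: if the average prediction error for female and male students differs by more than a user-chosen tolerance, the candidate model is rejected rather than deployed. This is a simplification of the Seldonian idea, not the authors' actual code; the synthetic data and the 0.05 tolerance are assumptions.

```python
# Simplified sketch of a Seldonian-style "safety test" for a GPA predictor.
# Synthetic data and the 0.05 tolerance are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
female = rng.integers(0, 2, n)
exam = rng.normal(70, 10, n)
# In this toy world, women earn slightly higher GPAs than their exam scores predict.
gpa = 0.03 * exam + 0.2 * female + rng.normal(0, 0.3, n) + 0.8

# Split into a candidate-selection set and a safety-test set.
half = n // 2
model = LinearRegression().fit(exam[:half].reshape(-1, 1), gpa[:half])

pred = model.predict(exam[half:].reshape(-1, 1))
err = pred - gpa[half:]                      # positive = overestimate
grp = female[half:]

mean_err_female = err[grp == 1].mean()
mean_err_male = err[grp == 0].mean()
tolerance = 0.05                             # user-specified fairness bound

if abs(mean_err_female - mean_err_male) > tolerance:
    print("No Solution Found: candidate model fails the fairness safety test")
else:
    print("Model passes the safety test and can be returned")
```

The published framework replaces the point estimate with a high-confidence statistical bound, but the shape of the workflow, candidate selection followed by a safety test, is the same.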
In another example, the team developed an algorithm for balancing the performance and safety of an automated insulin pump. Such pumps decide how much insulin to deliver at mealtimes, and machine learning can help determine the right dose for a patient. The algorithm they designed can be told by a doctor to only consider dosages within a particular range, and to have a low probability of suggesting dangerously low or high blood sugar levels.
The researchers call their algorithms “Seldonian” in reference to Hari Seldon, a character created by science fiction author Isaac Asimov, whose famous “three laws of robotics” begin with the rule: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The new approach is unlikely to solve the problem of algorithms misbehaving. Partly that’s because there’s no guarantee organizations deploying AI will adopt such approaches when they can come at the cost of optimal performance.
The work also highlights the fact that defining “fairness” in a machine-learning algorithm is not a simple task. In the GPA example, for instance, the researchers provide five different ways to define gender fairness.
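To see why the choice of definition matters, here is a small invented example that computes two common fairness metrics, the demographic-parity gap and the equal-opportunity gap, on the same set of predictions; a model can look fair under one and unfair under the other. The arrays are made up for illustration.

```python
# Two of the many possible fairness definitions, computed on the same predictions.
# y_true, y_pred, and group are invented arrays for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

def positive_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    pos = mask & (true == 1)
    return pred[pos].mean()

# Demographic parity: do both groups receive positive predictions at the same rate?
dp_gap = abs(positive_rate(y_pred, group == 0) - positive_rate(y_pred, group == 1))

# Equal opportunity: among truly positive cases, are both groups treated alike?
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

In this toy case the two groups receive positive predictions at identical rates, yet qualified members of one group are approved less often, so the two definitions disagree about whether the model is fair.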
“One of the major challenges in making algorithms fair lies in deciding what fairness actually means,” says Chris Russell, a fellow at the Alan Turing Institute in the UK. “Trying to understand what fairness means, and when a particular approach is the right one to use, is a major area of ongoing research.” If even experts cannot agree on what is fair, Russell says it might be a mistake to put the burden on less proficient users. “At the moment, there are more than 30 different definitions of fairness in the literature,” he notes. “This makes it almost impossible for a nonexpert to know if they are doing the right thing.”
"
|
1,545 | 2,020 |
"A Council of Citizens Should Regulate Algorithms | WIRED"
|
"https://www.wired.com/story/opinion-a-council-of-citizens-should-regulate-algorithms"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Federica Carugati Ideas A Council of Citizens Should Regulate Algorithms Athens’ democracy reminds us that we have been outsourcing governance for two and a half millennia, first to kings, then to experts, and now to machines.
Illustration: WIRED Staff; Getty Images Save this story Save Save this story Save Application Ethics Regulation Safety Technology Machine learning Are machine-learning algorithms biased, wrong, and racist? Let citizens decide.
Essentially rule-based structures for making decisions, machine-learning algorithms play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether or not we are detained while awaiting trial. Their promise is huge: they can better detect cancers. But they can also discriminate based on the color of our skin or the zip code we live in.
Despite their ubiquity in society, no real structure exists to regulate algorithms' use. We rely on journalists or civil society organizations to serendipitously report when things have gone wrong. In the meantime, the use of algorithms spreads to every corner of our lives and many agencies of our government.
In the post-Covid-19 world, the problem is bound to reach colossal proportions.
Federica Carugati is a program director at the Center for Advanced Study in the Behavioral Sciences at Stanford University. She is the author of Creating a Constitution: Law, Democracy, and Growth in Ancient Athens , and her work has appeared in leading political science journals and academic blogs.
A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.
We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy is all a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off.
Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.
Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change , and even to shape constitutions.
Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.
The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those that had not served the year before and had not already served twice.
These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because the term was limited and could not be iterated more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom already served and some who soon would. Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people’s voices went unheard, including those of women, foreigners, and slaves. But we don’t need to follow the Athenian example on this front.
A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency. It could evaluate, as OpenAI recommends, a variety of issues including the level of privacy protection, the extent to (and methods by) which the systems were tested for safety, security, or ethical concerns, and the sources of data, labor, and other resources used.
Reports like the one OpenAI just released provide an important first step in the process of getting industry buy-in. The report highlights both the risks of unregulated development of the technology and the benefits of an inclusive process to devise regulatory bodies. For example, industry could play a role in the selection process or in the choice of material available to the councillors, or by providing expert advice.
The council would be a fair and efficient response to the question of how to resolve the societal trade-offs that algorithms create. Unlike proposed technocratic solutions and traditional auditing structures, the council would expand the range of possible solutions to the problems that algorithms create, enhance democratic accountability, and foster citizen participation and learning.
The erosion of commitments to democratic norms and institutions around the world calls for new ideas. The time is ripe for considering creative institutional solutions to tackle some of the greatest challenges society faces. Athens’ democracy reminds us that we have been outsourcing governance for two and a half millennia, first to kings, then to experts, and now to machines. This is an opportunity to reverse the trend.
"
|
1,546 | 2,019 |
"Can AI Be a Fair Judge in Court? Estonia Thinks So | WIRED"
|
"https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Eric Niiler Business Can AI Be a Fair Judge in Court? Estonia Thinks So WIRED Staff; Getty Images Save this story Save Save this story Save Application Prediction End User Government Technology Machine learning Government usually isn't the place to look for innovation in IT or new technologies like artificial intelligence. But Ott Velsberg might change your mind. As Estonia's chief data officer, the 28-year-old graduate student is overseeing the tiny Baltic nation's push to insert artificial intelligence and machine learning into services provided to its 1.3 million citizens.
"We want the government to be as lean as possible," says the wiry, bespectacled Velsberg , an Estonian who is writing his PhD thesis at Sweden’s Umeå University on using the Internet of Things and sensor data in government services. Estonia's government hired Velsberg last August to run a new project to introduce AI into various ministries to streamline services offered to residents.
Deploying AI is crucial, he says. “Some people worry that if we lower the number of civil employees, the quality of service will suffer. But the AI agent will help us." About 22 percent of Estonians work for the government; that’s about average for European countries, but higher than the 18 percent rate in the US.
Siim Sikkut , Estonia’s chief information officer, began piloting several AI-based projects at agencies in 2017, before hiring Velsberg last year. Velsberg says Estonia has deployed AI or machine learning in 13 places where an algorithm has replaced government workers.
For example, inspectors no longer check on farmers who receive government subsidies to cut their hay fields each summer. Satellite images taken by the European Space Agency each week from May to October are fed into a deep-learning algorithm originally developed by the Tartu Observatory. The images are overlaid onto a map of fields whose owners receive the hay-cutting subsidies, which are meant to keep the fields from reverting to forest over time.
The algorithm assesses each pixel in the images, determining if the patch of the field has been cut or not. Cattle grazing or partial cutting can throw off the image processing; in those cases, an inspector still drives out to check. Two weeks before the mowing deadline, the automated system notifies farmers via text or email that includes a link to the satellite image of their field. The system saved €665,000 ($755,000) in its first year because inspectors made fewer site visits and focused on other enforcement actions, according to Velsberg.
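The article does not detail the Tartu Observatory model, but the underlying idea, classifying each pixel by how its vegetation signal changes between satellite passes, can be sketched roughly as follows. The vegetation-index values, the 0.2 drop threshold, and the 60 percent rule are invented assumptions, not the real system's parameters.

```python
# Rough sketch of per-pixel mowing detection from two satellite passes.
# NDVI values, the 0.2 drop threshold, and the 60% rule are invented assumptions.
import numpy as np

rng = np.random.default_rng(2)
ndvi_before = rng.uniform(0.5, 0.9, size=(100, 100))   # lush grass, earlier pass
ndvi_after = ndvi_before - rng.uniform(0.0, 0.4, size=(100, 100))

# A large drop in the vegetation index suggests the grass at that pixel was cut.
mowed_pixels = (ndvi_before - ndvi_after) > 0.2

field_mask = np.zeros((100, 100), dtype=bool)
field_mask[20:60, 30:80] = True                         # pixels belonging to one field

fraction_mowed = mowed_pixels[field_mask].mean()
if fraction_mowed > 0.6:
    print(f"Field looks mowed ({fraction_mowed:.0%} of pixels changed); no inspection needed")
else:
    print(f"Only {fraction_mowed:.0%} of pixels changed; flag for a site visit")
```

In the deployed system a learned classifier stands in for the fixed threshold, and ambiguous cases such as grazing are still routed to human inspectors, as described above.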
In another application, resumes of laid-off workers are fed into a machine learning system that matches their skills with employers. About 72 percent of workers who gain a new job through the system are still on the job after six months, up from 58 percent before the computer-matching system was deployed. In a third case, children born in Estonia are automatically enrolled in local schools at birth, so parents don't have to sign up on waiting lists or call school administrators. That’s because hospital records are automatically shared with local schools. The system doesn’t truly require AI, but it shows how automated services are expanding.
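The article does not say how the job-matching system works internally; one common way to match free-text resumes to vacancies is to compare them in a shared vector space, as in this hypothetical sketch using TF-IDF and cosine similarity. The sample texts are invented.

```python
# Hypothetical sketch of matching resumes to job postings by text similarity.
# The real Estonian system is not documented here; texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resumes = [
    "welder with 10 years experience in shipyards and metal fabrication",
    "customer support agent, fluent in Estonian and Russian, CRM tools",
]
jobs = [
    "seeking certified welder for metal fabrication plant",
    "call center hiring support specialists with CRM experience",
    "junior accountant needed, knowledge of payroll software",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(resumes + jobs)
resume_vecs, job_vecs = matrix[: len(resumes)], matrix[len(resumes):]

scores = cosine_similarity(resume_vecs, job_vecs)
for i, row in enumerate(scores):
    best = row.argmax()
    print(f"resume {i} best matches job {best} (score {row[best]:.2f})")
```

A production system would add skills taxonomies, language handling, and feedback from placement outcomes, but the core ranking step often looks something like this.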
In the most ambitious project to date, the Estonian Ministry of Justice has asked Velsberg and his team to design a “robot judge” that could adjudicate small claims disputes of less than €7,000 (about $8,000). Officials hope the system can clear a backlog of cases for judges and court clerks.
The project is in its early phases and will likely start later this year with a pilot focusing on contract disputes. In concept, the two parties will upload documents and other relevant information, and the AI will issue a decision that can be appealed to a human judge. Many details are still to be worked out. Velsberg says the system might have to be adjusted after feedback from lawyers and judges.
Estonia’s effort isn’t the first to mix AI and the law, though it may be the first to give an algorithm decision-making authority. In the US, algorithms help recommend criminal sentences in some states. The UK-based DoNotPay AI-driven chatbot overturned 160,000 parking tickets in London and New York a few years ago. A Tallinn-based law firm, Eesti Oigusbüroo , provides free legal aid through a chatbot and generates simple legal documents to send to collection agencies. It plans to expand its “Hugo-AI” legal aid service matching clients and lawyers to Warsaw and Los Angeles by the end of the year, said CEO Artur Fjodorov.
The idea of a robot judge might work in Estonia partly because its 1.3 million residents already use a national ID card and are used to an online menu of services such as e-voting and digital tax filing.
Government databases connect with each other through something called the X-road, a digital infrastructure that makes data sharing easier. Estonian residents can also check who has been accessing their information by logging into a government digital portal.
Estonia’s well-documented move to digital government services hasn’t been without at least one glitch. Outside experts revealed a vulnerability in Estonia's ID system in 2017 that led to some embarrassment; it was fixed and the ID cards replaced. But government officials say the country hasn't had a major data breach or theft since it began its digital drive in the early 2000s. In 2016, more than two-thirds of Estonian adults filed government forms on the internet, almost twice the European average.
“The really private and confidential things are not in the hands of government, but banks and telecoms,” says Tanel Tammet, a professor of computer science at Tallinn University of Technology. Tammet is a member of an Estonian government AI task force that will report its findings in May and suggest an additional 35 AI-related demonstration projects by 2020.
Stanford University’s David Engstrom , an expert in digital governance, says Estonian citizens might trust the government's use of their digital data today, but things might change if one of the new AI-based decision-making systems goes awry.
In the US, agencies such as the Social Security Administration are using AI and machine learning algorithms to speed sorting and processing, while the EPA is using AI to determine which factories should be checked for pollution violations. But a coordinated AI effort across the federal government has gone slowly, Engstrom says, mainly because federal databases in each agency are different and aren’t easily shared with other agencies. “We’re not there yet,” he said.
Engstrom and a team of law school and computer science students at Stanford are studying how AI can be better used in US government agencies. They will soon report their findings to the Administrative Conference of the United States, an independent federal agency charged with recommending improvements to administrative processes.
He doesn’t see an AI-driven robo-judge coming to US courtrooms anytime soon. The US has no national ID system and many Americans have an innate fear of Big Government. “We have due process in the Constitution and that has something to say about fully automated decision making by a government agency,” Engstrom said. “Even with a human appeal, there could be a constraint.” Still, Engstrom foresees a time when AI-driven legal assistants might be presenting judges with case law, precedents, and the background needed to make a decision. “The promise of an AI approach is you get more consistency than we currently have,” he said. “And maybe an AI driven system that is more accurate than human decision making system.” The flip side is that an AI is only as good as the programming that goes into it. The sentencing algorithms, for example, have been criticized as biased against blacks.
"You also worry about automation bias," Engstrom says. As the machines make more decisions, humans are less likely to inject their own expertise into a system, he says. "That’s one of these creeping things that privacy advocates and good government advocates worry about when the government digitizes in this way." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg For now, though, Estonian officials like the idea of an AI robot solving simple disputes, leaving more time for human judges and lawyers to solve tougher problems. Deploying more AI in government services "will allow us to specialize in something the machines can never do,” President Kersti Kaljulaid noted at the recent North Star AI conference in Tallinn. “I want to specialize in being a warm compassionate human being. For that we need the AI to be safe, and demonstrably safe." Corrected, 3-26-19, 1:50 pm: An earlier version of this article incorrectly described the subject of Ott Velsberg's PhD thesis and the amount of money saved in the first year of using satellite images to monitor the cutting of hayfields.
"
|
1,547 | 2,023 |
"Should Algorithms Control Nuclear Weapons Launch Codes? The US Says No | WIRED"
|
"https://www.wired.com/story/fast-forward-should-algorithms-control-nuclear-launch-codes-the-us-says-no"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business Should Algorithms Control Nuclear Launch Codes? The US Says No Photograph: Chung Sung-Jun/Getty Images Save this story Save Save this story Save Last Thursday, the US State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of AI.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the US to guide the development of military AI at a crucial time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly.
Among other things, the declaration states that military AI needs to be developed according to international laws, that nations should be transparent about the principles underlying their technology, and that high standards are implemented for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons.
When it comes to autonomous weapons systems, US military leaders have often offered reassurances that a human will remain “in the loop” for decisions about the use of deadly force. But the official policy, first issued by the DOD in 2012 and updated this year, does not require this to be the case.
Attempts to forge an international ban on autonomous weapons have so far come to naught.
The International Red Cross and campaign groups like Stop Killer Robots have pushed for an agreement at the United Nations, but some major powers—the US, Russia, Israel, South Korea, and Australia—have proven unwilling to commit.
One reason is that many within the Pentagon see increased use of AI across the military, including in non-weapons systems, as vital—and inevitable. They argue that a ban would slow US progress and handicap its technology relative to adversaries such as China and Russia. The war in Ukraine has shown how rapidly autonomy in the form of cheap, disposable drones, which are becoming more capable thanks to machine learning algorithms that help them perceive and act, can help provide an edge in a conflict.
Earlier this month, I wrote about onetime Google CEO Eric Schmidt’s personal mission to amp up Pentagon AI to ensure the US does not fall behind China. It was just one story to emerge from months spent reporting on efforts to adopt AI in critical military systems, and how that is becoming central to US military strategy—even if many of the technologies involved remain nascent and untested in any crisis.
Lauren Kahn, a research fellow at the Council on Foreign Relations, welcomed the new US declaration as a potential building block for more responsible use of military AI around the world.
A few nations already have weapons that operate without direct human control in limited circumstances, such as missile defenses that need to respond at superhuman speed to be effective. Greater use of AI might mean more scenarios where systems act autonomously, for example when drones are operating out of communications range or in swarms too complex for any human to manage.
Some proclamations around the need for AI in weapons, especially from companies developing the technology, still seem a little farfetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes , but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are far from infallible.
And yet if autonomous weapons cannot be banned, then their development will continue. That will make it vital to ensure that the AI systems involved behave as expected—even if the engineering required to fully enact intentions like those in the new US declaration is yet to be perfected.
"
|
1,548 | 2,014 |
"Watch Thomas Middleditch Let an AI Steal His Face to Make a New Movie | WIRED"
|
"https://www.wired.com/video/watch/thomas-middleditch-let-an-ai-steal-his-face-to-make-a-new-movie"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Thomas Middleditch Let an AI Steal His Face to Make a New Movie About Released on 06/11/2018 (gasps) [Narrator] Thomas Middleditch is starring in a new movie.
(gasps) But this is no Silicon Valley.
Think about having sex with a jar of salsa.
[Narrator] Fortunately, this dialogue doesn't even matter.
Middleditch is having his face and voice copied so that an artificial intelligence named Benjamin, that's it right there on the laptop, can stitch them into a new film using face swapping and vocal synthesis technology.
Benjamin is a program that maniac over there designed.
[Narrator] That's Ross Goodwin, a technologist at Google and self-described gonzo data scientist.
Goodwin worked with director Oscar Sharp, Middleditch, and Benjamin on their first AI film, Sunspring, which was licensed by Wired's sister site, Ars Technica, in 2016.
Sunspring was made for a 48-hour, sci-fi film challenge.
The team fed Benjamin thousands of science fiction film scripts, and then had it write a new one.
In a future with mass unemployment, young people are forced to sell blood.
It's something I can do.
You should see the boy and shut up.
[Narrator] The idea was to see if the machine could write a screenplay that the filmmakers could work with.
Benjamin is a machine.
But Benjamin is also a story about machines.
It's a story about how machines and creativity can play together and what we can learn from that.
A big, honest idea.
The AI comes up with the script.
I have to go to the skull.
And it is our duty to execute them as if the AI was Scorsese.
Some lines in there, we're like, What does this mean? What does Benjamin mean by this? And after breaking it down, the best you can do is interpret it because you can't ask him.
That's right.
That was kind of fun.
Just like really playing it with like the most earnest intent of some of the bizarrest lines.
What are you doing? I don't want to be honest with you.
You don't have to be a doctor.
I'm not sure.
I don't know what you're talking about.
I wanna see you too.
What do you mean? [Narrator] After Sunspring, Sharp, Goodwin, and Benjamin teamed up again with, wait for it...
David Hasselhoff.
Activate Hoffbot.
Okay, pal, pick me up.
Dangerous world for a man who does not exist.
[Narrator] For this second film, the human filmmakers tried to take a more collaborative approach with the AI.
I don't like to think that I'm just making movies necessarily.
My whole thing has always been, What haven't we done yet? What can we try? And the nice thing about cinema is there's still lots of room to try things.
[Narrator] And so for this year's 48-hour film festival, they greenlit Benjamin to become a full auteur.
This year, we're having the machine actually create the movie visually as well as writing the screenplay.
Every decision is the computer's decision, and the movie, therefore, is directed by...
Written and directed by Benjamin.
[Narrator] That's why Middleditch and other actors are rambling and grimacing so that Benjamin can start building performances from their faces and voices.
Benjamin constructed its new movie using scenes from public domain films like this test scene from Night of the Living Dead.
At the moment, we're not in the place where we can say, It writes, 'There is a truck driving over the horizon.' And we can just conjure up an image of a truck.
I'm a pretty girl.
You are a pretty girl.
So instead what we though we'd do is let it, from its screenplay, select shots from public domain films.
And then using whichever shots that it's picked, drop these actors and these lines of dialogue into that movie.
[Narrator] Then at the start of the 48-hour mark, the team armed Benjamin with the actors' recordings, the scripts, and the public domain films to begin creating new scenes that would be edited into a final movie.
If this fails in its experiment, I'll be employable for the rest of time.
And if it doesn't fail and it in fact works, then you know I may not be employable as an actor.
But at least, I will have been there at the moment when we realized we were going to be replaced by computers.
[Narrator] Even if the film isn't a narrative success, the creators say, it holds a mirror up to the technologies that are increasingly blurring the line between truth and fiction, both in the movies and in real life.
That's so Benjamin.
[Narrator] In the last year, researchers digitally produced a lip-synced speech by former President Barack Obama.
The infrastructure that creates good, new jobs.
[Narrator] And users of so-called deep fake technology put the faces of celebrities on the bodies of porn actors.
We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time.
[Narrator] It's become enough of an issue that director Jordan Peele made this PSA in April of this year.
This is a dangerous time.
A lot of times in movies you can't tell, but now you're watching news footage or interviews, and you just can't tell if it's real or not.
It's unsettling.
[Narrator] Unsettling maybe, but the filmmakers might say that's the whole idea behind the project.
Putting it in something that's more playful and entertaining is going to get a wider reach for that.
It's gonna help more people understand that this stuff can be done without needing armies of CG experts any more.
Good artists borrow.
Great artists steal.
So perhaps, Benjamin will prove to be the greatest of us all.
[Narrator] Or not.
In the end, the script was classic Benjamin, non-sequiturs and all.
I am aware of that.
What about the doctor? You can't do that.
[Narrator] And because of processing time, only first drafts of the face swaps were ready by deadline.
This scene shows perhaps the best face swapping, but it had to be cut to meet the contest's requirements for length.
This is incredible.
[Narrator] Still, the new film which is called Zone Out has its moments.
You don't know where this is going.
[Narrator] And it gives us a look at what happens when you unleash an AI on a creative project.
Even its musical score was generated by an AI.
You can judge for yourself.
The director's cut, including the deleted scene, is up now on Wired's sister site, Ars Technica.
How do you know what happens to that guy? I'm doing it for you.
Are you sure you need a problem?
"
|
1,549 | 2,019 |
"San Francisco Could Be First to Ban Facial Recognition Tech | WIRED"
|
"https://www.wired.com/story/san-francisco-could-be-first-ban-facial-recognition-tech"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gregory Barber Business San Francisco Could Be First to Ban Facial Recognition Tech Getty Images Save this story Save Save this story Save Application Ethics Face recognition Regulation Company Amazon Microsoft End User Government Sector Public safety Source Data Images Technology Machine vision If a local tech industry critic has his way, San Francisco could become the first US city to ban its agencies from using facial recognition technology.
Aaron Peskin, a member of the city’s Board of Supervisors, proposed the ban Tuesday as part of a suite of rules to enhance surveillance oversight. In addition to the ban on facial recognition technology, the ordinance would require city agencies to gain the board’s approval before buying new surveillance technology, putting the burden on city agencies to publicly explain why they want the tools as well as the potential harms. It would also require an audit of any existing surveillance tech in use by the city, such as gunshot-detection systems, surveillance cameras, or automatic license plate readers; officials would have to report annually on how the technology was used, community complaints, and with whom they share the data.
Those rules would follow similar ordinances passed in nearby Oakland and Santa Clara County. But with facial recognition, Peskin argues an outright ban makes more sense than regulating its use. “I have yet to be persuaded that there is any beneficial use of this technology that outweighs the potential for government actors to use it for coercive and oppressive ends,” he says.
Facial recognition technology is increasingly common for unlocking our phones and tagging our Facebook friends, but it remains rife with potential bias, especially around identifying people of color.
In the hands of government, critics like Peskin argue, it enables all-too-easy access to real-time surveillance, especially given the availability of large databases of faces and names (think your driver’s license or LinkedIn).
“This is the first piece of legislation that I’ve seen that really takes facial recognition technology as seriously as it is warranted and treats it as uniquely dangerous,” says Woodrow Hartzog, professor of law and computer science at Northeastern University.
Privacy laws in Texas and Illinois require anyone recording biometric data, including face scans and fingerprints, to give people notice and obtain their consent. But that’s not always so effective in practice, explains Hartzog. As the technology grows more pervasive, simply declining to participate becomes less practical. The San Francisco proposal, while not addressing private surveillance in public spaces, takes a different tack. “Moratoriums and bans prevent the technology from getting embedded in everything,” Hartzog says. “Abuse doesn’t happen at the outset. It happens when the technology becomes entrenched and dismantling it becomes unimaginable.” Those concerns have been echoed by prominent tech executives, including Microsoft CEO Satya Nadella, who last week in Davos warned that the use of facial recognition technology could become a “race to the bottom” without government oversight. According to Microsoft, the potential for abuse may put facial recognition beyond the reach of industry self-policing.
But the technology draws continued interest from law enforcement. Amazon’s Rekognition system has been tested by police in Orlando and in Washington County, Oregon. In the Bay Area, an official at BART, the regional mass transit system, briefly floated using facial recognition technology after a string of violence at stations last fall. That proposal was swiftly swatted down by privacy advocates.
Peskin is a well-known gadfly to tech, with proposals aimed at the heart of the local industry, not all of which have proceeded smoothly. Last year, in response to Facebook’s string of privacy gaffes, he sponsored legislation to strip the name of Facebook CEO Mark Zuckerberg from the city’s main public hospital. Another proposal would have banned workplace cafeterias in an effort to help restaurants struggling to woo customers.
This proposal, cosponsored by Board of Supervisors president Norman Yee, could run afoul of law enforcement agencies. A bill in the California legislature last year that would have given municipalities oversight over local law enforcement’s use of surveillance technology, and which did not single out facial recognition, failed after facing opposition from police groups. When reached for comment, the San Francisco Sheriff’s Office responded that it was still reviewing the proposal, and the San Francisco Police Department said it does not comment on proposed legislation.
In any case, when San Francisco tries something, people tend to watch, says Matt Cagle, an attorney with the ACLU of Northern California, which supports the legislation. “The city at the core of our technology center is saying we shouldn’t deploy surveillance technologies just because we can.”
"
|
1,550 | 2,017 |
"Can Apple's iPhone X Beat Facial Recognition's Bias Problem? | WIRED"
|
"https://www.wired.com/story/can-apples-iphone-x-beat-facial-recognitions-bias-problem"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Klint Finley Business Can Apple's iPhone X Beat Facial Recognition's Bias Problem? Phil Schiller, senior VP of worldwide marketing at Apple, speaks about the Face ID feature used to unlock the iPhone X on September 12, 2017.
JIM WILSON/The New York Times/Redux Save this story Save Save this story Save Joy Buolamwini once built a robot that could play peekaboo. But there was just one problem: It couldn't see her. Buolamwini is black, and the facial-recognition software she used couldn't recognize her face. The software worked well enough with lighter-skinned people, so Buolamwini moved on to other projects. "[I] figured, you know what, somebody else will solve this problem," she explained in a TEDx talk about her work.
But it didn't get solved, at least not right away. Buolamwini continued to encounter facial-recognition software that just couldn't see her. Hers was not an isolated example. In 2009, two coworkers created a video that went viral showing how an HP webcam designed to track people's faces as they moved followed the white worker but not her black colleague. In 2015, web developer Jacky Alciné tweeted a screenshot that showed Google Photos labeling a picture of him and a friend as gorillas.
On Tuesday, Apple introduced its own facial-recognition program, Face ID, that will unlock its new iPhone X. Now we will learn whether Apple was able to overcome such problems.
Apple, which did not respond to an interview request, has had years to learn from the mistakes of previous systems. There are some indications it is applying those lessons. Face ID uses an infrared camera to create three-dimensional models of its users’ faces, which, in theory, could prove more nuanced than previous two-dimensional systems. Its website for the new iPhone X shows Face ID working with a person of color.
During its two-hour new product event, the company showed another face-detection feature—part of its automated portrait-lighting mode—working with people with a variety of skin tones. But we won't know for sure how well Face ID works in the real world until enough iPhone Xs are in the hands of customers.
Solving these problems matters, and not just for Apple. As the use of facial recognition technology by law enforcement expands, the consequences of malfunctions will be more severe. "My friends and I laugh all the time when we see other people mislabeled in our photos," Buolamwini said during her TEDx talk. "But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties." There are technical reasons that previous facial-recognition systems failed to recognize black people correctly. In a blog post , HP blamed the lighting conditions in the viral video for its camera's failure. In an article for Hacker Noon , Buolamwini points out that a camera's default settings can affect how well it's able to process images of different skin tones. But Buolamwini argues that these issues can be overcome.
Facial-recognition software works by training algorithms with thousands, or preferably millions, of examples, and then testing the results. Researchers say the problematic facial-recognition systems likely were given too few black faces and can only identify them under ideal lighting conditions. Stanford University computer science professor Andrew Ng, who helped build Google's artificial-intelligence platform Google Brain, and Michigan State professor and machine-vision expert Anil Jain say facial-recognition systems need to be trained with more diverse samples of faces.
Researchers call this type of problem, when underlying biases influence the resulting technology, "algorithmic bias." Other examples include photo sets used to train image-recognition algorithms that identify men in kitchens as women, job-listing systems that show more high-paying jobs to men than women, or automated criminal-justice systems that assign higher bail or longer jail sentences to black people than white people. Buolamwini founded a group called the Algorithmic Justice League to raise awareness of algorithmic bias, collect examples, and ultimately solve the problem.
Apple's use of infrared will make Face ID less susceptible to lighting problems. But the technology alone can't overcome the potential for algorithmic bias. "The face recognition system still has to be trained on faces of different demographic types," Jain says.
If Apple's software proves more capable than facial recognition systems of the past, it will be because the company took this into account while training it.
"
|
1,551 | 2,019 |
"Amazon Joins Microsoft's Call for Rules on Facial Recognition | WIRED"
|
"https://www.wired.com/story/amazon-joins-microsofts-call-rules-facial-recognition"
|
"Tom Simonite Business Amazon Joins Microsoft's Call for Rules on Facial Recognition An employee pushes a cart inside an Amazon distribution center. The retailer's sprawling interests include providing facial recognition software to law enforcement.
Jan Woitas/Getty Images In Washington County, Oregon, sheriff’s deputies send photos of suspects to Amazon’s cloud computing service.1 The ecommerce giant’s algorithms check those faces against a database of tens of thousands of mugshots, using Amazon’s Rekognition image analysis service.
Such use of facial recognition by law enforcement is essentially unregulated.
But some developers of the technology want to change that. In a blog post Thursday, Amazon asked Congress to put some rules around the use of the technology, echoing a call by Microsoft in December. The announcements come amid growing scrutiny on the use and accuracy of facial recognition by researchers, lawmakers, and civil liberties groups.
In the post, Michael Punke, vice president of global public policy at Amazon’s cloud division, AWS, wrote that the company “supports the creation of a national legislative framework covering facial recognition through video and photographic monitoring on public or commercial premises.” Amazon has been pressured by civil rights groups after tests by academics and the ACLU found that Rekognition’s image analysis and face recognition functions are less accurate for black people. Two researchers reported in January that an AWS service that attempts to determine the gender of people in photos, separate from the face recognition service, is much less accurate for black women. When the ACLU tested Amazon’s face recognition service using images of congressmembers, the service—incorrectly—found matches for 28 of them in a collection of mugshots.
The false positives were disproportionately people of color.
Amazon has pushed back on those studies. Punke’s post Thursday said that in both cases Rekognition was “not used properly”—an assertion denied by the outside researchers.
Still, Amazon’s Thursday blog post showed that the company appears to recognize there is cause for concern.
Amazon wants legislation “that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology,” Punke wrote. His post says the message is aimed at lawmakers, and informed by talks with customers, researchers, academics, and policymakers. Amazon declined to make Punke or anyone else available to discuss the proposals.
Amazon’s call for federal action on facial recognition echoes a December appeal by Microsoft president Brad Smith, who asked governments to regulate the technology to prevent privacy invasions or new forms of discrimination. “We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition,” Smith said in December.
Some lawmakers want to take up the suggestion. Last November, eight Democratic members of Congress wrote to Amazon CEO Jeff Bezos asking him about privacy protections built into Rekognition, and to release data on its accuracy on different demographic groups. A bill under consideration in Washington state that has support from Microsoft would ban use of facial recognition on surveillance feeds in the absence of a warrant except in emergencies, while a bill proposed in Massachusetts would impose a temporary moratorium on the technology until new regulations are in place. Amazon declined to comment on the proposed Washington state law. A member of San Francisco’s board of supervisors wants to ban city agencies from using the technology altogether.
Neither Microsoft nor Amazon is risking much immediate revenue by seeking restrictions on how customers use one of their products, says Clare Garvie, a fellow at Georgetown University’s Center on Privacy and Technology. Despite their prominence, Garvie says neither company is a major player in the market supplying US law enforcement or government agencies with facial recognition software.
That sphere is dominated by less familiar names such as IDEMIA, which helps with US passport applications, and NEC Corporation, which works on a Customs and Border Protection trial checking international passengers at some airports. An NEC spokesperson directed WIRED to a December statement by company president and CEO Takashi Niino, who said he “welcomes this debate” about regulating facial recognition. IDEMIA did not respond to a request for comment.
Amazon’s cloud division has shown interest in government contracts. It has won several large federal deals, including with the CIA, and remains in the bidding for JEDI, a $10 billion Pentagon contract.
At the WIRED25 conference last year , Bezos said tech companies should be proud to work with the US government and military. “I like this country,” he said.
Amazon’s post Thursday shows how the company has shifted its thinking on how law enforcement should use its technology. The blog post says that when law enforcement agencies use facial recognition they should configure it to report that a face matches another only when the software is 99 percent confident.
However, a 2017 post on the AWS site by a systems analyst from the Washington County Sheriff’s Office shows code that uses only an 85 percent confidence threshold. A year later, Amazon criticized the ACLU study, in which members of Congress were incorrectly matched with mugshots, for using Amazon’s system default of 80 percent, saying it guided law enforcement to use a 95 percent threshold. A day later, the company recommended a 99 percent threshold instead.
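The threshold at issue is a single parameter in Amazon's API. The sketch below is a hedged example of how a Rekognition caller might set it in Python with boto3; the collection name, S3 bucket, and image key are invented, and only the 99 figure reflects Amazon's stated guidance for law enforcement.

import boto3

# Hypothetical collection and image names, for illustration only.
rekognition = boto3.client("rekognition", region_name="us-west-2")

response = rekognition.search_faces_by_image(
    CollectionId="mugshots",
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "suspect.jpg"}},
    FaceMatchThreshold=99,  # Amazon's recommended setting for law enforcement
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])

# Lowering FaceMatchThreshold to 80 or 85 returns weaker matches, which is the
# configuration choice the studies criticized.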
Last week, the Washington County sheriff’s office told Gizmodo that it didn’t use any threshold when employing Amazon’s service. Deputy Jeff Talbot says the office has taken care to design safe protocols around its use of facial recognition. It doesn’t set a threshold because the tool is designed to provide leads for investigators, who make the call on identifying suspects, he says. “We are in full support of building legislation to regulate the appropriate and responsible uses of the technology and willing to be part of the conversation,” he says.
Garvie, the Georgetown fellow, says she’s encouraged that industry is seeking rules for law enforcement use of facial recognition. But she says it’s not clear if the shift reflects heightened awareness of the technology’s potential harms, or an attempt to get ahead of growing pressure from lawmakers or the public. “They may see that regulation is inevitable, or that agencies have become a bit uncomfortable with the idea of using unregulated technology,” Garvie says.
Amazon’s post suggests that although more rules are needed, the problem isn’t urgent. It claims the company’s service has a “strong track record,” and states that “in the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement.” Garvie says that’s nonsensical given the lack of agreed guidelines for how law enforcement should use facial recognition. Georgetown research has found many agencies don’t have checks and balances, or audits, on their use of the technology. “What does misuse mean when there are no rules on use versus misuse?” she says.
ACLU senior legislative counsel Neema Singh Guliani cites that suspect claim as a reason Amazon can’t be trusted to work with law enforcement. The company hasn’t shown it is willing to take proper responsibility for a potentially dangerous technology, she says. “[This] reinforces the urgent need for Amazon to get out of the surveillance business altogether.” 1 CORRECTION, 10:50PM: An earlier version of this story incorrectly said the Washington County sheriff uses a mobile app to send photos of suspects to Amazon's cloud computing service.
"
|
1,552 | 2,021 |
"Facebook Is Everywhere; Its Moderation Is Nowhere Close | WIRED"
|
"https://www.wired.com/story/facebooks-global-reach-exceeds-linguistic-grasp"
|
"Tom Simonite Business Facebook Is Everywhere; Its Moderation Is Nowhere Close Facebook users who speak languages such as Arabic, Pashto, or Armenian are effectively second class citizens of the world’s largest social network.
Illustration: Elena Lacey; Getty Images Facebook launched support for Arabic in 2009 and scored a hit. Soon after, the service won plaudits for helping the mass protests known as the Arab Spring.
By last year, Arabic was the third most common language on the platform, with people in the Middle East and North Africa spending more time each day with Facebook’s services than users in any other region.
When it comes to understanding and policing Arabic content, Facebook has been less successful, according to two internal studies last year. One, a detailed account of Facebook’s handling of Arabic, warns that the company’s human and automated reviewers struggle to comprehend the varied dialects used across the Middle East and North Africa. The result: In a region wracked by political instability, the company wrongly censors benign posts for promoting terrorism while exposing Arabic speakers to hateful speech they shouldn’t see.
“Arabic is not one language,” the study says. “It is better to consider it a family of languages—many of which are mutually incomprehensible.” The documents on Facebook’s foibles with Arabic are part of a tranche of internal material, known collectively as The Facebook Papers , that shows the company struggling—or neglecting—to manage its platform in places that are far from its headquarters in California, in regions where the vast majority of its users live. Many of these markets are in economically disadvantaged parts of the world, afflicted by the kinds of ethnic tensions and political violence that are often amplified by social media.
The documents were disclosed to the Securities and Exchange Commission and provided to Congress in redacted form by legal counsel for ex-Facebook employee Frances Haugen.
The redacted versions were reviewed by a consortium of news organizations, including WIRED.
The collection offers a limited view inside the social network but reveals enough to illustrate the immense challenge created by Facebook’s success. A site for rating the looks of women students at Harvard evolved into a global platform used by nearly 3 billion people in more than 100 languages. Perfectly curating such a service is impossible , but the company’s protections for its users seem particularly uneven in poorer countries. Facebook users who speak languages such as Arabic, Pashto, or Armenian are effectively second class citizens of the world’s largest social network.
Some of Facebook’s failings detailed in the documents involve genuinely hard technical problems. The company uses artificial intelligence to help manage problematic content—at Facebook’s scale humans cannot review every post. But computer scientists say machine learning algorithms don’t yet understand the nuances of language. Other shortcomings appear to reflect choices by Facebook, which made more than $29 billion in profit last year, about where and how much to invest.
For example, Facebook says nearly two-thirds of the people who use the service do so in a language other than English and that it regulates content in the same way globally. A company spokesperson said it has 15,000 people reviewing content in more than 70 languages and has published its Community Standards in 50. But Facebook offers its service in more than 110 languages; users post in still more.
A December 2020 memo on combating hate speech in Afghanistan warns that users can’t easily report problematic content because Facebook had not translated its community standards into Pashto or Dari, the country’s two official languages. Online forms for reporting hate speech had been only partially translated into the two languages, with many words presented in English. In Pashto, also widely spoken in Pakistan, the memo says Facebook’s translation of the term hate speech “does not seem to be accurate.” “When combating hate speech on Facebook, our goal is to reduce its prevalence, which is the amount of it that people actually see,” a Facebook spokesperson said in a statement. The company recently released figures suggesting that on average, this has declined worldwide since mid-2020. “This is the most comprehensive effort to remove hate speech of any major consumer technology company, and while we have more work to do we remain committed to getting this right.” For Arabic, most of Facebook’s content review takes place in Casablanca, Morocco, one document says, using locally recruited staff. That means errors when handling content from outside North Africa are “virtually guaranteed,” the document says.
Even in North African dialects, errors are a problem. The document cites the case of Hosam El Sokkari, previously the BBC’s head of Arabic, who in 2020 found himself unable to livestream on Facebook because the company said a 2017 post written in Egyptian Arabic that criticized a conservative Muslim cleric promoted terrorism. Algorithms flagged the post for breaking Facebook’s rules and human reviewers concurred, according to the Wall Street Journal.
El Sokkari’s account was later locked after Facebook told him several of his other posts breached its policies. The document says an internal investigation found that staff who reviewed “a set” of El Sokkari’s posts wrongly took action against them 90 percent of the time.
A Facebook spokesperson said the company reinstated El Sokkari’s posts after it became aware they had been mistakenly removed; Facebook is reviewing options to address the challenges of handling Arabic dialects, including hiring more content reviewers with diverse language skills.
A document reviewing Facebook’s moderation across the Middle East and North Africa, from December 2020, says algorithms used to detect terrorist content in Arabic wrongly flag posts 77 percent of the time—worse than a coin flip. A Facebook spokesperson said the figure is wrong, and that the company has not seen evidence of such poor performance.
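Read as a precision figure, the claim is easy to make concrete. The arithmetic below assumes the 77 percent refers to the share of flagged posts that reviewers judged to be wrongly flagged; the counts are invented for illustration.

flagged = 1000          # hypothetical posts flagged as terrorist content
wrongly_flagged = 770   # 77 percent of those flags judged incorrect on review

precision = (flagged - wrongly_flagged) / flagged
print(f"precision: {precision:.2f}")  # 0.23, versus 0.50 for a coin flip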
That document also warns that flagging too many posts for terrorism may be harming Facebook’s business prospects. The company’s most recent earnings report said revenue per user grew fastest in its geographic category that includes the Middle East. The document says that when owners of advertiser accounts that had been disabled appealed Facebook’s decision, nearly half proved to have been shuttered incorrectly. It suggests that video views and growth in the region are constrained because accounts are being wrongly penalized.
Rasha Abdulla, a professor at the American University in Cairo who studies social media, says the findings of Facebook’s research confirm suspicions by outsiders that the company quashes innocent or important content, such as jokes, news coverage, and political discussion. She believes the problem has worsened as the company has added more automation. “We really started seeing these problems arise in recent years, with increasing use of algorithms and AI,” she says.
Increased reliance on algorithms is at the heart of Facebook’s strategy for content moderation. The company recently said machine learning has reduced how often Facebook users encounter hate speech. But Facebook does not disclose data on how its technology performs in different countries or languages.
Internal Facebook documents show some staff expressing skepticism and include evidence that the company’s moderation technology is less effective in emerging markets.
One reason for that is a shortage of human-labeled content needed to train machine learning algorithms to flag similar content by themselves. The 2020 document that discussed Arabic dialects says Facebook needs a pool of workers who understand the full diversity of Arabic to properly track problem content and train algorithms for the different dialects. It says a lead engineer on hate speech work considered building such systems impossible. “As it stands, they barely have enough content to train and maintain the Arabic classifier,” the document says.
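The dependence on labeled examples is visible even in a toy classifier. The sketch below uses scikit-learn as a stand-in, with invented placeholder strings for posts; it is not Facebook's system, only an illustration of why a model trained on labels from one dialect has little signal for another.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder examples standing in for human-labeled posts.
train_texts = [
    "example hateful post in dialect A",
    "example benign post in dialect A",
    "another hateful post in dialect A",
    "another benign post in dialect A",
]
train_labels = [1, 0, 1, 0]  # 1 = violates policy, 0 = benign

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# A post in an unfamiliar dialect shares little vocabulary with the training set,
# so the score is close to a guess; without labeled data in that dialect there is
# nothing for the model to learn from.
print(classifier.predict_proba(["unfamiliar post in dialect B"]))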
Earlier this month, Facebook agreed to commission an independent check on its content moderation for Arabic and Hebrew. The suggestion had come from Facebook’s Oversight Board of outside experts funded by the company, after reviewers incorrectly removed an Egyptian user’s post of a report by Al Jazeera Arabic on threats of violence by the military wing of Hamas. Facebook had already reinstated the post.
No one has ever had to manage a global network like Facebook’s that reaches into nearly every country, language, and community on earth. Internal documents show staff functioning like an internet age diplomatic corps, attempting to apply data science to the world’s thorniest conflicts. Documents show the company attempting to prioritize extra language and automated content moderation resources for a list of “at-risk countries” where violence or other harms are considered most likely. A version of the list for 2021 shows 10 countries on the top tier, including Pakistan, Ethiopia, and Myanmar—where the UN said Facebook posts played a “determining role” in 2017 attacks on the country’s Muslim Rohingya minority. A December 2020 document describes a push to hire staff with expertise in those countries and their languages. It says the company lacks such coverage for four of the 10 countries on the top tier.
Facebook says it has automated systems to find hate speech and terrorism content in more than 50 languages.
In internal posts, some Facebook engineers express blunt pessimism about the power of automation to solve the company’s problems. A 2019 document estimates that properly training a classifier to detect hate speech in a market served by Facebook requires 4,000 manual content reviews a day. When one employee asks if that number might shrink as systems get better, a coworker says the answer is no because the company’s algorithms are immature, like elementary school students: “They need teachers (human reviewers) to grow.” A Facebook data scientist who worked on “violence and incitement” before leaving the company last December estimated in a goodbye post included in Haugen’s documents and previously reported by BuzzFeed News that the company removes less than 5 percent of hate speech on the platform—and claimed AI can’t significantly improve that. “The problem of inferring the semantic meaning of speech with high precision is not remotely close to solved,” the data scientist wrote.
Facebook says figures from June showed that on average across the world, the amount of hate speech users saw on Facebook fell by half in the previous nine months.
The company doesn’t disclose information on patterns for individual countries or languages.
The departing data scientist argued the company could do more, saying employees working on content problems were given impossible remits. The author of the post described a deep sense of guilt over having to prioritize work on US English while violence flared in Armenia and Ethiopia and claimed Facebook has an easy way to improve its global moderation. “It’s just not reasonable to have one person responsible for data science for all of violence and incitement for the entire world,” the post said. “We can afford it. Hire more people.” Updated, 10-25-21, 3:35pm ET: This article has been updated to include additional information from Facebook about the number of languages in which it has automated systems to identify hate speech and the number of languages in which its community standards are available.
"
|
1,553 | 2,019 |
"Behind the Rise of China's Facial-Recognition Giants | WIRED"
|
"https://www.wired.com/story/behind-rise-chinas-facial-recognition-giants"
|
"Tom Simonite Business Behind the Rise of China’s Facial-Recognition Giants Gilles Sabrie/The New York Times/Redux Unfamiliar faces aren’t welcome at Beijing public housing projects. To prevent illegal subletting, many have facial-recognition systems that allow entry only to residents and certain delivery staff, according to state news agency Xinhua. Each of the city’s 59 public housing sites is due to have the technology by year’s end.
Artificial intelligence startup Megvii mentioned a similar public housing security contract in an unspecified Chinese city in filing for an initial public offering in Hong Kong last week. The Chinese company, best-known for facial recognition, touts its government dealings, including locking down public housing to curb subletting, as a selling point to potential investors.
Megvii’s filing shows the scale of China’s ambitions in artificial intelligence and how they could influence the use of surveillance technologies like facial recognition around the world. The company is one of four Chinese AI startups specializing in facial recognition valued at more than $1 billion, qualifying them as unicorns in Silicon Valley-speak. Now, the companies are looking to expand overseas, with help from public markets.
“These companies have benefited from China’s government making it a national priority to be the world leader in AI ,” says Rebecca Fannin, author of the forthcoming Tech Titans of China and two previous books about China’s tech scene. That support has led to contracts and freed up government and private funds, she says. “Now you are starting to see these companies go global.” Freedom House, a US-government-backed nonprofit, warned in a report last October that Chinese surveillance deals also export the country’s attitudes to privacy and could encourage companies and governments to collect and expose sensitive data. It argues that companies and products built to serve government agencies unconcerned about privacy are unlikely to become trustworthy defenders of human rights elsewhere, and can be forced to serve Chinese government interests.
Megvii’s filing says it has raised more than $1.3 billion, primarily from Chinese investment funds and companies, including ecommerce giant Alibaba. One of China’s state-owned VC funds also has a stake and a seat on the startup’s board. Other backers include US-based venture firm GGV and the sovereign wealth funds of Abu Dhabi and Kuwait. CB Insights said the company was valued at $4 billion earlier this year.
Reuters reports that its public listing will raise at least $500 million, but the figure is redacted from the company’s filing. Fannin says larger rival SenseTime, valued at $4.5 billion per CB Insights and also expanding overseas, is expected to go public soon. Its international ambitions could be helped by having US investors in Qualcomm and Silver Lake Capital.
Demand for Megvii’s computer-vision systems is growing rapidly. The company reported revenue of 1.4 billion yuan ($200 million) in 2018, more than four times a year earlier. It posted losses of 3.4 billion yuan ($469 million). The “City IoT” segment of its business that provides surveillance and security systems, like access control for public housing, accounts for nearly three-fourths of its revenue and has customers in more than 15 “countries and territories” outside China. That division also offers software that can spot traffic offenses or changing traffic flows caught on video.
In the first six months of this year, Megvii says 4.9 percent of its revenue came from outside China, compared with 2.7 percent for all of last year. Now, it plans to establish joint ventures or offices in Japan, Singapore, Thailand, and the Middle East.
Megvii was founded in 2011 by Yin Qi and two friends from Beijing’s elite Tsinghua University. The company’s name is short for "mega vision"—also the rough translation of its Chinese name, 旷视. Their venture was perfectly timed to surf the swell of interest in AI prompted by the emergence of a technology called deep learning in 2012, which made software that interprets images much more accurate.
Since then, Megvii and rival AI unicorns SenseTime, CloudWalk, and Yitu have made facial recognition commonplace in China, where police scan public spaces for suspects, and citizens pay in stores and pay taxes with their faces. Recently, Chinese startups, along with some from Russia, have dominated the US National Institute of Standards and Technology’s rankings of facial-recognition accuracy.
One of Megvii’s early successes came in 2015, when Alibaba affiliate Ant Financial used its technology to launch a feature called Smile to Pay. Megvii now provides facial recognition to government surveillance projects in China, and face authentication to banks and smartphone manufacturers, including Oppo, which Counterpoint Research says is the world’s fifth-largest phone maker by shipments. Megvii’s researchers regularly win academic contests for computer vision algorithms and have defeated competitors from Google and Microsoft.
Facial recognition is the technology Megvii is known for and has offered longest, and AI-powered security and surveillance appears to be the company’s main business. The startup also has a warehouse robotics group, and it touts beautifying algorithms that can remove pimples and reshape your body in selfies.
Police in China use facial recognition to pluck persons of interest from concert crowds, and have even used wearable Google Glass-style devices that allow a cop to scan the face of anyone they’re looking at.
The technology is part of the intense security apparatus assembled to watch over China’s northwest Xinjiang region, where an estimated 1 million Uighur Muslims have been swept into internment camps.
The New York Times reported in April that Megvii, SenseTime, and CloudWalk had helped create surveillance software that looks for Uighur faces. Megvii’s PR company said the company doesn’t design or customize its products to target ethnic groups.
Megvii’s filing mentions police use of its technology but not Xinjiang or algorithms that try to detect ethnicity. It presents case studies that make its surveillance offerings sound more cuddly.
One describes a 2018 incident in which police in northern China used Megvii’s technology to identify an elderly man who had forgotten his name and address and then escorted him home. The filing also mentions technology Megvii patented to identify dogs from their nose prints, used by Beijing authorities to manage strays.
US authorities are unlikely to buy from Megvii, despite the startup's having a research lab in the Seattle suburb that’s also home to Microsoft. US agencies are traditionally wary of security technology from places that aren’t close allies of America. Megvii’s filing warns that it could be harmed by the Trump administration’s tariffs on China or knock-on effects from its suspicion of telecoms company Huawei.
Fannin says the company will have an easier time in other parts of Asia, South America, the Middle East, and Africa. All have become important markets for Chinese technology companies, including Hikvision, the world’s largest vendor of security cameras.
Megvii surveillance technology is already available in Thailand, and its competitors are also beginning to find success outside China. Yitu provides facial recognition to police in Malaysia, and CloudWalk won a contract to build a national facial-recognition system for Zimbabwe through China’s Belt and Road program of international infrastructure deals.
Megvii’s filing devotes significant space to assuring would-be investors that it can be trusted to use AI appropriately. It includes the company’s AI ethics code of conduct and says that customer contracts forbid using its technology to infringe human rights. Megvii also lists an AI ethics committee that reports to its board, including company executives and directors, and some outsiders.
One member listed, Emmanuel Lagarrigue, chief innovation officer at France-based Schneider Electric, describes the committee as a “work in progress.” He says its membership has not been finalized but that Megvii deserves credit for creating the group. “It shows the company's willingness to be proactive and transparent on how it wants to operate and on how its technology is deployed,” Lagarrigue says.
Jeffrey Ding, who studies Chinese AI development at Oxford’s Future of Humanity Institute, says that on paper Megvii’s ethics structure appears stronger than that of some large US companies. Google and Microsoft have released statements of AI principles , but they are implemented only via internal review processes.
However, Chinese companies aren’t totally in control of their own destinies. “The economy is a relatively free market, but the party and state can potentially completely control any company at any time,” Ding says. “Ethical obligations from Chinese companies come with a little bit less weight.” Updated, 9-4-19, 1:40pm ET: This article has been updated to reflect that Ant Financial used Megvii's technology to launch Smile to Pay.
"
|
1,554 | 2,017 |
"How Secure Is the iPhone X's FaceID? Here's What We Know | WIRED"
|
"https://www.wired.com/story/iphone-x-faceid-security"
|
"Andy Greenberg Security How Secure Is the iPhone X's FaceID? Here's What We Know Phil Schiller, senior vice president of worldwide marketing at Apple Inc., speaks about the iPhone X and the new FaceID feature on September 12, 2017.
David Paul Morris/Bloomberg/Getty Images In its quest for hardware perfection, Apple can't seem to resist testing the balance between making things easy and making them secure. Sure, a six-digit passcode is virtually impossible for a thief to crack before his repeated attempts lock the phone, but it demands an unacceptable fraction of a second for you to tap it out. Even TouchID requires a home button that Apple has deemed unsightly. Now, in its continuing war on inconvenience, Apple has replaced TouchID in its new flagship iPhone X with FaceID, a system where your face acts as the password. In doing so, it's about to give an unproven biometric security technology its biggest field test yet.
In theory, FaceID simply requires you to look at your phone and it will recognize you in a split-second and unlock itself. FaceID will be integrated beyond the lock screen too, in everything from downloading new apps to making payments with Apple Pay.
"With the iPhone X, your iPhone is locked until you look at it and it recognizes you. Nothing has ever been more simple, natural, and effortless," Apple exec Phil Schiller effused in the launch keynote. "This is the future of how we'll unlock our smartphones and protect our sensitive information." If so, Apple's version will have to overcome the deficiencies of the past. And while FaceID appears to improve on previous implementations in key ways, using your face as the sole key to your device's contents presents larger issues that may be harder to overcome.
Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera.
In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.
Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape—a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.
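To make the depth idea concrete, the sketch below treats the enrolled face and a probe as small depth maps and compares them point by point. The arrays, tolerance, and matching rule are invented for illustration and are far simpler than whatever Apple actually computes; the point is only that a flat printed photo carries no relief to match.

import numpy as np

np.random.seed(0)

# Hypothetical 4x4 depth maps (millimeters from the camera), standing in for the
# dense 3-D mesh the 30,000-dot TrueDepth grid produces.
enrolled = np.array([
    [420, 418, 418, 421],
    [415, 405, 405, 416],
    [414, 400, 401, 415],
    [419, 412, 412, 420],
], dtype=float)

probe_live = enrolled + np.random.normal(0, 1.0, enrolled.shape)  # real face, small sensor noise
probe_photo = np.full_like(enrolled, 417.0)                       # flat printed photo, no relief

def depth_match(template, probe, tolerance_mm=5.0):
    """Accept only if the average point-wise depth error is small."""
    return float(np.mean(np.abs(template - probe))) < tolerance_mm

print(depth_match(enrolled, probe_live))   # True: the 3-D shape agrees
print(depth_match(enrolled, probe_photo))  # False: a flat image fails the depth check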
That 3-D shape should prove vastly tougher for anyone to spoof than the simpler image recognition previous systems deployed. But not impossible, insists Marc Rogers, a security researcher at Cloudflare who was one of the first to demonstrate spoofing a fake fingerprint to defeat TouchID. Rogers says he has no doubt that he—or at least someone—will crack FaceID. In an interview ahead of Apple's FaceID announcement, Rogers suggested that 3-D printing a target victim's head and showing it to their phone might be all it takes. "The moment someone can reproduce your face in a way that can be played back to the computer, you've got a problem," Rogers says. "I'd love to start by 3-D-printing my own head and seeing if I can use that to unlock it." After all, even 3-D facial recognition systems have been spoofed before: Two years ago Berlin-based SR Labs used a plaster mold of a test subject's face to cast a model that beat Microsoft's Hello facial recognition system. That setup was implemented in multiple brands of laptops and used the same sort of infrared depth-sensing cameras. The group didn't publish what kind of material it used in that mold, but SR Labs founder Karsten Nohl notes that it mimicked not only the shape of the target's face but also the light-reflective properties of skin. "It's definitely harder than spoofing a fingerprint," Nohl says.
In his keynote presentation, Apple's Schiller suggested that even that kind of spoofing won't work against FaceID. He showed a photo of minutely detailed masks created by Hollywood special-effects consultants that he said Apple used to test the feature. Schiller didn't, however, go so far as to claim that none of those masks defeated the system.
Big questions remain unanswered about FaceID's security, and it won't be clear how secure the system really is until outside troublemakers such as Rogers or Nohl get a chance to publicly test it. It's possible, for instance, that Apple's facial recognition technology uses color-based image-recognition in its detection scheme, which would require any simulated face designed to spoof the system to be meticulously colored too. But on that point Rogers says FaceID may not actually measure color at all, since it requires processing and depends on variables such as the lighting in the room, your health, and whether you've recently gotten a tan or a sunburn. "Color doesn’t add that much value and it’s very variable," Rogers argues.
Regardless of the specific technological approach, the very notion of using your face as the key to your digital secrets presents some fundamental problems. Unlike a passcode, your face can't easily change. If someone does find a way to spoof it—like the SR Labs method or the 3-D printing Rogers proposes—they can spoof it forever. (As Schiller conceded in his keynote, any identical twins will also need to deeply consider how much they trust their sibling.) Second, it's very hard to hide your face from someone who wants to coerce you to unlock your phone, like a mugger, a customs agent, or a policeman who has just arrested you. In some cases, criminal suspects in the US can invoke the Fifth Amendment protections from self-incrimination to refuse to give up their phone's passcode. That same protection doesn't apply to your face. Apple says that you'll need to look directly into the screen to unlock FaceID, so it won't be easy to trick someone into triggering it, but the cops could simply lock you up for contempt of court until your eyes cooperate.
Both of those issues apply for TouchID too. But FaceID introduces a new problem that TouchID has never had: Your face sits out in the open, displayed in public, and well-documented across social media platforms. Using it as a secret key is a little like writing your PIN on a Post-It note, slapping it on your forehead, and going for a stroll. Even photos on Instagram and Facebook might be enough to compromise your control of your face as a login mechanism. Researchers at the University of North Carolina last year showed that they could use Facebook photos alone to reconstruct a 3-D virtual model of someone's face that could defeat five different facial-recognition applications they tested it against, with between 55 and 85 percent success rates.
None of that makes FaceID useless or broken—far from it. For the average iPhone owner, the difficulty of spoofing FaceID and also gaining physical access to a target iPhone will likely make any attack on it a monumental waste of effort, says Rich Mogull, a security analyst who has long focused on Apple. "If you have to 3-D print a model of someone's face to defeat this, that's probably an acceptable risk for most of the population," Mogull says. "If that's the economic cost to break into one of these devices, we're OK." That said, he adds that those with more security sensitivities should simply turn it off—and TouchID too, for that matter. "If I were an intelligence agent, I wouldn't turn on any biometric," Mogull says.
That caveat isn't an all-or-nothing proposition. Since you can enable or disable FaceID for specific applications, Rogers suggests that cautious users can, for instance, choose to use it for unlocking the phone but not for payments. And Apple seems to have acknowledged that its biometrics aren't an infallible solution. "There's no perfect system," Schiller said during Tuesday's presentation, caveating that another face could unlock the iPhone X one in a million times—although that's among faces chosen at random, not ones carefully designed to mimic yours.
More concrete evidence that Apple recognizes the limitations of FaceID can be found in two other new features in iOS 11. One requires the user to enter the phone's passcode to trust a connection to a new computer, making it far harder to extract the data from an unlocked phone.
The other is an "SOS mode" that allows the user to hit the home or power button five times to disable TouchID or FaceID, depending on the phone's model.
Those features show that even Apple understands the need for layers of security above and beyond FaceID. And Rogers warns that no iPhone owner should harbor any illusion that their phone's facial recognition, as slick as it seems, isn't a security compromise in exchange for convenience. "Apple always wants its user experience to be delightful," Rogers says. "In the security world that means you're going to have to accept certain limitations." And if those limitations mean your most secret of secrets get a little less secure every time someone tags you on Facebook, perhaps you should consider using an old-fashioned passcode instead.
"
|
1,555 | 2,019 |
"The Window to Rein In Facial Recognition Is Closing | WIRED"
|
"https://www.wired.com/story/congress-facial-recognition-privacy-regulation"
|
"Lily Hay Newman Security The Window to Rein In Facial Recognition Is Closing George Etheredge In the wake of jarring revelations about how United States law enforcement agencies have deployed facial recognition, Congress seemed, for a moment, galvanized to act. Based on a Homeland Security Committee hearing in the House Wednesday, that moment appears to be fading—as hundreds of local, state, and federal law enforcement officials continue to amass and access the controversial data every day.
Some municipalities—San Francisco and Somerville, Massachusetts, among them—have proactively banned law enforcement's use of facial recognition. And more localized entities, like the New York State Education Department, have barred it in certain circumstances as well. And even police bodycam maker Axon has declined to incorporate it into its products. But the longer Congress waits to act on a broader level, the more entrenched the technology becomes and the harder it will be for opponents to overcome its inertia.
That tension played out on Capitol Hill Wednesday, where legislators seemed alternately wary of facial recognition's civil rights implications and enthusiastic about its benefits to law enforcement. Some representatives seemed impressed by the technology's accuracy. But others noted that those statistics vary widely based on whether a system is assessing images that are well lit and show full faces, as well as factors like race and sex. The mixed reaction to a panel of Customs and Border Protection, Transportation Security Administration, and Secret Service officials was a stark contrast to two recent House Committee on Oversight and Reform hearings , in which lawmakers expressed deep concern about facial recognition's potential for misuse and abuse.
"Each of these methods present unique privacy considerations, but also clear security benefits," Representative Mike Rogers (R-Alabama) said in Wednesday's hearing. "DHS's primary focus is facial recognition at TSA and CBP checkpoints where travelers are already providing IDs to government employees ... Automating this process with biometric technology will improve transportation security." A major focus of Wednesday's hearing was the May data breach of Customs and Border Protection contractor Perceptics, which exposed photos of travelers and license plates related to about 100,000 people. But that's just one in a sea of recent troubling reports. Georgetown Law’s Center on Privacy and Technology disclosed findings on Sunday, first published by The Washington Post , that the FBI and Immigration and Customs Enforcement both access millions of US citizens' photos through state driver's license databases. Separately, local police forces have been caught experimenting with using unconventional data, like sketches or photos of celebrities that resemble suspects, to feed facial recognition systems.
Absent Congressional traction, privacy and civil liberties groups among others have redoubled their efforts to rein facial recognition in. On Wednesday, ACLU of Massachusetts announced that it is suing the Massachusetts Department of Transportation to find out more about how its driver’s license database is used for facial recognition. Previously, the organization shared emails with WIRED, obtained from a Freedom of Information Act request, that exposed the Massachusetts State Police's lack of oversight regarding the technology's use. Among other questions, ACLU of Massachusetts requested information on how many times the agency's facial recognition database had been queried, by whom, and for what reasons.
"While the auditing feature does allow us to track users, it is not fully comprehensive," wrote Massachusetts State Police privacy officer Jason Stelmat to the ACLU in April. "Our toolset does capture many of the user functions, but it does not cleanly capture every user function performed ... There is no direct feature within the administrative tool to produce the information you are requesting related to Face Match usage." Kade Crockford, director of ACLU of Massachusetts' Technology for Liberty Program, says findings like these surprise her even given the chaotic, unregulated intersection of law enforcement and facial recognition. "I found it shocking that the State Police has a facial recognition program that can access four million mugshots and not only do they not know how many times users have queried that Face Match system, but they cannot know. It means nobody is even curious internally about implementing these types of tools. I find that to be, frankly, appalling." Elsewhere, the digital rights advocacy group Fight for the Future announced on Tuesday that it was launching a campaign to call for a total federal ban of facial recognition surveillance technology. And in advance of Wednesday's hearing, the Electronic Privacy Information Center published a coalition letter to the House Homeland Security Committee calling for DHS to suspend its use of facial recognition on the general public.
"This technology is dangerous, it has the potential to exacerbate existing forms of discrimination, automate racial profiling, and expand other inequities that already exist," says Evan Greer, deputy director of Fight for the Future. "Congress has the authority to pass a law that says law enforcement agencies can't use this technology. That's absolutely within the purview of the legislature." First, though, Congress needs to agree not only on if it should draw a line but where. John Wagner, deputy executive assistant commissioner in CBP's Office of Field Operations, argued Wednesday that CBP's use of facial recognition as part of immigration processing for US citizens does not constitute a "surveillance program," since those travelers would have their photo IDs assessed by a CBP agent anyway. The added layer of facial recognition, the logic goes, is simply a more efficient way to implement existing screening measures. No one on the committee challenged Wagner's assertion, and many legislators seemed emboldened about the promise of facial recognition technology as the hearing went on.
"It's always a balance in this Committee when we deal with security issues—we deal with privacy and civil liberties—we always have to balance these as Americans," said Representative Michael McCaul (R-Texas). "And I think it's important that we balance those factors. But I wouldn't want to throw the baby out with the bathwater." Privacy advocates, perhaps not surprisingly, see instead an effort to normalize facial recognition's unregulated expansion in law enforcement. But they also say that it's not too late to make radical changes to how it's used.
"I absolutely think you can roll back the clock," says ACLU of Massachusetts' Crockford. "This technology was built and deployed by human beings and can be dismantled by human beings. My major concern is whether the political will exists to do it." Additional reporting by Louise Matsakis.
"
|
1556 | 2018 |
"Amazon's Facial Recognition System Mistakes Members of Congress for Mugshots | WIRED"
|
"https://www.wired.com/story/amazon-facial-recognition-congress-bias-law-enforcement"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Brian Barrett Security Lawmakers Can't Ignore Facial Recognition's Bias Anymore The Washington Post/Getty Images Save this story Save Save this story Save Amazon touts its Rekognition facial recognition system as “ simple and easy to use ,” encouraging customers to “detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.” And yet, in a study released Thursday by the American Civil Liberties Union, the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough.
The ACLU study also illustrated the racial bias that plagues facial recognition today. "Nearly 40 percent of Rekognition's false matches in our test were of people of color, even though they make up only 20 percent of Congress," wrote ACLU attorney Jacob Snow. "People of color are already disproportionately harmed by police practices, and it's easy to see how Rekognition could exacerbate that." Facial recognition technology's difficulty detecting darker skin tones is a well-established problem. In February, MIT Media Lab's Joy Buolamwini and Microsoft's Timnit Gebru published findings that facial recognition software from IBM, Microsoft, and Face++ has a much harder time identifying gender in people of color than in white people. In a June evaluation of Amazon Rekognition, Buolamwini and Inioluwa Raji of the Algorithmic Justice League found similar built-in bias. Rekognition even managed to get Oprah wrong.
"Given what we know about the biased history and present of policing, the concerning performance metrics of facial analysis technology in real-world pilots, and Rekognition's gender and skin-type accuracy differences," Buolamwini wrote in a recent letter to Amazon CEO Jeff Bezos, "I join the chorus of dissent in calling Amazon to stop equipping law enforcement with facial analysis technology." Yet Amazon Rekognition is already in active use in Oregon's Washington County. And the Orlando, Florida police department recently resumed a pilot program to test Rekognition's efficacy, although the city says that for now, "no images of the public will be used for any testing—only images of Orlando police officers who have volunteered to participate in the test pilot will be used." Those are just the clients that are public; Amazon declined to comment on the full scope of law enforcement's use of Rekognition.
For privacy advocates, though, any amount is too much, especially given the system's demonstrated bias. "Imagine a speed camera that wrongly said that black drivers were speeding at higher rates than white drivers. Then imagine that law enforcement knows about this, and everyone else knows about this, and they just keep using it," says Alvaro Bedoya, executive director of Georgetown University's Center on Privacy and Technology. "We wouldn't find this acceptable in any other setting. Why should we find it acceptable here?" Amazon takes issue with the parameters of the study, noting that the ACLU used an 80 percent confidence threshold; that's the likelihood that Rekognition found a match, which you can adjust according to your desired level of accuracy. "While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn't be appropriate for identifying individuals with a reasonable level of certainty," the company said in a statement. "When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95 percent or higher." While Amazon says it works closely with its partners, it's unclear what form that guidance takes, or whether law enforcement follows it. Ultimately, the onus is on the customers—including law enforcement—to make the adjustment. An Orlando Police Department spokesperson did not know how the city had calibrated Rekognition for its pilot program.
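To make the threshold dispute concrete, here is a minimal sketch, using AWS's Python SDK (boto3), of how a Rekognition face search can be run at the stricter 95 percent setting Amazon recommends rather than the 80 percent level the ACLU tested. The collection ID, S3 bucket, and image key are placeholders invented for illustration, not details from the ACLU study or any agency's deployment.

```python
# Hypothetical sketch: searching a Rekognition face collection with a stricter
# match threshold. Collection, bucket, and key names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.search_faces_by_image(
    CollectionId="example-face-collection",
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "probe-photo.jpg"}},
    MaxFaces=5,
    FaceMatchThreshold=95,  # the stricter setting Amazon recommends for law enforcement
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"Candidate {face['FaceId']}: similarity {match['Similarity']:.1f}%")
```

Raising the threshold filters out weaker matches, but it is a configuration choice left to the customer; it does not by itself address the demographic accuracy gaps researchers have documented.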
The ACLU counters that 80 percent is Rekognition’s default setting. And UC Berkeley computer scientist Joshua Kroll, who independently verified the ACLU’s findings, notes that if anything, the professionally photographed, face-forward congressional portraits used in the study are a softball compared to what Rekognition would encounter in the real world.
“As far as I can tell, this is the easiest possible case for this technology to work,” Kroll says. “While we haven’t tested it, I would naturally anticipate that it would perform worse in the field environment, where you’re not seeing people’s faces straight on, you might not have perfect lighting, you might have some occlusion, maybe people are wearing things or carrying things that get in the way of their faces.” Amazon also downplays the potential implications of facial recognition errors. “In real world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgement,” the company’s statement reads. But that elides the very real consequences that could be felt by those who are wrongly identified.
"At a minimum, those people are going to be investigated. Point me to a person that likes to be investigated by law enforcement," Bedoya says. "This idea that there's no cost to misidentifications just defies logic." So, too, does the notion that a human backstop provides an adequate check on the system. "Often with technology, people start to rely on it too much, as if it's infallible," says Jeramie Scott, director of the Electronic Privacy Information Center's Domestic Surveillance Project. In 2009, for instance, San Francisco police handcuffed a woman and held her at gunpoint after a license-plate reader misidentified her car. All they had to do to avoid the confrontation was to look at the plate themselves, or notice that the make, model, and color didn't match. Instead, they trusted the machine.
Even if facial recognition technology worked perfectly, putting it in the hands of law enforcement would still raise concerns. "Facial recognition destroys the ability to remain anonymous. It increases the ability of law enforcement to surveil individuals not suspected of crimes. It can chill First Amendment-protected rights and activities," Scott says. "What we're trying to avoid here is mass surveillance." While the ACLU study covers well-trod ground in terms of facial recognition's faults, it may have a better chance at making real impact. "The most powerful aspect of this is that it makes it personal for members of Congress," says Bedoya. Members of the Congressional Black Caucus had previously written a letter to Amazon expressing related concerns, but the ACLU appears to have gotten the attention of several additional lawmakers.
The trick, though, will be turning that concern into action. Privacy advocates say that at a minimum, law enforcement’s use of facial recognition technology should be heavily restricted until its racial bias has been corrected and its accuracy assured. And even then, they argue, its scope needs to be limited, and clearly defined. Until that happens, it’s time not to pump the brakes but to slam down on them with both feet.
“A technology that’s proven to vary significantly across people based on the color of their skin is unacceptable in 21st-century policing,” says Bedoya.
This story has been updated to reflect that Timnit Gebru was not involved in Joy Buolamwini's Amazon Rekognition research.
"
|
1557 | 2016 |
"Cops Have a Database of 117M Faces. You’re Probably in It | WIRED"
|
"https://www.wired.com/2016/10/cops-database-117m-faces-youre-probably"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Lily Hay Newman Security Cops Have a Database of 117M Faces. You’re Probably in It Getty Images Save this story Save Save this story Save It's no secret that American law has been building facial recognition databases to aide in its investigations. But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI's database of faces and those of state-level law enforcement agencies: Roughly half of American adults are included in those collections. And that massive assembly of biometric data is accessed with only spotty oversight of its accuracy and how it's used and searched.
The 150-page report, released on Tuesday by the Center for Privacy & Technology at the Georgetown University law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. It goes on to outline the dangers to privacy, free speech, and protections against unreasonable search and seizure that come from unchecked use of that information. Currently the report finds that at least a quarter of all local and state police departments have access to a facial recognition database---either their own or another agency's---and law enforcement in more than half of all states can search against the trove of photos stored for IDs like drivers' licenses.
"Face recognition technology lets the police identify you from far away and in secret without ever talking to you," says Alvaro Bedoya, the executive director of the Center for Privacy & Technology. "Unless you’ve been arrested, the chances are you’re not in a criminal fingerprint database or a criminal DNA database either, yet by standing for a driver’s license photo at least 117 million adults have been enrolled in a face recognition network searched by the police or the FBI." He went on to describe the databases as an unprecedented privacy violation: "a national biometric database that is populated primarily by law abiding people." The report notes that no state has passed comprehensive legislation to define the parameters of how facial recognition should be used in law enforcement investigations. Only a handful of departments around the country have voluntarily imposed limits on searches to require reasonable suspicion or necessitate that they be used only in investigation of a serious crime. Similarly, few departments have enacted standards for testing the accuracy of their digital systems or teaching staff to visually confirm face matches---a skill that seems like it would be innate, but actually requires specialized training.
The report also raises unexpected concerns about the potential for racial bias in the facial recognition databases. Law enforcement agencies have, in many cases, argued that the biometric tools reduce racial policing. After all, a computer doesn't know the societal meaning of race or gender; it simply sorts and matches photos based on numeric analysis of features and patterns. But research has shown that facial recognition algorithms aren't as impartial as they seem. Depending on the data sets used to train machine learning systems, they can become far better at identifying people of some races than others.
For example, some research indicates that facial recognition systems in the United States have lower accuracy when attempting to identify African Americans. Meanwhile, since law enforcement facial recognition systems often include mug shots and arrest rates among African Americans are higher than the general population, algorithms may be disproportionately able to find a match for black suspects.
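One way auditors make this concrete is by reporting error rates per demographic group rather than a single aggregate figure. The sketch below is a minimal illustration of that disaggregation step; the records and group labels are invented for the example and are not drawn from the Georgetown report or any real system.

```python
# Hypothetical sketch: computing false match rates per group.
# Each record is (group, system_said_match, ground_truth_match); values are invented.
from collections import defaultdict

records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

false_matches = defaultdict(int)      # system said match, truth was non-match
non_match_trials = defaultdict(int)   # trials where truth was non-match

for group, predicted, actual in records:
    if not actual:
        non_match_trials[group] += 1
        if predicted:
            false_matches[group] += 1

for group in sorted(non_match_trials):
    rate = false_matches[group] / non_match_trials[group]
    print(f"{group}: false match rate {rate:.0%} across {non_match_trials[group]} non-match trials")
```

A gap between groups in a breakdown like this is exactly the kind of disparity the report warns can go unnoticed when agencies never test their systems.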
The FBI declined to specifically comment on the report, but referred to previous statements about its facial recognition program in which the agency said that its use of the technology prioritizes privacy and civil liberties "beyond the requirements of the law." The agency also noted that when an investigator searches a facial recognition database, two separate human reviewers check potential matches the system returns before identifying any individual to an investigator, and only about 12 percent of searches result in a positive identification. It’s not clear if any such safeguards apply to state and local-level police using facial recognition systems.
Perhaps the most dystopian aspect of the report is its findings that real-time facial recognition---identifying people in public as they pass a live-feed video camera---is increasing in popularity among police departments. The researchers found that five departments in major cities like Los Angeles and Chicago either already use real-time face recognition, own the technology to do it, or want to buy it. That pervasive surveillance raises similar concerns to image databases, but significantly expands questions about expectation of privacy and the ability for police to perform this new form of surveillance en masse and in secret.
In reaction to the report, a coalition of more than 40 civil rights and civil liberties groups, including the American Civil Liberties Union and the Leadership Conference for Civil and Human Rights, launched an initiative on Tuesday asking the Department of Justice's Civil Rights Division to evaluate current use of facial recognition technology around the country. With facial recognition, "Police are free to identify and potentially track anyone even if they have no evidence that that person has done anything wrong," says Neema Singh Guliani, the legislative counsel for the ACLU. "We don't expect that the police can identify us when we are walking into a mosque, attending an AA meeting, or when we're seeking help at a domestic violence shelter." For about half of American adults, it's too late to keep their faces out of law enforcement's biometric surveillance system. Now privacy advocates' best hope is to limit how that collection of faces can be used---and abused.
Updated 10/18/2016 6pm EST to include a response from the FBI.
"
|
1558 | 2017 |
"Medicine Is Going Digital. The FDA Is Racing to Catch Up | WIRED"
|
"https://www.wired.com/2017/05/medicine-going-digital-fda-racing-catch"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Science Medicine Is Going Digital. The FDA Is Racing to Catch Up Getty Images Save this story Save Save this story Save When Bakul Patel started as a policy advisor in the US Food and Drug Administration in 2008, he could pretty much pinpoint when a product was going to land in front of the reviewers in his division. Back when medical devices were heavy on the hardware---your pacemakers and your IUDs---it would take manufacturers years to get them ready for regulatory approval. FDA reviewers could keep up pretty well.
But as computer code took on more complex tasks, like spotting suspicious moles and quantifying blood flow, their duties began to accelerate. Software developers needed months, not years, to make it to the market. And there were a lot of them. It got harder to match pace. And then came artificial intelligence.
Today, machine learning powers more and more medical device software. And because it is always learning and improving, it is constantly changing products on the fly. For most regulators, an ever-changing algorithm is their worst nightmare. But Patel is one of those rare Washington bureaucrats who’s also a fervently optimistic futurist. And he’s got big plans to get federal regulators off Washington time and up to Silicon Valley speeds.
To do that, the FDA is creating a new unit dedicated strictly to digital health. Patel will be hiring 13 engineers---software developers, AI experts, cloud computing whizzes---to prepare his agency to regulate a future in which health care is increasingly mediated by machines. (He’s using funds generated by the medical device division’s user fee system, which is the FDA’s only other revenue stream besides congressional appropriations.) He’s also got plans to reimagine the path these machines will take to regulatory approval.
For technology giants getting into the health care game, the timing couldn't be better. Last year Google's venture capital arm (which manages around $2.4 billion) directed one-third of its investments to the health care space. Its spinoffs Calico and Verily are pursuing ambitious projects like smart contact lenses, Project Baseline---oh, and beating death. Apple, in addition to its wellness through wearables play, is already working closely with the FDA on an app to diagnose Parkinson's. And IBM is employing its artificial intelligence engine, Watson, to do everything from treating cancers to discovering new drugs.
Over the last year, FDA has put out a number of documents describing the agency’s current thinking on digital health. These guidances help developers understand what FDA does and what it doesn’t regulate as a medical device, and they reflect a largely hands-off approach. The FDA focuses its limited resources mostly on high-risk products, and the most recent of its proposed rules addresses software as a medical device---a category that would include medical apps, which remain largely unregulated.
As Patel, now the associate center director for digital health at FDA, was digging through the guidance's 1,400 comments, he had a lightbulb moment. "We've been trying to translate the current regulation paradigm for digital," he says. "But what we have today and what we're going to have tomorrow are not really translatable. We need to take the blinders off, start with a clean sheet of paper." Rather than reviewing each line of code or medical device on its own merits for each of its intended uses, Patel wants to flip that framework on its head. Instead, he envisions a model something more like the TSA security line at the airport: New developers or manufacturers with spotty track records would still have to take off their shoes and go through the body scanner. But trusted companies with demonstrated histories of excellence could keep their footwear and stroll through the metal detector. Patel's not yet sure exactly how it would work, but it's one of the ideas he's toying with and running by industry stakeholders. "The idea is to get safe products to market faster, by having people compete on excellence rather than compliance," he says. The trick is not to get bogged down by stuff you've never seen before.
"We're headed toward a zero code world, where AI writes it for you or you just say what you want and natural language processing takes care of the rest," says Patel. "The pace will be tremendously faster than what we're seeing today. The question is, how do we align our regulations to that radically different development timeline?" Whatever the approval process winds up looking like, it will fall to the new digital health unit to oversee and coordinate it between different offices within the FDA. Right now, those efforts are fragmented by specialty. Experts in the cardiology group would be in charge of a device that uses AI to quantify blood flowing through the heart, while a radiology group would review AI-powered MRI-reading software. Breaking out of these silos is the goal for the new group, since the same technologies will increasingly apply to products that cut across specialties.
Amidst the HHS-wide hiring freeze, Patel's been given the go-ahead to start bringing new people into key positions. More jobs will be announced in October, when the new Medical Device User Fee Amendment goes into effect. Every four years FDA renegotiates this agreement with the industry, and the latest one established the formation of Patel’s digital health unit. The user fees will give him the money to make his hires, but in the meantime he's already begun a recruiting campaign in high-tech hotbeds like Silicon Valley, Seattle, and Boston. His sales pitch should be familiar to tech types: the usual shaping the future, making the world a better place start-up spiel. The only difference being Patel’s start-up is inside the federal government.
The question is whether or not talented people will be willing to leave their corporate gigs (and corporate salaries) for a stint in Washington under the new administration. For the last eight years, talent flowed pretty freely between Obama’s federal government and places like Google, Facebook, and Apple. A suddenly conservative, anti-science cloud cast over Washington has since cooled that jetstream. But if the digital information overlords know what’s good for them, they’ll get someone onto Patel’s team. Because it’s not every day a top regulator decides to wipe the slate clean. And it’s not every day you get to write the rules that will rule what tomorrow’s machines will write.
"
|
1559 | 2023 |
"Watch Watch the Little Robot That Taught the Big Robot Something New | WIRED"
|
"https://www.wired.com/video/watch/knowledge-bots"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Watch the Little Robot That Taught the Big Robot Something New About Credits Released on 05/10/2017 [Narrator] Yes, I know, I know.
You can do this much faster than a robot.
But there's something fascinating going on here behind the scenes.
An operator has taught the robot, named Optimus, how to pull a tube out of another tube.
To do so, the human worked in a 3D environment, kind of like a video game, to demonstrate how it's done.
So, cool, now the robot can manipulate tubes.
But what's so remarkable about this system is this little robot can then seamlessly transfer that knowledge to a much, much bigger robot.
Namely, the famous Atlas humanoid, which stands at six feet tall.
So that knowledge is combined with new information for Atlas about how to adapt this behavior for itself.
After all, it has to do it while balancing on two legs.
For now, this transfer of knowledge only works for Atlas in a simulation.
But this is a big step towards the future where robots share knowledge on vast scales.
Researchers are already building robots that can teach themselves to do things like open doors through what's known as reinforcement learning.
So, imagine a day when one robot in a factory somewhere learns to do something more efficiently, then uploads that knowledge to the cloud for other robots.
And not just robots of its own kind.
With advances like this with the Optimus robot, the machines will be able to communicate across species.
Nothing to worry about, I'm sure.
Look, they smile.
Nothing to worry about at all.
Starring: Matt Simon
"
|
1560 | 2017 |
"Google, Amazon Find Not Everyone Is Ready for Artificial Intelligence | WIRED"
|
"https://www.wired.com/story/google-amazon-find-not-everyone-is-ready-for-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Google, Amazon Find Not Everyone Is Ready for AI Frank Augugliaro Save this story Save Save this story Save Application Cloud computing Software development End User Small company Technology Machine learning Executives at ascendant tech titans like Amazon and Google tend to look down on their predecessor IBM. The fading giant of Armonk, New York, once sustained itself inventing and selling cutting-edge technology, but now leans heavily on consulting. Renting out people to help other companies with tech projects is a messier and less scalable business than selling computing power on a distant cloud server, and leaving the customer to do the grunt work.
Yet as Amazon and Google seek greater riches by infusing the world with artificial intelligence , they’ve started their own consulting operations, lending out some of their prized AI talent to customers. The reason: Those other businesses lack the expertise to take advantage of techniques such as machine learning.
Many companies use cloud platforms for tasks like data storage or powering websites and mobile apps. Market leader Amazon and its rivals are now trying to convince their customers to also buy AI services to mine insights from the hordes of data they amass. But AI experts are in short supply, in no small part because big tech companies compete fiercely to hire them.
Amazon launched several new cloud services for tasks such as understanding audio and training machine-learning models at its AWS re:Invent cloud conference in Las Vegas this week. An executive from the NFL came onstage Wednesday to boast how the league tapped Amazon’s machine-learning tools to determine how far players run, and how fast they accelerate.
But the NFL couldn’t do that work itself. It got hands-on help from Amazon’s elite machine-learning experts through a new consulting operation called Amazon ML Solutions Lab. Lab staffers examine a company’s data and systems, brainstorm ideas for how to improve them using AI, and help implement the plans.
AWS made its first big push into AI services last year. Swami Sivasubramanian, who leads AI initiatives at AWS, says the consulting shop was launched in response to requests from customers for help building AI systems. “We consistently heard they wanted to learn from the machine-learning scientists who built these capabilities for Amazon.com,” he says. Companies pay to tap Amazon’s experts, but Sivasubramanian declined to detail the menu they are offered or the prices, saying it varies depending on the project.
Google launched its own consulting AI shop late last year. The Machine Learning Advanced Solutions Lab, as it is called, lets customers such as insurer USAA work on projects with Google AI engineers at a dedicated facility at the company's campus in Mountain View, California. It also offers a four-week training program to help customers' engineers improve their AI chops.
That such prized and well-compensated employees are now being put to work for others suggests that selling AI is more complex than executive keynotes imply. "The gating factor is people don't know how to do this stuff," says Rob Koplowitz, who tracks cloud AI for Forrester. "There needs to be some hand-holding here in the early stages." The hand-holding stage may continue for a while. Amazon's Sivasubramanian believes it will take a few years before machine-learning expertise is as widely shared as knowledge of distributed systems, the practice of using networked computers to solve problems at the heart of cloud computing.
Google CEO Sundar Pichai said this fall that there are only "a few thousands" of people capable of creating sophisticated machine-learning models. He has a team trying to make machine-learning software create machine-learning software, but it's so far just a research project.
The expertise shortage upsets the usual dynamic of the cloud market, where Amazon, Google, and others mostly compete on price and technical features. "If you're a random manufacturing company in the midwest you may have money, but it's hard to attract a $250,000-a-year Stanford PhD to work for you," says Diego Oppenheimer, whose Google-backed startup provides tools that help companies deploy machine-learning software. Companies in that situation may be more swayed by an offer of help building AI than by pricing and performance, he says.
Cloud companies have made AI more accessible. Amazon launched a new service that transcribes speech from audio or video this week, for example. A company that wants to transcribe meetings or calls can very easily ship off files to Amazon’s servers and get text back. Amazon and Google both offer services that identify common objects and scenes in photos.
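As a rough sketch of that send-files-in, get-text-back workflow (assumed details only, not an excerpt from any customer's code), the snippet below uses AWS's Python SDK (boto3) to start an Amazon Transcribe job on an audio file already uploaded to S3 and waits for the result; the bucket, object key, and job name are placeholders.

```python
# Hypothetical sketch: transcribing a recorded call with Amazon Transcribe.
# Bucket, object key, and job name are placeholders.
import time
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")
job_name = "example-call-transcript"

transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={"MediaFileUri": "s3://example-bucket/calls/weekly-sync.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll until the job finishes, then report where the transcript can be fetched.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName=job_name)["TranscriptionJob"]
    if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if job["TranscriptionJobStatus"] == "COMPLETED":
    print("Transcript available at:", job["Transcript"]["TranscriptFileUri"])
else:
    print("Transcription failed:", job.get("FailureReason", "unknown reason"))
```

The appeal for customers is that the model training and serving all stay on Amazon's side; the trade-off, as the rest of the piece argues, is that anything more bespoke still requires scarce in-house expertise.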
The most powerful use cases for AI aren't one-size-fits-all. Machine learning software typically is trained to solve a very specific problem. "If I need to figure out how much rust is on my industrial boiler, a cat and dog recognizer is not going to help," says Chris Nicholson, CEO and cofounder of Skymind, which sells machine-learning tools and has helped organizations including the Department of Homeland Security use them in machine learning projects. Nicholson says that by creating consulting services, Amazon and Google "basically showed the Achilles heel of their business model." A Microsoft vice president said at a conference this spring that many cloud AI systems are too complex for many companies to reap the same benefits from machine learning as big tech companies. Microsoft is trying to help customers of its AI services with a suite of online courses marketed as AI School, and the company was also part of a $102 million investment round this summer into Element AI, a startup that will offer AI consulting services.
Amazon launched its own education initiative this week. A new $250 camera called DeepLens is designed to give developers an easy way to learn about machine learning---and Amazon services. Carnegie Mellon University plans to use the device with students, and other colleges are expected to do the same.
Many attendees of Amazon's conference this week are getting a DeepLens for free. Some started hacking with the device on Wednesday. Sivasubramanian says people with little or no experience with machine learning were soon building copies of the Hotdog detector from the TV show Silicon Valley, or apps that use object and face recognition. "We're going to make machine learning a normal part of programming," he says. Until it is, expect to see leading cloud companies aping IBM.
"
|
1561 | 2018 |
"What Happened to Internet.org, Facebook's Grand Plan to Wire the World? | WIRED"
|
"https://www.wired.com/story/what-happened-to-facebooks-grand-plan-to-wire-the-world"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jessi Hempel Backchannel What Happened to Facebook's Grand Plan to Wire the World? Play/Pause Button Pause Jared Oriel Save this story Save Save this story Save In August 2013, Mark Zuckerberg tapped out a 10-page white paper on his iPhone and shared it on Facebook. It was intended as a call to action for the tech industry: Facebook was going to help get people online. Everyone should be entitled to free basic internet service, Zuckerberg argued. Data was, like food or water, a human right. Universal basic internet service is possible, he wrote, but “it isn’t going to happen by itself.” Wiring the world required powerful players—institutions like Facebook. For this plan to be feasible, getting data to people had to become a hundred times cheaper.
Zuckerberg said this should be possible within five to 10 years.
It was an audacious proposal for the founder of a social software company to make. But the Zuckerberg of 2013 had not yet been humbled by any significant failure. In a few months, the service he’d launched between classes at Harvard would turn 10. A few months after that, he would be turning 30. It was a moment for taking stock, for reflecting on the immense responsibility that he felt came with the outsize success of his youth, and for doing something with his accumulated power that mattered.
A few days later, Facebook unveiled what that something would be: Internet.org. Launched with six partners, it was a collection of initiatives intended to get people hooked on the net. Its projects fell into two groups. For people who were within range of the internet but not connected, the company would strike business deals with phone carriers to make a small number of stripped-down web services (including Facebook) available for free through an app. For those who lived beyond the web’s reach—an estimated 10 to 15 percent of the world’s population—Zuckerberg would recruit engineers to work on innovative networking technologies like lasers and drones.
The work was presented as a humanitarian effort. Its name ended in "dot-org," appropriating the suffix nonprofits use to signal their do-gooder status on the web. Zuckerberg wrote that he wasn't expecting Facebook to earn a profit from "serv[ing] the next few billion people," suggesting he was motivated by a moral imperative, not a financial one. The company released a promotional video featuring John F. Kennedy's voice reading excerpts from a 1963 speech imploring the students of American University to remember that "we all cherish our children's future. And we are all mortal." Andrew Carnegie believed in libraries. Bill Gates believed in health care. Zuckerberg believed in the internet.
Zuckerberg was sincere in his swashbuckling belief that Facebook was among a small number of players that had the money, know-how, and global reach to fast-forward history, jump-starting the economic lives of the 5 billion people who do not yet surf the web. He believed peer-to-peer communications would be responsible for redistributing global power, making it possible for any individual to access and share information. “The story of the next century is the transition from an industrial, resource-based economy to a knowledge economy,” he said in an interview with WIRED at the time. “If you know something, then you can share that, and then the whole world gets richer.” The result would be that a kid in India—he loved this hypothetical about this kid in India—could potentially go online and learn all of math.
Mark Zuckerberg announced the Internet.org Innovation Challenge in October of 2014, in New Delhi, India.
For three years, Zuckerberg included Internet.org in his top priorities, pouring resources, publicity, and a good deal of his own time into the project. He traveled to India and Africa to promote the initiative and spoke about it at the Mobile World Congress in Barcelona two years in a row. He appeared before the UN General Assembly to push the idea that internet access was a human right. He amassed a team of engineers in his Connectivity Lab to work on internet-distribution projects, which had radically different production cycles than the software to which he was accustomed.
But from the start, critics were skeptical of Zuckerberg’s intentions. The company’s peers, like Google and Microsoft, never signed on as partners, preferring instead to pursue their own strategies for getting people online. Skeptics questioned the hubris of an American boy-billionaire who believed the world needed his help and posited that existing businesses and governments are better positioned to spread connectivity. They criticized Facebook’s app for allowing free access only to a Facebook-sanctioned set of services. At one point, 67 human rights groups signed an open letter to Zuckerberg that accused Facebook of “building a walled garden in which the world’s poorest people will only be able to access a limited set of insecure websites and services.” At first, Zuckerberg defended his efforts in public speeches, op-eds, and impassioned videos that he published on his own platform. I had a front-row seat for these events, as I spent most of 2015 reporting an article on Facebook’s connectivity efforts that took me to South Africa, London, Spain, New York, and Southern California to observe the company’s efforts to advance its version of universal connectivity.
My story was published in January 2016, a month before India banned Facebook's app altogether. Shortly after that, Facebook stopped talking about Internet.org. While bits of news about the company's drone project or new connectivity efforts still emerge, Facebook hasn't updated the press releases on the Internet.org website in a year. That led me to wonder, what exactly happened to Internet.org? The second time Mark Zuckerberg traveled to Barcelona to headline the Mobile World Congress, in the spring of 2015, I conducted the keynote interview. He arrived on a Sunday afternoon and was whisked to a dinner that he hosted for a group of telecom operators. We didn't meet up until the next day, just minutes before we were to walk onstage. Zuckerberg, dressed in jeans, black Nikes, and a gray T-shirt, appeared confident. His face still had the youthful plumpness it has since lost.
The annual telecom trade show routinely draws tens of thousands of people, including the chiefs of all the big telecom operators. Attendees had begun lining up to hear him in the morning, and as I peered out from the wings just before our midday appearance, all 8,000 seats were filled; people watched from overflow rooms throughout the conference hall. I remember the cacophony of clicking camera flashes as Zuckerberg joined me onstage.
Zuckerberg spent only a few minutes touting the promise of drones and lasers in connecting people to the internet. This technology was exciting, he told the crowd, but distant. It would be years before a solar-powered plane hovered 60,000 feet in the air, beaming the internet to the disconnected. One year earlier, in Zuckerberg’s first Mobile World Congress appearance, he’d introduced a plan to get loads of people online seemingly overnight: Facebook wanted to partner with telecom operators to offer them a free app that had access to a few services like Wikipedia and health information. Oh, and Facebook. Zuckerberg believed this would be great for operators because they’d be able to get new customers. The app would be a gateway drug for people who’d never tried the internet before, and they’d subsequently decide to pay operators for more data. Zuckerberg had returned to Barcelona to promote this idea.
Zuckerberg would recruit engineers to work on innovative networking technologies like lasers and drones like Aquila, Facebook's unmanned aircraft that was designed to deliver Wi-Fi to developing nations.
He was greeted by a skeptical, and at times hostile, audience of telecom operators who were vexed by his proposal. They were already concerned that people were communicating through services like WhatsApp and Facebook instead of the more lucrative text-messaging services they offered. They'd spent the money to lay down fiber and build an actual network, and people were now opting not to pay them for minutes. In effect, before Internet.org was even a gleam in Zuckerberg's eye, Facebook had already undermined their core business. They were reluctant to partner with the social network to get even more people online, and specifically, on Facebook. Denis O'Brien, chairman of the international wireless provider Digicel Group, told the Wall Street Journal that Zuckerberg was like "the guy who comes to your party and drinks your champagne, and kisses your girls, and doesn't bring anything." So far, operators had signed on in just six countries: Zambia, Tanzania, India, Ghana, Kenya, and Colombia. Zuckerberg invited three telecom executives to join him onstage to describe how things were going. One, from Paraguay, suggested his company had seen an uptick in subscribers during its Facebook trial. But even onstage at the invitation of Zuckerberg, they were reserved. "It all comes down to data," said Jon Fredrik Baksaas, then CEO of Telenor Group. "It is challenging not to give the keys of your house to your competitor." That is to say, he was worried that Facebook's messaging capabilities would siphon off his company's customers.
Human rights activists worried about Internet.org for different reasons. While the app allowed numerous services, they were concerned that Facebook was the ultimate arbiter of which ones were included. Facebook had much to gain by centralizing the web onto one platform: Facebook. Critics charged that, in its haste to get services to people using the least amount of data possible, Facebook was compromising their security.
Not long after Mobile World Congress, in that May 2015 letter signed by 67 human rights groups, activists accused the company of promoting and attempting to build a two-tiered internet, saying: "These new users could get stuck on a separate and unequal path to Internet connectivity, which will serve to widen—not narrow—the digital divide." The growing backlash caught Zuckerberg by surprise. He was accustomed to people resisting changes the company made to Facebook, but eventually they always came around. Users hadn't liked Facebook's News Feed at first, but they came to embrace it. With Internet.org, though, the more he tried to explain Facebook's motives, the more the criticism mounted. The opposition was particularly significant in India, where a group of activists were pushing regulators to ban its app. They said it violated net neutrality, the idea that internet providers should treat all online services equally, by making some services available for free.
In the spring of 2015, Zuckerberg published an op-ed, this time in the Hindustan Times and not on Facebook, in which he tried to explain that his initiative didn’t run counter to net neutrality. He argued that a limited internet was better than no internet; if people couldn't afford to pay for connectivity, “it is always better to have some access and voice than none at all.” But Indian activists only grew louder in their declaration that Facebook just didn’t get it.
Activists accused Facebook of promoting and attempting to build a two-tiered internet. On January 2, 2016, demonstrators from Free Software Movement Karnataka protested Facebook's Free Basics.
MANJUNATH KIRAN/AFP/Getty Images An open letter to Facebook, signed by 67 human rights groups, read, “These new users could get stuck on a separate and unequal path to Internet connectivity, which will serve to widen—not narrow—the digital divide.” One evening a few weeks later, Zuckerberg called in some employees after hours to record a video in which he made a case for Internet.org. The lights were off behind him; a row of desks sat empty as he spoke. He framed the debate over whether to allow Internet.org to operate in India as a moral choice: “We have to ask ourselves, what kind of community do we want to be?” he said, in the video, which he published on his profile and on the Internet.org Facebook page.
“Are we a community that values people and improving people’s lives above all else? Or are we a community that puts the intellectual purity of technology above people’s needs?” In the months that followed, Facebook changed the app’s name from Internet.org to Free Basics in an attempt to mitigate the impression that Facebook was trying to take over the web. To counter the argument that Facebook was deciding what services people could access, the company opened up the app to more services. It also improved security and privacy measures for users.
While the company continued to sign on partners in new markets, like Bolivia and South Africa, in India the debate grew more heated. The company sent messages to developers throughout India to encourage them to advocate for Free Basics. Facebook-sponsored billboards asked Indians to support “a better future” for unconnected Indians—meaning a future with Free Basics. Advertisements for Facebook were plastered inside Indian newspapers. That year, Facebook spent roughly $45 million in Indian advertising to spread word about its Free Basics campaign, according to the Indian media.
In an op-ed that Zuckerberg wrote for the Times of India, he asked: “Who could possibly be against this?”
In February 2016, India’s telecom regulator blocked Facebook’s Free Basics service as part of a ruling to support net neutrality. Danish Siddiqui/REUTERS
A billboard for Facebook's Free Basics service in Abuja, Nigeria, in April 2018. Afolabi Sotunde/REUTERS
Later that month, I joined Zuckerberg in Barcelona for his third appearance at the Mobile World Congress. Again, he wore dark jeans and black Nikes, and just before we left the green room, he pulled on a fresh gray T-shirt. He followed me onstage with confidence, but as soon as we sat down, his microphone malfunctioned, producing high-pitched feedback when he spoke. At first we tried to soldier through the interview, but the distraction grew too great and we both began to perspire.
Our voices dropped in and out like a bad cell connection. We stopped to ask for new equipment, which improved the situation only slightly. Inches away from me, Zuckerberg seemed perturbed, but in the recording I later watched, he appeared to maintain his composure as he announced a new Internet.org project.
This one had nothing to do with Free Basics. Dubbed the Telecom Infra Project, it would bring together 30 companies to help improve the underlying architecture of the networks that provide internet access.
I asked Zuckerberg what he’d learned so far from the Internet.org efforts. He intimated that he’d learned that people didn’t take him at face value. "I didn't start Facebook to become a company initially, but having a for-profit company is a good way to accomplish certain things,” he said.
To wit: Zuckerberg still thought of himself as a humanitarian and a philanthropist, uniquely positioned because of his capital and his influence to bring the internet to those who couldn't get access to it quickly in other ways. The global corporation that was threatening local businesses and sucking the air out of entire industries while minting millionaires in sunny Menlo Park? That was just the means to an end. From my interviews that year, both onstage and privately, it was clear to me that Zuckerberg was sincere in this belief, even if others didn’t buy into it.
Recently I wrote to a South African guy named James Devine. He works for a nonprofit called Project Isizwe, which makes Wi-Fi more available in his home country. In 2015, I'd visited him to check out a partnership he’d forged with Facebook. We met in Polokwane, in the impoverished Northeast, and then traced red dirt roads through the countryside until we got to a tiny village. There, above a chicken stand in the town center, was a WiFi hot spot. People could sit beneath it and access a small amount of free bandwidth—enough for a few minutes of playing games or streaming music—to surf the open web, or they could use the services within the Free Basics app as long as they wanted for free. As part of a trial, Facebook was paying for hot spots like this one in several villages, and Isizwe tended to their upkeep.
I asked Devine if he was still working with Facebook. “Things kind of died down after the satellite blew up,” he wrote, referring to the satellite that was destroyed when a SpaceX rocket exploded on the launchpad in September 2016. Facebook had contracted SpaceX to deliver the first Internet.org satellite into space; it was supposed to deliver wireless connectivity to large portions of sub-Saharan Africa. “All the current projects with them that we’ve been involved with have now come to an end.” It’s just one of a slew of projects Facebook has attempted in the five years since it launched Internet.org.
In the five years since Zuckerberg introduced Internet.org, 600 million people have come online.
Jared Oriel While the larger world fixated on the connectivity experiments of Free Basics, the company sank resources into other partnerships and experiments to build devices (like lasers and autonomous planes) that could distribute the internet cheaply. These projects involved the type of deep technical know-how that a company with a healthy research arm, like Facebook, was designed to take on. Facebook funneled these projects through its Connectivity Lab, which is committed to initiatives intended for the distant future.
While they required Facebook to invest in unfamiliar areas of science and engineering—building an airplane is a different art form than, say, building a messaging app—these projects are in Zuckerberg’s wheelhouse. He read up on how the technologies operated and then either acquired or recruited the technical talent to realize them. Once, when I visited Facebook’s Menlo Park headquarters, Zuckerberg had Hamid Hemmati’s textbook on lasers on his desk. He’d had his assistant reach out to schedule a call with Hemmati, who’d spent most of his career at NASA. “He was super surprised to hear from me,” Zuckerberg told me at the time. “He thought that it was fake.” Within a month, Zuckerberg had convinced Hemmati to leave NASA to open a Facebook laboratory in Woodland Hills, California.
These technical projects have a lot more in common with the types of connectivity efforts embarked on by Facebook’s peers. Alphabet shut down its drone program, Project Titan, last year, but it continues to develop Project Loon, which is housed in X—Alphabet’s so-called moonshot factory—and aspires to beam the internet from high-altitude balloons. Microsoft has attempted to deploy unused television airwaves to get more people online. Within Google and Microsoft, these projects don’t front as philanthropy; they’re ambitious technical challenges undertaken as research for the company’s future business.
The occasional Connectivity Lab updates Facebook offers suggest that it is distancing these efforts from its Internet.org work. Aquila, the name for Facebook’s plane-size drone, has now had two publicized test flights, and on the second one it even stuck the landing. (The National Transportation Safety Board opened an investigation after the first flight crashed in the summer of 2016.) Facebook has also partnered with Airbus to lobby the FCC for the spectrum it will need to beam the internet from the sky. The company has added new projects to the mix as well. One Connectivity Lab project involves building better maps to help plan where networks need to improve. Facebook no longer talks about these projects publicly as part of Internet.org. Blog posts are shared on Facebook’s coding blog, and the posts don’t reference Internet.org at all. Instead, they’re tagged “connectivity.” Internet.org doesn’t include these updates in its press section.
Engineering projects like Aquila, an internet-providing drone, were more firmly in Zuckerberg's wheelhouse.
Reuters Meanwhile, the project that has done the most to help cement connectivity has been separated from Internet.org entirely. Although Zuckerberg introduced the Telecom Infra Project as an Internet.org project in 2016, including its logo alongside logos for Free Basics and the Connectivity Lab in his post, there are no references to TIP on the Internet.org site.
The way Facebook has handled this telecom project suggests it is learning from past missteps. The effort is modeled on Facebook’s Open Compute Project, which developed technology to make data centers more efficient and then made that technology available to other tech companies. Under the leadership of Jay Parikh, the infrastructure chief who also helmed Open Compute, Facebook will join with partners to pay for and develop new technology that companies can use to improve their infrastructure; telco partners will be expected to pay for deployment. These upgrades range from improved base stations to a new radio wave technology that will make the internet faster in densely populated places. Telcos are embracing this approach, according to Quartz. So far, Facebook has attracted more than 500 partners.
The Telecom Infra Project has its own website (which pointedly downplays Facebook’s central role), its own board of directors that includes just one Facebook executive, and it has hosted two autumn summits so far. Last November, Yael Maguire, who directs Facebook’s connectivity programs, opened up the second day of the summit by explaining “why Facebook cares so much about connectivity.” He explained that Facebook is a social networking company, focused on bringing people together in the digital world, and it depends on physical networks to do that. “Every step of progress around the world allows us to create a better and closer experience where people can come closer together,” he explained.
In other words, healthy networks make for a better Facebook. That in turn is good for Facebook’s bottom line. This is what Zuckerberg wasn’t saying directly in any of his earlier public addresses.
After all of Facebook’s early experiments, carriers have finally come around to its model. Facebook says it is working with 86 partners to offer the Free Basics app in 60 countries. These carriers have found Facebook’s formula to be helpful in their attempts to attract and retain new customers. So far this year, Free Basics has launched in Cameroon for the first time and added additional carriers in Colombia and Peru.
In the five years since Zuckerberg introduced Internet.org, 600 million people have come online. In its April 25 earnings call, Zuckerberg said the company’s Internet.org and connectivity efforts (he differentiated the two) have brought 100 million of those people to the internet. Facebook commissions annual research on the number of connected people. This year’s report, which was not published on the Internet.org website, suggests the costs of accessing the net have fallen, while the rate at which people are coming online for the first time has grown particularly fast in developing countries.
But while this looks like success, Zuckerberg never anticipated the consequences of universal connectivity that are now emerging. Small countries like Myanmar, Sri Lanka, Cambodia, and the Philippines are reporting outbreaks of violence and political strife that local activists blame partly on Facebook. These countries are facing many of the same challenges—hate speech, false information, and political movements that complain of bias—that we are confronting in the United States, where Congress recently called Zuckerberg to Washington to testify. But often, the developing world lacks the institutions and government regulators to help educate and protect individuals. What’s more, Facebook has been slower to introduce the moderating tools that might help curtail hate speech and misinformation in the developing world.
In March, the United Nations called out Facebook for its role in inciting the violence in Myanmar that has led to a humanitarian crisis. Military strikes since last August have spurred roughly 700,000 Rohingya Muslims to flee to Bangladesh to escape what some members of the UN consider a genocide. The officials said hateful Facebook posts have helped amplify the ethnic tensions. Yanghee Lee, the UN official charged with investigating events in the country, said, “I’m afraid that Facebook has now turned into a beast, and not what it was originally intended.” Zuckerberg hasn’t responded to this investigation directly, but he addressed events in Myanmar before Congress and in an interview with Vox’s Ezra Klein, and he responded directly to activists in Myanmar in an email that was shared with The New York Times.
He said Facebook has hired dozens of Burmese language content reviewers to monitor reports of hate speech and, according to the letter, “increased the number of people across the company on Myanmar-related issues.” He suggested the company was developing artificial intelligence that would be able to better help with content moderation in the future.
But there are people who believe these countries would have been better served by allowing the internet to spread locally. Nikhil Pahwa, the journalist-turned-digital-rights-activist who led the successful effort to shut down Free Basics in India, points to the current state of connectivity there as proof the world would be better off without Facebook’s app. He says the number of people who have internet access in India has grown to 500 million from just 160 million when Facebook tried to introduce Free Basics in the country. He attributed the growth to free data plans offered by Indian telecom company Reliance. “FB was creating this false choice between access and net neutrality. That’s essentially bullshit,” he says now. “Free Basics needs to be banned across the globe.” Facebook launched Internet.org with the bold arrogance that has defined its approach to many of its partnerships. It blundered blindly into areas where it had no expertise, apologizing after the fact when it made mistakes. That arrogance left it deaf to the feedback of partners, potential users, and people who had spent careers learning the lessons Facebook has had to piece together on its own. By the middle of 2016, the company had rebranded its larger effort to “Internet.org by Facebook.” Instead of adding partners to the original six with which it launched in 2013, Facebook opted to forge its own path. I reached out to the six original launch partners. Only one, Opera, said it was still working with the company’s Internet.org initiative, and it wouldn’t elaborate on that work.
In many ways, the early mistakes Facebook made as it launched Internet.org mirror the company’s current challenges. Facebook tried to present itself as a neutral party and suggested its actions were driven by altruism. But Facebook is inherently not neutral; its aim is profit. I spoke with Ellery Roberts Biddle, who is the advocacy director for the citizen media group Global Voices. Last year, Global Voices published a research project showing that Facebook’s Free Basics program collected data about users, and that many Free Basics users were already online, so it didn’t succeed in its explicit goal to bring the unconnected online. Biddle works closely with Facebook on a range of issues, but she has concerns about Free Basics. “Facebook’s bottom line is profit. Profit and human rights don’t always lead you to the same place,” she said. “If those are your two priorities, what the heck do you do?” Facebook had a smaller presence at this year’s Mobile World Congress, held in February. Zuckerberg didn’t attend. Instead, Parikh, the infrastructure maven, used the event to make announcements about backhaul and low-power base stations. His blog post didn’t mention Internet.org. Telecom partners appeared pleasantly surprised.
It seemed the internet titan had learned something about how to connect the world. It had listened. It had partnered. It had offered up the tools it was best positioned to develop. If Facebook can learn to do this here, there’s hope the company can apply what it has learned to the rest of its problems.
"
|
1,562 | 2,018 |
"Some Startups Use Fake Data to Train AI | WIRED"
|
"https://www.wired.com/story/some-startups-use-fake-data-to-train-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Some Startups Use Fake Data to Train AI Israeli company DataGen creates fake hands like these to help others build artificial-intelligence programs.
Datagen Technologies Save this story Save Save this story Save Company Apple Facebook Open AI End User Startup Big company Source Data Images Video Synthetic data Technology Machine learning Machine vision Berlin startup Spil.ly had a problem last spring. The company was developing an augmented-reality app akin to a full-body version of Snapchat’s selfie filters—hold up your phone and see your friends’ bodies transformed with special effects like fur or flames. To make it work, Spil.ly needed to train machine-learning algorithms to closely track human bodies in video. But the scrappy startup didn’t have the resources to collect the tens or hundreds of thousands of hand-labeled images typically needed to teach algorithms in such projects.
“It’s really hard being a startup in AI, we couldn’t afford to pay for that much data,” says CTO Max Schneider.
His solution? Fabricate the data.
Spil.ly’s engineers began creating their own labeled images to train the algorithms, by adapting techniques used to make movie and videogame graphics. About a year later, the company has roughly 10 million images made by pasting digital humans it calls simulants into photos of real-world scenes. They look weird, but they work. Think of it as putting the artificial in artificial intelligence.
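The compositing idea is simple to sketch. The snippet below is a minimal illustration rather than Spil.ly's actual pipeline: it pastes a rendered figure with transparency onto a real background photo and derives a pixel-level "person" label for free from the figure's alpha channel. The file names are placeholders, and the rendered figure is assumed to be smaller than the background.

```python
# A minimal sketch (not Spil.ly's actual pipeline): composite a rendered,
# transparent "simulant" onto a real background photo and derive the pixel
# label for free from the figure's alpha channel. File names are placeholders,
# and the figure is assumed to be smaller than the background image.
import random
import numpy as np
from PIL import Image

def composite_example(background_path, figure_path):
    background = Image.open(background_path).convert("RGBA")
    figure = Image.open(figure_path).convert("RGBA")  # rendered figure with alpha

    # Drop the figure at a random position so every synthetic image differs.
    offset = (random.randint(0, background.width - figure.width),
              random.randint(0, background.height - figure.height))

    composite = background.copy()
    composite.alpha_composite(figure, dest=offset)

    # The label comes for free: wherever the pasted figure is opaque, mark "person".
    mask = np.zeros((background.height, background.width), dtype=np.uint8)
    alpha = np.array(figure)[:, :, 3] > 0
    mask[offset[1]:offset[1] + figure.height,
         offset[0]:offset[0] + figure.width] = alpha.astype(np.uint8)

    return composite.convert("RGB"), mask  # training image + per-pixel label

if __name__ == "__main__":
    image, label = composite_example("street_scene.jpg", "simulant_render.png")
    image.save("synthetic_0001.jpg")
    np.save("synthetic_0001_mask.npy", label)
```

Because the label is computed from the render itself, no human annotation step is needed, which is the whole appeal of the approach.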
“The models we train on purely synthetic data are pretty much equivalent to models we train on actual data,” says Adam Schuster, an engineer at Spil.ly. In a demo, a virtual monkey appears on a table viewed through an iPhone’s camera, jumps to the ground, and squirts paint onto the clothes of a real person standing nearby.
Berlin startup Spil.ly used images like this to create augmented reality software that recognizes people in video.
Figure by Viorama GmbH; Cat by Mike Estes Fake it ‘til you make it has long been a motto of startups trying to survive in markets stalked by larger competitors. It has led some companies, like blood-test “innovator” Theranos, into trouble.
In the world of machine learning, however, spoofing training data is becoming a legitimate strategy to jumpstart projects when cash or real training data is short. If data is the new oil, this is like brewing biodiesel in your backyard.
The phony data movement could accelerate the use of artificial intelligence in new areas of life and business. Machine-learning algorithms are inflexible compared to human intelligence, and applying them to a new problem generally requires new training data specific to that situation. Neuromation, a startup based in Tallinn, Estonia, is churning out images containing simulated pigs as part of work for a client that wants to use cameras to track the growth of livestock. Apple, Google, and Microsoft have all published research papers noting the convenience of using synthetic training data.
Neuromation is using simulated animals to train software that could help out on the farm by monitoring livestock.
Neuromation Evan Nisselson, a partner at venture firm LDV Capital, says synthetic data offers startups hope of competing with data-rich AI giants. Talented teams are often hamstrung by a lack of data, he says. “The ability to create synthetic data and train models with that can level the playing field between startups and big companies,” Nisselson says.
Spil.ly’s story adds some weight to that argument. In February, Facebook disclosed its own machine-learning software that can apply special effects to humans in video.
Densepose, as it is called, was trained with 50,000 images of people hand-annotated with 5 million points. Within days, Spil.ly began synthesizing data similar to Facebook’s. The startup has since integrated ideas from Densepose into its own product.
Neuromation and others want to establish themselves as brokers of fake data. Another Neuromation project involves creating images of grocery store shelves for OSA HP, a retail analytics company with customers including French supermarket group Auchan. The data is training algorithms that read images to track stock on shelves. “The sheer number of product categories and the varying retail environments make gathering and labelling images impractical,” says Alex Isaev, CEO of OSA.
This image isn’t real but it is helping teach camera software to monitor stock in real stores.
Neuromation Ofir Chakon, cofounder of Israeli startup DataGen, says his company charges up to seven-figure sums to generate custom videos of simulated—and somewhat creepy—hands. The company’s realism comes in part from a technique recently trendy in machine-learning circles called generative adversarial networks, which can create photo-realistic images.
To human eyes, those hands and Neuromation’s fake pigs couldn’t pass as real. “When I first saw the synthetic dataset I thought ‘This is terrible. How is it possible the computer can be learning from this?,’” says Schuster of Spil.ly. “But what matters is what the computer understands from an image.” Getting the computer to understand the right thing can take some work. Spil.ly originally synthesized only naked figures, but found the software learned to look only for skin. The startup’s system now generates people with varied body shapes, skin tones, hair, and clothing. Spil.ly and others often also train their systems on a smaller number of real images, in addition to millions of synthetic examples.
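One common way to blend a small pool of real images with a much larger synthetic set is to fix the mix per training batch, so the scarce real examples keep appearing even though they are vastly outnumbered. The sketch below is a generic illustration of that idea; the batch size and ratio are arbitrary choices, not Spil.ly's actual recipe.

```python
# Generic illustration of mixing a small real dataset with a huge synthetic
# one: fix the ratio per batch so real images keep appearing even though they
# are vastly outnumbered. Batch size and ratio here are arbitrary assumptions.
import random

def mixed_batches(synthetic, real, batch_size=32, real_fraction=0.25):
    """Yield training batches drawn mostly from `synthetic`, topped up with `real`."""
    n_real = min(int(batch_size * real_fraction), len(real))
    n_synthetic = batch_size - n_real
    while True:
        batch = random.sample(real, n_real) + random.choices(synthetic, k=n_synthetic)
        random.shuffle(batch)
        yield batch

# Example: each 16-item batch carries 3 real and 13 synthetic examples.
batches = mixed_batches(synthetic=list(range(100_000)), real=list(range(500)),
                        batch_size=16, real_fraction=0.2)
first_batch = next(batches)
```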
Even the world’s most data- and cash-rich AI teams are embracing synthetic data. Google researchers train robots in simulated worlds, for example, while Microsoft published results last year on how 2 million synthetic sentences could improve translation of the Levantine dialect of Arabic.
Apple, which keeps its AI ambitions more secret, has also signaled interest in faking training data. In 2016, the company released a research paper on generating realistic images of eyes to improve gaze-detection software. Almost a year later, the company released the iPhone X, which unlocks by detecting a user’s gaze and then recognizing the face. Some of the same researchers contributed to both projects. The company declines to comment on whether it incorporated findings of the research in the unlocking feature.
In robotics, synthetic training data helps researchers carry out experiments at greater scale than is possible in the real world. Alphabet’s Waymo says its self-driving cars have driven millions of miles on public roads, but its control software has traveled billions of miles on simulated streets.
Giving machines digital doubles can help robots learn to better handle objects in factories or homes. Researchers at OpenAI, the research institute cofounded by Elon Musk, have found that they can train software in a simulated world that works reasonably well in a real robot. Tricks that help include randomly varying the colors and textures in the simulated world to make the software focus on the core physical problem, and generating millions of different, oddly shaped objects to be grasped. “Two years ago the prevailing belief was that simulated data was not very useful,” says Josh Tobin, a researcher at OpenAI. “In the last year or so that perception is starting to shift.” Despite those successes, fake data is not omnipotent. Many complex problems aren’t well enough understood to simulate realistically, says DataGen’s Chakon. In other cases, the stakes are too high to risk creating a system with any disconnect from reality. Michael Abramoff, a professor at the University of Iowa, has developed ways to generate images of the retina , and says he uses synthetic data in grad-student projects. But he stuck to real images when developing the retina-checking software his startup IDx got approved by the FDA this month.
“We wanted to be maximally conservative,” Abramoff says.
"
|
1,563 | 2,018 |
"Everything Mark Zuckerberg Will Follow Up On for Congress | WIRED"
|
"https://www.wired.com/story/mark-zuckerberg-will-follow-up"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Brian Barrett Business A Comprehensive List of Everything Mark Zuckerberg Will Follow Up On Andrew Harrer/Bloomberg/Getty Images Save this story Save Save this story Save Mark Zuckerberg visited Capitol Hill this week, spending hours answering questions from Congress about privacy, Russia, algorithms, and more. Also: not answering those questions. Dozens of times, Zuckerberg deferred, responding instead that his “team” would “follow up.” In the interest of helping both Congress and Facebook keep track of those many, many promises to provide more—or in many cases, any—detail, we’ve collected them all here.
Please note that this does not include the several occasions in which Zuckerberg claimed he didn’t know an answer but promised no follow-up. (There were lots of those, too.) We’re also not including instances where legislators ran out of time and submitted further written questions, or proactively asked Zuckerberg to get them more information later.
But by our best reckoning, what follows is every instance in which Zuckerberg volunteered to get back to a specific senator or representative—giving his “team” one heaping pile of homework in the process.
All the apps Facebook has banned for improperly sharing user information with third parties.
How many times Facebook has required audits of apps to make sure improperly transferred data was deleted.
How many fake accounts Facebook has removed.
Whether Facebook employees worked alongside Cambridge Analytica when they embedded with the Trump campaign in 2016.
Whether Facebook Messenger collects call and text data from minors for account syncing.
Whether Facebook can “track a user's Internet browsing activity, even after that user has logged off of the Facebook platform.” How Facebook discloses that kind of tracking to its users.
Whether a specific set of “unverified, divisive pages” of Facebook users, shown on a large poster board, were in fact Russian-created groups.
Specific regulations that Facebook would propose for the tech industry.
Where the 87 million people impacted by the Cambridge Analytica fiasco are geographically located.
Whether he would support a rule to require notifying users of a breach within 72 hours.
Whether and how Facebook tracks users across devices. (In fairness, the wording here wasn’t very clear.) Whether Facebook is a neutral public forum or engaged in free speech (specifically as relates to Section 230 immunity under the Communications Decency Act; Zuckerberg did say that he thinks Facebook is “a platform for all ideas”).
Whether Aleksandr Kogan, the developer who sold the data of millions of Facebook users to Trump-affiliated political firm Cambridge Analytica , still has a personal Facebook account.
“On the points that you collect information, if we call those categories, how many do you store of information that you are collecting?” (Presumably this just means how many data points does Facebook collect on its users.) Details around how to protect minors—whether that takes the form of a privacy bill of rights, or a “discussion,” in Zuckerberg’s words.
How Zuckerberg sees Facebook’s recently announced bug bounty in the context of “the sharing of information not permissible, as compared to just unauthorized access to data.” (The point here seemed to be that bug bounties don’t do anything to prevent the sharing of data that’s authorized but through opaque processes, which is true, because those aren’t bugs.) The details around if and how Facebook would allow civil rights groups to audit credit and housing companies that operate on the platform.
How many Nevada residents were among the 87 million Facebook users caught up in the Cambridge Analytica scandal.
How long Facebook retains a user’s data after they delete their account.
(The best Zuckerberg could offer: “I think we try to move as quickly as possible.”) The set of principles Facebook will use to guide its development of artificial intelligence.
What firms Kogan sold the data of up to 87 million Facebook users to, in addition to Cambridge Analytica and Eunoia. (“There may have been a couple of others as well,” Zuckerberg said.) More information about how Facebook can be confident that its political ad restrictions really have blocked out foreign entities.
If Facebook would “please bring some fiber”—high-speed internet—to West Virginia. (“We do have a group at Facebook that is working on trying to spread Internet connectivity in rural areas,” said Zuckerberg, “and we would be happy to follow up with you on that.”) Exactly how Facebook’s systems work when you attempt to wipe your data, as part of deleting your account.
Details around potential legislation that would codify that people own their online data, and require platforms to offer more opt-in settings.
Why Zuckerberg can’t give a one-word answer to the question of whether he would commit to “changing all user default settings to minimize, to the greatest extent possible, the collection and use of users' data.” Why Facebook blocked an ad from a former Michigan Lottery commissioner announcing his run for state senate.
More detail about how Facebook’s AI tools help catch fake accounts from Russia or elsewhere.
If and how Facebook plans to implement the portion of Europe’s GDPR—a sweeping digital privacy law —that gives users the right to object to the processing of their personal data for marketing purposes in the US.
Whether Facebook uses data that’s collected from logged off users only for security purposes or as “part of the business model” as well.
Whether the person (or people) who mistakenly banned popular conservative duo Diamond and Silk on Facebook were “held accountable in any way.” How many firms total Kogan sold information to, and what their names are. (This was a follow-up to Senator Baldwin’s identical question Tuesday, which Zuckerberg had also said he’d follow up on.) More detail about how Facebook ensures that content reviewers aren’t biased against conservative or religious posts.
Whether Zuckerberg can commit to convening a meeting of CEOs in his field to develop a strategy to increase racial diversity in tech.
(“I think that that's a good idea and we should follow up on it,” Zuckerberg said.) Input on the BROWSER Act , a bill that would require opt-in consent for sharing sensitive information with both telecoms and websites.
What if any valid law enforcement requests Facebook has honored in Russia.
An update on Facebook’s rural broadband plans when available.
How many data points Facebook has on the average non-Facebook user.
How many Facebook “Like” buttons there are on non-Facebook web pages.
How many Facebook “Share” buttons there are on non-Facebook web pages.
How many chunks of Facebook pixel code there are on non-Facebook web pages.
Whether Zuckerberg’s team can get back to the committee within 72 hours.
"
|
1,564 | 2,018 |
"Mark Zuckerberg Says It Will Take 3 Years to Fix Facebook | WIRED"
|
"https://www.wired.com/story/mark-zuckerberg-says-it-will-take-3-years-to-fix-facebook"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Business Mark Zuckerberg Says It Will Take 3 Years to Fix Facebook In an in-person interview, Mark Zuckerberg discusses F8 and Facebook's trust issues.
SAUL LOEB/Getty Images Mark Zuckerberg knew his keynote speech at F8 this year would not be like any other. His previous appearances at Facebook's annual developer conference were all about the new products and technology Facebook was announcing that day, and the vision he would share for future triumphs.
But in the wake of Cambridge Analytica, fake news, Russian election-tampering, incendiary hate speech and did we mention Cambridge Analytica, Facebook has dished out serial apologies and embarked on a steady march of product adjustments and transparency initiatives, a course that is nowhere near completion. Zuckerberg understood that at this F8, he could not give short shrift to the near-existential crisis his company is undergoing. But he also didn't want to ignore the main function of F8—refilling the pipeline of new products and visions.
"The hardest decision this year hasn't actually been investing so much in safety and security," he says. "I mean, that was obvious—there was no choice to not do that. The real question is how do we also find a path to move forward on all the other things that our community expects from us." Zuckerberg is telling me this as we meet offsite on the eve of F8, during his last-minute preparations for the event. We talked for almost an hour, discussing his keynote, some of the new products he's announcing, his feelings about his ten-hour congressional testimony , the question of whether the company censors conservative speech, the need to make Facebook more proactive in policing content, and why it will take three years to do it.
But first we talked about how he was going to thread a tiny needle on stage: rebuilding trust while also fulfilling the expectations of fans who want cool stuff and developers whose businesses depend on Facebook's continued evolution. "That's going to be what this whole conference is about," he says. "On the one hand the responsibilities around keeping people safe—the election integrity, fake news, data privacy and all those issues are just really key. And on the other hand, we also have a responsibility to our community to keep building the experiences that people expect from us. Part of the challenge of where we are is making sure that we take both seriously. F8 is going to be a balance of those two points." The mea culpas come first. After the initial revelation of the Cambridge Analytica episode, which exposed the data of 87 million users, there was a five-day period when Zuckerberg and his operating partner Sheryl Sandberg were harder to find than the Golden State Killer, a mistake that Zuckerberg acknowledges. "We were initially very slow to respond," he says. "We were trying to understand all of the internal details around what happened. And I think I got this calculation wrong where I should have just said something sooner even though I didn't have all the details. Since we dug in and learned all of it, I think we're doing the right thing. It's just that we should've done it sooner." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg In his keynote the plan is to acknowledge Facebook's problems and to introduce yet more features to address them. But he's also getting past the apologies. The dilemma he faces at F8 is a paradigm for his larger problem at Facebook.
"The question isn't, do we feel bad. Of course we feel bad. But what we owe the world is, 'Here's what we're going do to make sure that doesn't happen [again].'" Zuckerberg will then introduce one welcome new trust feature: the ability for users to wipe clean information that Facebook has gathered about them from their activities off Facebook, like web browsing. Zuckerberg compares this to cleaning the cookies out of one's browser, a form of digital hygiene that he occasionally practices himself. Apparently, this improvement was an indirect product of Zuckerberg's ten hours in the congressional hot seat last month. Zuckerberg tells me he had anticipated that the legislators' questions would largely focus on Cambridge Analytica and maybe the Russians, but instead they fired a broad range of questions at him, many of them involving the deep weeds of Facebook's operations.
"I figured other product questions that came up, I'd be able to answer, because I built our product," he says. That was overly optimistic. "One of my takeaways was that I actually felt like I didn't understand all the details [on things like] how we were using external data on our ad system, and I wasn't OK with that," he says. "On the plane ride back, I scheduled a meeting. I was like, 'I'm going to sit down with this team and learn exactly all this stuff that I didn't know.' " The result of that remedial education was an option for users to cut that information loose.
Zuckerberg has other takeaways from his hours on the Hill. "There were more questions about bias than I had expected. I think that that reflects a real concern that a lot of people have about the tech companies, which are predominantly based in Silicon Valley and Seattle and these extremely liberal places. That depth of concern that there might be some political bias really struck me. I really want Facebook to be a platform for all ideas." Watch for some ideas on that.
When I asked how he judged his performance in DC, he corrected me. "I didn't view it as a performance," he says. "I think the point is to try to get people the information they need to do their jobs." I mention that a number of the senators and representatives seemed less interested in hearing information than they were in the dulcet tones of their own voices. But Zuckerberg (savvy fellow) didn't take that bait. "Thinking about the ratio of how many people raised serious questions to people who just wanted to make a point, I came away feeling heartened about our democracy," he says.
Well, okay. I move on to developers, who are supposed to come to F8 shivering with excitement and leave with the fervor of empowered believers. Aren't they going to be worried about the restrictions put on them as Facebook tightens control of information after Cambridge Analytica? Certainly they weren't happy when Facebook suspended app reviews in March, essentially freezing their new products.
"I think there is concern, and it's clear that our priorities are making sure that people's data is secure," he says. "The reality is the vast majority of developers have good intent and are building good things. So I think if you're a good developer, it's annoying that app reviews got stopped, but you're not really worried long term about the direction of the platform." App reviews apparently will resume after F8. And Zuckerberg does think that the F8 announcements—including some involving Messenger, Instagram, and Oculus—will thrill developers.
Speaking of announcements, Zuckerberg brings up the most surprising one he plans to make—a new Facebook service called Dating. This Tinder-esque development allows users to create separate profiles to pursue romantic connections, with Facebook acting as an algorithmic yenta. In any other year, this type of service makes sense. But considering that the company is facing its biggest crisis ever because of its handling of personal data, doesn't it seem a little risky to be adding a new data set with some of the most personal information ever? At first, Zuckerberg answers the question by explaining that the new feature builds on the fact that people have always used Facebook for dating, and by noting the product's various protections. (Facebook, for instance, will not use that information in targeting ads.) So I change the subject and move on to other questions. But a couple of minutes later, he returns to the Dating discussion, clearly disturbed at my implication. "Obviously, you're asking this question," he says. "But do you think that this is a bad time to be talking about this?"
Before we wrap up I ask him—has this crisis made Facebook different? His answer is both no—the mission is the same—but, in a way, yes. "I really think the biggest shift is around being more proactive, around finding and preventing abuse. The big learning is that we need to take a broader view of our responsibility. It's not just about building tools and assuming that humans are on balance good, and so therefore the tools will be used for on balance good. It is no longer enough to give people tools to say what they want and then just let our community flag them and try to respond after the fact. We need to take a more active role in making sure that the tools aren't misused." Zuckerberg recognizes the difficulty of remaking his systems to proactively catch harmful content. "I think this is about a three-year transition to really build up the teams, because you can't just hire thirty thousand people overnight to go do something," he says. "You have to make sure that they're executing well and bring in the leadership and train them. And building up AI tools—that's not something that you could just snap your fingers on either." But Zuckerberg says the three-year journey is already well under way. "The good news is that we started it pretty early last year. So we're about a year in. I think by the end of this year we'll have turned the corner on a lot of it. We'll never be fully done. But I do really think that this represents a pretty major shift in the overall business model and operating model of the company." Meanwhile, judge for yourself whether Facebook is changing by what happens in Mark Zuckerberg's most unusual F8 keynote yet.
"
|
1,565 | 2,023 |
"Watch Amazon Now Considers Itself an AI Company | WIRED BizCon | WIRED"
|
"https://www.wired.com/video/watch/amazon-now-considers-itself-an-ai-company"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Amazon Now Considers Itself an AI Company | WIRED BizCon About Released on 06/07/2017 Do you think of Amazon as an Artificial Intelligence Company now? Oh, for sure.
I, you know, I-- Everything we do, and I say this to my teams all the time, If you're not thinking about machine learning in every aspect of what you're building on behalf of customers, we're probably making a mistake.
It is-- we're in a renaissance of computer science, and the things that we only thought were a glimmer of possibility just four or five years ago, now seem totally within reach.
You often don't get to live in times like this and participate in this kind of renaissance.
If any of you are in technology and not paying attention to this, then I think you should rethink it.
And we use it all around the company, from obvious things to not so obvious.
We grade strawberries at Amazon for our grocery business, because it's better than humans.
Our drones use machine learning.
Every layer of Alexa that you see, from the wake word to the voice she speaks, uses deep learning and machine learning throughout to make sure it's a delightful experience for customers.
[Nicholas] What do you do to strawberries again? We grade them.
So we run them down a camera looks at them and decides if they're a good strawberry or a bad strawberry.
A little Willy Wonka.
"
|
1,566 | 2,016 |
"Hacker Lexicon: Stingrays, the Spy Tool the Government Tried, and Failed, to Hide | WIRED"
|
"https://www.wired.com/2016/05/hacker-lexicon-stingrays-spy-tool-government-tried-failed-hide"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Kim Zetter Security Hacker Lexicon: Stingrays, the Spy Tool the Government Tried, and Failed, to Hide Getty Images Save this story Save Save this story Save Stingrays, a secretive law enforcement surveillance tool, are one of the most controversial technologies in the government's spy kit. But prosecutors and law enforcement agencies around the country have exerted such great effort to deceive courts and the public about stingrays that learning how and when the technology is used is difficult.
This week, the government even went so far as to assert in a court filing (.pdf) that articles published by WIRED and other media outlets that expose the deception "are full of unproven claims by defense attorneys and advocates [and] are not proper proof of anything." TL;DR: Stingrays impersonate a legitimate cell phone tower in order to trick mobile devices into connecting to them and revealing information about their user's location.
So what do we know? "Stingray" is the generic commercial term for a device otherwise known as an IMSI catcher. The stingray impersonates a legitimate cell tower to trick nearby mobile phones and other wireless communication devices, like air cards, into connecting to it and revealing their international mobile subscriber identity (IMSI) numbers. More importantly, though, the device also collects information that can point to a mobile device’s location.
By moving the stingray around a geographical area and gathering a wireless device’s signal strength from various locations in a neighborhood, authorities can pinpoint where the device is being used with more precision than with data obtained from a mobile network provider’s fixed tower location.
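The article does not describe any vendor's actual algorithm. Purely as an illustration of the idea above, combining signal-strength readings taken at several known spots to narrow down a transmitter's position, here is a minimal sketch; the log-distance path-loss model, the parameters, and the function names are all assumptions, not details of any real IMSI catcher.

```python
# Illustrative sketch only: estimating a transmitter's position from
# signal-strength readings taken at several known measurement points.
# The path-loss model and every parameter here are assumptions, not a
# description of any real IMSI-catcher product.

def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.7):
    """Convert received signal strength (dBm) into an approximate
    distance in meters using a log-distance path-loss model."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def estimate_position(readings):
    """Weighted-centroid estimate. Each reading is (x, y, rssi_dbm), taken
    from a different spot in the neighborhood; stronger signals (shorter
    inferred distances) pull the estimate harder."""
    total_w = wx = wy = 0.0
    for x, y, rssi in readings:
        w = 1.0 / max(rssi_to_distance(rssi), 1e-6)
        total_w += w
        wx += w * x
        wy += w * y
    return wx / total_w, wy / total_w

if __name__ == "__main__":
    # Three measurement spots on a local grid (meters) and the RSSI seen at each.
    readings = [(0, 0, -62.0), (120, 0, -70.0), (60, 90, -66.0)]
    print("estimated device position:", estimate_position(readings))
```

Real measurements are noisy and buildings distort them, so an operator would combine many more readings, but the weighted-centroid idea captures the gist of pinpointing a device from multiple vantage points.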
Although use of the spy technology goes back at least 20 years---the FBI used a primitive version of a stingray to track former hacker Kevin Mitnick in 1994---its use has grown in the last decade as mobile phones and devices have become ubiquitous. Today, stingrays are used by the military and CIA in conflict zones---to prevent adversaries from using a mobile phone to detonate roadside bombs, for example---as well as domestically by federal agencies like the FBI, DEA and US Marshals Service, and by local law enforcement agencies.
Stingrays can also capture call record data---such as the numbers being dialed from a phone---and some can record the content of phone calls, as well as jam phones to prevent them from being used. Domestic law enforcement agencies in the US, however, insist that the models of stingray they use don't collect the contents of communications.
The use of stingrays is highly controversial, in part because the devices don't just hook targeted phones---they entice any mobile phone or device in their vicinity to connect to them, as long as the phones are using the same cellular network as the targeted phone. Stingrays can also disrupt cellular voice and text service for any device that connects to them, since the devices are not connecting to a legitimate cell tower that will transmit their communication.
Some rogue towers will also attempt to intercept encrypted mobile communication by forcing a phone to downgrade from a 3G or 4G network connection to a 2G network---a less secure network that doesn’t authenticate cell towers to the phone and contains vulnerabilities that make it easier to decrypt secure communication. The IMSI catchers jam 3G and 4G signals to force the phone to use the less secure 2G network.
And stingrays aren't cheap. One device from the Harris Corporation, which sells a brand of IMSI catcher actually named Stingray, can cost more than $50,000. But this doesn't mean stingrays are beyond the reach of anyone but resource-rich law enforcement and intelligence agencies. In 2010 at the Def Con hacker conference in Las Vegas, a security researcher crafted a low-cost, home-brewed stingray for just $1,500 capable of intercepting traffic and disabling the encryption, showing just how easy it would be for anyone to use this technology to spy on calls.
Beyond the controversial ways stingray technology works, the secrecy and deception law enforcement agencies use to cloak their use of the devices is also troubling.
Law enforcement agencies around the country have routinely used the devices without obtaining a warrant from judges.
In cases where they did obtain a warrant, they often deceived judges about the nature of the technology they planned to use. Instead of telling judges that they intended to use a stingray or cell site simulator, they have often mischaracterized the technology, describing it as a pen register device instead. Pen registers record the numbers dialed from a specific phone number and are not, for this reason, considered invasive. Because stingrays, however, are used to track the location and movement of a device, civil liberties groups consider them to be much more invasive. They can, for example, be used to track a device inside a private residence.
In some cases, law enforcement agents have also deceived defense attorneys about their use of stingrays , saying they obtained knowledge of a suspect’s location from a “confidential source” rather than disclosing that the information was gleaned using a stingray.
Law enforcement agencies have also gone to great lengths to prevent the public from learning about their use of the technology. In Florida, for example, when the American Civil Liberties Union tried to obtain copies of documents from a local police department discussing their use of the technology, agents with the US Marshals Service swooped in at the last minute and seized the documents to prevent police from releasing them. Law enforcement agencies claim that public information about the technology will prompt criminals to devise methods to subvert or bypass the surveillance tool.
Indeed, there are already apps and tools available to help detect rogue cell towers like stingrays. The German firm GSMK's secure CryptoPhone, for example, has a firewall that can alert users to suspicious activity that may indicate when a stingray has connected to their phone or turned off the encryption their phone might be using.
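As a rough illustration of what such a detection tool might look for (not GSMK's actual implementation, whose internals are not public), here is a toy heuristic that flags the warning signs described above: a sudden downgrade to 2G or a radio link whose encryption has been switched off. The event fields are invented for the example; real baseband diagnostics expose this kind of information in vendor-specific ways.

```python
# Toy heuristic in the spirit of rogue-tower detectors: flag connection
# events that look like a forced downgrade or disabled encryption.
from dataclasses import dataclass

@dataclass
class CellEvent:
    cell_id: str          # identifier broadcast by the tower
    network_gen: str      # "2G", "3G", "4G", or "5G"
    cipher_enabled: bool  # whether the radio link is encrypted

def suspicious(prev, curr):
    """Return human-readable warnings for two consecutive connection events."""
    warnings = []
    if prev.network_gen in ("3G", "4G", "5G") and curr.network_gen == "2G":
        warnings.append("sudden downgrade to 2G, which does not authenticate towers")
    if prev.cipher_enabled and not curr.cipher_enabled:
        warnings.append("encryption switched off on the radio link")
    if curr.cell_id != prev.cell_id and curr.network_gen == "2G":
        warnings.append("previously unseen cell available only over 2G")
    return warnings

before = CellEvent(cell_id="310-410-1a2b", network_gen="4G", cipher_enabled=True)
after = CellEvent(cell_id="310-410-ffff", network_gen="2G", cipher_enabled=False)
for w in suspicious(before, after):
    print("ALERT:", w)
```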
Last year, the Justice Department announced a new policy for using stingrays that offers a little more transparency, but only a little. Under the policy, the FBI and any other federal agencies using stingrays will have to get a search warrant before deploying them. The policy forces prosecutors and investigators not only to obtain a warrant, but also to disclose to judges that the specific technology they plan to use is a stingray—which prevents them from deceiving judges and defense attorneys about the surveillance method they plan to use. Agents using the device also have to delete all data a stingray collects “as soon as” it has located the device it’s tracking.
The only problem is that the new policy does not cover local and regional law enforcement, who also use stingrays to track suspects.
That may change, however: A bill introduced last year by Rep. Jason Chaffetz (R-Utah) hopes to fix that loophole. The Cell-Site Simulator Act of 2015, also known as the Stingray Privacy Act, would force state and local law enforcement to obtain a warrant as well.
"
|
1,567 | 2,017 |
"What James Damore Got Wrong About Gender Bias in Computer Science | WIRED"
|
"https://www.wired.com/story/what-james-damore-got-wrong-about-gender-bias-in-computer-science"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons John Hennessy Maria Klawe David Patterson Science What James Damore Got Wrong About Gender Bias in Computer Science Hotlittlepotato Save this story Save Save this story Save In August Google employee James Damore made the news and even Wikipedia by publishing his speculation that female software engineers are underrepresented due to inherent biological differences. Although he admitted that implicit bias and explicit bias may exist, Damore wrote, “I’m simply stating that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership.” John Hennessy was the tenth president of Stanford University, founded successful startups, and serves on the board of Alphabet. Maria Klawe ( @MariaKlawe ) is the fifth president of Harvey Mudd College, served on Microsoft’s board, and is former president of the Association for Computing Machinery. David Patterson was chair of UC Berkeley's computer science department and was also formerly president of the Association for Computing Machinery.
It’s an ironic conjecture historically, since many early programmers were women.
We make four points in rebuttal.
First, we know implicit bias exists, and that most of us have some. Such bias also has significant effects on observed performance. Bias, often implicit , continues to limit women’s progress in scientific and engineering fields. And the more implicit gender bias a nation has, the worse its girls perform in science and math.
Hence the need for effective programs to overcome unconscious biases.
Second, both established research and common sense tell us that members of underrepresented groups are more easily discouraged because they face daily biases that others don’t. Coaching programs—like Stanford Women in Computer Science , which has helped increase the number of women CS majors dramatically—can compensate. While anyone can selectively pick papers to support one’s version of “the truth,” virtually all scientists who have studied these issues agree that implicit bias and stereotyping are significant barriers for members of underrepresented groups. After all, the more egalitarian the society, the smaller the sex differences in science and math performance.
When women feel secure and affirmed , they perform as well as men on math tests.
The three of us have tried to counteract these barriers as faculty and administrators, and we've seen women flourish under our tutelage.
Third, many labor studies predict a dramatic shortage of software engineers over the next five years, which will limit the growth of an industry that plays a vital role in our economy. To be competitive in this critical industry, employers must be able to draw from the entire US population, not only the one-third who are non-Hispanic white and Asian men. Moreover, a diverse workforce is correlated with successful institutions of all types: companies, universities, government, the military, and so on.
Fourth, while it's important to discuss these sensitive issues, hashing them out face-to-face allows discussion participants to see the impact of their words and to identify flaws in their reasoning before circulating their misconceptions widely. Unfortunately, electronic communication can lose the nuances and the quick feedback of live conversations, and can lead to text that's long-lived and demoralizes the underrepresented.
As one author whom Damore cited quips: “Using someone’s biological sex to essentialize an entire group of people’s personality is like surgically operating with an axe. Not precise enough to do much good, probably will cause a lot of harm.” Dispiriting words can also have the unintended side-effect of condoning discrimination based on gender, race, or ethnicity.
This controversy isn't political: Egalitarianism should be the American way. It's about fairness, civility, and common sense. A real meritocracy demands nothing less.
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.
"
|
1,568 | 2,017 |
"More Women Are Learning Computer Science! Now, About Those Jobs… | WIRED"
|
"https://www.wired.com/story/ap-computer-science-2017"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Davey Alba Business Computer Classes Are Diversifying! Now, About Those Jobs… Hotlittlepotato Save this story Save Save this story Save High-school girls are taking more Advanced Placement computer engineering exams than ever before, according to a new report from Code.org and the College Board. In 2017, largely thanks to a new test aimed at expanding the reach of engineering classes, female participation in these AP tests increased at a faster rate than young boys’ participation on the exam in 2017.
For women hoping to have careers in computer engineering, this kind of early training can make all the difference. The field of computer science is growing so fast it outpaces all other occupations in the US.
It's great work if you can get it. In fact, 70 percent of students who take this AP exam say they want to work in computer science. Trouble is, it's mostly white or Asian men who land these high-paying jobs.
Experts say to change that you've got to combat the so-called " pipeline problem ," educating women and people of color so they come out of high school and college with the right degrees to enter the field. Heartening numbers like this report are a good step in the right direction. But they also belie the fact that getting women and people of color into the pipeline is just the beginning. The real challenge is supporting these engineers once they enter the field—and actually hiring them in the first place.
Though the increases reported for women and people of color taking this exam should be celebrated, they are fairly modest gains in the scheme of things. This year, 135 percent more women took the AP Computer Science exam than last year. Much of that growth, however, is because the total number of students who took the AP Computer Science exam more than doubled on the whole to 111,262 students—spurred on by a new AP course aiming to broaden the reach of computer science and bring the subject to underprivileged communities in urban and rural areas. Code.org says participation from black and Latino students in the AP exam increased by 170 percent compared to one year ago—though that combines two groups together. It is possible the proportion of black students and of Latino students, taken separately, did not increase faster than the rate of boys who took the AP exam this year.
“Seeing these gains among female, black, and Hispanic students is a story of how we can bring opportunity to people who need it the most,” says Hadi Partovi, CEO and cofounder of Code.org.
Ten years ago, only 18 percent of computer science exam takers were women. This year that figure rose to 27 percent—slightly lower than the average proportion of women employed in the tech industry, which hovers at around 30 percent.
It’s the same for young people of color: for nearly a decade, the proportion of young POCs who took the AP Computer Science exam stalled at 12 to 13 percent. But in 2016, 15 percent of exam takers were young people of color—then that went up to 20 percent in 2017.
"I’m delighted to hear that more female, black, and Latino students are taking AP computer science," says Rachel Thomas, a deep learning researcher and advocate for diversity.
"I attended a very poor public high school in Texas, but I was incredibly lucky that they were offering AP computer science 17 years ago. My guidance counselor discouraged me from taking the course, and I’m proud of teenage me for standing my ground in wanting to take it," she says.
The excitement around the new AP course shows that if educators bring computer science to more people, a more diverse group of candidates will jump into the pool. And that will, in turn, help to supply the industry with computer science graduates and address the projected talent shortage for the tech industry in the years to come.
The pipeline problem, however, is far from the only thing keeping women and minorities out of engineering. Universities already graduate Latino, black, and female students at a much higher rate than tech companies hire them.
Women leave technology companies at twice the rate of men, according to a survey from the University of Wisconsin-Milwaukee. The trend is similar for people of color in tech. This is a culture problem, not a pipeline one.
“Most major tech companies are revolving doors in which women and people of color quit at similar rates to which they’re hired due to poor treatment, lack of advancement opportunities, and unfairness," says Thomas. "I think it is a total smoke screen when major tech companies celebrate Code.org's news while continuing to fail to address their own toxic environments." Worse yet, seeing improvements in diversity doesn't mean the trend will hold. In the 1960s and '70s, the number of women studying computer science was growing faster than the number of men.
And yet, after the 1984-1985 academic year, in which women accounted for nearly 37 percent of all computer science undergraduate students, the percentage flattened out, dropping to 14 percent by 2014.
"Getting women and people of color into the pipeline is one thing,” says Tracy Cross, a professor of educational psychology at The College of William and Mary’s Center for Gifted Education. “But if we aren’t keeping them in the field, that’s not enough.” A recent slew of sexual harassment stories pouring out of Silicon Valley shows the extremes of how toxic the field can be for women. But there are subtler ways, too, that the Valley can alienate people. “There are many forms of disrespect, devaluing, demeaning, and isolating behavior that occur in these male dominated fields, some of them by good intention, some of them by ill intention, and some of them unintentional,” says Denise Wilson, a professor of engineering who got her tech degree in the late '80s.
The VC and tech industries have efforts in the works to fix this culture problem, including drafting a decency pledge, a blacklist, and other public promises. With more women and people of color entering the pipeline, tech companies have more candidates to hire—and more candidates they must do right by.
"
|
1,569 | 2,016 |
"One Swede Will Kill Cash Forever—Unless His Foe Saves It From Extinction | WIRED"
|
"https://www.wired.com/2016/05/sweden-cashless-economy"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Mallory Pickett Business One Swede Will Kill Cash Forever—Unless His Foe Saves It From Extinction Sean Freeman Save this story Save Save this story Save Nothing is more ordinary than a Monday morning at a Swedish bank.
People go about their business quietly, with Scandinavian efficiency. The weather outside is, more likely than not, cold and gray. But on April 22, 2013, the scene at Stockholm’s Östermalmstorg branch of Skandinaviska Enskilda Banken got a jolt of color. At 10:30 am, a man in a black cap burst into the building. “This is a robbery!” he announced, using one arm to point a gun at the bankers and the other to hold out a cloth bag. “I want cash!” If the staff was alarmed, no one much showed it. Instead, the employees calmly informed the stranger that his demands could not be met. The bank, they explained, had no cash on the premises. None in the vaults, none at the tellers’ windows, none at all. When the robber looked confused, he was directed to a poster on the wall that proclaimed this a “cash-free” location. “It’s true,” the manager told him. “Sorry.” Crestfallen, the would-be thief lowered his gun and prepared to leave. Just before he stepped out, he turned to one of the tellers. “Where else can I go?” he asked.
His options, in fact, were fairly limited. What this man had somehow failed to notice was that his country is at the forefront of a global economic shift. As pads of paper are to the modern-day office, so cash is to the world of finance: increasingly unnecessary and vanishing from sight. Some countries are embracing this future faster than others. The United States is about halfway there, at least in one sense: According to the Federal Reserve, Americans use cash for 46 percent of their transactions, preferring for the rest the convenience of plastic, check, or the mobile payment apps on their smartphones. The explosion of digital finance platforms, from Square card readers to services like Venmo, Apple Pay, Google Wallet, and PayPal, has made spending as easy, fast, and pleasant as sending a text. To some this may seem unnerving, but even amid security concerns over data breaches and identity theft, a world without cash seems inevitable, if not imminent.
But Swedes exist in a kind of sped-up timeline, where tomorrow happens yesterday. They number so few—10 million, about half the size of Los Angeles—and their IT infrastructure is so sophisticated that the entire country can pilot-test new developments, new systems, new futures practically overnight. In the process, Sweden has become a small peninsular slice of society to come—much like San Francisco, though cleaner and even better connected. Stockholm just announced it will be among the world’s first cities with a 5G mobile network, and most of the country is on track to have ultra-high-speed Internet by 2020. But then, Sweden has been in the vanguard for quite some time. More than 350 years ago, it became the first European nation to print paper money. Now it could be the first to phase it out.
Unless cash defenders get their way, that is. Even in Sweden, change isn’t easy. Two powerful men stand at the crux of this massive transition, facing off in a national debate about the value of physical currency in the 21st century. This being Sweden, they are both named Björn.
Money, money, money / Must be funny / In the rich man’s world / Money, money, money / Always sunny / In the rich man’s world
In the 1976 music video for Abba’s “Money, Money, Money,” Björn Ulvaeus, who wrote the song with bandmate Benny Andersson, sports a shaggy haircut and a rhinestone-trimmed satin kimono. Forty years later, he’s a more soberly dressed multimillionaire with a house in Stockholm’s swankiest suburb, Djursholm, discovering that money might not be so funny after all. Meet Björn number one, the face of Sweden’s cash-free movement.
Ulvaeus’ radicalization dates back to the events of October 25, 2008, when burglars tried to break into his son Christian’s apartment. They failed, but Christian was spooked. He started glancing around corners in his own home, nervous they’d be back. A few weeks later, they were. While Christian was at work, two men came in through the balcony and stole his cameras and a designer jacket.
It wasn’t a devastating haul, but Christian was shaken enough that he decided to move. For his dad, the whole episode was an outrage. “I started thinking they took these things , and they went somewhere and they got bills, paper bills,” Ulvaeus says over lunch at a deli near his home. “What if there wasn’t any paper money?” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg So Ulvaeus, who retains influential pop idol status (at least in Sweden), began writing opinion pieces for newspapers and websites. His argument was simple: The criminal economy depends on the anonymous, untraceable nature of cash. Indeed, much of the cash in the world, maybe most of it, is simply unaccounted for. The World Bank estimates that about a third of the cash in most countries circulates underground, in black markets and through illegal employment. Take it away and thieves have no foolproof way to sell their stolen goods, drug dealers no way to hide their deals, and eventually the whole shadow economy collapses.
The more Ulvaeus thought about it, the more logical it seemed and the angrier he got. Attachment to cash was not just nostalgic but irrational, even dangerous. In 2011, Ulvaeus stopped using paper money completely—and hasn’t touched the stuff since. Two years later, when he cofounded the official Abba museum in Stockholm—a glittery establishment where visitors can insert themselves into music videos and shop for band-approved golden clogs—Ulvaeus insisted that no cash be accepted on the premises. On opening day, signs stood in the entrance and in the gift shop that read: I challenge anyone to come up with reasons to keep cash that outweigh the enormous benefits of getting rid of it. Imagine the worldwide suffering because of crime, from drug dealing to bicycle theft. Crime that requires cash. The Swedish krona is a small currency, used only in Sweden. This is the ideal place to start the biggest crime-preventing scheme ever. We could and should be the first cashless society in the world.—Björn Ulvaeus Ulvaeus’ crusade added just the right amount of star power to a larger, more coordinated effort already well under way. Several years earlier, the banks of Sweden had gotten together for the express purpose of weaning Swedes off bills and coins, under the banner of crime reduction. They began running a “public safety campaign” that encouraged people to buy things with cards instead of cash, lest they risk a curbside mugging; they also started emptying their own vaults of physical currency. The move had an intuitive appeal for most Swedes: As safe as the country is, it’s constantly looking for new ways to eliminate crime completely.
Then, around the time Ulvaeus opened his museum, the banks created an app called Swish. Swish is what really sets Sweden apart, even among its similarly low-cash, high tech Scandinavian neighbors, because it replaces cash in the last kind of transaction where it had been most convenient: person-to-person payments. A souped-up Venmo, Swish moves money instantaneously between users’ bank accounts, no processing time required. All you need is someone’s phone number. Since its launch, nearly half the population has started using the app; in December of last year, Swedes Swished some 10 million times. Even small businesses now accept Swish payments, as do some homeless people selling magazines on the streets of Stockholm (though if you don’t have the app, they usually carry portable card readers too).
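Swish's real architecture and API are not described in the article. The toy sketch below only illustrates the core idea as stated: accounts looked up by phone number, with transfers that settle immediately rather than waiting on a batch process. All names, numbers, and the class itself are invented for illustration.

```python
# Toy model of the core idea behind a Swish-style service: accounts are
# keyed by phone number and a transfer settles immediately, with no batch
# processing step. Illustration only; not Swish's actual design.

class P2PLedger:
    def __init__(self):
        self.balances = {}  # phone number -> balance in kronor

    def open_account(self, phone: str, balance: float = 0.0) -> None:
        self.balances[phone] = balance

    def swish(self, sender: str, recipient: str, amount: float) -> None:
        """Move money instantly between two accounts identified by phone number."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if recipient not in self.balances:
            raise KeyError("unknown recipient")
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[recipient] += amount

ledger = P2PLedger()
ledger.open_account("+46701234567", 500.0)
ledger.open_account("+46739876543", 0.0)
ledger.swish("+46701234567", "+46739876543", 120.0)  # e.g. splitting a lunch bill
print(ledger.balances)
```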
This new activism by the banks, along with the support of Ulvaeus, transformed Swedish society in just a few years. In 2010, 40 percent of Swedish retail transactions were made using cash; by 2014 that amount had fallen to about 20 percent. More than half of bank offices no longer deal in cash. To his claim that going cashless is the “biggest crime-preventing scheme ever,” Ulvaeus now has some statistics to back it up. The Swedish National Council for Crime Prevention counted only 23 bank robberies in 2014, down 70 percent from a decade earlier. In the same period, muggings dropped 10 percent. While it’s unclear the extent to which the transition to cashless has affected the rate of street crime, police point out that there’s a lot less incentive to rob a bus driver, cabbie, or shopkeeper if they don’t accept cash. Many workers say they now feel much safer.
Still, Ulvaeus is not satisfied. He’s annoyed there’s any cash left in Sweden at all. “Why would you pay for things with paper symbols that can be forged, that can be used in the black economy? It’s so unmodern ,” he says. “It’s so out of touch.” Unmodern: It’s one of Ulvaeus’ favorite, most biting insults. In some ways he has spent his whole life chasing modernity. In his earlier years, he wanted to be an engineer and taught himself to code on his Atari. Musical superstardom derailed those dreams, but Ulvaeus never abandoned that side of himself. “Pop music has always been driven by technology,” he says. “Every new sound, we were like, what are the Bee Gees doing there? We have to get that!” He’s never been someone who romanticizes the old way of doing things; retro is lame. He idolizes modern-day boundary-pushers such as Elon Musk and professional atheist Richard Dawkins.
Ulvaeus believes, with a conviction bordering on zealotry, that once the world sees Sweden and the rest of Scandinavia transform into a cashless, crimeless utopia, with tax revenues soaring, it will have no other choice but to follow suit. Take Greece, a country Ulvaeus has a special connection to (see: Mamma Mia! ). “My God, what good it would do that country to be cashless,” he says. Corruption, tax evasion, the black economy: They could vanish. “I know it’s going to happen. I’m impatient. I want to see it!” Lunch is over. Ulvaeus pays for his fish with a black elite MasterCard and drives off in his Tesla.
Overturning a centuries-old system so quickly is not without its challenges. Weird things start to happen at every level of society. To wit: Sweden held its first major cashless music festival in the summer of 2014, and organizers provided attendees with special high tech wristbands for in-festival purchases. On the first day, the electronic payment system crashed, leaving thousands of thirsty festivalgoers unable to buy beer and forcing some vendors, one newspaper reported, to use a rather unmodern form of payment: paper IOUs.
In a curious case of an “e-mugging” on the Swedish island of Gotland last July, the victim told police he’d been forced to Swish money to a thief. The accused was easily identified—Swish requires a name and phone number—but when police found him, he said the transaction was just a friendly payment for beer. The police didn’t have enough evidence to bring the man to court, so the alleged e-mugger walked free.
Over the holidays, two young Russian tourists tried to board a bus and pay on board. The driver refused to take their bills. “We took out all this kronor when we got here,” one of them said as she walked back to the station, dejected. “It’s all still with us.” In Överlida, a small town in western Sweden, a third-party ATM wasn’t hitting the minimum number of transactions, so the operator threatened to charge the bank extra fees. To prevent that from happening, bank employees stood next to the machine, paying 100 kronor (about $12) to anyone who would use it.
In Skoghall, a rural town north of Stockholm, the locals campaigned for an ATM to be installed at their grocery store after all the others in town were decommissioned. When they finally got one, they threw what may have been the world’s first ATM party. A live band performed a Swedish rendition of Monty Python’s “Always Look on the Bright Side of Life,” singing, “Weee haaave a neeeew ATM,” while people cheered and a man on the roof showered celebrants with candy.
Making a cash deposit is now cause for suspicion—even if you’re a priest. New anti-money-laundering laws force tellers to ask detailed questions about where the cash comes from, and some banks enforce strict limits on maximum deposits. This means tithes often leave churches with more cash than they can handle, especially after big hauls during Christmas and Easter.
The Swedish government’s supposedly impenetrable mainframe was infiltrated in 2012 by a hacker who stole citizens’ personal data and used it to gain access to private accounts at Nordea, Sweden’s largest bank. Gottfrid Svartholm Warg, Sweden’s most famous cybercriminal and a cofounder of Pirate Bay, was convicted of the crime and served a year in jail.
In 2014, a security researcher discovered a major flaw in Swish’s design that gave him instant access to any user’s transaction history. He alerted the banks, which fixed the bug right away. Nobody noticed—until the good hacker posted about it on his blog a few weeks later.
Crime is the single most important consideration in the global transition to cashless. That’s why Björn Ulvaeus is constantly talking about public safety. So you might think the former president of Interpol---the International Criminal Police Organization---would be on Ulvaeus’ side. He is not. Meet Björn number two, the leader of Kontantupproret, or Sweden’s Cash Uprising.
Björn Eriksson is a big man, with winged eyebrows and fluffy gray hair. When he sits down, he seems to do so reluctantly, as though he would much rather stay standing, or have a walking meeting in which he would walk very fast.
He and Ulvaeus share more than a first name. They were both born in 1945 and so turn 71 this year. But if time has radicalized Ulvaeus, it has hardened Eriksson.
In the early ’80s, when Eriksson was working in Swedish customs, he sniffed out a covert police operation to smuggle illegal bugging equipment through the country. The police commissioner resigned soon after, and Eriksson was tapped to take his place. He remained in law enforcement for the rest of his career, spending time as head of the Swedish police before his appointment to the Interpol presidency. Although he’s technically retired now, it never occurred to him to stop working. Of the many causes he’s still involved in, the “cash problem,” as he calls it, is where he invests most of his energy. He sees corruption, deceit, and security risks everywhere.
Consumers are not shaping Ulvaeus’ utopianist dream of a cashless future, Eriksson says; the banks and credit card companies are. After all, it was the banks that pushed people to use cards in the first place; and it was the banks, not some independent tech startup, that created Swish. The cost-benefit is obvious: Cards, with their hidden costs and fees, make banks money, whereas vaults of bills and coins do not. In fact, cash costs banks money. It must be handled, counted, transported, guarded, and counted again. As Niklas Arvidsson, an economist at Stockholm’s Royal Institute of Technology, puts it: “It’s clear the banks have a business incentive to reduce the use of cash.” Time is money, and money takes time.
But for the most part, Swedes are not a cynical people. They like technology and trust their government and institutions. As the numbers show, most of them have been perfectly happy to renounce cash. In fact, many hardly seem to notice what’s happening at all, so convenient the changeover has been. That’s what concerns Eriksson most: not so much the opportunism on the part of the banks, which seems inevitable, but the thoughtlessness with which so many Swedes seem to have flung themselves—as though to the merry tune of “Dancing Queen”—into an uncertain, possibly unsafe future.
So last year, Eriksson started Cash Uprising, an organization whose core mission is to save the paper krona from extinction. Its members are mostly people from rural areas, small-business owners, and retirees—the ones, in other words, for whom the sudden departure of cash has been inconvenient enough to force them to stop, take notice, and worry.
Camilla Kristensson and Lars-Erik Olsson live in Gärdslöv, a cluster of houses in southern Sweden too small to be called a village. (Olsson estimates the population “in town” is about 22.) Kristensson and Olsson are treasurer and president, respectively, of the Gärdslöv cultural council, which hosts events like mushroom foraging and charcoal making. After one such event last summer, Kristensson had about 20,000 kronor to deposit in the council’s account. But when she went to the local bank, a 10-minute drive away, it refused her cash for the first time ever. So she had to start driving 40 minutes into the city every month to deposit as much money as she was allowed, storing the remainder in various hiding spots. What makes her and Olsson angry isn’t just that the bank stopped taking their cash—it’s that it happened so quickly, without regard for how it would affect people like them. “They changed it almost overnight,” Olsson says. “We need time to change.”
Now Olsson’s council is part of Eriksson’s coalition of cash activists, who hold meetings, circulate petitions, and generally make noise about cash access. Ulvaeus, who has little patience for Eriksson’s views, describes the uprising as “Eriksson and a vanguard of geriatrics,” which is not altogether untrue, but they are some of the only voices speaking up for the consumer in this massive economic shift. The Swedish government has held several hearings on how to regulate the future of cash that were largely prompted by the work of Cash Uprising, and this September the parliament could vote on a bill that might require banks to provide cash services. (In a surprising victory for the movement, the head of Sweden’s central bank recently lent his support to such a proposal.) Eriksson does have another role in all this: He’s the chair of a major private-security lobby, an industry that a recent economic study called one of the “biggest losers” in a cash-free world. Among other things, security personnel guard vaults and protect cash. No physical cash equals no more jobs. Everyone has an interest, Eriksson says, but he believes his are at least aligned with those of the consumer.
Cash is security, he says. You can hold it in your hands; it can be protected. Spending it does not entail sharing personal information with credit card companies, app creators, or banks. It is true that bank robberies and muggings have declined in Sweden in the past few years. But according to crime statistics from the same national organizations, cases of fraud, usually involving identity theft, have more than doubled. And that stat is based only on cases reported to the police. Most banks won’t publicly share how often their customers’ card information is stolen or their systems breached.
It’s a good bet that the numbers are higher than consumers would like them to be. While Swedes swipe and Swish their money away, they open themselves up to new risks—cybercriminals who would either trick them into divulging sensitive information or exploit security flaws to steal their identity outright. “We see that cybercrime is becoming more aggressive,” says Ulrika Sundling, chief inspector of the Swedish police’s cyber-investigations unit. And she says consumers, generally unaware of the threat and therefore unmotivated to take extra steps to protect themselves, are the “weakest link.”
Eriksson has been hounding Sweden’s banks for years, convinced they’re hiding exorbitant sums of lost money for fear of bad publicity. He even bought single shares of stock in different banks so he could go to shareholder meetings and try to get his questions answered. “They don’t like me,” he says, grinning. For their part, the banks say they keep this information close for customer security. According to Gunilla Garpås, a senior business developer at Nordea and one of the creators of Swish, more transparency about cases of cyberattacks, fraud, and the banks’ defenses against them “would really be putting ourselves and our customers at risk.” Eriksson’s suspicions don’t stop at the banks. He believes MasterCard’s sponsorship of the Abba museum is the reason Ulvaeus is such a dedicated anticash advocate—but Ulvaeus wrote his first articles on the subject long before the museum opened. That is not to say MasterCard isn’t capitalizing on this moment, though. The card company also heavily sponsors iZettle, the most popular mobile card reader in Sweden.
Last October, American retailers made the switch to chip readers. (Well, they were supposed to, but the rollout has been uneven, and some stores still allow the old swipe-and-sign method.) You likely received new chip-enabled cards from your bank as a result. The upgrade came after a year of high-profile hacks: 56 million credit and debit card numbers stolen from Home Depot, 40 million from Target, another million from Neiman Marcus. The “new” chip technology—which has been standard in the European Union for more than a decade—is intended to make electronic transactions safer and more secure.
Then, this March, several major US banks announced a new digital payment platform called clearXchange. (A better name is reportedly in the works.) It is, finally, the US equivalent of Swish: a bank-backed service that lets people transfer money from their bank account directly into someone else’s.
These moves will help speed up the decline of cash use in the US, which hasn’t seen significant change in the past few years; electronic payments have hovered around 50 percent of all transactions. Americans tend to be less trusting of their institutions than their Swedish counterparts—and for good reason. Strict privacy laws safeguard Swedes from unwanted invasions, but consumer protections in the US are considerably flimsier. As Jay Stanley, a senior policy analyst at the ACLU’s Speech, Privacy, and Technology Project, puts it: “We have a hurricane of data, and we’re living in a shack.” Plus, many Americans simply don’t want banks or the government to know what they’re spending their money on (thus the appeal of cryptocurrency like bitcoin).
But don’t be fooled: Economists have been predicting the end of physical currency for decades, and Sweden’s transformation signals the time is nigh for the rest of the world. Americans may cling to their bills and coins with greater tenacity than Swedes do, but in that reluctance is an opportunity to proceed cautiously and look to Sweden for guidance.
Ultimately, Sweden’s two Björns want the same thing: a safer society. The world is going cashless, as Ulvaeus says, but consumers have to feel more secure in this new order, per Eriksson. They’re not so much rivals as complements.
Not that they see themselves that way, set as they are in their inflexible views. Offered the opportunity to get dinner with Eriksson and maybe hash out differences over schnapps, Ulvaeus thought about it for a few seconds before saying, “No, I don’t think that’s a good idea. I might get angry.” Which is probably just as well. Imagine them fighting over the check.
Mallory Pickett ( @mallorylpickett ) is a journalist based in Berkeley, California. This is her first feature for WIRED.
This story appears in the May 2016 issue.
"
|
1,570 | 2,016 |
"At Harvey Mudd College, the Ratio of Women in Computer Science Increased from 10% to 40% in 5 Years | WIRED"
|
"https://www.wired.com/2016/02/at-harvey-mudd-college-the-ratio-of-women-in-computer-science-increased-from-10-to-40-in-5-years"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Maria Klawe Backchannel At Harvey Mudd College, the Ratio of Women in Computer Science Increased from 10% to 40% in 5 Years Matthieu Bourel Save this story Save Save this story Save I’ve been passionate my whole life about getting more women into technology careers. After 40 years as an educator, here is my hypothesis: If we make learning and work environments interesting and supportive, if we build confidence and community among women, and if we demystify success, women will come, thrive and stay. At Harvey Mudd College, where I’ve served as president since 2006, we’ve been applying this theory and seeing results. Over a five-year period, we went from averaging 10 percent female CS majors to 40 percent; this year we are on track to graduate 45 percent women CS majors.
The CS faculty led the effort starting in 2006. They redesigned the intro computer science course to focus more on creative problem solving. Instead of traditional homework, the faculty assigned team-based projects so that students coded together. And, most important, they made the courses fun and emphasized ways in which CS can benefit society.
Remove the “macho effect”
In order to create a supportive environment for women and other students with no prior coding experience, faculty split the intro course into two sections: black and gold (Harvey Mudd’s colors) — with black for those who had prior programming experience and gold for those who didn’t. Instructors worked deliberately to reduce the intimidation factor in these courses by eliminating a common “macho” effect, where a few more experienced students intimidate others because they seem to know so much more.
Provide role models
To strengthen female students’ interest in CS we offer up to twenty-five first-year women, independent of planned major, the opportunity to attend the Grace Hopper Celebration of Women in Computing (Hopper), the largest conference focusing on women computer scientists. We also take a large number of upper-class female students. Hopper provides a welcoming culture, great talks about current technical topics and exposure to the breadth of jobs available in technology. Students meet a wide variety of role models — successful women working in tech and enjoying it. Eight first-year students attended the inaugural trip in 2006. Last year we took about 65 students.
Create early research opportunities
Faculty also created early summer research projects designed for students with minimal CS experience and encouraged female students to apply. A number of studies have shown that research experiences for undergraduate women increase retention and the likelihood they will attend graduate school. These projects allowed female students, after their first year in college, to apply their knowledge, boost their confidence and deepen their interest in CS.
Share what works
Many of these innovations are not difficult to implement. Harvey Mudd is working with the Anita Borg Institute and the National Center for Women in Information Technology on the BRAID initiative — building, recruiting, and inclusion for diversity — to support 15 U.S. universities in making similar changes to their CS departments. We have made our introductory curriculum available online and created a free MOOC for teachers who want to implement the course at both high school and college levels.
Demystify success In most places, particularly in industry, the path to success can be unclear. But if you are a member of the dominant group, which in the tech industry is largely white and Asian males, you are part of a network. You may not be aware that you have access to information that others outside that dominant group don’t have. No one is purposely withholding information; it’s just that those outside the group are often not part of the crowd that’s going out to play video games or drink beer, and there’s a natural flow of information that goes with social groups.
There are concrete steps that organizations and companies can take to open up the conversation about what people do to become successful.
For example, the Computing Research Association’s Committee on the Status of Women, CRA-W, has made great progress in getting more women into faculty positions by clarifying the pathways to success. With National Science Foundation funding, CRA-W created the Distributed Mentor Program (DMP), in which undergraduate female students were matched by research interest with female computer scientists for a summer research experience. A follow-up study showed that students who participated in the DMP program were twice as likely to go on to get a PhD. CRA-W also created a community of PhD cohorts, in which each year they connect 250 starting female PhD students and bring them together regularly to network and gain advice. CRA-W holds workshops for the various stages of career building: how to get a career started and how to achieve early success in academia; how to get tenure; and how to achieve promotion to full professor. At every level, CRA-W works to demystify the process of achieving success, and its programs are helping to increase the number of female CS faculty in the U.S.
We need more initiatives like this in academia and industry. There are many women who do well in technology-related areas — computer science, physics, math, engineering. And not only women; African-Americans, Hispanic students and other groups underrepresented in STEM. These students are talented and have worked hard; yet they often enter career environments that are, at the very least, unsupportive and, at the worst, genuinely hostile. Everyone needs to work on creating learning and work environments that are interesting and supportive, building confidence and community, and demystifying success. We are not done. And if anyone tells you that you can’t change these things, or it’s too hard to change them, don’t believe it. We can.
This article is adapted from a speech given at the University of California at Berkeley in January 2016.
"
|
1,571 | 2,016 |
"Watch Robots & Us: A Brief History of Our Robotic Future | WIRED"
|
"https://www.wired.com/video/watch/robots-us-a-brief-history-of-our-robotic-future"
|
"Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Robots & Us: A Brief History of Our Robotic Future About Released on 04/13/2017 (whirring) [Narrator] We live in an age of self-driving vehicles, robots that work alongside us, and we rely on seemingly omniscient digital systems.
[Computer] Tomorrow in San Francisco it'll be sunny.
[Narrator] Advances in algorithms, sensors and automation technology stand to upend nearly every aspect of modern life from work.
Robots are coming out of their cages.
[Narrator] To healthcare.
[Computer] Pigmented lesion benign.
[Narrator] And even how we think of ourselves.
As a tool artificial intelligence might extend human skills or it might make them obsolete.
Your kitchen knife can be used to make great dinners.
It can be used to kill somebody.
And it's up to you to make that choice.
AI is a tool just like this except it's a mental tool.
[Narrator] So, how did we get here and where are we going? (upbeat music) As early as the 1940s the first modern automation began to show up in factories.
Computers were still decades away from becoming personal but by the late 50s and early 60s computer scientists were already trying to figure out the big question.
Will robots make our lives better or will they replace us entirely? On one side was John McCarthy with artificial intelligence, or AI.
You know, it was roughly the study of a set of technologies that would sort of mimic or replicate human capabilities, whether they were intellectual or physical.
[Narrator] On the other side was Doug Engelbart's idea of intelligence augmentation, or IA.
And that sort of led to the things that would become the internet and personal computing.
So, you have these two philosophies, and it's interesting: they sort of form both a dichotomy and a paradox.
If you extend the human in an IA sense, intelligence augmentation, you need fewer humans, or you can just replace them outright.
[Narrator] Robots and thinking machines have long been a science fiction fantasy.
But by the 1960s they were actually becoming a reality.
[Narrator] At SRI we are experimenting with a mobile robot.
We call him Shakey.
[Narrator] A wobbly, incredibly slow reality.
The first real effort to build an autonomous machine that could move and reason and act in its environment was Shakey which was a project proposed by a physicist whose name was Charlie Rosen at SRI in 1966.
He persuaded the Pentagon by telling them that they could work on a prototype of something that might do reconnaissance or be a guard.
At a certain point they asked him how many guns it might carry and he said well, two, three, how many do you need? [Narrator] While Shakey never carried guns it did mark the beginning of a new era of computer science.
It was important because it was a platform on which some of the algorithms that would later be used by self-driving cars, the kinds of things that you use in your smartphone, there was an algorithm called A Star, that was a navigation algorithm, is the sort of granddaddy of the way we get around with our smartphones, and I think importantly too that the first work on speech recognition that was significant was done in Shakey because they were looking for some way to interact with the machine.
[Narrator] Since the days of Shakey computers have advanced remarkably in their ability to think for themselves.
Advanced AI has beaten humans at their own games, from chess to Jeopardy.
(clapping) Perhaps most tangibly of all cars began driving themselves.
12 years ago the idea that a computer could drive a car was completely unthinkable.
People felt something as intuitive and as hard to even explain as driving a car was reserved for the human race.
[Narrator] Then the federal defense research agency, DARPA, put on a self-driving car challenge.
Stanford's team, led by Thrun, won by building Stanley, a robot car that drove itself 132 miles across the desert.
Our secret ingredient was AI, it was machine learning.
We actually trained the robot to do the right thing.
Back in the day we trained it how to vary its speed, where to steer a steering wheel and so on.
[Narrator] Since then AI and advanced automation have exploded.
Dozens of companies are now testing self-driving vehicles, most of us carry around smartphones loaded with AI-powered tools, and last year DeepMind's AlphaGo, a neural network, beat us at our hardest game, Go.
That kind of proves to me that basically everything can be done.
Whatever you say can be done wait a little and it can be done.
[Narrator] AI and automation promise faster and safer solutions to humanity's problems.
I see this world where all these basic things are so affordable and so good to us that they can free our minds to develop a humanity that's completely unimaginable today.
[Narrator] But others warn of a jobless future.
It's not just about factories, and when it is factories, they're becoming far more advanced, but it's also, you know, white-collar things, it's jobs done by journalists and radiologists.
[Narrator] While there have been advances there are still limitations.
Some of the most cutting-edge autonomous robots can barely do what a human toddler can do.
This is McCarthy's paradox. John McCarthy, who was the person who coined the term AI, noted this originally.
He liked to put his hand into his pocket and pull out a dime, and you know, that's something we do without any thought.
It defies the most sophisticated robotic arm to this day.
[Narrator] But what AI and automation may do could forever change the world and our place in it.
For the next five episodes of Robots and Us we'll be exploring how these technologies could impact everything, from how we work or get around, to how we take care of our bodies.
[Computer] How did it go with your meds today? [Narrator] To how we think of ourselves as humans.
(upbeat electronic music)
"
|
1,572 | 2,016 |
"Google, Facebook, and Microsoft Team Up to Keep AI From Getting Out of Hand | WIRED"
|
"https://www.wired.com/2016/09/google-facebook-microsoft-tackle-ethics-ai"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Klint Finley Business Tech Giants Team Up to Keep AI From Getting Out of Hand Getty Images Save this story Save Save this story Save Let's face it: artificial intelligence is scary. After decades of dystopian science fiction novels and movies where sentient machines end up turning on humanity, we can't help but worry as real world AI continues to improve at such a rapid rate. Sure, that danger is probably decades away if it's even a real danger at all. But there are many more immediate concerns. Will automated robots cost us jobs? Will online face recognition destroy our privacy? Will self-driving cars mess with moral decision making? The good news is that many of the tech giants behind the new wave of AI are well aware that it scares people---and that these fears must be addressed. That's why Amazon, Facebook, Google's DeepMind division, IBM, and Microsoft have founded a new organization called the Partnership on Artificial Intelligence to Benefit People and Society.
"Every new technology brings transformation, and transformation sometimes also causes fear in people who don't understand the transformation," Facebook's director of AI Yann LeCun said this morning during a press briefing dedicated to the new project. "One of the purposes of this group is really to explain and communicate the capabilities of AI, specifically the dangers and the basic ethical questions." If all that sounds familiar, that's because Tesla and Space X CEO Elon Musk had been harping on this issue for years, and last December, he and others founded a an organization, OpenAI , that aims to address many of the same fears. But OpenAI is fundamentally a R&D outfit. The Partnership for AI is something different. It's a consortium---open to anyone---that seeks to facilitate a much wider dialogue about the nature, purpose, and consequences of artificial intelligence.
According to LeCun, the group will operate in three fundamental ways. It will foster communication among those who build AI. It will rope in additional opinions from academia and civil society---people with a wider perspective on how AI will affect society as a whole. And it will inform the public on the progress of AI. That may include educating lawmakers, but the organization says it will not lobby the government.
Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We've already seen a chatbot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people, and a system that rates the risk of someone committing a crime appear to be biased against black people.
If a more diverse set of eyes is looking at AI before it reaches the public, the thinking goes, these kinds of things can be avoided.
The rub is that, even if this group can agree on a set of ethical principles--something that will be hard to do in a large group with many stakeholders---it won't really have a way to ensure those ideals are put into practice. Although one of the organization's tenets is "Opposing development and use of AI technologies that would violate international conventions or human rights," Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.
In other words, if one of the member organizations decides to do something blatantly unethical, there's not really anything the group can do to stop them. Rather, the group will focus on gathering input from the public, sharing its work, and establishing best practices.
Just bringing people together isn't really enough to solve the problems that AI raises, says Damien Williams, a philosophy instructor at Kennesaw State University who specializes in the ethics of non-human consciousness. Academic fields like philosophy have diversity problems of their own. So many different opinions abound. One enormous challenge, he says, is that the group will need to continually reassess its thinking, rather than settling on a static list of ethics and standards that doesn't change or evolve.
Williams is encouraged that tech giants like Facebook and Google are even asking questions about ethics and bias in AI. Ideally, the group will help establish new standards for thinking about artificial intelligence, big data, and algorithms that can weed out harmful assumptions and biases. But that's a mammoth task. As co-chair Eric Horvitz from Microsoft Research put it, the hard work begins now.
"
|
1,573 | 2,021 |
"How to poison the data that Big Tech uses to surveil you | MIT Technology Review"
|
"https://www.technologyreview.com/2021/03/05/1020376/resist-big-tech-surveillance-data"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How to poison the data that Big Tech uses to surveil you Algorithms are meaningless without good data. The public can exploit that to demand change.
By Karen Hao
Every day, your life leaves a trail of digital breadcrumbs that tech giants use to track you. You send an email, order some food, stream a show. They get back valuable packets of data to build up their understanding of your preferences. That data is fed into machine-learning algorithms to target you with ads and recommendations. Google cashes your data in for over $120 billion a year of ad revenue.
Increasingly, we can no longer opt out of this arrangement. In 2019 Kashmir Hill, then a reporter for Gizmodo, famously tried to cut five major tech giants out of her life.
She spent six weeks being miserable, struggling to perform basic digital functions. The tech giants, meanwhile, didn’t even feel an itch.
Now researchers at Northwestern University are suggesting new ways to redress this power imbalance by treating our collective data as a bargaining chip. Tech giants may have fancy algorithms at their disposal, but they are meaningless without enough of the right data to train on.
In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage: Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
Data poisoning, which involves contributing meaningless or harmful data.
AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.
There may have already been a few examples of this. In January, millions of users deleted their WhatsApp accounts and moved to competitors like Signal and Telegram after Facebook announced that it would begin sharing WhatsApp data with the rest of the company. The exodus caused Facebook to delay its policy changes.
Just this week, Google also announced that it would stop tracking individuals across the web and targeting ads at them. While it’s unclear whether this is a real change or just a rebranding, says Vincent, it’s possible that the increased use of tools like AdNauseam contributed to that decision by degrading the effectiveness of the company’s algorithms. (Of course, it’s ultimately hard to tell. “The only person who really knows how effectively a data leverage movement impacted a system is the tech company,” he says.) Vincent and Li think these campaigns can complement strategies such as policy advocacy and worker organizing in the movement to resist Big Tech.
“It’s exciting to see this kind of work,” says Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics, who was not involved in the research. “It was really interesting to see them thinking about the collective or holistic view: we can mess with the well and make demands with that threat, because it is our data and it all goes into this well together.”
There is still work to be done to make these campaigns more widespread. Computer scientists could play an important role in making more tools like AdNauseam, for example, which would help lower the barrier to participating in such tactics. Policymakers could help too. Data strikes are most effective when bolstered by strong data privacy laws, such as the European Union’s General Data Protection Regulation (GDPR), which gives consumers the right to request the deletion of their data. Without such regulation, it’s harder to guarantee that a tech company will give you the option to scrub your digital records, even if you remove your account.
And some questions remain to be answered. How many people does a data strike need to damage a company’s algorithm? And what kind of data would be most effective in poisoning a particular system? In a simulation involving a movie recommendation algorithm, for example, the researchers found that if 30% of users went on strike, it could cut the system’s accuracy by 50%. But every machine-learning system is different, and companies constantly update them. The researchers hope that more people in the machine-learning community can run similar simulations of different companies’ systems and identify their vulnerabilities.
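The shape of such an experiment is easy to sketch, even though the researchers' models and data are not reproduced here. The toy below is an illustrative assumption from top to bottom (synthetic low-rank ratings, a crude user-based k-nearest-neighbor recommender, arbitrary sizes): it measures prediction error once with everyone's data and once after 30 percent of users withdraw theirs.

```python
# Hedged sketch: how a "data strike" experiment on a recommender can be set up.
# Everything here (data model, k-NN predictor, sizes) is an assumption for
# illustration; it is not the Northwestern paper's code or results.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 200, 100, 6

# Synthetic "true" preferences from a low-rank model, observed sparsely.
U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))
true_ratings = U @ V.T + 0.3 * rng.normal(size=(n_users, n_items))
observed = rng.random((n_users, n_items)) < 0.3                  # ~30% of cells rated
test_mask = observed & (rng.random((n_users, n_items)) < 0.25)   # hold out ~25% of those
train_mask = observed & ~test_mask

def predict(ratings, mask, user, item, n_neighbors=15):
    """User-based k-NN: average the item's rating over the most similar raters."""
    raters = np.where(mask[:, item])[0]
    raters = raters[raters != user]
    if raters.size == 0:
        return 0.0
    common = mask[user] & mask[raters]            # items co-rated with each candidate
    sims = np.array([ratings[user, common[j]] @ ratings[r, common[j]]
                     for j, r in enumerate(raters)])
    top = raters[np.argsort(-sims)[:n_neighbors]]
    return ratings[top, item].mean()

def rmse(ratings, mask, eval_users):
    errs = [predict(ratings, mask, u, i) - true_ratings[u, i]
            for u in eval_users for i in np.where(test_mask[u])[0]]
    return float(np.sqrt(np.mean(np.square(errs))))

all_users = np.arange(n_users)
full_ratings = np.where(train_mask, true_ratings, 0.0)
baseline = rmse(full_ratings, train_mask, all_users)

# Data strike: 30% of users delete their data; evaluate on the users who remain.
strikers = rng.choice(n_users, size=int(0.3 * n_users), replace=False)
remaining = np.setdiff1d(all_users, strikers)
struck_mask = train_mask.copy()
struck_mask[strikers] = False
struck_ratings = np.where(struck_mask, true_ratings, 0.0)
after_strike = rmse(struck_ratings, struck_mask, remaining)

print(f"RMSE with everyone's data:    {baseline:.3f}")
print(f"RMSE after a 30% data strike: {after_strike:.3f}")
```

Whether the degradation turns out mild or severe depends on how sparse the data is and how heavily the model leans on its neighbors, which is the researchers' broader point: each system has to be probed to find out where it is vulnerable.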
Alkhatib suggests that scholars should do more research on how to inspire collective data action as well. “Collective action is really hard,” he says. “Getting people to follow through on ongoing action is one challenge. And then there’s the challenge of how do you keep a group of people who are very transient—in this case it might be people who are using a search engine for five seconds—to see themselves as part of a community that actually has longevity?” These tactics might also have downstream consequences that need careful examination, he adds. Could data poisoning end up just adding more work for content moderators and other people tasked with cleaning and labeling the companies’ training data? But overall, Vincent, Li, and Alkhatib are optimistic that data leverage could turn into a persuasive tool to shape how tech giants treat our data and our privacy. “AI systems are dependent on data. It’s just a fact about how they work,” Vincent says. “Ultimately, that is a way the public can gain power.”
"
|
1,574 | 2,019 |
"When algorithms mess up, the nearest human gets the blame | MIT Technology Review"
|
"https://www.technologyreview.com/2019/05/28/65748/ai-algorithms-liability-human-blame"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts When algorithms mess up, the nearest human gets the blame By Karen Hao archive page An image showing the aftermath of a self-driving car accident, with an uber vehicle on its side Tempe Police Department Earlier this month, Bloomberg published an article about an unfolding lawsuit over investments lost by an algorithm. A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform. Without a legal framework to sue the technology, he placed the blame on the nearest human: the man who sold it to him.
It’s the first known case over automated investment losses, but not the first involving the liability of algorithms. In March of 2018, a self-driving Uber struck and killed a pedestrian in Tempe, Arizona, sending another case to court. A year later, Uber was exonerated of all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.
Both cases tackle one of the central questions we face as automated systems trickle into every aspect of society: Who or what deserves the blame when an algorithm causes harm? Who or what actually gets the blame is a different yet equally important question.
Madeleine Clare Elish, a researcher at Data & Society and a cultural anthropologist by training, has spent the last few years studying the latter question to see how it can help answer the former. To do so, she has looked back at historical case studies. While modern AI systems haven’t been around for long, the questions surrounding their liability are not new.
The self-driving Uber crash parallels the 2009 crash of Air France flight 447, for example, and a look at how we treated liability then offers clues for what we might do now. In that tragic accident, the plane crashed into the Atlantic Ocean en route from Brazil to France, killing all 228 people on board. The plane’s automated system was designed to be completely “foolproof,” capable of handling nearly all scenarios except for the rare edge cases when it needed a human pilot to take over. In that sense, the pilots were much like today’s safety drivers for self-driving cars—meant to passively monitor the flight the vast majority of the time but leap into action during extreme scenarios.
What happened the night of the crash is, at this point, a well-known story. About an hour and a half into the flight, the plane’s air speed sensors stopped working because of ice formation. After the autopilot system transferred control back to the pilots, confusion and miscommunication led the plane to stall. While one of the pilots attempted to reverse the stall by pointing the plane’s nose down, the other, likely in a panic, raised the nose to continue climbing. The system was designed for one pilot to be in control at all times, however, and didn’t provide any signals or haptic feedback to indicate which one was actually in control and what the other was doing. Ultimately, the plane climbed to an angle so steep that the system deemed it invalid and stopped providing feedback entirely. The pilots, flying completely blind, continued to fumble until the plane plunged into the sea.
In a recent paper , Elish examined the aftermath of the tragedy and identified an important pattern in the way the public came to understand what happened. While a federal investigation of the incident concluded that a mix of poor systems design and insufficient pilot training had caused the catastrophic failure, the public quickly latched onto a narrative that placed the sole blame on the latter. Media portrayals, in particular, perpetuated the belief that the sophisticated autopilot system bore no fault in the matter despite significant human-factors research demonstrating that humans have always been rather inept at leaping into emergency situations at the last minute with a level head and clear mind.
In other case studies, Elish found the same pattern to hold true: even in a highly automated system where humans have limited control of its behavior, they still bear most of the blame for its failures. Elish calls this phenomenon a “moral crumple zone.” “While the crumple zone in a car is meant to protect the human driver,” she writes in her paper, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator.” Humans act like a “liability sponge,” she says, absorbing all legal and moral responsibility in algorithmic accidents no matter how little or unintentionally they are involved.
This pattern offers important insight into the troubling way we speak about the liability of modern AI systems. In the immediate aftermath of the Uber accident, headlines pointed fingers at Uber, but less than a few days later, the narrative shifted to focus on the distraction of the driver.
“We need to start asking who bears the risk of [tech companies’] technological experiments,” says Elish. Safety drivers and other human operators often have little power or influence over the design of the technology platforms they interact with. Yet in the current regulatory vacuum, they will continue to pay the steepest cost.
Regulators should also have more nuanced conversations about what kind of framework would help distribute liability fairly. “They need to think carefully about regulating sociotechnical systems and not just algorithmic black boxes,” Elish says. In other words, they should consider whether the system’s design works within the context it’s operating in and whether it sets up human operators along the way for failure or success. Self-driving cars, for example, should be regulated in a way that factors in whether the role safety drivers are being asked to play is reasonable.
“At stake in the concept of the moral crumple zone is not only how accountability may be distributed in any robotic or autonomous system,” she writes, “but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.”
"
|
1,575 | 2,019 |
"Hey Google, sorry you lost your ethics council, so we made one for you | MIT Technology Review"
|
"https://www.technologyreview.com/2019/04/06/65905/google-cancels-ateac-ai-ethics-council-what-next"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Hey Google, sorry you lost your ethics council, so we made one for you By Bobbie Johnson archive page Gideon Lichfield archive page Well, that didn’t take long. After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC—a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views , and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing.
How did things go so wrong? And can Google put them right? We got a dozen experts in AI, technology, and ethics to tell us where the company lost its way and what it might do next. If these people had been on ATEAC, the story might have had a different outcome.
"Be transparent and specific about the roles and responsibilities ethics boards have" Rashida Richardson, director of policy research at the AI Now Institute "We have no insight into whether ethics boards are actually a moral compass or just another rubber stamp" In theory, ethics boards could be a great benefit when it comes to making sure AI products are safe and not discriminatory. But in order for ethics boards to have any meaningful impact, they must be publicly accountable and have real oversight authority.
That means tech companies should be willing to share the criteria they’re using to select who gets to sit on these ethics boards. They should also be transparent and specific about the roles and responsibilities their ethics boards have so that the public can assess their efficacy. Otherwise, we have no insight into whether ethics boards are actually a moral compass or just another rubber stamp. Given the global influence and responsibility of large AI companies, this level of transparency and accountability is essential.
“Consider what it actually means to govern technology effectively and justly” Jake Metcalf, technology ethics researcher at Data & Society The ATEAC hullabaloo shows us just how fraught and contentious this new age of tech ethics will likely be. Google clearly misread the room in this case. Politically marginal populations that are subject to the classificatory whims of AI/ML technologies are likely to experience the worse ethical harms from automated decision making. Google favoring Kay Coles James for “viewpoint diversity” over her open hatred of transgendered people shows that they are not adequately considering what it actually means to govern technology effectively and justly.
"Ethics means two different things that can be contradictory in practice. Companies are amenable to the former and terrified of the latter" It’s tricky for companies because ethics means two different things that can be contradictory in practice: it is both the daily work of understanding and mitigating consequences (such as running a bias detection tool or hosting a deliberative design meeting), and the judgment about how society can be ordered most justly (such as whether disparate harms to marginalized communities mean a product line should be spiked). Corporations are amenable to the former, and terrified of the latter. But if AI ethics isn’t about preventing automated abuse, blocking the transfer of dangerous technologies to autocratic governments, or banning the automation of state violence, then it’s hard to know what tech companies think it is other than empty gestures. Underneath the nice new ethics report tool that is copacetic with the company’s KPI metrics is a genuine concern that lives are on the line. Holding those in your head all at once is a challenge for companies bureaucratically and for ethicists invested in seeing more just technologies win out.
“First acknowledge the elephant in the room: Google's AI principles” Evan Selinger, professor of philosophy at Rochester Institute of Technology Google put the kibosh on ATEAC without first acknowledging the elephant in the room: the AI principles that CEO Sundar Pichai articulated over the summer. Leading academics, folks at civil society organizations, and senior employees at tech companies have consistently told me that while the principles look good on paper, they are flexible enough to be interpreted in ways that will spare Google from needing to compromise any long-term growth strategies—not least because the enforcement mechanisms for violating the principles aren’t well-defined, and, in the end, the entire enterprise remains a self-regulatory endeavor.
That said, it would certainly help to make leadership more accountable to an ethics board if the group were (a) properly constituted; (b) given clear and robust institutional powers (rather than just being there to offer advice); and (c) also, itself, be held to transparent accountability standards to ensure it doesn’t become a cog in a rationalizing, ethics-washing machine.
“Change the people in charge of putting together these groups” Ellen Pao, founder at Project Include This failed effort shows exactly why Google needs better advisors. But perhaps they also need to change the people in charge of putting together these groups—and perhaps their internal teams should be doing this work as well. There were several problems with the outcome as we've all seen, but also problems with the process. When you haven't communicated to the whole group about who they will be working with, that's a huge mistake. Bringing people who are more reflective of the world we live in should have happened internally before trying to put together an external group.
Side note, people should be examining the groups they're joining, the conference panels they're speaking at, and their teams before they commit so they know what they're signing up for. It's amazing how much you can influence them and how you can change the makeup of a group just by asking.
“Empower antagonism—not these friendly in-house partnerships and handholding efforts” Meg Leta Jones, assistant professor in Communication, Culture & Technology at Georgetown University Ethical boards are nobody's day job, and only offer a possibility for high-level infrequent conversations that at best provide insight and at worst cover. If we want to establish trust in institutions including technologies, tech companies, media, and government, our current political culture demands antagonism—not these friendly in-house partnerships and handholding efforts. Empowering antagonists and supporting antagonism may more appropriately and effectively meet the goals of "ethical AI." "Ethics boards at best provide insight and at worst, cover" “Look inward and empower employees who stand in solidarity with vulnerable groups” Anna Lauren Hoffmann, Assistant Professor with The Information School at the University of Washington Google’s failed ATEAC board makes clear that “AI ethics” is not just about how we conceive of, develop, and implement AI technologies—it’s also about how we “do” ethics. Lived vulnerabilities, distributions of power and influence, and whose voices get elevated are all integral considerations when pursuing ethics in the real world. To that end, the ATEAC debacle and other instances of pushback (for example, against Project Maven , Dragonfly , and sexual harassment policies ) make clear that Google already has a tremendous resource in many of its own employees. While we also need meaningful regulation and external oversight, the company should look inward and empower those already-marginalized employees ready to organize and stand in solidarity with vulnerable groups to tackle pervasive problems of transphobia, racism, xenophobia, and hate.
“A board can't just be 'some important people we know.' You need actual ethicists" Patrick Lin, director of the Ethics + Emerging Sciences Group at Cal Poly In the words of Aaliyah, I think the next step for Google is to dust yourself off and try again. But they need to be more thoughtful about who they put on the board—it can't just be a "let's ask some important people we know" list, as version 1.0 of the council seemed to have been. First, if there's a sincere interest in getting ethical guidance, then you need actual ethicists—experts who have professional training in theoretical and applied ethics. Otherwise, it would be a rejection of the value of expertise, which we're already seeing way too much of these days, for example, when it comes to basic science.
"Imagine if the company wanted to convene an AI law council, but there was only one lawyer on it" Imagine if the company wanted to convene an AI law council, but there was only one lawyer on it (just as there was only one philosopher on the AI ethics council v1.0). That would raise serious red flags. It's not enough for someone to work on issues of legal importance—tons of people do that, including me, and they can well complement the expert opinion of legal scholars and lawyers. But for that council to be truly effective, it must include actual domain experts at its core.
“The last few weeks showed that direct organizing works” Os Keyes, a PhD student in Data Ecologies Lab at the University of Washington To be honest, I have no advice for Google. Google is doing precisely what corporate entities in our society are meant to do; working for political (and so regulatory, and so financial) advantage without letting a trace of morality cut into their quarterly results or strategic plan. My advice is for everyone but Google. For people outside Google: phone your representatives. Ask what they're doing about AI regulation. Ask what they're doing about lobbying controls. Ask what they're doing about corporate regulation. For people in academia: phone your instructors. Ask what they're doing about teaching ethics students that ethics is only important if it is applied, and lived. For people inside Google: phone the people outside and ask what they need from you. The events of the last few weeks showed that direct organizing works; solidarity works.
“Four meetings a year are not likely to have an impact. We need agile ethics input” Irina Raicu, director of the internet ethics program at Santa Clara University I think this was a great missed opportunity. It left me wondering who, within Google, was involved in the decision-making about whom to invite. (That decision, in itself, required diverse input.) But this speaks to the broader problem here: the fact that Google made the announcement about the creation of the board without any explanation of their criteria for selecting the participants. There was also very little discussion of their reasons for creating the board, what they hoped the board's impact would be, etc. Had they provided more context, the ensuing discussion might have been different.
There are other issues, too; given how fast AI is developing and being deployed, four meetings (even with a diverse group of AI ethics advisors), over the course of a year, are not likely to have meaningful impact--i.e. to really change the trajectory of research or product development. As long as the model is agile development, we need agile ethics input, too.
“The group has to have authority to say no to projects” Sam Gregory, program director at Witness If Google wants to genuinely build respect for ethics or human rights into the AI initiatives, they need to first recognize that an advisory board, or even a governance board, is only part of a bigger approach. They need to be clear from the start that the group actually has authority to say no to projects and be heard. Then they need to be explicit on the framework—we’d recommend it be based on established international human rights law and norms—and therefore an individual or group that has a record of being discriminatory or abusive shouldn’t be part of it.
“Avoid treating ethics like a PR game or a technical problem” Anna Jobin, researcher at the Health Ethics and Policy Lab at the Swiss Federal Institute of Technology If Google is serious about ethical AI, the company must avoid treating ethics like a PR game or a technical problem and embed it into its business practices and processes. It may need to redesign its governance structures to create better representation for and accountability to both its internal workforce as well as society at large. In particular, it needs to prioritize the well-being of minorities and vulnerable communities world-wide, especially people who are or may be adversely affected by its technology.
"Seek not only traditional expertise, but also the insights of people who are experts on their own lived experiences" Joy Buolamwini, founder of the Algorithmic Justice League As we think about the governance of AI, we must not only seek traditional expertise but also the insights of people who are experts on their own lived experiences. How might we engage marginalized voices in shaping AI? What could participatory AI that centers the views of those who are most at risk for the adverse impacts of AI look like? "I can't imagine any recommendation of such an advisory panel standing in the way of what the market demands" Learning from the ATEAC experience Google should incorporate compensated community review processes in the development of its products and services. This will necessitate meaningful transparency and continuous oversight. And Google and other members in the Partnership on AI should set aside a portion of profits to provide consortium funding for research on AI ethics and accountability, without only focusing on AI fairness research that elevates technical perspectives alone.
“Perhaps it's for the best that the fig leaf of 'ethical development' has been whisked away” Adam Greenfield, author of Radical Technologies Everything we've heard about this board has been shameful, from the initial instinct to invite James to the decision to shut it down rather than dedicate energy to dealing with the consequences of that choice. But being that my feelings about AI are more or less those of the Butlerian Jihad, perhaps it's for the best that the fig leaf of "ethical development" has been whisked away. In the end, I can't imagine any recommendation of such an advisory panel, however it may be constituted, standing in the way of what the market demands, and/or the perceived necessity of competing with other actors engaged in AI development.
"It is heartening to see the power of employee activism" Tess Posner, CEO of AI4ALL It’s great to see companies, organizations and researchers working to create ethical frameworks for AI. The tech industry is experiencing growing pains in this area--figuring out how to do this right is challenging and will take time and iteration. We believe it’s an opportunity to continue asking which voices need to be included, and making sure to include diverse voices and voices that may be directly impacted by the results of decisions made. It is heartening to see the power of employee activism influencing change around this and other issues in tech.
"
|
1,576 | 2,017 |
"The Dark Secret at the Heart of AI | MIT Technology Review"
|
"https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The Dark Secret at the Heart of AI By Will Knight archive page Keith Rankin Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
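To make "sensor data in, driving commands out" concrete, here is a schematic PyTorch sketch of an end-to-end controller. It is not Nvidia's actual network; the layer sizes, the 66x200 frame shape, and the random stand-in data are assumptions chosen only to show the general shape of the approach: a convolutional network maps a camera frame straight to a steering value and is adjusted solely to imitate recorded human steering.

```python
# Schematic sketch of an end-to-end "pixels to steering" controller.
# Architecture, shapes, and data are illustrative assumptions, not Nvidia's system.
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # camera frame -> feature maps
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(                # feature maps -> one steering value
            nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = TinyDrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: random "camera frames" and the steering a human applied to them.
frames = torch.rand(8, 3, 66, 200)
human_steering = torch.rand(8, 1) - 0.5

# One imitation-learning step: the network is never told any rule of driving,
# only nudged toward reproducing the recorded human commands.
pred = model(frames)
loss = loss_fn(pred, human_steering)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```

Nothing in the learned weights corresponds to a rule a person wrote down, which is exactly why tracing one particular steering decision back to a human-readable reason is so hard.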
The mysterious mind of this vehicle points to a looming issue with artificial intelligence.
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.” “We can build these models, but we don’t know how they work.” At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.” Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.
The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
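To make that description concrete, here is a deliberately tiny network in plain Python and NumPy, trained with back-propagation on a toy problem (the XOR pattern). Everything in it is invented for illustration and bears no relation to Nvidia's or anyone else's production systems; real deep networks differ mainly in scale, with millions of weights rather than a few dozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs (think "pixel intensities") and the outputs we want the net to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # the XOR pattern

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # layer 1: 2 inputs -> 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)            # layer 2: 8 neurons -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each neuron takes a weighted sum of its inputs and squashes
    # it through a simple nonlinearity, layer by layer, until an output emerges.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagation: push the output error backwards and nudge every weight
    # slightly in the direction that reduces that error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # should end up close to [[0], [1], [1], [0]]
```

Even at this scale, the network's entire "knowledge" is 33 numbers with no human-readable meaning; multiply that to millions of weights spread across hundreds of layers and the opacity the article describes follows naturally.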
The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
“It might be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual.” Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images , produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
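Deep Dream and Yosinski's probe both rest on the same underlying trick: hold the trained network fixed, treat the input image as the thing to optimize, and climb the gradient of a chosen neuron's activation. The sketch below shows only that core idea, on a tiny randomly weighted stand-in for a real network; actual feature-visualization code works on trained convolutional networks and adds many regularization tricks, so treat this as a cartoon rather than a reconstruction of either group's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in for one hidden layer of a trained network: 64 "pixels" -> 32 neurons.
W = rng.normal(size=(64, 32))

def hidden_activations(image):
    return np.tanh(image @ W)

# Pick one neuron in the middle of the network and start from a noise "image".
neuron = 7
image = rng.normal(scale=0.1, size=64)

for step in range(200):
    pre = image @ W[:, neuron]                     # the neuron's weighted input
    grad = (1 - np.tanh(pre) ** 2) * W[:, neuron]  # d(activation) / d(image pixels)
    image += 0.1 * grad                            # gradient ascent on the image itself
    image = np.clip(image, -1.0, 1.0)              # keep "pixel" values in range

print(round(float(hidden_activations(image)[neuron]), 3))  # climbs toward tanh's maximum of 1.0
```

In a real system, the optimized image is what gets displayed: it is the network's own picture of whatever that neuron has learned to look for.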
We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.” After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.
This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
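The paragraph above describes the general recipe behind tools like Guestrin's rather than any specific product, and the sketch below is rougher still: a made-up bag-of-words classifier plus a leave-one-word-out test of which words moved the score most. The weights, the message, and the function names are all invented; the point is only to show how a per-word "highlight" can be computed from a model that was never designed to explain itself.

```python
# Hypothetical word weights of some already-trained text classifier (invented here);
# a positive weight pushes the message toward the "flag this" class.
WEIGHTS = {"transfer": 1.2, "wire": 0.9, "urgent": 0.7,
           "meeting": -0.4, "lunch": -0.8, "tomorrow": -0.1}

def score(words):
    """The (made-up) classifier's suspicion score for a list of words."""
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def explain(message, top_k=3):
    """Rank words by how much the score changes when each one is removed."""
    words = message.lower().split()
    base = score(words)
    impact = {w: base - score([x for x in words if x != w]) for w in set(words)}
    return sorted(impact.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

message = "urgent wire transfer needed before lunch tomorrow"
print(score(message.split()))   # the raw, unexplained verdict
print(explain(message))         # the few words that mattered most, e.g. "transfer" and "wire"
```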
One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.” It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.
Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.” If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.
To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back , an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.
He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.” This story was part of our May/June 2017 issue.
"
|
1,577 | 2,015 |
"Toolkits for the Mind | MIT Technology Review"
|
"https://www.technologyreview.com/s/536356/toolkits-for-the-mind"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Toolkits for the Mind By James Somers archive page When the Japanese computer scientist Yukihiro Matsumoto decided to create Ruby, a programming language that has helped build Twitter, Hulu, and much of the modern Web, he was chasing an idea from a 1966 science fiction novel called Babel-17 by Samuel R. Delany. At the book’s heart is an invented language of the same name that upgrades the minds of all those who speak it. “Babel-17 is such an exact analytical language, it almost assures you technical mastery of any situation you look at,” the protagonist says at one point. With Ruby, Matsumoto wanted the same thing: to reprogram and improve the way programmers think.
It sounds grandiose, but Matsumoto’s isn’t a fringe view. Software developers as a species tend to be convinced that programming languages have a grip on the mind strong enough to change the way you approach problems—even to change which problems you think to solve. It’s how they size up companies, products, their peers: “What language do you use?” That can help outsiders understand the software companies that have become so powerful and valuable, and the products and services that infuse our lives. A decision that seems like the most inside kind of inside baseball—whether someone builds a new thing using, say, Ruby or PHP or C—can suddenly affect us all. If you want to know why Facebook looks and works the way it does and what kinds of things it can do for and to us next, you need to know something about PHP, the programming language Mark Zuckerberg built it with.
Among programmers, PHP is perhaps the least respected of all programming languages. A now canonical blog post on its flaws described it as “a fractal of bad design,” and those who willingly use it are seen as amateurs. “There’s this myth of the brilliant engineering that went into Facebook,” says Jeff Atwood, co-creator of the popular programming question-and-answer site Stack Overflow.
“But they were building PHP code in Windows XP. They were hackers in almost the derogatory sense of the word.” In the space of 10 minutes, Atwood called PHP “a shambling monster,” “a pandemic,” and a haunted house whose residents have come to love the ghosts.
Things reviewed: Babel-17, by Samuel R. Delany (1966); Real World OCaml, by Yaron Minsky et al. (O’Reilly, 2013); PHP; Hack; Scala.
Most successful programming languages have an overall philosophy or set of guiding principles that organize their vocabulary and grammar—the set of possible instructions they make available to the programmer—into a logical whole. PHP doesn’t. Its creator, Rasmus Lerdorf, freely admits he just cobbled it together. “I don’t know how to stop it,” he said in a 2003 interview.
“I have absolutely no idea how to write a programming language—I just kept adding the next logical step along the way.” Programmers’ favorite example is a PHP function called “mysql_escape_string,” which rids a query of malicious input before sending it off to a database. (For an example of a malicious input, think of a form on a website that asks for your e-mail address; a hacker can enter code in that slot to force the site to cough up passwords.) When a bug was discovered in the function, a new version was added, called “mysql_real_escape_string,” but the original was not replaced. The result is a bit like having two similar-looking buttons right next to each other in an airline cockpit: one that puts the landing gear down and one that puts it down safely.
It’s not just an affront to common sense—it’s a recipe for disaster.
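The stakes of that cockpit analogy are easier to see in code. The sketch below is Python rather than PHP, using the standard-library sqlite3 module and an invented one-row table; it illustrates the class of bug the escaping functions were meant to prevent, and why splicing raw input into a query (the mistake the broken function invited) hands control to the attacker.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('[email protected]', 'hunter2')")

malicious_input = "' OR '1'='1"   # typed into the "e-mail address" box

# Unsafe: raw input pasted straight into the query text, which is effectively
# what happens when escaping is skipped or the wrong escaping function is used.
unsafe = "SELECT * FROM users WHERE email = '%s'" % malicious_input
print(conn.execute(unsafe).fetchall())        # dumps every row, passwords included

# Safer: a parameterized query keeps the data separate from the SQL itself.
safe = "SELECT * FROM users WHERE email = ?"
print(conn.execute(safe, (malicious_input,)).fetchall())   # returns nothing
```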
Yet despite the widespread contempt for PHP, much of the Web was built on its back. PHP powers 39 percent of all domains , by one estimate. Facebook, Wikipedia, and the leading publishing platform WordPress are all PHP projects. That’s because PHP, for all its flaws, is perfect for getting started. The name originally stood for “personal home page.” It made it easy to add dynamic content like the date or a user’s name to static HTML pages. PHP allowed the leap from tinkering with a web site to writing a Web application to be so small as to be imperceptible. You didn’t need to be a pro.
PHP’s get-going-ness was crucial to the success of Wikipedia, says Ori Livneh, a principal software engineer at the Wikimedia Foundation, which operates the project. “I’ve always loathed PHP,” he tells me. The project suffers from large-scale design flaws as a result of its reliance on the language. (They are partly why the foundation didn’t make Wikipedia pages available in a version adapted for mobile devices until 2008, and why the site didn’t get a user-friendly editing interface until 2013.) But PHP allowed people who weren’t—or were barely—software engineers to contribute new features. It’s how Wikipedia entries came to display hieroglyphics on Egyptology pages, for instance, and handle sheet music.
The programming language PHP created and sustains Facebook’s move-fast, hacker-oriented corporate culture.
You wouldn’t have built Google in PHP, because Google, to become Google, needed to do exactly one thing very well—it needed search to be spare and fast and meticulously well engineered. It was made with more refined and powerful languages, such as Java and C++. Facebook, by contrast, is a bazaar of small experiments, a smorgasbord of buttons, feeds, and gizmos trying to capture your attention. PHP is made for making —for cooking up features quickly.
You can almost imagine Zuckerberg in his Harvard dorm room on the fateful day that Facebook was born, doing the least he could to get his site online. The Web moves so fast, and users are so fickle, that the only way you’ll ever be able to capture the moment is by being first. It didn’t matter if he made a big ball of mud, or a plate of spaghetti, or a horrible hose cabinet (to borrow from programmers’ rich lexicon for describing messy code). He got the thing done. People could use it. He wasn’t thinking about beautiful code; he was thinking about his friends logging in to “ Thefacebook ” to look at pictures of girls they knew.
Today Facebook is worth more than $200 billion and there are signs all over the walls at its offices: “Done is better than perfect”; “Move fast and break things.” These bold messages are supposed to keep employees in tune with the company’s “hacker” culture. But these are precisely PHP’s values. Moving fast and breaking things is in fact so much the essence of PHP that anyone who “speaks” the language indelibly thinks that way. You might say that the language itself created and sustains Facebook’s culture.
The secret weapon If you wanted to find the exact opposite of PHP, a kind of natural experiment to show you what the other extreme looked like, you couldn’t do much better than the self-serious Lower Manhattan headquarters of the financial trading firm Jane Street Capital. The 400-person company claims to be responsible for roughly 2 percent of daily equity trading volume in the United States.
When I meet Yaron Minsky, Jane Street’s head of technology, he’s sitting at a desk with a working Enigma machine beside him, one of only a few dozen of the World War II code devices left in the world. I would think it the clear winner of the contest for Coolest Secret Weapon in the Room if it weren’t for the way he keeps talking about an obscure programming language called OCaml. Minsky, a computer science PhD, convinced his employer 10 years ago to rewrite the company’s entire trading system in OCaml. Before that, almost nobody used the language for actual work; it was developed at a French research institute by academics trying to improve a computer system that automatically proves mathematical theorems. But Minsky thought OCaml, which he had gotten to know in grad school, could replace the complex Excel spreadsheets that powered Jane Street’s trading systems.
OCaml’s big selling point is its “type system,” which is something like Microsoft Word’s grammar checker, except that instead of just putting a squiggly green line underneath code it thinks is wrong, it won’t let you run it. Programs written with a type system tend to be far more reliable than those written without one—useful when a program might trade $30 billion on a big day.
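For readers who have never used a statically typed language, here is a rough analogue of what that "grammar checker" does, written with Python's optional type annotations rather than OCaml itself (OCaml infers most types on its own and simply refuses to compile ill-typed code). The function names and numbers are invented; the point is only that the mistake is caught by a checker before the program ever runs.

```python
def price_in_cents(dollars: float) -> int:
    """Convert a dollar amount into whole cents."""
    return round(dollars * 100)

def total_in_cents(prices: list[float]) -> int:
    """Sum a list of dollar prices, returning whole cents."""
    return sum(price_in_cents(p) for p in prices)

print(total_in_cents([19.99, 5.25]))   # 2524

# A static checker (mypy, for example) rejects the call below before the code
# runs, reporting that a str is not a float; OCaml's compiler goes further and
# refuses to build the program at all until the mismatch is fixed.
#
# price_in_cents("19.99")
```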
Minsky says that by catching bugs, OCaml’s type system allows Jane Street’s coders to focus on loftier problems. One wonders if they have internalized the system’s constant nagging over time, so that OCaml has become a kind of Newspeak that makes it impossible to think bad thoughts.
The catch is that to get the full benefits of the type checker, the programmers have to add complex annotations to their code. It’s as if Word’s grammar checker required you to diagram all your sentences. Writing code with type constraints can be a nuisance, even demoralizing. To make it worse, OCaml, more than most other programming languages, traffics in a kind of deep abstract math far beyond most coders. The language’s rigor is like catnip to some people, though, giving Jane Street an unusual advantage in the tight hiring market for programmers. Software developers mostly join Facebook and Wikipedia in spite of PHP. Minsky says that OCaml—along with his book Real World OCaml —helps lure a steady supply of high-quality candidates. The attraction isn’t just the language but the kind of people who use it. Jane Street is a company where they play four-person chess in the break room. The culture of competitive intelligence and the use of a fancy programming language seem to go hand in hand.
Google appears to be trying to pull off a similar trick with Go, a high-performance programming language it developed. Intended to make the workings of the Web more elegant and efficient, it’s good for developing the kind of high-stakes software needed to run the collections of servers behind large Web services. It also acts as something like a dog whistle to coders interested in the new and the difficult.
Growing up In late 2010, Facebook was having a crisis. PHP was not built for performance, but it was being asked to perform. The site was growing so fast it seemed that if something didn’t change fairly drastically, it would start falling over.
Switching languages altogether wasn’t an option. Facebook had millions of lines of PHP code, thousands of engineers expert in writing it, and more than half a billion users. Instead, a small team of senior engineers was assigned to a special project to invent a way for Facebook to keep functioning without giving up on its hacky mother tongue.
One part of the solution was to create a piece of software—a compiler—that would translate Facebook’s PHP code into much faster C++ code. The other was a feat of computer linguistic engineering that let Facebook’s programmers keep their PHP-ian culture but write more reliable code.
Startups can cleverly use the power of programming languages to manipulate their organizational psychology.
The rescue squad did it by inventing a dialect of PHP called Hack. Hack is PHP with an optional type system; that is, you can write plain old quick and dirty PHP—or, if you so choose, you can tie yourself to the mast, adding annotations to let the type system check the correctness of your code. That this type checker is written entirely in OCaml is no coincidence. Facebook wanted its coders to keep moving fast in the comfort of their native tongue, but it didn’t want them to have to break things as they did it. (Last year Zuckerberg announced a new engineering slogan: “Move fast with stable infra,” using the hacker shorthand for the infrastructure that keeps the site running.) Around the same time, Twitter underwent a similar transformation. The service was originally built with Ruby on Rails—a popular Web programming framework created using Matsumoto’s Ruby and inspired in large part by PHP. Then came the deluge of users. When someone with hundreds of thousands of followers tweeted, hundreds of thousands of other people’s timelines had to be immediately updated. Big tweets like that would frequently overwhelm the system and force engineers to take the site down to allow it to catch up. They did it so often that the “fail whale” on the company’s maintenance page became famous in its own right. Twitter stopped the bleeding by replacing large pieces of the service’s plumbing with a language called Scala. It should not be surprising that Scala, like OCaml, was developed by academics, has a powerful type system, and prizes correctness and performance even at the expense of the individual programmers’ freedom and delight in their craft.
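The "optional" part is the whole trick, and it is easiest to see in a language that already has gradual typing. The sketch below uses Python's optional annotations as a stand-in for Hack's (it is an analogy, not Hack syntax, and the function is invented): the unannotated version runs exactly as before, while the annotated one gives an external checker something to verify across every call site before the code ships.

```python
# Quick-and-dirty, PHP-style: no annotations, nothing is checked, it just runs.
def discount(price, percent):
    return price - price * percent / 100

# The same function after opting in, Hack-style: behavior is unchanged, but a
# checker run before deployment can now verify that every caller passes numbers.
def discount_checked(price: float, percent: float) -> float:
    return price - price * percent / 100

print(discount(200, 15))              # 170.0
print(discount_checked(200.0, 15.0))  # 170.0
```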
Much as startups “mature” by finally figuring out where their revenue will come from, they can cleverly use the power of programming languages to manipulate their organizational psychology. Programming–language designer Guido van Rossum , who spent seven years at Google and now works at Dropbox, says that once a software company gets to be a certain size, the only way to stave off chaos is to use a language that requires more from the programmer up front. “It feels like it’s slowing you down, because you have to say everything three times,” van Rossum says. That is why many startups wait as long as they can before making the switch. You lose some of the swaggering hackers who got you started, and the possibility that small teams can rush out new features. But a more exacting language helps people across the company understand one another’s code and gives your product the stability needed to be part of the furniture of daily life.
That software startups can perform such maneuvers might even help explain why they can be so powerful. The expanding reach of computers is part of it. But these companies also have a unique ability to remake themselves. As they change and grow, they can do more than just redraw the org chart. Because they are built in code, they can do something far more drastic. They can rewire themselves, their culture, the very way they think.
James Somers is a writer and programmer in New York. He works at Genius.com.
This story was part of our May/June 2015 issue.
"
|
1,578 | 2,017 |
"The Artificial Intelligence Issue | MIT Technology Review"
|
"https://www.technologyreview.com/magazines/the-artificial-intelligence-issue"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Magazine View previous issues MIT News Magazine The Artificial Intelligence Issue Advances in artificial intelligence are about to force all of us to confront questions about privacy, inequality, employment—and what it really means to be human.
Letter from the editor Features Categorized in Artificial intelligence Is AI Riding a One-Trick Pony? Just about every AI advance you’ve heard of depends on a breakthrough that’s three decades old. Keeping up the pace of progress will require confronting AI’s serious limitations.
Categorized in 17032 India Warily Eyes AI Technology outsourcing has been India’s only reliable job creator in the past 30 years. Now artificial intelligence threatens to wipe out those gains.
Categorized in 17035 Can AI Keep You Healthy? A Chinese entrepreneur wants to track your health data and suggest ways to improve. But are computers really smart enough to make sense of all that information? Categorized in Artificial intelligence China’s AI Awakening (Chinese title: “The Rise of China’s Artificial Intelligence”) The West shouldn’t fear China’s artificial-intelligence revolution. It should copy it.
Categorized in Inside the Moonshot Effort to Finally Figure Out the Brain AI is only loosely modeled on the brain. So what if you wanted to do it right? You’d need to do what has been impossible until now: map what actually happens in neurons and nerve fibers.
Also in this issue Put Humans at the Center of AI At Stanford and Google, Fei-Fei Li is leading the development of artificial intelligence—and working to diversify the field.
Categorized in Artificial intelligence We Need Computers with Empathy An emerging trend in artificial intelligence is to get computers to detect how we’re feeling and respond accordingly. They might even help us develop more compassion for one another.
Categorized in 17032 How We Feel About Robots That Feel As robots become smart enough to detect our feelings and respond appropriately, they could have something like emotions of their own. But that won’t necessarily make them more like humans.
Categorized in 17032 The Seven Deadly Sins of AI Predictions Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future.
Categorized in Fearsome Machines: A Prehistory A time line of what happened when in the history of artificial intelligence.
Categorized in 17032 The Dangers of Tech-Bro AI Tabitha Goldstaub, a cofounder of CognitionX , which helps companies deploy AI, says that diversifying the field is necessary to make sure products actually work well.
Categorized in 17032 How to Root Out Hidden Biases in AI Algorithms are making life-changing decisions like denying parole or granting loans. Cynthia Dwork, a computer scientist at Harvard, is developing ways of making sure the machines are operating fairly.
Categorized in 17032 Fiction That Gets AI Right What to watch and read before the robots take over.
Categorized in Artificial intelligence Don’t Let Regulators Ruin AI Tech policy scholar Andrea O’Sullivan says the U.S. needs to be careful not to hamstring innovation.
"
|
1,579 | 2,015 |
"Ilya Sutskever | MIT Technology Review"
|
"https://www.technologyreview.com/lists/innovators-under-35/2015/visionary/ilya-sutskever"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Visionaries (2015) These people are showing how technologies will give us new ways of doing things.
Age: 29 Ilya Sutskever Why one form of machine learning will be particularly powerful.
Artificial-intelligence researchers are focusing on a method called deep learning, which gets computers to recognize patterns in data on their own (see “ Teaching Machines to Understand Us ”). One person who demonstrated its potential is Ilya Sutskever, who trained under a deep-learning pioneer at the University of Toronto and used the technique to win an image-recognition challenge in 2012. He is now a key member of the Google Brain research team.
I asked him why deep learning could mimic human vision and solve many other challenges.
“When you look at something, you know what it is in a fraction of a second,” he says. “And yet our neurons operate extremely slowly. That means your brain must only need a modest number of parallel computations. An artificial neural network is nothing but a sequence of very parallel, simple computations.
“We started a company to keep applying this approach to different problems and expand its range of capabilities. Soon, we joined Google. I’ve shown that the same philosophy that worked for image recognition can also achieve really good results for translation between languages. It should beat existing translation technology by a good margin. I think you will see deep learning make a lot of progress in many areas. It doesn’t make any assumptions about the nature of problems, so it is applicable to many things.” —Tom Simonite Age: 34 Lars Blackmore Would space travel flourish if we could reuse the rockets? Sixty years after Sputnik blasted into space, escaping our atmosphere remains absurdly expensive. Lars Blackmore, an engineer at SpaceX, is working on changing that with rockets that could be flown back to Earth in reverse.
As things stand, every time a space rocket takes off and releases its payload, it breaks up and falls into the ocean. “It’s basically like flying a 747 across the country and then, instead of refueling it, throwing it away,” says Blackmore, a soft-spoken Brit who leads a team at SpaceX that’s developing the onboard software necessary for a rocket to come down gently in an upright position onto a platform in the ocean.
SpaceX has come agonizingly close to sticking a rocket landing several times, but it didn’t get a chance to try again in its most recent flight, when the Falcon 9 rocket exploded during takeoff.
Landing a rocket backwards is an insane trick. The descent is extraordinarily unpredictable, and rockets aren’t meant to travel in reverse, so it requires extremely fine control over the boosters and guidance fins. Blackmore has devised algorithms to enable a rocket’s onboard computer to deal with this chaotic situation while safely controlling the craft’s fall.
If the feat can be perfected, it would change the economics of space travel entirely. Fuel accounts for less than half of 1 percent of the cost of a rocket launch, so refurbishing a rocket would make the next launch considerably cheaper. How much cheaper would depend on how well the booster could be reconditioned following the extreme stress of takeoff.
Blackmore grew up dreaming of working at NASA Mission Control. After a PhD at MIT, he joined NASA’s Jet Propulsion Lab, where he worked on precision landing systems and a climate probe called SMAP. He went to SpaceX in 2011. “I’d heard that Elon [Musk] had these dreams of making reusable rockets,” Blackmore says. “And since I was working on precision landing for Mars, I thought I would be the right guy to do that.” Would he want to go back to NASA someday? “When you hear about the Apollo program in its heyday, it was a bunch of young kids, and no one told them what they could do,” he says. “That is exactly what I’ve found at SpaceX.” — Will Knight Age: 33 Adam Coates Artificial intelligence could make the Internet more useful to the millions of people coming online for the first time.
Q: You invented ways to put more computing power behind deep learning. Now you lead a lab in Silicon Valley for the Chinese search company Baidu. Why did it need a lab there? A: They spin up new projects very fast. It’s partly driven by the dynamism in China — tech companies have to go quickly from having nothing to having state-of-the-art something. My lab’s mission is to create technology that will have an impact on at least 100 million people; it is intended to move rapidly, like a startup. We’re recruiting AI researchers and many people in Silicon Valley who have amazing skills from working on products and haven’t thought they could use that to make progress on artificial intelligence.
Q: What is the lab working on? A: The first technology that we are focusing on is speech recognition. Touch screens on phones are fine for some things but really awful for others, and there are all kinds of other devices that are crying out for better interfaces. People don’t use speech today because it doesn’t work well enough. Our goal is to get it to a level where it’s as easy to talk to your devices as it is to talk to the person next to you. In December we hit our first milestone with DeepSpeech, a speech engine we built quickly from scratch using deep learning. When there’s a lot of background noise it’s dramatically better.
Q: Why would that have an impact on 100 million people? A: In rapidly developing economies like in China, there are many people who will be connecting to the Internet for the first time through a mobile phone. Having a way to interact with a device or get the answer to a question as easily as talking to a person is even more powerful to them. I think of Baidu’s customers as having a greater need for artificial intelligence than myself.
—Tom Simonite Age: 26 Zakir Durumeric A computer scientist sees a way to improve online security.
“It’s absolutely astounding what people attach to the Internet,” Zakir Durumeric says. He would know, because he invented a way to probe every computer online in just minutes. “We have found everything from ATM machines and bank safes to industrial control systems for power plants,” he says. “It’s kind of scary.” A bank safe! Why would someone put that online? So someone in the bank can operate it from home? “Yes. You sit there and you wonder: who on earth thought this was a good idea?” Bad computer security practices like that can be mitigated far more readily with the ZMap scanning system Durumeric developed. It determines not only which machines are online at any given moment, but also whether they have security flaws that should be fixed before miscreants exploit them. It finds everything from obvious software bugs to subtle problems like the ones that can be caused if an IT administrator fails to properly implement an arcane aspect of a cryptography standard.
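ZMap itself is a heavily optimized C program that sends stateless probes across the entire public IPv4 address space; the sketch below shows only the basic primitive it generalizes, in Python, and deliberately points it at the local machine. Probing hosts you are not authorized to test is both rude and, in many places, illegal.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt one TCP connection and report whether anything answered."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# Internet-wide scanning is this check, repeated billions of times, very fast.
for port in (22, 80, 443):
    print(port, port_is_open("127.0.0.1", port))
```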
Pinging all four billion devices on the Internet took weeks until Durumeric, who is pursuing a PhD at the University of Michigan, came up with a process that now takes about five minutes. He has used it to quickly inform website administrators about their vulnerability to catastrophic flaws such as the Heartbleed bug in 2014, and he hopes other security researchers will routinely do the same when they find weaknesses. “There’s always been this period where a vulnerability is [found] and then it takes weeks, months, or years for administrators to patch their servers,” he says. “We have an opportunity to change that.” —Brian Bergstein Age: 30 Cigall Kadoch A major vulnerability of certain kinds of cancer is becoming clear.
Problem: The exact biochemical mechanisms involved in many kinds of cancer remain unknown.
Solution: While completing her PhD at Stanford, Cigall Kadoch discovered a link between a genome regulator in cells called the BAF protein complex and a rare cancer called synovial sarcoma. She and colleagues later showed that mutations of BAF are involved in at least 20 percent of human cancers, opening the door for research on drugs that target mutated BAFs.
BAF’s job in the cell is to open and close DNA to allow the right genes to be expressed at the right time. When mutated, it can “activate sites that it shouldn’t” — including genes that drive cancer, says Kadoch, who has appointments at Harvard Medical School and the Broad Institute of Harvard and MIT.
She learned this by focusing on one particular subunit of BAF. This piece of the protein has a deformed tail in 100 percent of patients with synovial sarcoma. When Kadoch put the deformed subunit into normal cells, she detected “blazing cancer,” she says. “That little tail is entirely responsible for this cancer.” The good news is that this is reversible. If she added enough normal pieces of the subunit to cells in a petri dish, it replaced the mutated form, killing the cancerous cells on the spot.
—Anna Nowogrodzki, SM ’15
"
|
1,580 | 2,017 |
"The Dark Secret at the Heart of AI | MIT Technology Review"
|
"https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The Dark Secret at the Heart of AI By Will Knight archive page Keith Rankin Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
The mysterious mind of this vehicle points to a looming issue with artificial intelligence.
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.” “We can build these models, but we don’t know how they work.” At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.” Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.
The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
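To make that description concrete, here is a minimal sketch of such a network in plain Python with NumPy: two small layers trained by back-propagation on a toy task (the XOR function). Everything about it, from the layer sizes to the learning rate to the task itself, is an illustrative stand-in rather than anything drawn from the systems discussed in this story.

```python
import numpy as np

# Toy data: the XOR function, a classic task a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # layer 1: 2 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2: 8 hidden units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer takes the previous layer's outputs,
    # computes a weighted sum, and applies a nonlinearity.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (back-propagation): push the output error back through
    # the layers to get a gradient for every weight, then nudge each weight.
    grad_out = (p - y) / len(X)               # error signal at the output
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # chain rule through the tanh layer
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    lr = 0.5
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p, 2))   # after training, should be close to [0, 1, 1, 0]
```

Scale the same loop up to millions of weights and dozens of layers and that, in essence, is all "training" means. It is also why the result is so hard to interpret: whatever the network knows lives in the numeric values the loop happens to settle on.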
The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
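The "running in reverse" trick is easier to see in miniature. The sketch below reuses the same kind of tiny, randomly weighted NumPy network as above (so it is a toy, not a real image model): the weights stay fixed while gradient ascent repeatedly adjusts the input itself so that it excites a chosen layer more strongly, which is the core move behind Deep Dream-style visualization.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(64, 32))    # pretend 64-pixel "image" -> 32 low-level features
W2 = rng.normal(size=(32, 16))    # low-level features -> 16 higher-level features

def forward(x):
    h = np.maximum(0.0, x @ W1)   # ReLU layer
    return h, h @ W2              # hidden activations and second-layer activations

x = rng.normal(scale=0.1, size=64)        # start from a faint random "image"
for step in range(200):
    h, a = forward(x)
    # Objective: make the second layer's activations as strong as possible.
    # Gradient of 0.5 * ||a||^2 with respect to x, by the chain rule:
    grad_h = (a @ W2.T) * (h > 0)         # back through layer 2 and the ReLU
    grad_x = grad_h @ W1.T                # back through layer 1 to the input
    x += 0.01 * grad_x                    # gradient ascent on the input, not the weights

print(np.linalg.norm(forward(x)[1]))      # the layer's response grows over the loop
```

In a real Deep Dream run the input is a photograph, the network is a trained convolutional model, and an autodiff library supplies the gradients, but the loop is the same: hold the weights still and climb the gradient with respect to the pixels.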
Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
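Yosinski's actual tool is far more elaborate, but its basic move can be sketched in a few lines: pick one unit in the middle of a network, run a pile of inputs through, and keep the ones that excite that unit most. The random network, the synthetic "dataset," and the chosen unit below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(64, 32)), rng.normal(size=(32, 16))

def middle_layer(x):
    # Activations of the layer we want to probe.
    return np.maximum(0.0, x @ W1) @ W2

dataset = rng.normal(size=(10_000, 64))   # stand-in for a large pile of images
unit = 7                                  # the single neuron being probed

activations = np.array([middle_layer(x)[unit] for x in dataset])
top = np.argsort(activations)[-5:][::-1]  # the five inputs that excite the unit most

print("inputs that most activate unit", unit, ":", top)
print("their activation values:", np.round(activations[top], 2))
```

Looking for what those top inputs have in common is, in miniature, what the visualization tools do with real images.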
We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.
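The Barzilay group's published method is more sophisticated than this, but the flavor of "highlight the snippets that drove the prediction" can be sketched with a simple occlusion test: drop each sentence in turn, measure how much the model's score falls, and surface the sentences that mattered most. The classifier below is a hypothetical keyword stub standing in for a trained pathology-report model, and the sample report is invented.

```python
def predict_risk(text: str) -> float:
    """Stand-in stub for a trained text classifier that returns a probability."""
    keywords = {"carcinoma": 0.4, "mass": 0.2, "atypical": 0.2}   # hypothetical weights
    return min(1.0, sum(w for k, w in keywords.items() if k in text.lower()))

def rationale(report: str, top_k: int = 2):
    sentences = [s.strip() for s in report.split(".") if s.strip()]
    baseline = predict_risk(report)
    scores = []
    for i, sent in enumerate(sentences):
        without = ". ".join(s for j, s in enumerate(sentences) if j != i)
        # How much does the prediction drop when this sentence is removed?
        scores.append((baseline - predict_risk(without), sent))
    return baseline, sorted(scores, reverse=True)[:top_k]

report = ("Patient presents for routine follow-up. Imaging shows an irregular mass. "
          "Biopsy reveals atypical cells consistent with carcinoma. No family history noted.")
print(rationale(report))
```

The highlighted sentences do not explain the model's internals; they only point a clinician at the evidence the prediction leaned on, which is the kind of collaboration loop Barzilay describes.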
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.
This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
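Guestrin's group is best known for a technique along these lines called LIME, and while the real method is more careful, the core idea can be sketched: perturb the input by randomly dropping words, ask the black-box model to score each perturbed version, fit a simple linear model to those scores, and read off which words carried the most weight. The "classifier" here is a hypothetical stub, and the message is invented.

```python
import numpy as np

def black_box(words) -> float:
    """Stand-in for an opaque classifier scoring how suspicious a message looks."""
    suspicious = {"transfer", "urgent", "weapons"}   # hypothetical internal behavior
    return sum(w.lower() in suspicious for w in words) / max(len(words), 1)

def explain(message: str, n_samples: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    words = message.split()
    # Randomly keep or drop each word; record the mask and the model's score.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    scores = np.array([black_box([w for w, m in zip(words, mask) if m]) for mask in masks])
    # Fit a local linear surrogate: score ~ weights . mask + intercept.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    weights, *_ = np.linalg.lstsq(X, scores, rcond=None)
    ranked = sorted(zip(words, weights[:-1]), key=lambda p: -abs(p[1]))
    return ranked[:3]   # the words the surrogate says mattered most

print(explain("urgent wire transfer needed for the shipment of weapons tonight"))
```

The surrogate is a deliberate simplification of the original model, which is exactly the drawback discussed next.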
One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.” It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.
Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.” If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.
To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back , an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.
He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
by Will Knight
This story was part of our May/June 2017 issue.
"
|
1,581 | 2,015 |
"Google and Facebook Race to Solve the Ancient Game of Go With AI | WIRED"
|
"https://www.wired.com/2015/12/google-and-facebook-race-to-solve-the-ancient-game-of-go"
|
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google and Facebook Race to Solve the Ancient Game of Go With AI Takashi Osato for WIRED Save this story Save Save this story Save Rémi Coulom spent the last decade building software that can play the ancient game of Go better than practically any other machine on earth. He calls his creation Crazy Stone. Early last year, at the climax of a tournament in Tokyo, it challenged the Go grandmaster Norimoto Yoda , one of the world's top human players, and it performed remarkably well. In what's known as the Electric Sage Battle, Crazy Stone beat the grandmaster. But the win came with a caveat.
Over the last 20 years, machines have topped the best humans at so many games of intellectual skill, we now assume computers can beat us at just about anything. But Go—the Eastern version of chess in which two players compete with polished stones on a 19-by-19-line grid—remains the exception. Yes, Crazy Stone beat Yoda. But it started with a four-stone advantage. That was the only way to ensure a fair fight.
It's incredibly difficult to build a machine that duplicates the kind of intuition that makes the top human players so good at Go.
In the mid-'90s, a computer program called Chinook beat the world's top player at the game of checkers. A few years later, IBM's Deep Blue supercomputer shocked the chess world when it wiped the proverbial floor with world champion Garry Kasparov. And more recently, another IBM machine, Watson, topped the best humans at Jeopardy!, the venerable TV trivia game. Machines have also mastered Othello, Scrabble, backgammon, and poker. But in the wake of Crazy Stone's victory over Yoda, Coulom predicted that another ten years would pass before a machine could beat a grandmaster without a head start.
At the time, that ten-year runway seemed rather short. In playing Go, the grandmasters often rely on something that's closer to intuition than carefully reasoned analysis, and building a machine that duplicates this kind of intuition is enormously difficult. But a new weapon could help computers conquer humans much sooner: deep learning.
Inside companies like Google and Facebook, deep learning is proving remarkably adept at recognizing images and grasping spatial patterns—a skill well suited to Go. As they explore so many other opportunities this technology presents, Google and Facebook are also racing to see whether it can finally crack the ancient game.
As Facebook AI researcher Yuandong Tian explains, Go is a classic AI problem—a problem that's immensely attractive because it's immensely difficult. The company believes that solving Go will not only help refine the AI that drives its popular social network, but also prove the value of artificial intelligence. Rob Fergus, another Facebook researcher, agrees. "The goal is advancing AI," he says. But he also acknowledges that the company is driven, at least in a small way, by a friendly rivalry with Google. There's pride to be found in solving the game of Go.
Today, Google and Facebook use deep learning to identify the faces in photos you post to the 'net. It's how computers recognize the commands barked into a phone and translate things from one language to another. Sometimes, it can even understand natural language—the natural way that we humans converse.
This technology relies on what are called deep neural networks, vast networks of machines that approximate the web of neurons in the human brain. If you feed enough tree photos into these neural nets, they can learn to identify a tree.
If you feed them enough dialogue, they can learn to carry on a decent (if sometimes weird) conversation.
And if you feed them enough Go moves, they can learn to play Go.
"Deep neural networks are very appropriate for Go because Go is very driven by patterns on the board. These methods are very good at generalizing from patterns," says Amos Storkey, a professor at the University of Edinburgh, who is using deep neural networks to tackle Go, much like Google and Facebook.
The belief is that these neural nets can finally close the gap between machines and humans. In playing Go, you see, the grandmasters don't necessarily examine the results of each possible move. They often play based on how the board looks.
With deep learning, researchers can begin to duplicate this approach. In feeding images of successful moves into neural networks, they can help machines learn what a successful move looks like. "Rather than just trying to work out what the best things to do are, they learn from how humans play the game," Storkey says of neural nets. "They effectively copy human play."
Building a machine that can win at Go isn't just a matter of computing power. That's why programs like Coulom's haven't cracked it. Crazy Stone relies upon what's called a Monte Carlo tree search, a system that essentially analyzes the outcomes of every possible move. This is how machines mastered checkers and chess and other games. They looked further ahead than the humans they beat. But with Go, there are too many possibilities to consider. In chess, on any given turn, the average number of possible moves is 35. With Go, it's 250. And after each of those 250 possible moves, there are another 250. And so on. It's impossible for a tree search to consider the results of every move (at least not in a reasonable amount of time).
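The size of that gap is easy to check with a little arithmetic, using the article's average branching factors of 35 for chess and 250 for Go (the search depths below are arbitrary illustrations):

```python
# Move sequences a brute-force search would face, assuming a constant
# branching factor: roughly 35 per turn in chess, 250 in Go.
for depth in (2, 4, 6):
    chess = 35 ** depth
    go = 250 ** depth
    print(f"depth {depth}: chess ~{chess:.1e} positions, Go ~{go:.1e}, "
          f"ratio ~{go / chess:,.0f}x")
```

Even at modest depths the gap runs into the thousands, which is why Go resists the look-at-everything strategy that worked for checkers and chess.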
But deep learning can fill the gap, providing a level of intuition, as opposed to brute force. Last month, in a paper posted to the academic research site arXiv, Facebook demonstrated a method that combines the Monte Carlo tree search with deep learning. In competition with humans, the system held its own, and according to the company, it even played with a style that felt human.
After all, it has learned from real human moves. Coulom calls the company's results "very spectacular." Ultimately, Coulom says, this kind of hybrid approach will crack the problem. "What people are trying to do is combine the two approaches so that it's better than each," he says. He points out that Crazy Stone already uses a form of machine learning in concert with Monte Carlo. It's just that his methods aren't as complex as the neural networks employed by Facebook.
Facebook's paper shows the power of deep learning, but it's also a reminder that big AI tasks are ultimately solved by more than a single technology. They're solved by many technologies. Deep learning does many things well. But it can always use help from other forms of AI.
After Facebook revealed its Go work, Google soon unloaded a response. A top Google AI researcher, Demis Hassabis, said that, in a few months, the company would reveal "quite a big surprise" related to the game of Go. Google declined to say more for this story, and it's unclear what the company has in store. Coulom, for one, says it's unlikely Google could so quickly produce something that can beat the top Go players, but he believes the company will take a significant step down that road.
In all likelihood, this too will rely on multiple technologies. And we're guessing that one of them is something called reinforcement learning. While deep learning is good at perception—recognizing how something looks, sounds, or behaves—reinforcement algorithms can teach machines to act on this perception.
Hassabis oversees DeepMind, a Google subsidiary based in Cambridge, England, and DeepMind has already made good use of deep learning in tandem with reinforcement algorithms. Earlier this year, he and his team published a paper that described how the two technologies could be used to play old Atari video games—and, in some cases, beat professional game testers.
After a deep neural net helps the system understand the state of play—what the board looks like at any given time—the reinforcement algorithms use trial and error to help the system understand how to respond to this state of play. Basically, the computer tries a particular move, and if that move brings a reward—points in the game—it recognizes that move as a good one. After trying enough moves, the system comes to understand the best ways of playing. The same kind of thing can work with Go.
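That trial-and-error loop can be sketched with tabular Q-learning on a made-up game, a short corridor where stepping onto the last square pays off. The grid size, rewards, and learning constants are arbitrary choices for illustration, not anything from DeepMind's systems.

```python
import random

N = 6                    # states 0..5 laid out in a line; reaching state 5 pays off
ACTIONS = (-1, +1)       # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}   # value estimates being learned

alpha, gamma = 0.5, 0.9
random.seed(0)

for episode in range(500):
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS)               # explore by acting randomly
        s_next = min(max(s + a, 0), N - 1)
        r = 1.0 if s_next == N - 1 else 0.0      # reward only at the goal
        # Core update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The greedy policy read off the learned table should now step right everywhere.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)])
```

DeepMind's Atari work replaces the lookup table with a deep network that estimates the same kind of values straight from pixels, which is what makes the combination "deep" reinforcement learning.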
This approach is different from a standard tree search in that the system is learning what a good move looks like. Researchers train it to play before the real match begins. As with deep learning, it plays through a kind of "knowledge" rather than applying brute force to the problem.
Ultimately, if machines are to solve the game of Go, they will need all of these technologies. Reinforcement learning can feed off of deep learning. And both can dovetail with a traditional approach like the Monte Carlo tree search. Cracking Go remains enormously difficult. But modern AI is getting closer. When Hassabis reveals his "big surprise," we'll know just how close it has come.
"
|
1,582 | 2,018 |
"2018 | MIT Technology Review"
|
"https://www.technologyreview.com/10-breakthrough-technologies/2018"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts 10 Breakthrough Technologies The List Years 10 Breakthrough Technologies 2018 Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the 10 technology advances we think will shape the way we work and live now and for years to come.
Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.
For this year, a new technique in artificial intelligence called GANs is giving machines imagination; artificial embryos, despite some thorny ethical constraints, are redefining how life can be created and are opening a research window into the early moments of a human life; and a pilot plant in the heart of Texas’s petrochemical industry is attempting to create completely clean power from natural gas—probably a major energy source for the foreseeable future. These and the rest of our list will be worth keeping an eye on. —The Editors This story was part of our March/April 2018 issue.
3-D Metal Printing
Breakthrough: Now printers can make metal objects quickly and cheaply.
Why it matters: The ability to make large and complex metal objects on demand could transform manufacturing.
Key players: Markforged, Desktop Metal, GE
Availability: Now
While 3-D printing has been around for decades, it has remained largely in the domain of hobbyists and designers producing one-off prototypes. And printing objects with anything other than plastics — in particular, metal — has been expensive and painfully slow.
Now, however, it’s becoming cheap and easy enough to be a potentially practical way of manufacturing parts. If widely adopted, it could change the way we mass-produce many products.
In the short term, manufacturers wouldn’t need to maintain large inventories — they could simply print an object, such as a replacement part for an aging car, whenever someone needs it.
In the longer term, large factories that mass-produce a limited range of parts might be replaced by smaller ones that make a wider variety, adapting to customers’ changing needs.
The technology can create lighter, stronger parts, and complex shapes that aren’t possible with conventional metal fabrication methods. It can also provide more precise control of the microstructure of metals. In 2017, researchers from the Lawrence Livermore National Laboratory announced they had developed a 3-D-printing method for creating stainless-steel parts twice as strong as traditionally made ones.
Also in 2017, 3-D-printing company Markforged, a small startup based outside Boston, released the first 3-D metal printer for under $100,000.
Another Boston-area startup, Desktop Metal, began to ship its first metal prototyping machines in December 2017. It plans to begin selling larger machines, designed for manufacturing, that are 100 times faster than older metal printing methods.
The printing of metal parts is also getting easier. Desktop Metal now offers software that generates designs ready for 3-D printing. Users tell the program the specs of the object they want to print, and the software produces a computer model suitable for printing.
GE, which has long been a proponent of using 3-D printing in its aviation products (see “ 10 Breakthrough Technologies of 2013: Additive Manufacturing ”), has a test version of its new metal printer that is fast enough to make large parts. The company plans to begin selling the printer in 2018.
by Erin Winick
Artificial Embryos
Breakthrough: Without using eggs or sperm cells, researchers have made embryo-like structures from stem cells alone, providing a whole new route to creating life.
Why it matters: Artificial embryos will make it easier for researchers to study the mysterious beginnings of a human life, but they’re stoking new bioethical debates.
Key players: University of Cambridge; University of Michigan; Rockefeller University
Availability: Now
In a breakthrough that redefines how life can be created, embryologists working at the University of Cambridge in the UK have grown realistic-looking mouse embryos using only stem cells. No egg. No sperm. Just cells plucked from another embryo.
The researchers placed the cells carefully in a three-dimensional scaffold and watched, fascinated, as they started communicating and lining up into the distinctive bullet shape of a mouse embryo several days old.
“We know that stem cells are magical in their powerful potential of what they can do. We did not realize they could self-organize so beautifully or perfectly,” Magdalena Zernicka-Goetz, who headed the team, told an interviewer at the time.
Zernicka-Goetz says her “synthetic” embryos probably couldn’t have grown into mice. Nonetheless, they’re a hint that soon we could have mammals born without an egg at all.
That isn’t Zernicka-Goetz’s goal. She wants to study how the cells of an early embryo begin taking on their specialized roles. The next step, she says, is to make an artificial embryo out of human stem cells, work that’s being pursued at the University of Michigan and Rockefeller University.
Synthetic human embryos would be a boon to scientists, letting them tease apart events early in development. And since such embryos start with easily manipulated stem cells, labs will be able to employ a full range of tools, such as gene editing, to investigate them as they grow.
Artificial embryos, however, pose ethical questions. What if they turn out to be indistinguishable from real embryos? How long can they be grown in the lab before they feel pain? We need to address those questions before the science races ahead much further, bioethicists say.
by Antonio Regalado
Sensing City
Breakthrough: A Toronto neighborhood aims to be the first place to successfully integrate cutting-edge urban design with state-of-the-art digital technology.
Why it matters: Smart cities could make urban areas more affordable, livable, and environmentally friendly.
Key players: Sidewalk Labs and Waterfront Toronto
Availability: Project announced in October 2017; construction could begin in 2019
Numerous smart-city schemes have run into delays, dialed down their ambitious goals, or priced out everyone except the super-wealthy. A new project in Toronto, called Quayside, is hoping to change that pattern of failures by rethinking an urban neighborhood from the ground up and rebuilding it around the latest digital technologies.
Alphabet’s Sidewalk Labs, based in New York City, is collaborating with the Canadian government on the high-tech project, slated for Toronto’s industrial waterfront.
One of the project’s goals is to base decisions about design, policy, and technology on information from an extensive network of sensors that gather data on everything from air quality to noise levels to people’s activities.
The plan calls for all vehicles to be autonomous and shared. Robots will roam underground doing menial chores like delivering the mail. Sidewalk Labs says it will open access to the software and systems it’s creating so other companies can build services on top of them, much as people build apps for mobile phones.
The company intends to closely monitor public infrastructure, and this has raised concerns about data governance and privacy. But Sidewalk Labs believes it can work with the community and the local government to alleviate those worries.
“What’s distinctive about what we’re trying to do in Quayside is that the project is not only extraordinarily ambitious but also has a certain amount of humility,” says Rit Aggarwala, the executive in charge of Sidewalk Labs’ urban-systems planning. That humility may help Quayside avoid the pitfalls that have plagued previous smart-city initiatives.
Other North American cities are already clamoring to be next on Sidewalk Labs’ list, according to Waterfront Toronto, the public agency overseeing Quayside’s development. “San Francisco, Denver, Los Angeles, and Boston have all called asking for introductions,” says the agency’s CEO, Will Fleissig.
by Elizabeth Woyke
AI for Everybody
Breakthrough: Cloud-based AI is making the technology cheaper and easier to use.
Why it matters: Right now the use of AI is dominated by a relatively few companies, but as a cloud-based service, it could be widely available to many more, giving the economy a boost.
Key players: Amazon; Google; Microsoft
Availability: Now
Artificial intelligence has so far been mainly the plaything of big tech companies like Amazon, Baidu, Google, and Microsoft, as well as some startups. For many other companies and parts of the economy, AI systems are too expensive and too difficult to implement fully.
What’s the solution? Machine-learning tools based in the cloud are bringing AI to a far broader audience. So far, Amazon dominates cloud AI with its AWS subsidiary. Google is challenging that with TensorFlow, an open-source AI library that can be used to build other machine-learning software. Recently Google announced Cloud AutoML, a suite of pre-trained systems that could make AI simpler to use.
Microsoft, which has its own AI-powered cloud platform, Azure, is teaming up with Amazon to offer Gluon, an open-source deep-learning library. Gluon is supposed to make building neural nets — a key technology in AI that crudely mimics how the human brain learns — as easy as building a smartphone app.
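To give a sense of what "as easy as building a smartphone app" means in practice, here is roughly what defining and training a small image classifier looks like in one of today's high-level libraries. This sketch uses TensorFlow's Keras interface as a stand-in (Gluon's interface is similarly terse), and the layer sizes and the MNIST placeholder dataset are arbitrary choices.

```python
import tensorflow as tf

# A small feed-forward classifier for 28x28 grayscale digits (MNIST as a placeholder).
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=64)
```

The point of the cloud offerings is that even these few lines, plus the hardware to run them, can be rented rather than built in-house.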
It is uncertain which of these companies will become the leader in offering AI cloud services. But it is a huge business opportunity for the winners.
These products will be essential if the AI revolution is going to spread more broadly through different parts of the economy.
Currently AI is used mostly in the tech industry, where it has created efficiencies and produced new products and services. But many other businesses and industries have struggled to take advantage of the advances in artificial intelligence. Sectors such as medicine, manufacturing, and energy could also be transformed if they were able to implement the technology more fully, with a huge boost to economic productivity.
Most companies, though, still don’t have enough people who know how to use cloud AI. So Amazon and Google are also setting up consultancy services. Once the cloud puts the technology within the reach of almost everyone, the real AI revolution can begin.
by Jackie Snow
Dueling Neural Networks
Breakthrough: Two AI systems can spar with each other to create ultra-realistic original images or sounds, something machines have never been able to do before.
Why it matters: This gives machines something akin to a sense of imagination, which may help them become less reliant on humans—but also turns them into alarmingly powerful tools for digital fakery.
Key players: Google Brain, DeepMind, Nvidia
Availability: Now
Artificial intelligence is getting very good at identifying things: show it a million pictures, and it can tell you with uncanny accuracy which ones depict a pedestrian crossing a street. But AI is hopeless at generating images of pedestrians by itself. If it could do that, it would be able to create gobs of realistic but synthetic pictures depicting pedestrians in various settings, which a self-driving car could use to train itself without ever going out on the road.
The problem is, creating something entirely new requires imagination — and until now that has perplexed AIs.
The solution first occurred to Ian Goodfellow, then a PhD student at the University of Montreal, during an academic argument in a bar in 2014. The approach, known as a generative adversarial network, or GAN, takes two neural networks — the simplified mathematical models of the human brain that underpin most modern machine learning — and pits them against each other in a digital cat-and-mouse game.
Both networks are trained on the same data set. One, known as the generator, is tasked with creating variations on images it’s already seen — perhaps a picture of a pedestrian with an extra arm. The second, known as the discriminator, is asked to identify whether the example it sees is like the images it has been trained on or a fake produced by the generator — basically, is that three-armed person likely to be real? Over time, the generator can become so good at producing images that the discriminator can’t spot fakes. Essentially, the generator has been taught to recognize, and then create, realistic-looking images of pedestrians.
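The cat-and-mouse setup translates almost directly into code. Below is a deliberately tiny sketch in PyTorch that pits a generator against a discriminator over one-dimensional numbers drawn from a Gaussian instead of images; the network sizes, learning rates, and target distribution are arbitrary choices for illustration.

```python
import torch
from torch import nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.25 + 4.0   # "real" samples: mean 4, std 1.25

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    # 1) Train the discriminator to tell real samples from the generator's fakes.
    fake = G(torch.randn(64, 8)).detach()      # detach: don't update G on this step
    loss_d = bce(D(real_data(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator calls "real".
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())   # should drift toward roughly 4 and 1.25
```

Swap the one-dimensional numbers for images and the two small networks for deep convolutional ones and you have, in outline, the systems behind the results described next.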
The technology has become one of the most promising advances in AI in the past decade, able to help machines produce results that fool even humans.
GANs have been put to use creating realistic-sounding speech and photorealistic fake imagery. In one compelling example, researchers from chipmaker Nvidia primed a GAN with celebrity photographs to create hundreds of credible faces of people who don’t exist. Another research group made not-unconvincing fake paintings that look like the works of van Gogh. Pushed further, GANs can reimagine images in different ways — making a sunny road appear snowy, or turning horses into zebras.
The results aren’t always perfect: GANs can conjure up bicycles with two sets of handlebars, say, or faces with eyebrows in the wrong place. But because the images and sounds are often startlingly realistic, some experts believe there’s a sense in which GANs are beginning to understand the underlying structure of the world they see and hear. And that means AI may gain, along with a sense of imagination, a more independent ability to make sense of what it sees in the world.
by Jamie Condliffe
Babel-Fish Earbuds
Breakthrough: Near-real-time translation now works for a large number of languages and is easy to use.
Why it matters: In an increasingly global world, language is still a barrier to communication.
Key players: Google and Baidu
Availability: Now
In the cult sci-fi classic The Hitchhiker’s Guide to the Galaxy, you slide a yellow Babel fish into your ear to get translations in an instant. In the real world, Google has come up with an interim solution: a $159 pair of earbuds, called Pixel Buds. These work with its Pixel smartphones and Google Translate app to produce practically real-time translation.
One person wears the earbuds, while the other holds a phone. The earbud wearer speaks in his or her language — English is the default — and the app translates the talking and plays it aloud on the phone. The person holding the phone responds; this response is translated and played through the earbuds.
Google Translate already has a conversation feature, and its iOS and Android apps let two users speak as it automatically figures out what languages they’re using and then translates them. But background noise can make it hard for the app to understand what people are saying, and also to figure out when one person has stopped speaking and it’s time to start translating.
Pixel Buds get around these problems because the wearer taps and holds a finger on the right earbud while talking. Splitting the interaction between the phone and the earbuds gives each person control of a microphone and helps the speakers maintain eye contact, since they’re not trying to pass a phone back and forth.
The Pixel Buds were widely panned for subpar design.
They do look silly, and they may not fit well in your ears. They can also be hard to set up with a phone.
Clunky hardware can be fixed, though. Pixel Buds show the promise of mutually intelligible communication between languages in close to real time. And no fish required.
by Rachel Metz
Zero-Carbon Natural Gas
Breakthrough: A power plant efficiently and cheaply captures carbon released by burning natural gas, avoiding greenhouse-gas emissions.
Why it matters: Around 32 percent of US electricity is produced with natural gas, accounting for around 30 percent of the power sector’s carbon emissions.
Key players: 8 Rivers Capital; Exelon Generation; CB&I
Availability: 3 to 5 years
The world is probably stuck with natural gas as one of our primary sources of electricity for the foreseeable future. Cheap and readily available, it now accounts for more than 30 percent of US electricity and 22 percent of world electricity. And although it’s cleaner than coal, it’s still a massive source of carbon emissions.
A pilot power plant just outside Houston, in the heart of the US petroleum and refining industry, is testing a technology that could make clean energy from natural gas a reality. The company behind the 50-megawatt project, Net Power, believes it can generate power at least as cheaply as standard natural-gas plants and capture essentially all the carbon dioxide released in the process.
If so, it would mean the world has a way to produce carbon-free energy from a fossil fuel at a reasonable cost. Such natural-gas plants could be cranked up and down on demand, avoiding the high capital costs of nuclear power and sidestepping the unsteady supply that renewables generally provide.
Net Power is a collaboration between technology development firm 8 Rivers Capital, Exelon Generation, and energy construction firm CB&I. The company is in the process of commissioning the plant and has begun initial testing. It intends to release results from early evaluations in the months ahead.
The plant puts the carbon dioxide released from burning natural gas under high pressure and heat, using the resulting supercritical CO2 as the “working fluid” that drives a specially built turbine. Much of the carbon dioxide can be continuously recycled; the rest can be captured cheaply.
A key part of pushing down the costs depends on selling that carbon dioxide. Today the main use is in helping to extract oil from petroleum wells. That’s a limited market, and not a particularly green one. Eventually, however, Net Power hopes to see growing demand for carbon dioxide in cement manufacturing and in making plastics and other carbon-based materials.
Net Power’s technology won’t solve all the problems with natural gas, particularly on the extraction side. But as long as we’re using natural gas, we might as well use it as cleanly as possible. Of all the clean-energy technologies in development, Net Power’s is one of the furthest along to promise more than a marginal advance in cutting carbon emissions.
by James Temple
Perfect Online Privacy
Breakthrough: Computer scientists are perfecting a cryptographic tool for proving something without revealing the information underlying the proof.
Why it matters: If you need to disclose personal information to get something done online, it will be easier to do so without risking your privacy or exposing yourself to identity theft.
Key players: Zcash; JPMorgan Chase; ING
Availability: Now
True internet privacy could finally become possible thanks to a new tool that can — for instance — let you prove you’re over 18 without revealing your date of birth, or prove you have enough money in the bank for a financial transaction without revealing your balance or other details. That limits the risk of a privacy breach or identity theft.
The tool is an emerging cryptographic protocol called a zero-knowledge proof. Though researchers have worked on it for decades, interest has exploded in the past year, thanks in part to the growing obsession with cryptocurrencies, most of which aren’t private.
Much of the credit for a practical zero-knowledge proof goes to Zcash, a digital currency that launched in late 2016. Zcash’s developers used a method called a zk-SNARK (for “zero-knowledge succinct non-interactive argument of knowledge”) to give users the power to transact anonymously.
That’s not normally possible in Bitcoin and most other public blockchain systems, in which transactions are visible to everyone. Though these transactions are theoretically anonymous, they can be combined with other data to track and even identify users. Vitalik Buterin, creator of Ethereum, the world’s second-most-popular blockchain network, has described zk-SNARKs as an “absolutely game-changing technology.” For banks, this could be a way to use blockchains in payment systems without sacrificing their clients’ privacy. Last year, JPMorgan Chase added zk-SNARKs to its own blockchain-based payment system.
For all their promise, though, zk-SNARKs are computation-heavy and slow. They also require a so-called “trusted setup,” creating a cryptographic key that could compromise the whole system if it fell into the wrong hands. But researchers are looking at alternatives that deploy zero-knowledge proofs more efficiently and don’t require such a key.
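zk-SNARKs themselves are far too involved to sketch here, but the older, interactive idea they descend from fits in a few lines. The toy below is a Schnorr-style identification protocol: the prover convinces a verifier that she knows a secret exponent without ever revealing it. It is a classroom ancestor of the concept, not what Zcash deploys, and the parameters are demo-sized.

```python
import random

# Toy Schnorr-style proof of knowledge: prove you know x with y = g^x mod p
# without revealing x. Real systems use larger, carefully chosen parameters
# and, in Zcash's case, a very different non-interactive construction.
p = 2**127 - 1            # a Mersenne prime, fine for a demonstration
q = p - 1
g = 3

secret_x = random.randrange(1, q)   # the prover's secret
y = pow(g, secret_x, p)             # the public value anyone may see

def prove_and_verify() -> bool:
    r = random.randrange(1, q)
    t = pow(g, r, p)                # prover's commitment
    c = random.randrange(1, q)      # verifier's random challenge
    s = (r + c * secret_x) % q      # prover's response: uses x but does not expose it
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check

print(all(prove_and_verify() for _ in range(10)))   # True every time
```

The verifier learns that the prover knows the secret and, in the honest setting, nothing else; zk-SNARKs compress that back-and-forth into a single short proof anyone can check.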
by Mike Orcutt
Genetic Fortune-Telling
Breakthrough: Scientists can now use your genome to predict your chances of getting heart disease or breast cancer, and even your IQ.
Why it matters: DNA-based predictions could be the next great public health advance, but they will increase the risks of genetic discrimination.
Key players: Helix; 23andMe; Myriad Genetics; UK Biobank; Broad Institute
Availability: Now
One day, babies will get DNA report cards at birth. These reports will offer predictions about their chances of suffering a heart attack or cancer, of getting hooked on tobacco, and of being smarter than average.
The science making these report cards possible has suddenly arrived, thanks to huge genetic studies — some involving more than a million people.
It turns out that most common diseases and many behaviors and traits, including intelligence, are a result of not one or a few genes but many acting in concert. Using the data from large ongoing genetic studies, scientists are creating what they call “polygenic risk scores.” Though the new DNA tests offer probabilities, not diagnoses, they could greatly benefit medicine. For example, if women at high risk for breast cancer got more mammograms and those at low risk got fewer, those exams might catch more real cancers and set off fewer false alarms.
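Mechanically, a polygenic risk score is just a big weighted sum: for each variant, count how many copies of the risk version a person carries (0, 1, or 2) and multiply by an effect size estimated in the association studies. The sketch below invents all of its numbers; real scores draw on thousands to millions of variants with published weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_variants = 10_000

effect_sizes = rng.normal(0, 0.01, n_variants)       # per-variant weights (hypothetical)
risk_allele_freq = rng.uniform(0.05, 0.95, n_variants)

def polygenic_score(genotype):
    """Genotype = number of risk alleles (0, 1, or 2) at each variant."""
    return float(genotype @ effect_sizes)

# Score a simulated cohort and see where one individual falls in the distribution.
cohort = rng.binomial(2, risk_allele_freq, size=(5_000, n_variants))
scores = cohort @ effect_sizes
person = rng.binomial(2, risk_allele_freq)
percentile = (scores < polygenic_score(person)).mean() * 100
print(f"individual's score sits at the {percentile:.0f}th percentile of the cohort")
```

The output is a ranking, not a diagnosis, which is why such scores are usually reported as percentiles of risk rather than yes-or-no answers.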
Pharmaceutical companies can also use the scores in clinical trials of preventive drugs for such illnesses as Alzheimer’s or heart disease. By picking volunteers who are more likely to get sick, they can more accurately test how well the drugs work.
The trouble is, the predictions are far from perfect. Who wants to know they might develop Alzheimer’s? What if someone with a low risk score for cancer puts off being screened, and then develops cancer anyway? Polygenic scores are also controversial because they can predict any trait, not only diseases. For instance, they can now forecast about 10 percent of a person’s performance on IQ tests. As the scores improve, it’s likely that DNA IQ predictions will become routinely available. But how will parents and educators use that information? To behavioral geneticist Eric Turkheimer, the chance that genetic data will be used for both good and bad is what makes the new technology “simultaneously exciting and alarming.” by Antonio Regalado Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Materials’ Quantum Leap jeremy liebman Materials’ Quantum Leap Breakthrough IBM has simulated the electronic structure of a small molecule, using a seven-qubit quantum computer.
Why it matters: Understanding molecules in exact detail will allow chemists to design more effective drugs and better materials for generating and distributing energy.
Key players: IBM; Google; Harvard’s Alán Aspuru-Guzik
Availability: 5 to 10 years
The prospect of powerful new quantum computers comes with a puzzle. They’ll be capable of feats of computation inconceivable with today’s machines, but we haven’t yet figured out what we might do with those powers.
One likely and enticing possibility: precisely designing molecules.
Chemists are already dreaming of new proteins for far more effective drugs, novel electrolytes for better batteries, compounds that could turn sunlight directly into a liquid fuel, and much more efficient solar cells.
We don’t have these things because molecules are ridiculously hard to model on a classical computer. Try simulating the behavior of the electrons in even a relatively simple molecule and you run into complexities far beyond the capabilities of today’s computers.
But it’s a natural problem for quantum computers, which instead of digital bits representing 1s and 0s use “qubits” that are themselves quantum systems. Recently, IBM researchers used a quantum computer with seven qubits to model a small molecule made of three atoms.
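The "natural problem" claim comes down to bookkeeping. Describing n qubits classically means tracking 2^n complex amplitudes, so the memory cost doubles with every qubit added, while quantum hardware carries that state natively. A quick illustration of the arithmetic (this is pure bookkeeping, not a chemistry simulation):

```python
import numpy as np

def statevector_cost(n_qubits: int):
    amplitudes = 2 ** n_qubits
    bytes_needed = amplitudes * np.dtype(np.complex128).itemsize
    return amplitudes, bytes_needed

for n in (7, 30, 50):   # 7 qubits matches IBM's demo; 50 already strains ordinary hardware
    amps, size = statevector_cost(n)
    print(f"{n:>2} qubits: {amps:.2e} amplitudes, ~{size / 1e9:.2e} GB as complex128")

# The 7-qubit case is small enough to hold explicitly:
state = np.zeros(2 ** 7, dtype=np.complex128)
state[0] = 1.0          # all seven qubits in the |0> state
print(state.nbytes, "bytes for the full 7-qubit state")
```

At 50 qubits the table already calls for petabytes of memory, which is the wall that makes molecules such an appealing first job for the real hardware.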
It should become possible to accurately simulate far larger and more interesting molecules as scientists build machines with more qubits and, just as important, better quantum algorithms.
by David Rotman
"
|
1,583 | 2,016 |
"What’s Next? | MIT Technology Review"
|
"https://www.technologyreview.com/magazines/whats-next"
|
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Magazine View previous issues MIT News Magazine What’s Next? It’s too late to stop climate change from happening. But we can begin to limit the damage and slow it down.
Letter from the editor
Features
Stop Emissions! A climate scientist argues that it should no longer be acceptable to dump carbon dioxide in the sky.
Witnessing Climate Change Everywhere On an Instagram account called everydayclimatechange, the photographer James Whitlow Delano curates pictures that document causes and effects of global warming and responses to it.
The Evidence Oceans are rising, Antarctica is losing its ice sheets, and the lower atmosphere is heating up. Satellite data shows the atmosphere is warming at its lowest layer (the troposphere), while the stratosphere, which begins around 10 kilometers above the ground, is cooling. Scientists say this is consistent with the greenhouse effect.
This Climate Policy Could Save the Planet Here’s a smart way for us to limit carbon emissions and keep global warming below 2 °C.
The Energy Startup Conundrum An inventor of a storage technology tries to outlast a brutal stretch for new energy companies.
A Change of Mind Diana Bianchi championed tests that find Down syndrome early in pregnancy. Now can she find a way to treat it?
Can This Man Make AI More Human? One cognitive scientist thinks the leading approach to machine learning can be improved by ideas gleaned from studying children.
Also in this issue
The Fast Rise of Ad Blockers
Hot and Violent Researchers have begun to understand the economic and social damage caused by climate change.
The End of Internet Advertising as We’ve Known It Millions of people are refusing to let intrusive, distracting, or irrelevant ads load on our devices. Consumers should seize the opportunity to demand a more mutually beneficial relationship with online advertisers.
Are Young Athletes Risking Brain Damage? Sports leagues should do more to protect children from the long-term problems that stem from hits to the head.
The Ideal Fuel A nanomaterials chemist has figured out a good way to mimic leaves and turn water and carbon dioxide into things we need.
A Conservative Proposition for Global Warming In 1990, a Brazilian politician proposed what he presumed would be a simple way to kick our fossil-fuel habit.
"
|
1,584 | 2,019 |
"When AI is a tool and when it's a weapon | VentureBeat"
|
"https://venturebeat.com/2019/11/11/when-ai-is-a-tool-and-when-its-a-weapon"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages When AI is a tool and when it’s a weapon Share on Facebook Share on X Share on LinkedIn The immense capabilities artificial intelligence is bringing to the world would have been inconceivable to past generations. But even as we marvel at the incredible power these new technologies afford, we’re faced with complex and urgent questions about the balance of benefit and harm.
When most people ponder whether AI is good or evil, what they’re essentially trying to grasp is whether AI is a tool or a weapon. Of course, it’s both — it can help reduce human toil, and it can also be used to create autonomous weapons. Either way, the ensuing debates touch on numerous unresolved questions and are critical to paving the way forward.
Hammers and guns When contemplating AI’s dual capacities, hammers and guns are a useful analogy. Regardless of their intended purposes, both hammers and guns can be used as tools or weapons. Their design certainly has much to do with these distinctions — you can’t nail together the frame of a house with the butt of a gun, after all — but in large part what matters is who’s wielding the object, what they plan to do with it, to whom or for whom, and why.
In AI, the gun-and-hammer metaphor applies neatly to two categories: autonomous military weapons and robotic process automation (RPA).
The prospect of AI-powered military-grade weapons can be terrifying. Like all advancements in weapons technology, their primary purpose is to kill people more efficiently, ostensibly while minimizing casualties on “our” side. Over centuries, humans have become increasingly distanced — literally and figuratively — from the direct impact of their weapons. The progression from swords to bows and arrows, muskets, rifles, mortars, bombers, missiles, and now drones represents technological advances that have moved us further and further away from our adversaries.
But humans have always been the ones releasing the arrow, pulling the trigger, or pressing the button. The question now is whether to give a killing machine decision-making power over who lives and who dies. That’s a new line to cross — and reiterates the need for human-in-the-loop AI design.
A small consolation: Among the people who seem most concerned about the spectre of autonomous weapons and the most ethical ways of addressing them are members of the Department of Defense (DoD). At an event in Silicon Valley in April 2019, the Defense Innovation Board (DIB) solicited wisdom around ethics and autonomous weapons from a collection of technologists, academics, retired military members, and activists. Recently, the DIB provided guidance to the DoD about ethics principles as they relate to both combat and non-combat AI systems, something DIB board chair and former Google CEO Eric Schmidt asserts can help lead to a national AI policy.
In contrast to military applications of AI, RPA is solidly hammer-like in that it’s obviously a tool. It automates mundane and time-consuming tasks, freeing human workers to be more efficient and spend more time on critical work. Its rapid growth and massive market opportunity are arguably due in part to the fact that instead of disrupting and killing off legacy industries, as technological innovations often do, it can actually give them new life, helping them stay competitive and relevant.
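To make the “tool” side concrete, here is a minimal sketch of the kind of chore RPA-style automation takes over: generating payment-reminder drafts from a spreadsheet export. The file names and columns are hypothetical, and this illustrates the general pattern rather than any vendor’s product.

```python
import csv
from pathlib import Path

# Hypothetical input: a CSV export of unpaid invoices from a legacy system.
INVOICES = Path("unpaid_invoices.csv")   # columns: customer, email, amount_due
OUT_DIR = Path("reminder_drafts")

def draft_reminder(customer: str, amount_due: str) -> str:
    """Fill a fixed template -- the kind of repetitive step automation takes off a person's plate."""
    return (
        f"Dear {customer},\n\n"
        f"Our records show an outstanding balance of ${amount_due}. "
        "Please arrange payment at your earliest convenience.\n"
    )

def run() -> None:
    OUT_DIR.mkdir(exist_ok=True)
    with INVOICES.open(newline="") as f:
        for row in csv.DictReader(f):
            draft = draft_reminder(row["customer"], row["amount_due"])
            # Write one draft per customer; a human still reviews and sends each one.
            (OUT_DIR / f"{row['customer']}.txt").write_text(draft)

if __name__ == "__main__":
    run()
```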
For companies — and even individual workers and teams — RPA can be empowering. But the downside is that automation often obviates existing jobs. Depending on the specific context, one might argue that a company could weaponize automation to gut its workforce, leaving throngs of workers adrift with their hard-won skills and experience suddenly obsolete.
Indeed, there’s concern that in this particular cycle of innovation — eliminating jobs and then creating new ones — too many are at risk of being left behind while the rich get richer.
The most vulnerable include those currently in lower-paying jobs and people of color. Concerns are articulated in reports from a National Academy of Sciences journal (PNAS), MIT , PwC Global , McKinsey , and the AI Now Institute.
These are challenges that have in part driven VentureBeat’s BLUEPRINT events , which look at the tech industry between the coasts and the unique challenges therein. The issue of automation comes up frequently at these events. Many of the new jobs that will emerge after automation require learning what amounts to a trade, rather than earning a degree in computer science. Sometimes those jobs may be in the same field; for example, autonomous trucks could displace truck drivers, but there will still need to be someone on board handling logistics and communications, which is a job that a former trucker may be able to move into with a modest amount of new training. A broad reskilling and upskilling effort can help displaced workers scale up to a better job than they had before.
Back and forth goes the power differential. Automation is a tool, is a weapon, is a tool.
In the murky middle Between the extremes of worker-aiding automation and killer drones lies almost all the other AI technology, and the middle is murky, to say the least. That’s where debate about AI becomes most difficult — but also comes into greater focus.
More than any other AI technology, facial recognition has shown clearly how an AI tool can be perverted into a weapon.
It is true that there are many beneficial uses of facial recognition technology. It can be used to diagnose genetic disorders and to help screen for potential human trafficking.
Law enforcement can use it to quickly track down and apprehend a terrorist or, as with the Detroit Police Department’s video surveillance program , easily locate suspects. There are perfectly neutral uses, too, like using it to augment a rich online shopping experience.
But even some of those applications of AI have a troubling ethical downside. In Detroit, even though the police chief seems as principled, transparent, and aware of potential abuses as one can be, the system is ripe for abuse. The New York Police Department has already abused its facial recognition technology to nab a suspect, for example. Even if such a system is never abused, though, and police only use it lawfully, citizens may still perceive it as a weapon. That perception alone can erode trust and cause people to live in fear.
The use of facial recognition in policing and sentencing, but also across miscellaneous fields like hiring, is deeply problematic. We know that facial recognition technology is often less accurate when applied to women and people of color, for instance, owing to models that were built with poor data sets. Data may also contain biases that are only magnified when applied in an AI context.
And there are reasonable moral objections to the very existence of facial recognition software, which has led multiple U.S. cities to ban the technology.
Democratic presidential candidate Senator Bernie Sanders (I-VT) has called for a ban on police use of facial recognition software.
What’s of graver concern are the deeply alarming abuses of facial recognition by governments, such as the persecution of Uighur Muslims in China , which was made possible in part because of Microsoft research , and for which the company was publicly criticized.
But Microsoft has also refused to sell facial recognition technology to law enforcement in California and to U.S. Immigration and Customs Enforcement (ICE). Amazon is on record as saying it will sell its Rekognition facial recognition technology to any government department (which would potentially include ICE) as long as they’re following the law.
All of the above calls into question the responsibility tech companies bear for selling facial recognition technology to governments.
In a session at Build 2019, Tim O’Brien, general manager of AI Programs at Microsoft, gave a slide presentation about how the company views AI ethics. It was mostly reassuring, because it showed Microsoft has been thinking hard about these issues and has drawn up principled internal guidelines. But during the Q&A session, O’Brien was asked to discuss issues around responsibility and regulation as they pertain to a company like Microsoft.
“There are four schools of thought,” he said. To paraphrase what he laid out, a company can take one of the following approaches:
1. We’re a platform provider, and we bear no responsibility (for what buyers do with the technology we sell them).
2. We’re going to self-regulate our business processes and do the right things.
3. We’re going to do the right things, but the government needs to get involved, in partnership with us, to build a regulatory framework.
4. This technology should be eradicated.
Microsoft, he said, subscribes to the third approach. “Depending on who you talk to, either in the public sector, customers, human rights activists — depending on … the interested parties you talk to, they’ll have a different point of view,” he said. “But we just keep pushing that rock uphill to try to educate policymakers on the importance.” On the one hand, that’s a pragmatic and responsible stance for a company to take. But on the other hand, does that mean Microsoft won’t even entertain the possibility that it shouldn’t create a technology just because it can? Because if so, the company is removing itself from a crucial debate about whether some technologies should exist at all. Technology companies need to not just participate in regulating the technologies they create; they need to consider whether some technologies should ever find their way out of the R&D lab in the first place.
Is the journey the destination? Because AI technologies can feel so huge, powerful, and untamable, the challenges they introduce can feel intractable. But they aren’t. A pessimistic view is that we’re doomed to be locked in an arms race with inevitably severe geopolitical ramifications. But we aren’t.
Structurally speaking, humanity has always faced these kinds of challenges, and the playbook is the same as it ever was. Bad actors have always and will always find ways to weaponize tools. It’s incumbent on everyone to push back and rebalance. For today’s AI challenges, perhaps the best place to start is with Oren Etzioni’s Hippocratic Oath for AI practitioners — an industry take on the medical profession’s “do no harm” commitment. The Oath includes a pledge to put human concerns over technological ones, to avoid playing god (especially when it comes to AI capabilities that can kill), to respect data privacy, and to prevent harm whenever possible.
The recent book Tools and Weapons: The Promise and the Peril of the Digital Age, by Microsoft president Brad Smith and his senior director of external relations and communications, Carol Ann Browne, revisits watershed moments in the company’s history and describes the problems and solutions around them. Smith’s perspective is unique, because his approach was less about the technologies themselves and more about the legal, ethical, moral, and practical concerns.
One anecdote in Tools and Weapons that stands out is when Microsoft ran into international legal issues with its data centers in Ireland. Essentially, the problem was about national sovereignty and data. What happens when one country wants to compel a tech company to turn over data that is stored in another country? “In some respects, it’s not a new issue,” wrote Smith. “For centuries, governments around the world agreed that a government’s power, including its search warrants, stopped at its border.” Smith wrote about the emergence of “mutual legal assistance treaties” (MLATs) that allowed for extradition and access to information across national borders — a way for countries to respect one another’s sovereignty while handling matters of significance, like criminal justice. But the people who crafted MLATs long ago could not have had any concept of cloud computing.
With data stored in the cloud — in data centers located in, say, Ireland on servers owned by Microsoft — the concept of international borders and access to that data was blown apart and left wide open for dangerous abuses. Smith wrote about how a law enforcement agency would try to bypass an MLAT by serving a tech company that was located in its jurisdiction, demanding data that was stored across an ocean in another country. Suddenly, tech companies were caught between sovereign governments.
Numerous laws that addressed related but not precisely applicable aspects of the problem, like wiretap laws, were simply insufficient to address this new technological advance. Lawmakers eventually created the Clarifying Lawful Overseas Use of Data (CLOUD) Act , which, Smith writes, “balanced the international reach for search warrants that the DoJ wanted with a recognition that tech companies could go to court to challenge warrants when there was a conflict of laws.” But it took years of work; two major lawsuits involving Microsoft; officials from at least four governments on three different continents; and all three branches of the U.S. government, including the U.S. Supreme Court and two presidents, to get it done.
Cloud computing is a strong example of what was a seemingly intractable new problem. Companies like Microsoft, alongside the governments of multiple nations, had to grapple with rethinking how cloud computing affected international borders, law enforcement jurisdictions, and property ownership and privacy rights of private citizens.
Though the CLOUD Act saga was particularly complex and protracted, the fundamental challenge of addressing new problems created by technological advances comes up multiple times throughout Tools and Weapons, around cybersecurity, the internet, social media, surveillance, opportunity gaps caused (and potentially solved) by technologies like broadband, and AI. In Smith’s retelling, the process of finding solutions was always similar. It required all stakeholders to act in good faith. They had to listen to concerns and objections, and they had to work together, often with rival companies — and in many cases, multiple international governments — to craft new laws and regulations. Sometimes the solutions required further technological innovations.
Unsurprisingly, Microsoft comes off looking quite favorable in Smith’s recollections in Tools and Weapons , and the text doesn’t provide a perfect playbook by any means. But it does serve as a reminder that the tech world has dealt with the same essential type of problems time and time again, and that people have worked hard and thoughtfully to find solutions.
Dealing with AI and its promises and problems requires moral outrage, political will, responsible design, careful regulation, and a means of holding the powerful accountable. Those in power — in AI, primarily the biggest tech companies in the world — need to act in good faith, be willing to listen, understand how and when tools can feel like weapons to people, and have the moral fortitude to do the right thing.
"
|
1,585 | 2,019 |
"The pitfalls of a 'retrofit human' in AI systems | VentureBeat"
|
"https://venturebeat.com/2019/11/11/the-pitfalls-of-a-retrofit-human-in-ai-systems"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The pitfalls of a ‘retrofit human’ in AI systems Share on Facebook Share on X Share on LinkedIn Stanislav Petrov is not a famous name in the computer science space, like Ada Lovelace or Grace Hopper , but his story serves as a critical lesson for developers of AI systems.
Petrov , who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union’s Air Defense Forces. On September 26, 1983, an alarm announced that the U.S. had launched five nuclear armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this technical detection system, was to escalate to leadership to launch Soviet missiles in retaliation, ensuring mutually assured destruction.
As the sirens blared, he took a moment to pause and think critically. Why would the U.S. send only five missiles? What purpose would they have to send them? There had been no indication in global political events that such a drastic measure was imminent. He chose to not follow protocol, and after some agonizing minutes realized he had made the right decision. There had been no missile attack; it was a false alarm. He was officially reprimanded by leadership for his decision to save the world.
The ability to take action based on context-specific human deduction is not accounted for in our sociotechnical algorithmic systems. Our language around AI systems anthropomorphizes technology, eliminating the human from the narrative. Linguistically, we structure our description of the technology as follows: “AI can diagnose heart disease in four seconds, as study shows machines now ‘as good’ as doctors.” This way of thinking reduces the actions of human doctors to rote tasks and presents the idea of an “AI doctor” as if it has a physical form and is capable of willful action. When we imagine such AI, it is not as code or algorithms, but through personifications like those drawn from the Terminator or, more benignly, Bjork’s music video “All is Full of Love.” In these scenarios, the human is no longer an empowered actor, but rather a passive recipient of outcomes.
The reality is that these systems are not all-knowing, not perfectly generalizable, and, in practice, often quite flawed. There are two ways in which designers and practitioners of algorithmic systems fail. First, they’re overconfident in the ability of AI to deliver a solution that is context-specific to the human subject. Second, they don’t incorporate a way for human actors to meaningfully challenge or correct the system’s recommendations. An extension of technochauvinism , a term coined by Meredith Broussard, “retrofit human” is the phenomenon of adjusting humans to the limitations of the AI system rather than adjusting the technology to serve humanity. The consequences of this are becoming increasingly evident as algorithms begin to impact our daily lives.
The use of risk-assessment algorithms in the criminal justice system has become a high-stakes topic of discussion, first catching the public eye with ProPublica’s analysis of Northpointe’s COMPAS parole algorithm.
Northpointe claims its software can predict recidivism rates. But Jeff Larson and Julia Angwin’s team of data scientists at ProPublica performed analysis and determined that COMPAS scores white people and black people unequally. Their work exposed not only the issues of bias and discrimination in the development and construction of algorithms in impactful situations; their statistical debate with Northpointe also illustrated the probabilistic, and therefore uncertain, nature of algorithmic output.
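The disparity ProPublica reported is easier to see with a concrete metric. The sketch below computes false positive rates per group on a tiny invented dataset (not the COMPAS data): among people who did not reoffend, how often were they flagged high risk? Unequal rates across groups are the kind of imbalance at the center of that statistical debate.

```python
from collections import defaultdict

# Invented toy records, not COMPAS data: each row has a group label, the tool's
# binary prediction (1 = flagged high risk), and the observed outcome
# (1 = reoffended within the follow-up window).
records = [
    {"group": "A", "flagged": 1, "reoffended": 0},
    {"group": "A", "flagged": 1, "reoffended": 1},
    {"group": "A", "flagged": 0, "reoffended": 0},
    {"group": "B", "flagged": 0, "reoffended": 0},
    {"group": "B", "flagged": 0, "reoffended": 1},
    {"group": "B", "flagged": 0, "reoffended": 0},
]

def false_positive_rates(rows):
    """Per group: of the people who did NOT reoffend, what share was flagged high risk?"""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for r in rows:
        if r["reoffended"] == 0:
            negatives[r["group"]] += 1
            flagged[r["group"]] += r["flagged"]
    return {g: flagged[g] / negatives[g] for g in negatives}

# A gap between groups (here 0.5 vs. 0.0) is what "scores unequally" means in practice.
print(false_positive_rates(records))
```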
By design, algorithms can’t make the final decision in many situations, but we have to enable a human in the loop to actually effect change. As any critical design scholar will tell you, simply inserting a human as an afterthought is woefully insufficient, especially when faced with the narrative of “all-knowing” AI systems. Understanding user interaction and power dynamics is critical to creating well-designed human-in-the-loop systems.
Some research indicates that people trust algorithms more than other people because of the perceived objectivity of data and AI.
In their research, Poornima Madhavan and Douglas Weigman tested for perceptions of reliability and trustworthiness of human versus AI decision-making. They noticed that in a luggage-sorting exercise, automated “novices” were considered to be more reliable and trustworthy than human “novices,” but human “experts” were thought to be more trustworthy than automated “experts.” On reliability overall, people thought algorithmic solutions were more reliable than human ones.
In other words, even in a low-skilled task, we see deference to the algorithm if the human is perceived to be in a less-empowered role. Even given the potential flaws in algorithms, people had to prove their superior ability, and the default position of power was given to the AI system.
Given the mystery around algorithms, people have a hard time understanding how to integrate this input into their decision-making. Ben Green and Yiling Chen find that in traditional human-in-the-loop algorithmic systems — in this case, a pretrial risk assessment scenario — participants could not determine how accurate the assessments (or the model) were, did not adjust their reliance on the system based on how well the model performed, and still showed racial bias in their decisions.
Outside the lab, how might the human-machine power dynamic change when we investigate high-skilled actors informed by AI? Megan T. Stevenson found that judges who were given pretrial risk assessment results to determine bail demonstrated little change in their decision-making , and any changes regressed back to their own biases over time. Similar to Green and Chen’s experiment above, if judges are not given the information to critically interrogate or contest an algorithm and are not held accountable for their decision to reject an algorithmic output in the system design, they may simply choose to ignore it.
But Stevenson’s findings illustrate how flawed design can lead to outcomes with embedded biases that adversely impact the less-empowered — in this case, the individual for whom the bail assessment is being made. The use of an algorithm makes the final decision appear to be more objective to an untrained observer, even if it did not influence the judge making the decision.
This makes the governance of these human-algorithmic systems and the contestability of the judges’ bail decisions more difficult. It also leads to a puzzling question: If we institute algorithmic advisory systems because we consider humans to be biased and then posit that algorithmic bias requires human oversight, where does this cycle end? Similarly, how do we combat technochauvinism and create systems that give humans the ability to contest results instead of being punished for non-adherence, like Petrov was? Our conversation about algorithmic bias needs to consider humans as both recipients and actors in the ecosystem. While Petrov’s case was not about the use of AI, it warns of the dangers of designing technical systems that assume the user will not exercise independent thought. The pitfalls of a retrofit human system — one in which the human is subject to the limitations of technology and not empowered to influence outcomes — appear when we fail to design truly meaningful interaction between algorithmic output and human beings.
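What a less retrofit design could look like is easy to sketch. In the hypothetical gate below, the model’s score is advisory, uncertain cases are routed to a person, and a human override is a recorded, legitimate outcome rather than a protocol violation to be reprimanded. The thresholds and field names are assumptions for illustration, not a description of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # "auto_approved", "auto_denied", or the reviewer's call
    decided_by: str       # "model" or "human"
    model_score: float
    rationale: str        # recorded so the decision can be contested later

def decide(model_score: float, human_review) -> Decision:
    """Route to a human unless the model is confident; overrides are first-class outcomes."""
    if model_score >= 0.95:
        return Decision("auto_approved", "model", model_score, "score above approve threshold")
    if model_score <= 0.05:
        return Decision("auto_denied", "model", model_score, "score below deny threshold")
    # The murky middle goes to a person, along with the score and the right to disagree.
    outcome, rationale = human_review(model_score)
    return Decision(outcome, "human", model_score, rationale)

# Example reviewer: a callable standing in for a real case worker.
def reviewer(score: float):
    return "approved", f"context outweighed the model's {score:.2f} score"

print(decide(0.40, reviewer))
```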
"
|
1,586 | 2,019 |
"Richard Bartle interview: How game developers should think about sapient AI characters | VentureBeat"
|
"https://venturebeat.com/2019/11/11/richard-bartle-interview-how-game-developers-should-think-about-sapient-ai-characters"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Richard Bartle interview: How game developers should think about sapient AI characters Share on Facebook Share on X Share on LinkedIn Westworld Richard Bartle is one of the leading academics on video games and is a senior lecturer and honorary professor of computer game design at the University of Essex in the United Kingdom. He might seem an unusual choice to talk about the ethics of artificial intelligence, but video game developers have grappled with the ethics of creating virtual worlds with AI beings in them for a long time. Not only do they have to consider the ethics of what they create in their own worlds, the game designers also have to consider how much control to grant players over the AI characters who inhabit the worlds. If game developers are the gods, then players can be the demi-gods.
He recently spoke about this topic in a fascinating talk in August at the IEEE Conference on Games in London. I interviewed him about our own interests in the intersection of AI, games, and ethics. He is in the midst of writing a book about the ethics of AI in games. His aim is to point out the unusual moral and ethical questions that AI specialists of the future will face.
I asked if sentient AI was on the horizon. He corrected me, noting “sapient AI” is the right description, as it refers to AI that are conscious, self-aware, and able to think. Before we create sapient non-player characters in games, Bartle believes we need an ethical system in place. And he’s not so sure that we should create them in the first place. Bartle believes that game developers are like gods of the worlds they create. “Those who control the physics of a reality are the gods of that reality,” he said.
Below is an edited transcript of our interview.
Above: This may be Richard Bartle on Skype. Or maybe his virtual character.
GamesBeat: It seems like a fascinating topic, and a very timely one. It feels more relevant to today’s headlines than ever before, I would guess.
Bartle: A lot of the general AI, ethics of AI, we’ve thought about for years. I did my PhD on AI in the 1980s. Some of the things that people talk about with AI, we were talking about back then — only hypothetically, but nevertheless we were considering these things. But some of the things I talk about in the deck are to do with games and AI in a way that we weren’t looking at it in the past.
Normally, when you look at AI and games, it’s using AI as weapons, using AI as ways to control a population, or using AI to increase your own intelligence. What happens when AI gets sentience and they want to kill us all? This kind of thing. The Terminator. The clue’s in the name. [laughs] But what I was looking at was different.
Let’s suppose that we have these AIs, but they’re in a pocket environment and they can’t get out. They can’t do anything to us except through us. How should we treat them? What’s right and what’s wrong? It turns out that when you look into the philosophy of this, well, the philosophers haven’t. They haven’t really looked at what it means to be someone in control of an entire reality in which intelligent beings live.
Theologists have, sort of, but they’ve only looked at our reality. They haven’t looked at a sub-reality in which we are the gods. They’ve looked at our reality in which they are proposing there are zero to infinity gods above. It’s a different area. But the thing is that people who’ve made these games have actual practical experience of what it means to be in control of them.
Now, we don’t have sentient–well, sapient is the correct word. We don’t have sapient non-player characters at the moment. The question I was asking is, we don’t know when we’re going to get them. It could be in 10 years or 1,000 years or a million years. But eventually, we will get them. And when we do get them, how are we going to treat them? What’s right and what’s wrong? That’s what I was asking.
The developers are gods Above: The Terminator GamesBeat: It seems like a lot matters in terms of what we call them. I know it’s at the top of your slide deck. You can refer to players as gods, and then if we’ve decided to call ourselves gods, then everything we do is justifiable, right? Bartle: Well, players aren’t gods. Designers and developers are gods. They control the reality, the physics of the world. The players can go in there and have — I suppose you could say they have godlike powers, but they can’t change the physics of the world. They have abilities beyond those of the non-player characters. For example, they can communicate with each other without NPCs being aware it’s happening.
GamesBeat: If we call ourselves that, we’ve already made a kind of ethical judgment, right? Bartle: Going back to Dungeons and Dragons terminology — gods, demi-gods, and heroes — the demi-gods are probably the customer service people. They have powers beyond what the regular mortals, the NPCs, have. But they don’t have physics-changing powers. The players would be the heroes. They’re going in there and they’re bigger, better, superior to the NPCs. But they’re not gods. They don’t have the full range of abilities that the customer service reps have. Customer service reps are probably the angels.
Westworld showed us the way? Above: Anthony Hopkins and Jeffrey Wright in Westworld.
GamesBeat: What was some of the reference material, if the philosophers didn’t really tackle this? Does something like Westworld do a better job? [laughs] Bartle: Oddly, when Westworld came out, I’d already thought about these things. The Westworld TV series thought in quite a lot more depth than the original movie, back in the ’70s. But the source materials — essentially it’s metaphysics. I read a whole bunch of things on metaphysics and meta-metaphysics. There’s even a book called Meta-Metaphysics, the metaphysics of metaphysics. I was looking for some of the problems that philosophers have about how the world is built and then saying, “Well, we don’t have that problem because we’ve had to do it.” This isn’t strictly to do with AI and ethics — but for example, philosophers have this problem to do with whether an object can share the same space as another object. If I take a lump of clay and I give it a name — everybody knows this particular lump of clay. It’s a particular color. Everybody knows it. Then I mold that clay into a statue or something. Somebody comes along who didn’t know it as clay, but sees it as a statue. Now there’s two objects there. One of them is the clay and one of them is the statue. Is that just clay that’s been shaped into the statue, so the clay is the statue? Or is [it] two objects that are somehow superimposed? As a game designer, you actually have to implement that. It’s one or the other. You make the decision, which one you’re going to go with. Similarly, the story of — was it Theseus’s ship? Someone’s ship. Or Lincoln’s axes. Was it Washington’s axes? Never mind. Basically, it’s the case where you have a ship, an old wooden ship, and it starts to get a bit worn out, so you take a few planks out and replace them. Then you notice the mast going a bit, so you take the mast out, replace the sails. Eventually you’ve replaced the whole ship. Is it the same ship? Furthermore, what if someone collects all the pieces you threw away and sticks them all together to make the original ship out of the same pieces? Is that the same ship? Or a different ship? If it was a magic ship, to which ship would the magic be attached? The one that’s been gradually replaced or the one that’s been built? These are questions which philosophers can discuss forever, and indeed have, and probably still will. But when it comes to game development, you have to implement it. Which of these are we going with? Game developers have an insight into what it means to be someone who controls the physics of a reality, if you call that a god. Because they have an insight, that means they can say things which may be of interest to philosophers and theologians.
The reality of The Sims Above: The Sims 4 GamesBeat: We have god games like the Sims. In that sense, the player in a god game is almost like a game developer creating a game, where they create the entities in the world. Is there a difference? Bartle: There is a difference, yes. The difference, in a god game, yes, you have the powers of a god, except for the powers of creating the reality. A bona fide, fully powered god can change the world. If you’re in, I don’t know, the Sims or something like that, a game where you’re able to influence characters by changing the world about them, you’re only changing the world within the constraints of the program. It doesn’t matter what you’ve got in the Sims. You’re not able to change the code that underlies it. But the developers can.
GamesBeat: I guess there’s an interesting hierarchy here, where you have the player in the Sims, the designer of the game, and then Andrew Wilson, the CEO of Electronic Arts, who’s the god over the game developers telling them what they can and can’t do.
Bartle: [laughs] Well, he’s not a god. He can instruct them. But he operates within the same physics as the game developers, the physics of reality. If the game developers say, “We’re just going to sit here and create the game we want,” he can sack them, but if all the developers get together and say, “No, we’re going to stay here and barricade ourselves in the office and finish the game,” then he has to go call the police. Then you have people who say, “No, we want this game finished,” and they’re all rooting for the developers. They go out and disarm the police and eventually you call in the army and there’s riots. But ultimately they all operate within the same physics of reality. They’re attempting to use the physics of reality to control the behavior of people in reality.
Now, the other thing developers could do is go over their heads. “Okay, I think there’s a higher power in a higher reality and I shall appeal to that. I’m going to pray you don’t do this.” Who is responsible? Above: A Richard Bartle slide.
GamesBeat: I guess what we’re getting at is, who’s responsible? The player has certain responsibilities and certain ethics. But so does the game developer, if they allow the player to do certain things, or give less freedom to the player. They’re marshaling their responsibility and their own sense of ethics.
Bartle: Game developers are an interesting situation. There’s a paradox about game design, which is that you impose constraints on what the players can do in order to free them up to do things that they couldn’t do if you hadn’t put the constraints on them. When you’re playing a game, you can behave differently to what you do in reality, because the game gives you that protection. It’s a frame, they call it.
When you’re playing a game, your behavior doesn’t have the same impact as it does in reality. It’s the same as if you’re an actor on a stage. If you’re an actor on a stage and you start using racist language, well, if that’s part of the play, you’re protected. If you suddenly start shouting, “There’s a fire!” and that’s not part of the play, and there isn’t a fire, then suddenly you’re liable. But if it’s part of the play and you start shouting about a fire, well, you’re fine, even if people get trampled to death trying to get out from the imaginary fire.
In game design, we impose these constraints, but the constraints allow you to operate in ways that you couldn’t normally. In MMOs, which is my field, they enable you to act in ways that, in real life, you couldn’t. But in so acting, you gain a better understanding of yourself, and so that affects what you might do in real life in a good way. That’s the theory anyway.
Obviously there’s responsibility. Because you could weaponize games, if you really wanted to. You could do an awful lot of things with them. I was in a group at Project Horseshoe where we considered ways to use games badly. It’s actually very easy to use games badly. If I wanted to create a game that would, I don’t know, give people carpal tunnel syndrome, I could. I could gaslight them. I could ruin their lives.
Fear of the future Above: A character in Until Dawn. Video game characters are looking amazingly realistic.
GamesBeat: It sounds like a Black Mirror episode.
Bartle: Yeah, yeah. We didn’t publish the paper, because if we did someone might act on it. We didn’t really want that. Game developers and designers, as it turns out, are on the whole quite ethical.
GamesBeat: You just got us into a loop there.
Bartle: My main aim, eventually, in the whole system, was to provoke people into think[ing] how they would behave if they were a god of NPCs. What are the right things to do and the wrong things to do? And then for them to say, “I’m not a god of the NPCs, but in reality I’m an NPC. How do I believe any god or gods who may or may not exist in our reality — how do I think they’re behaving? Is their behavior ethical by what I’ve just figured out using this thought experiment where I’m a god?” That was the point.
Are sapient game characters property? Above: Westworld’s hosts are disposable.
GamesBeat: If I assert that the game characters are my property — I bought them with my $60 for the game — can I just do anything I want with my property? That’s one question. I guess we’re getting to this day where, with sapient AI, the AI is so good that we’re no longer faking it. Then it seems to cross that line from property into something else.
Bartle: Yes. If you say, “I own this game and I can do what I like with it, because I own it,” well, actually, no. There are some things you might think you own, but you can’t do anything you like to. Children would be something that springs to mind. They’re my children, using the possessive, but I can’t just — yes, these children wouldn’t exist if I hadn’t gotten drunk that night. That doesn’t mean I have a full right to them.
If I create a game as a designer and the game’s got intelligent NPCs, then I sell that game to somebody else.
I’m not selling the NPCs. I’m just selling the world in which the NPCs live. But what happens when you lose interest and stop playing? All those characters are going to disappear and die? Did you just kill all those characters? That’s something we don’t really have an answer for at the moment.
"
|
1,587 | 2,019 |
"From black box to white box: Reclaiming human power in AI | VentureBeat"
|
"https://venturebeat.com/2019/11/11/from-black-box-to-white-box-reclaiming-human-power-in-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored From black box to white box: Reclaiming human power in AI Share on Facebook Share on X Share on LinkedIn Presented by Dataiku It’s hard to imagine what life was like before the peak of AI hype in which we currently find ourselves. But it was just a few years ago, in 2012, that Apple gave the world the first integrated version of Siri on the iPhone 4S, which people used to impress their friends by asking it banal questions. Google was just beginning to test its self-driving cars in Nevada. And the McKinsey Global Institute had recently released “Big data: The next frontier for innovation, competition, and productivity.” On the starting blocks of the race to release the next big AI-powered thing, no one was talking about explainable AI. Doing it first, even if no one truly understood how it worked, was paramount. That McKinsey Global Institute report gave a small amount of foreshadowing, predicting that businesses in nearly all sectors of the U.S. economy had at least an average of 200 terabytes of stored data. Back then, some companies were even doing something with that data, but those applications were mostly behind-the-scenes or extremely specialized. They were projects — largely siloed off from the core functions — that were maybe for those new people called data scientists to worry about, but certainly not the core of the business.
In the years that followed, things took off. By late 2012, data scientist, as most people are sick of hearing by now, was dubbed the sexiest job of the 21st century, and data teams started working feverishly with the masses of data that companies were storing. In fact, the roots of today’s AI movement crept into our lives with little resistance, despite (or perhaps because of) the fact that in the grand scheme of things, very few people actually understood the fundamentals of data science or machine learning.
Today, people are refused or given loans, accepted or denied entrance to universities, offered a lower or higher price on car insurance, and more, all at the hands of AI systems that usually offer no explanations. In many cases, humans who work for those companies can’t even explain the decisions. This is black box AI, and consumers increasingly — and often unknowingly — find themselves at its mercy. The issue has garnered so much attention that Gartner put explainable AI on the Top 10 Data and Analytics Technology Trends for 2019.
To be clear, “black box” is not synonymous with “malicious.” There are plenty of examples of black box systems that are doing good things, like analyzing imagery in healthcare to detect cancers or other conditions. The point is that while these systems are potentially more accurate from a technological perspective, models where humans cannot explain the outcomes — no matter what they’re trying to predict — can be harmful to consumers and to businesses. Harm aside, people simply have a hard time trusting what cannot be explained. The aforementioned healthcare example is instructive here, as AI systems often have high technical accuracy, but people don’t trust the machine-generated results.
Fortunately, the AI paradigm is shifting in two ways. One is on the consumer side — with increased focus and scrutiny around AI regulation, privacy, ethics, trust, and interpretability moving to the forefront. Consumers are starting to hold companies responsible for the AI-based decisions they make — and that’s a good thing.
The other shift is the approach from businesses, which are being forced to change their strategy partially because of consumer preference or increased legislation, but also because scaling AI efforts in a sustainable way (i.e., in a way that will continue to provide value into the future and not present risks) fundamentally requires a white box approach.
In other words, companies are starting to take note that turning AI into a business asset happens with large-scale, transparent adoption across departments and use cases, not by hiring data scientists to churn out the most cutting-edge models and throwing those models over the proverbial wall for the business to use.
Power in AI is no longer about who can make the most complex or accurate black box model with the data at hand; it’s about creating white box models that serve business needs, with an acceptable level of accuracy, and results that practitioners, executives, and customers can explain and understand. From there, it comes down to educating the people who are interacting with these models to do what humans do best and what AI systems cannot do: make judgments about whether the outputs make sense in context and whether they are working as intended — ideally, in a fair and unbiased way.
After all, it’s still people who make decisions about building models; they choose the data and which algorithm to apply. Humans (thankfully) aren’t machines, but that also means they can introduce their own biases that ultimately impact how that model acts in real business scenarios.
From a practical standpoint, explainable AI happens at several levels. It all starts with building the model; some algorithms are inherently more interpretable than others, and explainability is increasingly a topic of machine learning research. But ultimately, models that can be explained by data scientists or machine learning researchers might not be easily explained by a customer service representative (CSR). That’s where the idea of data democratization comes into play.
What would it take to get a CSR to explain to customers why they’re paying a certain price for their car insurance? It comes back again to trust via transparency — not only trust that the systems with which the CSR interacts are providing them with the right data, but trust in the data itself. And on top of all that, trust in the model. To get there, the CSR needs to not only understand what data goes into models, but where that data comes from, what it means, and how it influences the results of the model.
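As a toy illustration of that kind of transparency, the sketch below trains a simple white-box model on synthetic data; the insurance-flavored feature names are invented for the example. A linear model’s learned weights are an explanation a non-specialist can be taught to read, which is the property that more opaque models give up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, synthetic car-insurance features -- purely illustrative.
feature_names = ["at_fault_claims_3yr", "annual_mileage_10k", "years_licensed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic target: more claims and mileage raise the odds of a premium surcharge.
logits = 1.2 * X[:, 0] + 0.6 * X[:, 1] - 0.8 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# A linear model's coefficients are the explanation: each one says how a feature
# pushes the odds up or down, which a support rep can be taught to read.
for name, coef in zip(feature_names, model.coef_[0]):
    direction = "raises" if coef > 0 else "lowers"
    print(f"{name}: {direction} the surcharge odds (weight {coef:+.2f})")
```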
Clearly, wide-scale explainable AI requires a massive shift in organizations’ approach to data science and AI from the top down, but also from the bottom up. It’s about upskilling all employees so that they understand data and the systems it powers. It’s about setting up processes that allow white box systems to be democratized and used by all. It’s about investing in the right technologies and tools that both technologists and non-technical people can interact with.
It’s only out of this fundamental shift that companies will start creating products and systems that consumers can trust and will continue to use. That will require those in the C-suite to support and learn from those on the front lines and in the trenches when it comes to working with customers, data, processes, and the rest. And, of course, technology like AI platforms can fill in the gaps and encourage collaboration from all sides.
Perhaps more importantly, it’s also out of this shift that everyone will start to have a broader understanding of AI and the power it holds. If everyone in every job, no matter what their technical ability or background, interacts with AI systems and has a basic understanding of how they work, we’ll be better off than we are today. People will have the ability to work smarter on things that matter, not harder on repetitive processes, and that fulfills one of the greatest promises of AI.
Ultimately, organizational change will lead to a change in the wider public, giving people the ability to hold businesses accountable for the machine learning-powered systems they build. Democratization of data and AI isn’t just necessary in the workplace and to build the businesses of tomorrow, but also to make the AI-driven world one that we all want to live in.
Florian Douetteau is CEO of Dataiku.
"
|
1,588 | 2,019 |
"Facial recognition regulation is surprisingly bipartisan | VentureBeat"
|
"https://venturebeat.com/2019/11/11/facial-recognition-regulation-is-surprisingly-bipartisan"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facial recognition regulation is surprisingly bipartisan Share on Facebook Share on X Share on LinkedIn Bipartisanship in modern politics can seem kind of like an unbelievable, mythical creature. But in recent months, as Congress considered regulation of one of the most controversial topics it faces — how, when, or if to use facial recognition — we’ve gotten glimpses of a political unicorn.
In two House Oversight and Reform committee hearings last summer, some of the most prominent Republicans and Democrats in the United States Congress joined together in calls for legislative reform. Proponents of regulation ranged from Rep. Alexandria Ocasio-Cortez (D-NY) to Rep. Jim Jordan (R-OH), a frequent Trump supporter on cable news. On Friday, Jordan was also appointed to the House Intelligence Committee to confront witnesses in public presidential impeachment hearings that begin this week.
On the subject of facial recognition regulation, the House initiated hearings because of potential First, Fourth, and Fourteenth Amendment questions raised by the technology.
Calls for such hearings arose from a perceived lack of oversight on the part of law enforcement agencies like the FBI, as well as the technology’s track record of performing poorly on women and people with dark skin tones compared with white men, according to an April report from the Department of Commerce’s National Institute of Standards and Technology (NIST) and multiple audits.
Committee chair Elijah Cummings (D-MD) and Jordan — two of the best-known members of their respective parties — were in the process of drafting a bill, Vox reported this summer, but that did not come to pass. Cummings passed away on October 17.
Before he died, Cummings called the hearings a demonstration of significant bipartisan concern about the use of facial recognition “by our government against our people without adequate safeguards.” “I believe there should be front-end accountability for law enforcement’s use of facial recognition technology. I also believe that people should be informed of their participation in a facial recognition technology system and should be able to ‘opt-in’ when possible,” Cummings said in a statement provided to VentureBeat for this story before his passing. “This technology is evolving extremely rapidly, without any [real] safeguards, whether we are talking about commercial use or government use. There are real concerns about the risks that this technology poses to our civil rights and liberties, and our right to privacy.” The committee’s work followed the introduction of the Commercial Facial Recognition Privacy Act of 2019 , which would require businesses to receive consent before using facial recognition software. It was introduced by Senators Roy Blunt (R-MO) and Brian Schatz (D-HI).
Lawmakers in democratic societies around the world are scrambling to decide when and where such protections should be applied. Standards and models for how to treat facial recognition technology are popping up in multiple countries and with varying results. France and India are creating national facial recognition databases, China leans toward dystopian deployment in subways and crosswalks, and the European Commission reportedly plans to introduce a facial recognition law in line with existing major privacy protections.
Facial recognition regulation’s left-leaning roots In Congress, Democrats and Republicans alike have expressed interest in regulating facial recognition. But on a local level, efforts to regulate governmental use of facial recognition have been led by left-leaning municipalities.
Cities like San Francisco and New York, and state legislatures in Michigan and New Jersey , have been among the first to tackle the subject. This fall, California passed a three-year moratorium on facial recognition use in law enforcement body cameras. Conversation in these cases tends to vacillate between the need for a temporary moratorium and a permanent ban.
In May, San Francisco became the first to ban facial recognition use by police or city departments as part of a broader bill that creates a process for review of government surveillance technology. In June, Somerville, Massachusetts (near Harvard University and Boston) passed a similar ban, and in July, Oakland, California passed one as well. More recently, Berkeley, California passed a facial recognition ban, and Springfield, Massachusetts has considered one.
Ethicists and privacy activists like Oakland lawyer Brian Hofer favor an outright ban. Hofer helped coauthor legislation in San Francisco, Oakland, and Berkeley and finds facial recognition to be an irredeemable technology.
“I believe strongly that the technology will get more accurate, and that’s my greater concern, that it will be perfect surveillance,” he said. “It’ll be a level of intrusiveness that we never consented to the government having. It’s just too radical of an expansion of their power, and I don’t think walking around in my daily life that I should have to subject myself to mass surveillance.” Others, like Clare Garvie, believe a moratorium is appropriate to give communities time to consider regulation. A senior associate at the Georgetown University Center on Privacy and Technology, Garvie testified before Congress about facial recognition earlier this year.
Garvie works as part of the Perpetual Lineup project to document facial recognition use by local law enforcement. In its documentation in recent years, the project found virtually no oversight or standardization of facial recognition use by police across the U.S. Its most recent assessment last spring found that some departments use composite sketches or partial images to identify suspects with facial recognition — methods that can produce highly inaccurate results. Records obtained by the project found attempts by the NYPD to run a picture of actor Woody Harrelson through a facial recognition system because officers thought the suspect seen in drug store camera footage resembled the actor.
Before a moratorium is lifted, Garvie said she wants to see minimum photo quality standards, accuracy testing, and publicly available reports — like the kind mandated in San Francisco’s law — on how the government uses facial recognition tech.
The effect of proximity to tech centers
We don’t yet know much about people’s attitudes toward facial recognition and the extent to which it breaks down along partisan lines, but recent polls have shed some light on the matter.
A Pew Research survey released in September offered one of the first glimpses into it. The survey of 4,000 U.S. adults found that 65% of Republicans trust police with facial recognition, compared to 51% of Democrats.
Brooks Rainwater is director of the Center for City Solutions at the National League of Cities and is following the growing trend of facial recognition adoption by local governments. Cities exploring facial recognition regulation are almost all in blue parts of the country thus far, like New York, where a bill is being considered to regulate business use of the technology and ensure landlords don’t replace physical keys with facial recognition, and Portland, where city officials are considering a ban on private sector use of facial recognition.
Rainwater thinks local governments have been first to act because they’re able to pass legislation more quickly than Congress, but lawmakers at the regional and state levels are also grappling with how and when this technology should be used. Democrats and Republicans may approach facial recognition in different ways, but there’s concern on both sides, Rainwater noted. He said the fact that “dark blue” cities are the first to legislate facial recognition policy in the U.S. is likely a reflection of those cities’ proximity to the tech industry.
“I don’t think it’s necessarily that [partisan] dynamic so much as … cities that are on the bleeding edge from a technology perspective,” he said. “These are areas that are very well-versed in technology and its implications, and so the councilmembers, the mayors — all of them are very much invested in the larger technology conversation happening within cities.” The fact that surveillance camera networks are also largest in cities, according to Comparitech analysis , may be another factor.
Margaret O’Mara worked in the Clinton administration for Al Gore, but for the last decade she’s been a University of Washington professor focused on the history of national politics and tech policy. She agrees with Rainwater that proximity to the tech industry may have played an instrumental role in regulation efforts like those seen in the Bay Area or in the state of Washington, where lawmakers considered a facial recognition moratorium last year.
In tech policy battles, like the kind that took place over the commercial internet in the 1990s, a small cadre of lawmakers on both sides typically develops subject matter expertise, but as the issue moves forward, lawmakers tend to attach their own political priorities.
For Republicans, that can mean ensuring police get to use facial recognition as they see fit. For Democrats, it can translate to the protection of civil liberties and privacy. Arguments by left-leaning cities include fears of abuse by federal immigration officials like ICE, overpolicing in communities of color, and a growing understanding that facial recognition systems have a history of underperforming for people of color, particularly women.
The emergence of pockets of facial recognition regulation in early 2019 was also part of concerted efforts by organizations like the Electronic Frontier Foundation (EFF) and American Civil Liberties Union (ACLU). The EFF was part of a coalition that campaigned for a police body camera moratorium in California , while the ACLU helped draft legislation in multiple cities that voted to pass bans.
In recent weeks, the ACLU filed a lawsuit against the FBI, Department of Justice (DoJ), and Drug Enforcement Administration (DEA) for failing to respond to freedom of information requests about those agencies’ use of facial recognition tools from AWS and Microsoft. The ACLU has also waged a persistent media campaign to increase public awareness of performance bias in Rekognition, AWS’ facial recognition tech, by using it on members of Congress, the California state legislature, and even NFL players.
“That is a very tried and true and effective strategy,” said O’Mara. “If you go back 100 years to the Progressive Era, reformers who wanted to change the system started at the state and local level and moved their way upwards, and that was a very effective way to do it.” Back in the 1990s, when policy for today’s commercial internet was being created, organizations like the EFF also helped shape policy by meeting with lawmakers on both sides of the aisle, including former House Speaker Newt Gingrich (R-Ga.) and Sen. Ed Markey (D-Mass.).
Broader tech issues
Facial recognition regulation is not the only tech issue that’s getting bipartisan support; efforts to regulate tech giants and better protect user privacy on other fronts are also gaining ground.
In late October, Sens. Mark Warner (D-Va.), Richard Blumenthal (D-Conn.), and Josh Hawley (R-Mo.) worked together to introduce the ACCESS Act, a bill requiring social media companies like Facebook to make user data portable. (Doing so could help make way for social media alternatives to Facebook.) In July 2018, Facebook, Google, Microsoft, and Twitter became inaugural members of the Data Transfer Project to move data between platforms.
“It’s really intriguing because we’re in a moment where Republicans and Democrats really don’t agree on anything, [but] on many issues of tech regulation, they’re sort of strange political bedfellows,” O’Mara said.
It appears feelings of existential threat can inspire lawmakers to work together.
Local and national lawmakers in the U.S. cite facial recognition applications in China — where the technology is being used to track dissidents, catch criminal suspects, scan subway passengers, power commerce, and publicly shame jaywalkers — as concerning.
This kind of widespread implementation, and the prospect of similar privacy invasions at home, have generated a sense of urgency among lawmakers on both sides of the aisle.
“What the Chinese government has done very effectively is deployed facial recognition on a broader basis, and people look across to China and kind of see where it’s going. And I think our thing was ‘Let’s pump the brakes on this a little bit,’ and that’s fair,” O’Mara said.
The fight around facial recognition is separate from antitrust regulatory efforts to break up big tech monopolies, though there is some overlap. But concern about the biggest tech companies’ dominance is an issue that has also garnered some semblance of bipartisan support.
“[Tech regulation] is going to be the defining issue of the next decade,” O’Mara said. “I see it as sort of dominating the policy conversation in the way that the regulation of big oil and railroads and steel in the first decade of the 20th century was a dominant conversation.” O’Mara said the current political moment reminds her of attempts to regulate the modern commercial internet in the 1990s and of the antitrust issues that arose during Teddy Roosevelt’s presidency (1901-1909).
“I’ve studied that past era deeply, and the parallels are striking. It’s not history repeating itself, but it’s no surprise that these very large companies have come under such sharp regulatory scrutiny simply because they are too big to ignore and escape,” she said. “I mean, these large companies are pretty nervous because it’s not just one party that’s criticizing them; it’s both.” On both the local and national level, there seems to be a growing interest in facial recognition regulation.
In local politics, lawmakers in left-leaning cities have been the first to introduce and pass regulation. In Congress, Rep. Carolyn Maloney (D-N.Y.) is acting chair of the Oversight and Reform Committee, but Cummings’ death appears to have left a power vacuum , and Jordan seems busy defending the president against impeachment. The committee could return to legislation that was reportedly being drafted by Cummings and Jordan or advance a bill to ban facial recognition in most public housing facilities that was cosponsored by committee members Rep. Ayanna Pressley (D-Mass.) and Rep. Rashida Tlaib (D-Mich.).
In the near term, politicians from both parties and committee staff will likely remain busy with impeachment hearings that begin this week. Jordan may also face continued scrutiny from a very different source: In a lawsuit filed last week, a second person came forward to accuse him of ignoring allegations of sexual molestation by a team doctor when he was an assistant coach of the Ohio State University wrestling team in the 1990s.
However recent efforts to regulate facial recognition in Congress turn out, lawmakers at all levels of government and from both parties are likely to keep pushing for regulation until meaningful national action is taken, because in contrast to virtually every previous form of identification, you can’t hide or opt out of your face.
"
|
1,589 | 2,019 |
"As AI grows, users deserve tools to limit its access to personal data | VentureBeat"
|
"https://venturebeat.com/2019/11/11/as-ai-grows-users-deserve-tools-to-limit-its-access-to-personal-data"
|
Opinion: As AI grows, users deserve tools to limit its access to personal data
Hollywood's portrayals of AI-controlled Terminator robots once seemed like pure science fiction, but AI and robotics have been catching up with James Cameron's legendary nightmares.
My name is John.
My name is John Connor, and I live at 19,828 Almond Avenue, Los Angeles.
My name is John Connor, I live at 19,828 Almond Avenue, Los Angeles, and my California police record J-66455705 lists my supposedly expunged juvenile convictions for vandalism, trespassing, and shoplifting.
Which of these levels of personal detail do you feel comfortable sharing with your smartphone? And should every app on that device have the same level of knowledge about your personal details? Welcome to the concept of siloed sharing. If you want to keep relying on your favorite device to store and automatically sort through your data, it’s time to start considering whether you want to trust device-, app-, and cloud-level AI services to share access to all of your information, or whether there should be highly differential access levels with silo-class safeguards in place.
The siloing concept
Your phone already contains far more information about you than you realize. Depending on who makes the phone’s operating system and chips, that information might be spread across storage silos — separate folders and/or “secure enclaves” — that aren’t easily accessible to the network, the operating system, or other apps. So if you take an ephemeral photo for Snapchat, you might not have to worry that the same image will be sucked up by Facebook without your express permission.
Or all of your data could be sitting in one big pocket- or cloud-sized pool, ready for inspection. If you’ve passively tapped “I agree” buttons or given developers bits of personal data, you can be certain that there’s plenty of information about you on multiple cloud servers across the world. Depending on the photos, business documents, and calendar information you store online, Amazon, Facebook, Google, and other technology companies may already have more info about your personal life than a Russian kompromat dossier.
A silo isn’t there to stop you from using these services. It’s designed to help you keep each service’s data from being widely accessible to others — a particular challenge when tech companies, even “privacy-focused” ones such as Apple, keep growing data-dependent AI services to become bigger parts of their devices and operating systems.
Until recently, there was a practical limit to massive data gathering and mining operations: Someone had to actually sift through the data, typically at such considerable time and expense that only governments and large corporations could allocate the resources. These days, affordable AI chips handle the sifting, most often with at least implicit user consent, for every cloud and personal computer promising “smart” services. As you bring more AI-assisted doorbells, cameras, and speakers into your home, the breadth and depth of sifted, shareable knowledge about you keeps growing by the day.
It’s easy to become comfortable with the conveniences AI solutions bring to the table, assuming you can trust their makers not to misuse or share the data. But as AI becomes responsible for processing more and more of your personal information, who knows how that data will be used, shared, or sold to others? Will Google leverage your health data to help someone sell you insurance? Or could Apple deny you access to a credit card based on private, possibly inaccurate financial data? Of course, the tech giants will say no. But absent hard legal or user-imposed limits on what can be done with private data, the prospect of making (or saving) money by using AI-mined personal information is already too tempting for some companies to ignore.
Sounding the AI alarm
Artificial intelligence’s potential dangers landed fully in the public consciousness when James Cameron’s 1984 film The Terminator debuted, imagining that in 1997 a “Skynet” military computer would achieve self-awareness and steer armies of killer robots to purge the world of a singular threat: humanity. Cameron seemed genuinely concerned about AI’s emerging threats, but as time passed, society and Terminator sequels realized that an AI-controlled world wasn’t happening anytime soon. In the subsequent films, Skynet’s self-awareness date was pushed off to 2004, and later 2014, when the danger was reimagined as an evil iCloud that connected everyone’s iPhones and iPads.
Above: Real-time AI-aided facial and body analysis was famously illustrated in Terminator 2.
Putting aside specific dates, The Terminator helped spread the idea that human-made robots won’t necessarily be friendly and giving computers access to personal information could come back to haunt us all. Cameron originally posited that Sarah Connor’s name alone would be enough information for a killer robot to find her at home using a physical phone book. By the 1991 sequel, a next-generation robot located John Connor using a police car’s online database. Today, while the latest Terminator film is in theaters, your cell phone is constantly sharing your current location with a vast network infrastructure, whether you realize it or not.
If you’re a bad actor, this means a bounty hunter can bring you in on an outstanding warrant. If you’re really bad, that’s enough accuracy for a government to pinpoint you for a drone strike. Even if you’re a good citizen, you can be located pretty much anytime, anywhere, as long as you’re “on the grid.” And unlike John Connor in Terminator 3 , most of us have no meaningful way of getting off that grid.
Above: The Windows Apps team at Microsoft created a Terminator Vision HUD for HoloLens. It might not seem cute in the future.
Location tracking may be the least of your concerns. Anyone in the U.S. with a federally mandated Real ID driver’s license already has a photo, address, social security number, and other personal details flowing through one or more departments of motor vehicles, to say nothing of systems police can access on an as-needed basis. U.S. residents who have applied for professional licenses most likely have fingerprints, prior addresses, and possibly prior employers circulating in some semi-private databases, too. Top that off with cross-referenced public records, and it’s likely that your court appearances, convictions, and home purchases are all part of someone’s file on you.
Add more recent innovations — such as facial recognition cameras and DNA testing — to the mix, and you’ll have the perfect cocktail for paranoia. Armed with all of that data, computer systems with modern AI could instantly identify your face whenever you appear in public while also understanding your strengths and weaknesses on a genetic level.
The further case for siloing data
As 2019 draws to a close, the idea that a futuristic computer would need to locate you using a phone book seems amusingly quaint. Fully self-aware AIs aren’t yet here, but partially autonomous AI systems are closer to science fact than science fiction.
There’s probably no undoing any of what’s already been shared with companies; the data points about you are already out there, and heavily backed up. To the extent that databases of personal information on millions of people might once have lived largely on gigantic servers in remote locations, they now fit on flash drives and can be shared over the internet in seconds. Hackers trade them for sport.
Absent national or international laws to protect personal data from being aggregated and warehoused — Europe’s GDPR is a noteworthy exception, with state-level alternatives such as California’s CCPA — our solutions may wind up being personal, practical, and technological. Those of us living through this shift will need to start clamping down on the data we share and teach successive generations, starting with our kids, to be more cautious than we were.
Based on what’s been happening with social networks over the past decade, that’s clearly going to be difficult. Apart from its behind-the-scenes data mining, Facebook hosts innocuous-looking “fun” surveys designed to get people to cough up bits of information about themselves, historically while gathering additional information about a user’s profile, friends, and photos. We’ve been trained to become so numb to these information-gathering activities that there’s a temptation to just stop caring and keep sharing.
To wit, phones now automatically upload personal photos en masse to cloud servers where they’re sorted by date and location at a minimum, and perhaps by facial-, object-, and text recognition as well. We may not even know that we’re sharing some of the information; AI may glean details from an image’s background and make inferences missed by the original photographer.
Cloud-based, AI-sorted storage has a huge upside: convenience. But if we’re going to keep relying on someone else’s computers and AI for assistance with our personal files, we need rules that limit their initial and continued access to our data. Although it might be acceptable for your photo app to know that you were next to a restaurant when a fire broke out, you might not want that detail — or your photos at the restaurant — automatically shared with investigators. Or perhaps you do. That’s why sharing silos are so important.
Building the right sharing silos
Right now, consumer AI solutions such as Amazon’s Alexa, Apple’s Siri, Google Assistant, and Microsoft’s Cortana feel to users like largely discrete tools. You call upon them as needed and otherwise forget they’re constantly running in the background, just waiting to instantly respond to your voice.
We’re already at the point where these “assistants” are becoming fully integrated into the operating systems we rely on every day. Apart from occasional complaints about Siri’s internet connectivity, assistants draw upon data from the cloud and our devices so quickly that we generally don’t even realize it’s happening.
This raises three questions. The first is how much your most trusted Android, iOS, macOS, or Windows device actually “knows” about you, with or without your permission. A critical second question is how much of that data is being shared with specific apps. And a related third question is how much app-specific data is being shared back to the device’s operating system and/or the cloud.
Users deserve transparent answers to all of these questions. We should also be able to cut the data at any time, anywhere it’s being stored, without delay or permission from a gatekeeper.
For instance, you might have heard about Siri’s early ability to reply to the joke query, “Where do I bury a body?” That’s the sort of question (almost) no one would ask, jokingly or otherwise, if they thought their digital assistant might contact the police. What have you asked your digital assistant — anything potentially embarrassing, incriminating, or capable of being misconstrued? Now imagine that there’s a database out there with all of your requests, and perhaps some accidentally submitted recordings or transcripts, as well.
In a world where smartphones are our primary computers, linked both to the cloud and to your laptop, desktop, tablet, or wearable devices, there must be impenetrable data-sharing firewalls both at the edges of devices and within them. And users need to have clear, meaningful control over which apps have access to specific types, and perhaps levels, of personal data.
There should be multiple silos at both the cloud and device OS levels, paired with individual silos for each app. Users shouldn’t just have the ability to know that “what’s on your iPhone stays on your iPhone” — they should be able to know that what’s in each app stays in that app, and enjoy continuous access (with add/edit/delete rights) to each personal data repository on a device, and in the cloud.
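To make the proposal concrete, here is a minimal sketch of what per-app silos with user-controlled access could look like; the category names, access levels, and API below are hypothetical illustrations, not a description of any existing operating system.

from enum import Enum, auto

class DataCategory(Enum):
    PHOTOS = auto()
    LOCATION = auto()
    HEALTH = auto()
    VOICE_HISTORY = auto()

class AccessLevel(Enum):
    NONE = 0      # the app sees nothing in this silo
    SUMMARY = 1   # the app sees aggregated or coarse data only
    FULL = 2      # the app sees raw records

class SiloRegistry:
    """Hypothetical per-app data silos: default-deny, explicit, revocable grants."""
    def __init__(self):
        self._grants = {}  # (app_id, DataCategory) -> AccessLevel

    def grant(self, app_id, category, level):
        self._grants[(app_id, category)] = level

    def revoke(self, app_id, category):
        self._grants.pop((app_id, category), None)

    def can_read(self, app_id, category, requested):
        allowed = self._grants.get((app_id, category), AccessLevel.NONE)
        return allowed.value >= requested.value

# A fitness app only ever sees health summaries; an insurer's app sees nothing.
registry = SiloRegistry()
registry.grant("fitness_app", DataCategory.HEALTH, AccessLevel.SUMMARY)
print(registry.can_read("fitness_app", DataCategory.HEALTH, AccessLevel.SUMMARY))  # True
print(registry.can_read("fitness_app", DataCategory.HEALTH, AccessLevel.FULL))     # False
print(registry.can_read("insurer_app", DataCategory.HEALTH, AccessLevel.SUMMARY))  # False

The point of the sketch is the default-deny posture: an app with no explicit, user-granted entry in the registry gets nothing, and every grant can be revoked at any time.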
AI is power, but AI is powerless without data
That’s not to say that every person or even most people will dive into these databases. But we should have the ability, at will, to control which personal details are being shared. Trusting your phone to know your weight or heart rate shouldn’t mean granting the same granular data access to your doctor or life insurance provider — unless you wish to do so.
As operating systems grow to subsume more previously non-core functions, such as monitoring health and sorting photos, internal silos within operating systems may be necessary to keep specific types of data (or summaries of that data) from being shared. There are also times when people will want to use certain device features and services anonymously. That should be an option.
AI wields the power to sift through untold quantities of data and make smart decisions with minimal to no human involvement, ideally for the general benefit of people and humanity as a whole. AI will transform more facets of our society than we realize, and it is already reshaping daily life, for better and for worse.
Data is the fuel for AI’s power. As beneficial as AI can be, planning now to limit its access to personal data using silos is the best way to stop or reduce problems down the road.
"
|
1,590 | 2,019 |
"AI in patent law: Enabler or hindrance? | VentureBeat"
|
"https://venturebeat.com/2019/11/11/ai-in-patent-law-enabler-or-hindrance"
|
AI in patent law: Enabler or hindrance?
Filing a patent is the clerical equivalent of pulling teeth — at least in the U.S. It first requires inventors to determine the type of intellectual property (IP) protection they require (i.e., utility, design, or plant). Then they’re on the hook to conduct a United States Patent and Trademark Office (USPTO) database search for similar inventions. If and only if the novelty of their idea passes muster are they allowed to proceed to the next step, which is preparing an application and fees.
The system has motivated people like former aerospace engineer Dr. Stephen Thaler to turn to AI in pursuit of a better way. He, along with a team of legal experts and engineers, developed DABUS, a “creativity machine” that’s able to generate ideas without human intervention. A “critic” component within DABUS monitors the system’s idea-generating modules, enabling it to isolate and ripen those that evince the most utility. In August, three agencies — the USPTO, European Patent Office (EPO), and United Kingdom Intellectual Property Office (UKIPO) — received two patent dockets filed by Thaler’s DABUS. One is for a beverage container, and the other is for a light that flashes in a rhythm that’s hard to ignore.
It’s not unreasonable to say that DABUS has paved the way for AI invention systems yet to come — and opened something of a Pandora’s box in the process. At issue is whether AI qualifies as an inventor under statutes like the America Invents Act, which defines a filer as “the individual or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” Some scholars argue that “individual” might be interpreted broadly to mean machines as well as people and that denying an AI the right of conception could hinder innovation. But others assert that allowing AI to be credited with an invention might deter human developers who find themselves unable to compete.
Who, or what, can be an “inventor”?
But Dennis Crouch, associate professor of law at the University of Missouri School of Law, noted that most inventions today are computer-assisted, and that DABUS and similar programs could be perceived as the evolution of the paradigm. “The computer does some amount of the work — handling any complex math, assisting in visualization, and searching for solutions, and much more,” he told VentureBeat in an email. “In deciding who is the ‘inventor,’ the courts look to figure out what person instructed the computer to do the work, and also what person was the first to recognize a solution brought to them by the computer.” However, critics like Brad Hulbert, founding partner with McDonnell Boehnen Hulbert & Berghoff, argue that the works of AI should be treated differently from those to which human inventors contributed. His reasoning? The standards of obviousness applied to human art are trivially easy for machines — and their makers — to exploit.
“Previously, a device or process could be patentable if it would not have been ‘obvious to a person of ordinary skill’ in the pertinent art,” he told VentureBeat in an email. “There is no answer yet as to whether such a ‘person of ordinary skill’ is an individual or a machine, [but the] more machines recognize the pattern that leads to invention, the greater the reason for the barrier for patentability to be raised.” “Almost certainly, machines will keep getting better at pattern recognition ,” he added.
The bar has already been raised in the European Union, where the European Patent Convention limits inventorship to “natural persons.” Interpreted humanistically, it precludes any patent author that is algorithmic in makeup (i.e., an AI system or software) from registering for IP protections in the region.
But it’s not inconceivable that an entrepreneur with the wherewithal could skirt even those restrictions. Crouch offers a hypothetical: A perpetual trust for charitable giving is created with an AI as the beneficiary, and a banker or a lawyer as the trustee. The trust could hire a person — the “inventor” — with a contractual agreement to transfer all patent rights to the trust. This person would have some interaction with the AI, and they would always be the first human told about the AI’s newest inventions. In this arrangement, that human, rather than the AI, would legally be the first to conceive the idea.
For better or for worse?
But wait a minute, you might be thinking: Is AI that lessens the burden on inventors as dystopian as some are making it out to be? After all, only half of the patents submitted in the U.S. last year (roughly 300,000) were issued to their respective filers, while the World Intellectual Property Organization passed a record-breaking quarter-million filing mark in 2018 (a 3.9% increase over 2017). As of June 2011, it took the USPTO an average of 3.7 years to rule on a patent — and upwards of seven years for applications that were more complicated in nature.
If the parties producing and using AI inventors of the future act in good faith, they might streamline an approval pipeline that’s been gummed up by duplicative ideas. Pranay Agrawal, CEO and cofounder of analytics service provider Fractal Analytics, believes an AI in the loop could reduce costs and increase patent validation accuracy by lending a hand in literature review, plagiarism cross-checking, and fraud detection.
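As a rough illustration of the kind of screening Agrawal describes, the toy script below uses TF-IDF cosine similarity to flag a new application that overlaps heavily with prior filings; the abstracts, threshold, and workflow are invented for the example and are not Fractal's system or any patent office's process.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical prior-art abstracts and one incoming application.
prior_art = [
    "A beverage container with a fractal-profiled wall for improved grip.",
    "A light source that pulses in a pattern designed to attract attention.",
    "A method for routing delivery drones around restricted airspace.",
]
new_application = "A drink container whose fractal wall profile improves grip."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(prior_art + [new_application])

# Compare the new application (last row) against each prior-art document.
query = matrix[len(prior_art)]
scores = cosine_similarity(query, matrix[:len(prior_art)]).flatten()

for doc, score in sorted(zip(prior_art, scores), key=lambda pair: -pair[1]):
    flag = "needs human review" if score > 0.3 else "low overlap"
    print(f"{score:.2f}  {flag}  {doc}")

A real review pipeline would go far beyond bag-of-words similarity, but even this level of automation suggests where examiner time could be focused.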
“[AI] frees up human beings to focus on more value-added tasks,” Agrawal told VentureBeat via email. “A human [and] AI can be a much more fruitful, productive, and happier combination in the patent approval process. Using AI to fight AI to reduce bad actors may be a way to go.” But Hulbert points out that this optimistic pragmatism sidesteps sticky existential questions respecting AI’s agency — and the act of invention itself. “Of course, there are the unanswered philosophical questions of whether a machine (or the person who programmed the machine or who owns the machine) should be considered an inventor,” he said. “Another, more basic question arises from the increasing capability of AI: Do we still need patents to motivate the creation of (and sharing of information about) inventions when inventions are developed by machines?” Equally problematically, there’s no guarantee that AI wouldn’t exacerbate the worst of today’s problems. Agrawal points to a study conducted by the Harvard Business Review that found patent trolls — companies that attempt to enforce patent rights against accused infringers beyond the patent’s actual value or contribution to the prior art — cost defendant firms $29 billion per year in direct out-of-pocket costs. Patent-writing AI deployed at scale could provide endless legal ammunition to trolls, for pennies on the dollar.
“In aggregate, patent litigation destroys over $60 billion in firm wealth each year. The nature of patents being filed will […] change in terms of applications, coverage, and fields — the barrier to entry is reducing,” Agrawal said. “[That’s why I] think it is important to remember that humans created AI. AI did not create AI; therefore, we should carefully consider the laws and permissibility surrounding AI’s rights.” Keeping a close eye So is AI in patent law a force for good or not? It seems there’s no right answer — at least not now, in the tech’s early days. That’s why Agrawal advocates a cautious approach, entailing close oversight over (and if necessary, curtailing of) AI’s expanding capabilities.
“The current pioneers of any industry — Jeff Bezos, Elon Musk, Bill Gates, or any other founder or inventor who have created meaningful, impactful AI applications or technologies that solve large scale problems — are all humans leveraging the powerful AI algorithms and big data pipelines to make things work,” he said. “It requires behavioral understanding of humans, design thinking to holistically solve the problem, and drawing the contours of the problem to be solved to get to a much more focused and accurate answer.”
"
|
1,591 | 2,019 |
"AI-generated fake content could unleash a virtual arms race | VentureBeat"
|
"https://venturebeat.com/2019/11/11/ai-generated-fake-content-could-unleash-a-virtual-arms-race"
|
AI-generated fake content could unleash a virtual arms race
When it comes to AI’s role in making online content, Kristin Tynski, VP of digital marketing firm Fractl, sees an opportunity to boost creativity. But a recent experiment in AI-generated content left her a bit shaken. Using publicly available AI tools and about an hour of her time, Tynski created a website that includes 30 highly polished blog posts, as well as an AI-generated headshot for the non-existent author of the posts. The website is cheekily called ThisMarketingBlogDoesNotExist.com.
Although the intention was to generate conversation around the site’s implications, the exercise gave Tynski a glimpse into a potentially darker digital future in which it is impossible to distinguish reality from fiction.
Such a scenario threatens to topple the already precarious balance of power between creators, search engines, and users. The current flow of fake news and propaganda already fools too many people, even as digital platforms struggle to weed it all out. AI’s ability to further automate content creation could leave everyone from journalists to brands unable to connect with an audience that no longer trusts search engine results and must assume that the bulk of what they see online is fake.
More troubling, the ability to weaponize such tools to unleash a tidal wave of propaganda could make today’s infowars look primitive, further eroding the civic bond between governments and citizens.
“What is alarming to me about this new era of high-quality, AI-generated text content is that it could pollute search engine results and clog the internet with a bunch of garbage,” she said. “Google could have a difficult time figuring out if [content] was mass-generated. Even if it is possible for Google to do it, the time and the resources it would take to incorporate this into search would be difficult.”
AI versus artists
The intersection between AI and creativity has been expanding rapidly as algorithms are used to create music, song lyrics, and short fiction. The field compels attention because we like to believe that emotions and creativity are primal urges that define aspects of our humanity. Using machines to replicate these qualities is an intriguing technical challenge that brings us a step closer to bridging the human-machine divide while sending some into an existential quagmire.
Earlier this year, the OpenAI project stepped squarely into this battlefield when it announced it had developed powerful language software that was so fluent it could nearly match human capabilities in producing text. Worried that it would unleash a flood of fake content, OpenAI said it would not release the tool for fear that it would be abused.
This was simply catnip to other developers who raced to create equivalents. Among them were two masters students at Brown University, Aaron Gokaslan and Vanya Cohen.
The pair said they managed to create a similar tool even though they didn’t possess particularly strong technical skills. That, of course, was their point: Virtually anyone could now create convincing AI-powered content generation tools.
Gokaslan and Cohen took issue with OpenAI’s decision not to release its tools because they felt access to the technology offered the best hope for constructing defensive measures. So they published their own work in protest.
“Because our replication efforts are not unique, and large language models are the current most effective means of countering generated text, we believe releasing our model is a reasonable first step toward countering the potential future abuse of these kinds of models,” they wrote.
This disclosure philosophy is shared by the Allen Institute for Artificial Intelligence and the University of Washington, which together created Grover, a tool to detect fake news generated by AI.
They posted the tool online to allow people to experiment with it and see how easy it is to generate an entire article from just a few parameters.
Grover was the tool Tynski used in her experiment.
Reality or illusion?
Fractl touts itself as a one-stop shop for organic search, content marketing, and digital PR strategies. To that end, Tynski said the company had previously experimented with AI tools to help with tasks such as data analytics and some limited AI content creation that formed the basis for human-created content.
“We’re incredibly excited about the implications of how AI could support high-quality content — to parse data and then help us tell stories about that data,” she said. “You could see where AI-generated text could be used to supplement the creative process. To be able to use it as a starting point when you’re stuck, that could be a huge boon to creatives.” Then she paused before adding: “Like any of these technologies, there are implications for nefarious purposes.” The SEO and content marketing industry has grown increasingly complex in recent years. Creating content that feels authentic is more difficult when the internet is bombarded by bots on social media platforms and overseas clickfarms, where low-paid workers bang out copy for pennies. This is not to mention the rise of video “deepfakes.” But as Tynski has previously written , when it comes to AI, “our industry has yet to face its biggest challenge.” To explore those dangers, Fractl wrote out 30 headlines and placed them into Grover. In a blink, it spit out extremely fluent articles on “ Why Authentic Content Marketing Matters Now More Than Ever ” and “ What Photo Filters are Best for Instagram Marketing? ” The latter reads (in part): Instagram Stories first made people’s Instagram feeds sleeker, more colorful and just generally more fun. They could post their artistic photos in the background of someone else’s Story — and secretly make someone jealous and/or un-follow you while doing it.
That post-publishing feature still makes for some very sweet stories, particularly when you show a glam shot of yourself, using your favorite filter. And that’s why the tech-focused publication Mobile Syrup asked a bunch of Insta artists for their faves. (You can check out the full list of their best Instagram Stories.) It’s not Shakespeare. But if you stumbled across this after a search, would you really know it wasn’t written by a human? “It works in that voice really well,” Tynski said. “The results are passable to someone just skimming. It sets up the article, it made up influencers, it made up filter names. There’s a lot of layers to it that made it very impressive.” The stories are all attributed to a fictional author named Barry Tyree.
Not only is Barry not real, neither is his photo. The image was generated using a tool called StyleGAN.
The website was built by Uber software engineer Philip Wang on top of StyleGAN, an algorithm Nvidia developed for generating images of people, trained on a massive data set of photos. Anyone can play with it at ThisPersonDoesNotExist.com.
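Grover itself is not shown here, but the snippet below, which assumes the Hugging Face transformers library and the publicly released GPT-2 model as a stand-in, gives a sense of how little code it takes to turn a headline into paragraphs of plausible-sounding text.

from transformers import pipeline, set_seed

# GPT-2 stands in for Grover-style generators; its output will be cruder, but the
# workflow is the same: give the model a headline, get back article-shaped text.
set_seed(42)
generator = pipeline("text-generation", model="gpt2")

headline = "What Photo Filters Are Best for Instagram Marketing?"
result = generator(headline, max_length=200, num_return_sequences=1)
print(result[0]["generated_text"])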
The combination is powerful in that it puts these tools within just about anyone’s reach. Proponents argue that this kind of advance further democratizes content creation. But if past is prologue, any potential benefits will likely be turned to darker purposes.
“Imagine you wanted to write 10,000 articles about Donald Trump and inject them with whatever sentiment you wanted?” Tynski said. “It’s frightening and exciting at the same time.” Closer to home, Tynski is worried about what this means for her company and its industry. The ability to help companies and clients market themselves and connect with customers already resembles low-level warfare as Fractl tries to stay current with Google search changes, new optimization strategies, and constantly evolving social media tools. With search and social driving so much discovery, what happens if users no longer feel they can trust either? On a broader level, Tynski recognizes the potential for AI-generated content to further tear at our already frayed social fabric. Companies like YouTube, Facebook, and Twitter already seem to be fighting a futile battle to stem the tide of fake news and propaganda. They’re using their own AI and human teams in the effort, but the bad guys still remain well ahead in the race to distract, disinform, and divide.
Amid all of this uncertainty, one thing is clear. We will need increasingly better tools to help us distinguish real from fake, and more human gatekeepers to sift through the rising tide of content.
"
|
1,592 | 2,021 |
"'Your Computer Is On Fire' draws on tech history to critique AI and the cloud | VentureBeat"
|
"https://venturebeat.com/business/your-computer-is-on-fire-draws-on-tech-history-to-critique-ai-and-the-cloud"
|
Review: ‘Your Computer Is On Fire’ draws on tech history to critique AI and the cloud
In a story last year about nine books I read about AI in 2020, I called Your Computer Is On Fire a book worth watching out for this year. It was released this week, and I was not disappointed. The premise of the book is that techno-utopianism should die because it’s too dangerous to be allowed to continue. This argument came up recently in the context of Amazon workers in factories with robotics getting hurt more often than workers in factories without robots.
But once people throw away unrealistic visions of outcomes, the history of technology looks very different.
The book attempts to interrogate how the legacy of social constructs and media narratives have shaped computing. It invites people to think critically about notions of purity surrounding data, the concealment of the carbon footprint the cloud represents, the whiteness of robots , and the wires and resources involved with making the world wireless. Your computer is on fire in part, authors argue, because of automation that perpetuates racism and sexism, and the growth of resource-intensive datacenters and the cloud at a time when climate change is an existential threat for the planet.
The title of this book is meant to prepare you for a series of 16 provocative essays that consider the history of technology, media, and policy, from Siri disciplines and the cloud as a factory to how the internet will be decolonialized and tech for the Global South.
Each essay takes readers on a journey through a topic to consider the ethical and societal implications of technology over the long term, an approach former Ethical AI team lead Margaret Mitchell suggested for Google.
Contributors to the collection of essays include Safiya Noble, author of Algorithms of Oppression , who wrote an essay about race and gender stereotypes that permeate robotics and the role of robotics in policing, prisons, and warfare.
“We have to ask what is lost, who is harmed, and what should be forgotten with the embrace of artificial intelligence and robotics in decision-making. We have a significant opportunity to transform the consciousness embedded in artificial intelligence and robotics, since it is in fact a product of our own collective creation,” Noble wrote in the book.
Another essay, by Nathan Ensmenger, argues that the cloud is a factory, and it examines the extent to which datacenters demand a lot of energy, water, and the mining of rare mineral resources like cobalt, which has led to accusations that Big Tech companies aided in the death or serious injury of children.
That essay also walks through a comparison between Amazon online today and Sears mail-order catalogs a century ago, and compares Amazon transportation and distribution strategy to Standard Oil.
Understanding, for example, that women made up much of the computing workforce in its early history, when the work was treated as menial and feminine, helps illuminate ongoing problems of racism and sexism in tech environments that women — especially Black women — describe as toxic.
I also found something terribly human in an essay arguing that a network is not a network, which looks at the history of large networks built in Chile, Russia, and the United States. Benjamin Peters says that history shows that just because a network works does not mean it works as its designers intended.
“[N]etwork projects are twice political for how they, first, surprise and betray their designers, and, second, require actual institution building and collaborative realities far richer than any design,” Peters wrote.
Editors of the book include Mar Hicks, a tech historian at the Illinois Institute of Technology in Chicago and an associate editor of the IEEE Annals of the History of Computing.
They are joined by science and technology historian and University of California, Irvine professor Kavita Philip; Peters, a media historian and University of Tulsa professor; and Stanford University history professor Thomas Mullaney.
The editors take pains to state that the book’s conclusions aren’t meant to be an overly dark view of the future or to give people the impression things are hopeless. There is hope, they argue, but recent trends should act as an alarm.
What I also took away from this book is the continuing value of critical analysis. In a recent paper, researchers recommended reporters persist in sharp questioning, declaring, “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.” In the final pages of the book, Your Computer Is On Fire also addresses the role of media and the writers of narratives in tech and AI trends.
“Tech will deliver on neither its promises nor its curses, and tech observers should avoid both utopian dreamers and dystopian catastrophists. The world truly is on fire, but that is no reason it will either be cleansed or ravaged in the precise day and hour that self-proclaimed prophets of profit and doom predict. The flow of history will continue to surprise,” Peters writes.
Even if you’re like me and follow trends in artificial intelligence through news, books, and research papers, you may still learn things about the history of technology in this book that you didn’t know, because this book extends across an arc of history. And as the editors lay out in the afterword, they hope the messages contained within will be viewed as obvious decades from now.
This lens — viewing computing and artificial intelligence across the span of decades — and consideration of social and historical context was previously espoused by Ruha Benjamin, who last year argued in the context of deep learning that “computational depth without historic or sociological depth is superficial learning.” But the collection of impactful tech issues interrogated over the span of decades in this book makes it recommended reading for anyone interested in the impact of tech policy in businesses and governments, as well as people deploying AI or interested in the way people shape technology.
This book presents compelling arguments for essential topics at the center of business and society. By using computational history as a foundation, it’s able to, as Noble put it, “underscore how much is at stake when we fail to think more humanistically about computing.”
"
|
1,593 | 2,021 |
"Researchers detail systemic issues and risk to society in language models | VentureBeat"
|
"https://venturebeat.com/business/researchers-detail-systemic-issues-and-risk-to-society-in-language-models"
|
Researchers detail systemic issues and risk to society in language models
Researchers at Google’s DeepMind have discovered major flaws in the output of large language models like GPT-3 and warn these could have serious consequences for society, like enabling deception and reinforcing bias. Notably, coauthors of a paper on the study say large language models can proliferate harms even without malicious intent on their creators’ part. In other words, these harms can be spread accidentally, through incorrect specification of what an agent should learn from or of how the model is trained.
“We believe that language agents carry a high risk of harm, as discrimination is easily perpetuated through language. In particular, they may influence society in a way that produces value lock-in, making it harder to challenge problematic existing norms,” the paper reads. “We currently don’t have many approaches for fixing these forms of misspecification and the resulting behavioral issues.” The paper supposes that language agents could also enable “incitement to violence” and other forms of societal harm, particularly by people with political motives. The agents could also be used to spread dangerous information, like how to make weapons or avoid paying taxes. In a prime example from work published last fall, GPT-3 tells a person to commit suicide.
The DeepMind paper is the most recent study to raise concerns about the consequences of deploying large language models made with datasets scraped from the web. The best-known paper on this subject is titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” and was published last month at the Fairness, Accountability, and Transparency conference by authors that include former Google Ethical AI team co-leads Margaret Mitchell and Timnit Gebru. That paper asserts that the trend toward ever-larger language models perpetuates stereotypes and carries environmental costs most likely to be borne by marginalized groups.
While Google fired both of its researchers who chose to keep their names on the paper and required other Google research scientists to remove their names from a paper that reached a similar conclusion, the DeepMind research cites the stochastic parrots paper among related works.
Earlier this year, a paper from OpenAI and Stanford University researchers detailed a meeting between experts from fields like computer science, political science, and philosophy. The group concluded that companies like Google and OpenAI, which control the largest known language models in the world, have only a matter of months to set standards around the ethical use of the technology before it’s too late.
The DeepMind paper joins a series of works that highlight NLP shortcomings. In late March, nearly 30 businesses and universities from around the world found major issues in an audit of five popular multilingual datasets used for machine translation.
A paper written about that audit found that in a significant fraction of the major dataset portions evaluated, less than 50% of the sentences were of acceptable quality, according to more than 50 volunteers from the NLP community.
Businesses and organizations listed as coauthors of that paper include Google and Intel Labs and come from China, Europe, the United States, and multiple nations in Africa. Coauthors include the Sorbonne University (France), the University of Waterloo (Canada), and the University of Zambia. Major open source advocates also participated, like EleutherAI , which is working to replicate GPT-3; Hugging Face ; and the Masakhane project to produce machine translation for African languages.
Consistent issues with mislabeled data arose during the audit, and volunteers found that a scan of 100 sentences in many languages could reveal serious quality issues, even to people who aren’t proficient in the language.
“We rated samples of 205 languages and found that 87 of them had under 50% usable data,” the paper reads. “As the scale of ML research grows, it becomes increasingly difficult to validate automatically collected and curated datasets.” The paper also finds that building NLP models with datasets automatically drawn from the internet holds promise, especially in resolving issues encountered by low-resource languages, but there’s very little research today about data collected automatically for low-resource languages. The authors suggest a number of solutions, like the kind of documentation recommended in Google’s stochastic parrots paper or standard forms of review, like the datasheets and model cards Gebru prescribed or the dataset nutrition label framework.
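To make the audit procedure concrete, the core check reduces to sampling sentences per language, collecting human usable/not-usable judgments, and flagging languages that fall below a usability threshold. The sketch below is illustrative only: the corpus layout, the 100-sentence sample size, and the rating callback are assumptions, not the auditors' actual tooling.

import random

def audit_corpus(corpus, rate_sentence, sample_size=100, threshold=0.5):
    # corpus: dict mapping language code -> list of sentences
    # rate_sentence: callable(lang, sentence) -> True if a rater judges the sentence usable
    flagged = {}
    for lang, sentences in corpus.items():
        sample = random.sample(sentences, min(sample_size, len(sentences)))
        usable = sum(1 for s in sample if rate_sentence(lang, s))
        share = usable / len(sample)
        if share < threshold:
            flagged[lang] = share  # e.g. 87 of 205 languages fell below 50% in the audit
    return flagged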
In other news, researchers from Amazon, ChipBrain, and MIT found that test sets of the 10 most frequently cited datasets used by AI researchers have an average label error rate of 3.4%, impacting benchmark results.
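A rough way to arrive at a figure like that 3.4% is to re-annotate a random sample of test items and count disagreements with the original labels. The snippet below sketches that idea under assumed data structures; the cited study used a more involved confident-learning pipeline plus human review.

import random

def estimate_label_error_rate(test_set, reannotate, sample_size=1000):
    # test_set: list of (item, original_label); reannotate: callable(item) -> corrected label
    sample = random.sample(test_set, min(sample_size, len(test_set)))
    errors = sum(1 for item, label in sample if reannotate(item) != label)
    return errors / len(sample)  # fraction of sampled test labels that appear wrong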
This week, organizers of NeurIPS, the world’s largest machine learning conference, announced plans to create a new track devoted to benchmarks and datasets. A blog post announcing the news begins with the simple declaration that “There are no good models without good data.” Last month, the 2021 AI Index, an annual report that attempts to define trends in academia, business, policy, and system performance, found that AI is industrializing rapidly. But it named a lack of benchmarks and testing methods as major impediments to progress for the artificial intelligence community.
"
|
1594 | 2021 |
"OctoML raises $28M for machine learning deployment optimization | VentureBeat"
|
"https://venturebeat.com/business/octoml-raises-28m-for-machine-learning-deployment-optimization"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OctoML raises $28M for machine learning deployment optimization Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Studies like the 2020 State of AI report from McKinsey have found that businesses capable of deploying multiple AI models are considered high performers, but a survey of business leaders included in the report found fewer than 20% have taken deep learning projects beyond the pilot stage. It’s well known that most businesses face challenges deploying AI in production, which has led to a rise in startups that serve needs like AIOps or auditing.
In the latest news for such a company, OctoML today raised a $28 million series B funding round.
OctoML helps businesses accelerate AI model inference and training and relies on the open source Apache TVM machine learning compiler framework. TVM is currently being used by companies like Amazon, AMD, Arm, Facebook, Intel, Microsoft, and Qualcomm. OctoML will use the funding to continue building out products like its Octomizer platform and investing in its go-to-market strategy and customer service teams.
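For readers unfamiliar with TVM, the basic workflow is to import a trained model, compile it for a hardware target, and run inference through the generated module. The following is a minimal sketch of generic open source TVM usage, not OctoML's Octomizer API; the ONNX file name, input name, and shape are assumptions for illustration.

import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")  # hypothetical model file
mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

with tvm.transform.PassContext(opt_level=3):  # compile for a CPU target
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()  # .numpy() on recent TVM releases

Swapping target="llvm" for a GPU or embedded target is how the same model gets retargeted for cloud or edge deployment.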
“We started the TVM work as a research project at the University of Washington about five years ago, and all the key people in the project they all got their Ph.D.s and are part of the company now,” OctoML CEO and cofounder Luis Ceze told VentureBeat. “We’re focused on making inference fast on any hardware, and support cloud and edge deployments.” Last month, OctoML joined more than 20 startups — including Algorithmia and Determined AI — that have banded together to create the AI Infrastructure Alliance , an effort to promote interoperability between the offerings from AI startups and advance alternatives to popular cloud AI services.
The $28 million funding round was led by Addition, with participation from existing investors Madrona Venture Group and Amplify Partners.
OctoML has raised $47 million to date, including a $3.9 million seed funding round in October 2019 , just months after the company was founded. OctoML is based in Seattle with remote employees across the United States.
"
|
1595 | 2021 |
"National Security Commission on Artificial Intelligence issues report on how to maintain U.S. dominance | VentureBeat"
|
"https://venturebeat.com/business/national-security-commission-on-artificial-intelligence-issues-report-on-how-to-maintain-u-s-dominance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages National Security Commission on Artificial Intelligence issues report on how to maintain U.S. dominance Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The National Security Commission on Artificial Intelligence today released its report with dozens of recommendations for President Joe Biden, Congress, and business and government leaders. China, the group said, represents the first challenge to U.S. technological dominance that has threatened the nation’s economic and military power since the end of World War II.
The 15-member commission calls a $40 billion investment to expand and democratize AI research and development a “modest down payment for future breakthroughs,” and encourages policymakers to approach investment in innovation with an attitude akin to the one that led to building the interstate highway system in the 1950s. Ultimately, the group envisions hundreds of billions of dollars of federal spending on AI in the coming years.
The report recommends several changes that could shape business, tech, and national security. For example, amid a global shortage of semiconductors, the report calls for the United States to stay “two generations ahead” of China in semiconductor manufacturing and suggests a hefty tax credit for semiconductor manufacturers.
President Biden pledged support for $32 billion to address a global chip shortage and last week signed an executive order to investigate supply chain issues.
“I really hope that Congress deeply considers the report and its recommendations,” AWS CEO Andy Jassy said today as part of a meeting held to approve the report.
“I think there’s meaningful urgency to get moving on these needs, and it’s important to realize that you can’t just flip a switch and have these capabilities in place. It takes steady, committed hard work over a long period of time to bring these capabilities to fruition.”
Commissioners who helped compile the report include Oracle CEO Safra Catz, Microsoft chief scientist Eric Horvitz, Google Cloud AI chief Andrew Moore, and Jassy, who takes over as CEO of Amazon later this year. Publication of the final report is the last act of the temporary commission Congress formed in 2018 to advise federal policy.
The 756-page report calling for the United States to be AI-ready by 2025 was approved by commissioners in a vote today. Moore and Horvitz abstained from chapters 2 and 11 of the report due to perceived conflicts of interest.
“I think it bears repeating that to win in AI, we need more money, more talent, stronger leadership, and collectively we as a commission believe this is a national security priority, and that the steps outlined in the report represent not just our consensus, but a distillation of hundreds and hundreds of experts in technology and policy and ethics, and so I encourage the public and everyone to follow our recommendations,” commission chair and former Google CEO Eric Schmidt said.
Now, Schmidt and other commissioners said, begins the work of selling these ideas to key decision makers in power.
Recommendations in the report include: The intelligence community should seek to fully automate many tasks by 2030.
In line with earlier recommendations, the final report calls for the creation of a Digital Corps for hiring temporary or short-term tech talent, and a Digital Service Academy to create an accredited university to produce government tech talent. The report calls failure to recognize the need to develop a government technical workforce shortsighted and a national security risk.
Increase access to open source software for federal government employees in agencies like the Pentagon. The report refers to TensorFlow and PyTorch as “must-have tools in any AI developer’s arsenal.” Private industry should form an organization with $1 billion in funding in the next five years that launches efforts to address inequality.
Identify service members with computational thinking.
Establish responsible AI leads in each national security agency and branch of the armed forces.
The report also calls for the U.S. State Department to increase its presence in U.S. and technology hubs around the world.
Triple the number of national AI research institutes. The first institutes were introduced in August 2020.
Set policy for agencies critical to national security to allow people to report irresponsible AI deployments.
Double AI research and development spending until 2026, when levels will hit $32 billion.
The report also calls immigration a “national security imperative” and argues that immigration policy could slow progress for China. Commissioners recommend doubling the number of employment-based green cards, creating visas for entrepreneurs and the makers of emerging and disruptive technology, and giving green cards to every AI PhD graduate from an accredited U.S. university.
Leadership in 5G telecommunications and robotics are also referred to as national security imperatives in the report.
Government beyond defense
Within government, the report also goes beyond recommendations for the Pentagon, extending recommendations to Congress for border security and federal agencies like the FBI or Department of Homeland Security. For example, the report criticizes a lack of transparency of AI systems used by federal agencies as potentially affecting civil liberties and calls for Congress to amend impact assessment and disclosure reporting requirements to include civil rights and civil liberty reports for new AI systems or major updates to existing systems.
“For the United States, as for other democratic countries, official use of AI must comport with principles of limited government and individual liberty. These principles do not uphold themselves. In a democratic society, any empowerment of the state must be accompanied by wise restraints to make that power legitimate in the eyes of its citizens,” the report reads.
A statement from ACLU senior staff attorney Patrick Toomey, whose work focuses on national security, said the report acknowledges some dangers of AI in its recommendations, but “it should have gone further and insisted that the government establish critical civil rights protections now, before these systems are widely deployed by intelligence agencies and the military. Congress and the executive branch must prioritize these safeguards, and not wait until after dangerous systems have already become entrenched.” The report argues that the consolidation of the AI industry threatens U.S. technological competitiveness in a number of important ways, exacerbating trends like brain drain and stifled competition.
China and U.S. foreign policy
Increases in funding and investment by China to become an AI leader by 2025 mean more time is dedicated to China in the report than to any other foreign nation. The report concludes that the United States could lose military technical superiority to China within the next decade.
“We have every reason to think that the competition with China will increase,” Schmidt said during the NSCAI meeting today.
To ward off rising models of techno-authoritarian governance like the kind practiced in China, the report calls for the United States to establish an Emerging Technology Coalition with allies. The report calls for high-level, ongoing diplomatic dialogue with China to discuss challenges emerging technology like AI presents in order to find areas for cooperation toward global challenges like climate change. That body could also act as a forum for sharing concerns or grievances about practices inconsistent with American values. Bilateral talks between the United States and China were previously recommended by AI policy expert and former White House economist R. David Edelman.
In defense, commissioners do not support a treaty for the global prohibition of AI-enabled autonomous weaponry since it is “not currently in the interest of U.S. or international security,” and because the report concludes that China and Russia would ignore any such commitment. Instead, the report calls for developing standards for autonomous weaponry.
In other matters related to foreign policy and international affairs, the commission calls for an international agreement to never automate the use of nuclear weapons, and for the U.S. to seek similar commitments from Russia and China.
The need for leadership is stressed throughout the report. For President Biden, the report recommends an executive order aimed at protecting intellectual property and the creation of a Technology Competitiveness Council, in part to deal with intellectual property issues and establish national plans.
Oracle CEO Safra Catz called collaboration within and among the Department of Defense, the U.S. government, and allies critical, and said that leadership in government is needed: “There’s so many important steps that have to be taken now.”
“It is our great hope that like-minded democratic nations work together to make sure that technologies around AI do not leak into adversarial hands that will give them an advantage over our systems and that we will unite together in the safe and responsible deployment of this kind of technology in military systems,” commissioner and In-Q-Tel founder Gilman Louie said today.
The United States has been working with international groups like the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), while last year defense and diplomatic officials from United States allies met to discuss ethical use of AI in warfare.
"
|
1596 | 2021 |
"Major flaws found in machine learning for COVID-19 diagnosis | VentureBeat"
|
"https://venturebeat.com/business/major-flaws-found-in-machine-learning-for-covid-19-diagnosis"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Major flaws found in machine learning for COVID-19 diagnosis Share on Facebook Share on X Share on LinkedIn Nvidia's Clara AI for COVID-19 diagnosis from CT scans Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
A coalition of AI researchers and health care professionals in fields like infectious disease, radiology, and oncology has found several common but serious shortcomings in machine learning models built for COVID-19 diagnosis or prognosis.
After the start of the global pandemic, startups like DarwinAI , major companies like Nvidia , and groups like the American College of Radiology launched initiatives to detect COVID-19 from CT scans, X-rays, or other forms of medical imaging. The promise of such technology is that it could help health care professionals distinguish between pneumonia and COVID-19 or provide more options for patient diagnosis. Some models have even been developed to predict if a person will die or need a ventilator based on a CT scan. However, researchers say major changes are needed before this form of machine learning can be used in a clinical setting.
Researchers assessed more than 2,200 papers and, through a process of removing duplicates and irrelevant titles, narrowed results down to 320 papers that underwent a full text review for quality. Finally, 62 papers were deemed fit to be part of what authors refer to as a systematic review of published research and preprints shared on open research paper repositories like arXiv, bioRxiv, and medRxiv.
Of those 62 papers included in the analysis, roughly half made no attempt to perform external validation of training data, did not assess model sensitivity or robustness, and did not report the demographics of people represented in training data.
“Frankenstein” datasets, the kind made with duplicate images obtained from other datasets, were also found to be a common problem, and only one in five COVID-19 diagnosis or prognosis models shared their code so others can reproduce results claimed in literature.
“In their current reported form, none of the machine learning models included in this review are likely candidates for clinical translation for the diagnosis/prognosis of COVID-19,” the paper reads. “Despite the huge efforts of researchers to develop machine learning models for COVID-19 diagnosis and prognosis, we found methodological flaws and many biases throughout the literature, leading to highly optimistic reported performance.” The research was published last week as part of the March issue of Nature Machine Intelligence by researchers from the University of Cambridge and University of Manchester. Other common issues they found with machine learning models developed using medical imaging data was virtually no assessment for bias and generally being trained without enough images. Nearly every paper reviewed was found to be at high or uncertain risk of bias; only six were considered at low risk of bias.
Publicly available datasets also commonly suffered from lower quality image formats and weren’t large enough to train reliable AI models. Researchers used the checklist for artificial intelligence in medical imaging (CLAIM) and radiomics quality score (RQS) to help assess the datasets and models.
“The urgency of the pandemic led to many studies using datasets that contain obvious biases or are not representative of the target population, for example, pediatric patients. Before evaluating a model, it is crucial that authors report the demographic statistics for their datasets, including age and sex distributions,” the paper reads. “Higher-quality datasets, manuscripts with sufficient documentation to be reproducible and external validation are required to increase the likelihood of models being taken forward and integrated into future clinical trials to establish independent technical and clinical validation as well as cost-effectiveness.” Other recommendations suggested by the group of AI researchers and health care professionals include ensuring reproducibility of model performance results spelled out in research papers and considering how datasets are assembled and put together.
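The external validation the reviewers call for is conceptually simple: fit on one cohort, then report performance both on an internal held-out split and on a completely separate external cohort, and treat a large gap between the two as a red flag. The sketch below assumes tabular features for brevity; an imaging model would follow the same pattern with a different estimator.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def internal_and_external_auc(X_site_a, y_site_a, X_site_b, y_site_b):
    # Internal split comes from site A; site B data is never seen during training.
    X_train, X_int, y_train, y_int = train_test_split(
        X_site_a, y_site_a, test_size=0.2, stratify=y_site_a, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    internal_auc = roc_auc_score(y_int, model.predict_proba(X_int)[:, 1])
    external_auc = roc_auc_score(y_site_b, model.predict_proba(X_site_b)[:, 1])
    return internal_auc, external_auc  # a large drop on site B suggests bias or overfitting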
In other news at the intersection of COVID-19 and machine learning, earlier this week the Food and Drug Administration (FDA) granted emergency use authorization for a machine learning-based screening device, which the agency says is the first such device approved in the U.S.
"
|
1597 | 2021 |
"How many robot helpers are too many? | VentureBeat"
|
"https://venturebeat.com/business/how-many-robot-helpers-are-too-many"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How many robot helpers are too many? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI that can follow a person seems like a simple enough task. It’s certainly a simple thing to ask a human to do, but what if people or objects get in the way of the robot following behind a person? How do you navigate an environment that’s in a constant state of change? About a year ago, at a robotics conference TechCrunch held at UC Berkeley, AI startup founders explored solutions for common problems encountered when trying to automate construction projects.
Dusty Robotics CEO Tessa Lau called attention to the challenge of moving machines in an unstructured environment filled with people.
“[The] typical construction site — it’s chaos, and anyone with a robotics background who knows anything about robotics knows it’s really hard to make robots work in that kind of unstructured environment,” she said.
That’s why this week Piaggio Fast Forward , maker of the Gita personal robot , shared details about work it’s undertaking with industrial technology services provider Trimble using the Boston Dynamics API to create robots that follow construction workers. Pilot tests took place at an office building under construction in Colorado.
Robots can also travel in groups with one Spot and two Gita robots in what Piaggio Fast Forward calls platooning. As part of the construction pilot, Piaggio is assessing human attitudes about how many robots following a human is too many.
Optimizing the number of robots following people on a construction site concerns not only whether the robot can make it safely out of the path of a dozer driven by a human, but also the question of how many robots can be involved before things get weird. Like, if you’re a construction worker watching the site manager approach with a robot entourage, is five robots the limit? Six? Piaggio Fast Forward CEO Greg Lynn told VentureBeat the company could support a convoy of 50 to 100 robots but that this would be impractical.
“How long a platoon of robots, just as a dimension, would be acceptable? It’s probably like 15 or 20 feet. To be totally honest, we don’t know yet, but it’s not an infinite length,” he said.
The approach uses Trimble lasers and geospatial sensors to map the environment and establish location. One of the tests of Spot involved training a robot to walk a path once and then having it try to repeat that route automatically. The approach follows how people navigate the landscape, understanding human movement instead of relying only on mapping.
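A stripped-down version of that "walk it once, then repeat it" behavior is a teach-and-repeat loop: record waypoints while a person leads the robot, then drive toward each waypoint in order. The robot interface below (get_pose, is_following_person, drive_toward) is hypothetical; the actual Spot and Trimble pilot fuses laser and geospatial sensing and handles obstacles, which this sketch omits.

import math

def teach(robot, record_every_m=0.5):
    # Record a waypoint roughly every half meter while the robot follows a person.
    waypoints = [robot.get_pose()]  # poses as (x, y) tuples
    while robot.is_following_person():
        x, y = robot.get_pose()
        wx, wy = waypoints[-1]
        if math.hypot(x - wx, y - wy) >= record_every_m:
            waypoints.append((x, y))
    return waypoints

def repeat(robot, waypoints, tolerance_m=0.3):
    # Replay the taught route by driving toward each recorded waypoint in turn.
    for wx, wy in waypoints:
        x, y = robot.get_pose()
        while math.hypot(x - wx, y - wy) > tolerance_m:
            robot.drive_toward(wx, wy)
            x, y = robot.get_pose()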
In 2019, Piaggio Fast Forward introduced its Gita robot to follow behind people and carry up to 40 pounds of weight. The device can reach speeds of 22 mph. The Trimble collaboration is the first industrial deployment of Piaggio tracking technology, Lynn told VentureBeat. Piaggio Fast Forward is a part of the Piaggio Group, one of the largest scooter and motorcycle manufacturers in Europe, best known for its Vespa brand.
Autonomous trucks have worked in convoys before, but as part of a drive into more business applications, a spokesperson told VentureBeat that next month the company will test platooning for grocery delivery in a planned community with several thousand households.
As part of trials announced in late 2020 without platooning, Gita robots are being used in Cincinnati and San Diego airports and, with Doğan Group, a shopping mall in Turkey.
Learning to navigate unstructured environments like construction zones can lead to more robust AI that can help farmers in fields, delivery robots in cities, or people in wheelchairs on sidewalks. Should such systems become trusted and reliable, they could become commonplace in industrial environments for surveillance — but going further, why push a wheelbarrow if you can just have it follow you? Ideally, a robot companion on a worksite could augment human workers by carrying tools for an electrician, as the company’s video demonstrates. They could also use computer vision to bring attention to safety hazards or survey the progress of construction projects. One of the earliest applications of Spot robots on construction sites was to survey construction projects. Spot is also being used in law enforcement by the New York Police Department , which posits the scenario of a Spot robot following a police officer walking a beat.
Piaggio is not alone in its ambition to implement people-tracking tech in the industrial workplace.
In 2018, VentureBeat covered a $10 million funding round for ForwardX, which at the time was known for luggage that follows travelers. A company spokesperson told VentureBeat that the company, which is based in China, pivoted away from luggage during the pandemic and now focuses on people-tracking solutions for robots that work alongside humans in warehouses and manufacturing facilities. Thus far, ForwardX has run test cases in factories for companies like DHL and Toyota, among other customers.
As AI companies like Piaggio Fast Forward were exploring opportunities in the construction industry, traditional players like Caterpillar and Komatsu boosted sales of autonomous software last year for industrial environments like construction, mining, and space.
Gita robots aren’t designed for roving around in a landscape of loose dirt where dozers and earthmovers operate, but Lynn said once slabs of concrete are laid for a building project, initial tests showed few areas the robots were unable to travel.
“One of the things about the Trimble announcement is we’re kind of announcing we don’t have to build every robot on Earth. We want to get involved with people that have problems with this because we’re a software company as much as we are a hardware robotics company,” Lynn said. “We don’t want to do it all ourselves. And we don’t think that it’s 100% of the solution, either.” There are a number of AI startups in the business of automating the construction zone. In 2019, Built Robotics spoke to VentureBeat about efforts to automate dozers and create predictive systems for project management and other purposes. Last fall, Canvas emerged from stealth to bring AI and robotics into construction zones in the San Francisco Bay Area to install drywall. And in January Swapp raised $7 million to compete among a fleet of businesses working to automate the mapping of construction project planning.
Robotic deployments in construction environments have led to some accidents, however. According to the safety and compliance website BLR , in 2019 two construction workers were injured in accidents involving demolition robots on construction sites in the state of Washington, with robots pinning one worker against a wall and crushing the foot of another person.
In other news about robots navigating unstructured or industrial settings, in May 2020 Burro, which is developing robots that follow farmworkers, assisted the grape harvest in Coachella , California. And last fall UC Berkeley AI researchers introduced LaND , AI that learns from disengagement episodes to improve delivery robots’ navigation on sidewalks.
"
|
1598 | 2021 |
"Government audit of AI with ties to white supremacy finds no AI | VentureBeat"
|
"https://venturebeat.com/business/government-audit-of-ai-with-ties-to-white-supremacy-finds-no-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Government audit of AI with ties to white supremacy finds no AI Share on Facebook Share on X Share on LinkedIn Beehive statue at the Utah State Capitol building in Salt Lake City. The beehive is the official emblem of the state of Utah.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero’s analysis of grand jury testimony and hate crime prosecution documents , Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee.
Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general’s office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.
“Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time,” reads a letter Utah State Auditor John Dougall released last week.
The incident, which VentureBeat previously referred to as part of a “ fight for the soul of machine learning,” demonstrates why government officials must evaluate claims made by companies vying for contracts and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies’ capabilities or turn out to be charlatans or white supremacists — constituting a public nuisance or worse. The audit result also suggests a lack of scrutiny can undermine public trust in AI and the governments that deploy them.
Dougall carried out the audit with help from the Commission on Protecting Privacy and Preventing Discrimination, a group his office formed weeks after news of the company’s white supremacist associations and Utah state contract. Banjo had previously claimed that its Live Time technology could detect active shooter incidents, child abduction cases, and traffic accidents from video footage or social media activity. In the wake of the controversy, Banjo appointed a new CEO and rebranded under the name safeXai.
“The touted example of the system assisting in ‘solving’ a simulated child abduction was not validated by the AGO and was simply accepted based on Banjo’s representation. In other words, it would appear that the result could have been that of a skilled operator as Live Time lacked the advertised AI technology,” Dougall states in a seven-page letter sharing audit results.
According to Vice , which previously reported that Banjo used a secret company and fake apps to scrape data from social media, Banjo and Patton had gained support from politicians like U.S. Senator Mike Lee (R-UT) and Utah State Attorney General Sean Reyes. In a letter accompanying the audit, Reyes commended the results of the investigation and said the finding of no discrimination was consistent with the conclusion the state attorney general’s office reached because there simply wasn’t any AI to evaluate.
“The subsequent negative information that came out about Mr. Patton was contained in records that were sealed and/or would not have been available in a robust criminal background check,” Reyes said in a letter accompanying the audit findings. “Based on our first-hand experience and close observation, we are convinced the horrible mistakes of the founder’s youth never carried over in any malevolent way to Banjo, his other initiatives, attitudes, or character.” Alongside those conclusions are a series of recommendations for Utah state agencies and employees involved in awarding such contracts. Recommendations for anyone considering AI contracts include questions they should be asking third-party vendors and the need to conduct an in-depth review of vendors’ claims and the algorithms themselves.
“The government entity must have a plan to oversee the vendor and vendor’s solution to ensure the protection of privacy and the prevention of discrimination, especially as new features/capabilities are included,” reads one of the listed recommendations. Among other recommendations are the creation of a vulnerability reporting process and evaluation procedures, but no specifics were provided.
While some cities have put surveillance technology review processes in place , local and state adoption of private vendors’ surveillance technology is currently happening in a lot of places with little scrutiny. This lack of oversight could also become an issue for the federal government.
The Government by Algorithm report Stanford University and New York University jointly published last year found that roughly half of algorithms used by federal government agencies come from third-party vendors.
The federal government is currently funding an initiative to create tech for public safety, like the kind Banjo claimed to have developed. The National Institute of Standards and Technology (NIST) routinely assesses the quality of facial recognition systems and has helped assess the role the federal government should play in creating industry standards.
Last year, it introduced ASAPS , a competition in which the government is encouraging AI startups and researchers to create systems that can tell if an injured person needs an ambulance, whether the sight of smoke and flames requires a firefighter response, and whether police should be alerted in an altercation. These determinations would be based on a dataset incorporating data ranging from social media posts to 911 calls and camera footage. Such technology could save lives, but it could also lead to higher rates of contact with police, which can also cost lives. It could even fuel repressive surveillance states like the kind used in Xinjiang to identify and control Muslim minority groups like the Uyghurs.
Best practices for government procurement officers seeking contracts with third parties selling AI were introduced in 2018 by U.K. government officials, the World Economic Forum (WEF), and companies like Salesforce. Hailed as one of the first such guidelines in the world, the document recommends defining public benefit and risk and encourages open practices as a way to earn public trust.
“Without clear guidance on how to ensure accountability, transparency, and explainability, governments may fail in their responsibility to meet public expectations of both expert and democratic oversight of algorithmic decision-making and may inadvertently create new risks or harms,” the British-led report reads. The U.K. released official procurement guidelines in June 2020, but weeks later a grading algorithm scandal sparked widespread protests.
People concerned about the potential for things to go wrong have called on policymakers to implement additional legal safeguards. Last month, a group of current and former Google employees urged Congress to adopt strengthened whistleblower protections in order to give tech workers a way to speak out when AI poses a public harm. A week before that, the National Security Commission on Artificial Intelligence called on Congress to give federal government employees who work for agencies critical to national security a way to report misuse or inappropriate deployment of AI. That group also recommends tens of billions of dollars in investment to democratize AI and create an accredited university to train AI talent for government agencies.
In other developments at the intersection of algorithms and accountability, the documentary Coded Bias , which calls AI part of the battle for civil rights in the 21st century and examines government use of surveillance technology, started streaming on Netflix today.
Last year, the cities of Amsterdam and Helsinki created public algorithm registries so citizens know which government agency is responsible for deploying an algorithm and have a mechanism for accountability or reform if necessary. And as part of a 2019 symposium about common law in the age of AI, NYU professor of critical law Jason Schultz and AI Now Institute cofounder Kate Crawford called for businesses that work with government agencies to be treated as state actors and considered liable for harm the way government employees and agencies are.
"
|
1599 | 2021 |
"Google's new AI ethics lead calls for more 'diplomatic' conversation | VentureBeat"
|
"https://venturebeat.com/business/googles-new-ai-ethics-lead-calls-for-more-diplomatic-conversation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s new AI ethics lead calls for more ‘diplomatic’ conversation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Following months of inner conflict and opposition from Congress and thousands of Google employees, Google today announced that it will reorganize its AI ethics operations and place them in the hands of VP Marian Croak, who will lead a new responsible AI research and engineering center for expertise.
A blog and six-minute video interview with Croak that Google released today announcing the news make no mention of former Ethical AI team co-lead Timnit Gebru, whom Google fired abruptly in late 2020 , or Ethical AI lead Margaret “Meg” Mitchell, who a Google spokesperson told VentureBeat was placed under internal investigation last month.
The release also makes no mention of steps taken to address a need to “ rebuild trust ” called for by members of the Ethical AI team at Google. Multiple members of the Ethical AI team said they found out about the change in leadership from a report published late Wednesday evening by Bloomberg.
“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat.
Meg Mitchell is still suspended from her corporate account. The last email that the Ethical AI team got from research leadership was over two weeks ago.
We're in the lurch and left out to dry. This should tell you a lot about what Google thinks about ethics research.
— Dr. Alex Hanna (@alexhanna) February 18, 2021
In the video, Croak discusses self-driving cars and techniques for diagnosis of diseases as potential areas of focus in the future, but makes no mention of large language models. A recent piece of AI research citing a cross spectrum of experts concluded that companies like Google and OpenAI only have a matter of months to set standards about how to address the negative societal impact of large language models.
In December, Gebru was fired after she sent an email to colleagues advising them to no longer participate in diversity data collecting efforts. A paper she was working on at the time criticized large language models, like the kind Google is known for producing, for harming marginalized communities and tricking people into believing models trained with massive corpora of text data represent genuine progress in language understanding.
In the weeks following her firing, members of the Ethical AI team also called for the reinstatement of Gebru in her previous role. More than 2,000 Googlers and thousands of other supporters signed a letter in support of Gebru and in opposition to what the letter calls “unprecedented research censorship.” Members of Congress who have proposed legislation to regulate algorithms also raised a number of questions about the Gebru episode in a letter to Google CEO Sundar Pichai. Earlier this month, news emerged that two software engineers resigned in protest over Google’s treatment of Black women like Gebru and former recruiter April Curley.
In today’s video and blog post about the change at Google, Croak said that people need to understand that the fields of responsible AI and ethics are new, and called for a more conciliatory tone of conversation about the ways AI can harm people. Google created its AI ethics principles in 2019, shortly after thousands of employees opposed participation in the U.S. military’s Project Maven.
“So there’s a lot of dissension, there’s a lot of conflict in terms of trying to standardize a normative definition of these principles and whose definition of fairness and safety are we going to use, and so there’s quite a lot of conflict right now in the field, and it can be polarizing at times, and what I’d like to do is just have people have a conversation in a more diplomatic way perhaps so we can truly advance this field,” Croak said.
Croak said the new center will work internally to assess AI systems that are being deployed or designed, then “partner with our colleagues and PAs and mitigate potential harms.” The Gebru episode at Google led some AI researchers to pledge that they wouldn’t review papers from Google Research until change was made. Shortly after Google fired Gebru, Reuters reported that the company asked its researchers to strike a positive tone when addressing issues referred to as sensitive topics.
Croak’s appointment to the position spells the latest controversial development at the top of AI ethics ranks at Google Research and DeepMind, which Google acquired in 2014. Last month, a Wall Street Journal report found that DeepMind cofounder Mustafa Suleyman was removed from management duties, before leaving the company in 2019, due to his bullying of coworkers. Suleyman also served as a head of ethics at DeepMind, where he discussed issues like climate change and health care. Months later, Google hired Suleyman for work in an advisory role on matters of policy and regulation.
How Google conducts itself when it comes to using AI responsibly and defending against forms of algorithmic oppression is immensely important because AI adoption is growing in business and society, but also because Google is a world leader in producing published AI research. A study published last fall found that Big Tech companies treat AI ethics funding in a way that’s analogous to the way Big Tobacco companies funded health research decades ago.
VentureBeat has reached out to Google to inquire about steps to reform internal practices, issues raised by Google employees, and a number of other questions.
Update: A day after this article was published Google announced a range of updates to company diversity policy and fired Ethical AI team colead Margaret Mitchell, news covered extensively by VentureBeat.
"
|
1600 | 2021 |
"EU report warns that AI makes autonomous vehicles 'highly vulnerable' to attack | VentureBeat"
|
"https://venturebeat.com/business/eu-report-warns-that-ai-makes-autonomous-vehicles-highly-vulnerable-to-attack"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EU report warns that AI makes autonomous vehicles ‘highly vulnerable’ to attack Share on Facebook Share on X Share on LinkedIn Nissan-led HumanDrive achieved a major milestone as it completed a 230-mile autonomous journey across the U.K.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks with beams of light, overwhelming object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world.
“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.” The range of AI systems and sensors needed to power autonomous vehicles increases the attack surface area, according to the report. To address vulnerabilities, its authors say policymakers and businesses will need to develop a security culture across the automotive supply chain, including for third-party providers. The report urges car manufacturers to take steps to mitigate security risks by thinking of the creation of machine learning systems as part of the automotive industry supply chain.
The report focuses on adversarial machine learning attacks, which carry the risk of malicious manipulation that is undetectable to humans. It also finds that the use of machine learning in cars will require continuous review of systems to ensure they haven’t been altered in a malicious way.
“AI cybersecurity cannot just be an afterthought where security controls are implemented as add-ons and defense strategies are of reactive nature,” the paper reads. “This is especially true for AI systems that are usually designed by computer scientists and further implemented and integrated by engineers. AI systems should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate.” Scenarios presented in the report include the possibility of attacks on motion planning and decision-making algorithms and spoofing, like the kind that can fool an autonomous vehicle into “recognizing” cars, people, or walls that don’t exist.
In the past few years, a number of studies have shown that physical perturbations can fool autonomous vehicle systems with little effort. In 2017, researchers used spray paint or stickers on a stop sign to fool an autonomous vehicle into misidentifying the sign as a speed limit sign. In 2019, Tencent security researchers used stickers to make Tesla’s Autopilot swerve into the wrong lane.
And researchers demonstrated last year that they could lead an autonomous vehicle system to quickly accelerate from 35 mph to 85 mph by strategically placing a few pieces of tape on the road.
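To make the threat concrete: these physical attacks work on the same principle as digital adversarial examples, in which a small, deliberately chosen perturbation flips a model’s prediction. The following is a hedged illustration only, not code from any of the studies above; it sketches the fast gradient sign method (FGSM) against a hypothetical PyTorch classifier, with `model`, `image`, and `true_label` as assumed placeholders.

```python
# Minimal FGSM sketch (illustrative only). Assumes a hypothetical pretrained
# PyTorch classifier `model`, a batched image tensor `image` scaled to [0, 1],
# and `true_label` holding the correct class index for that image.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                        # forward pass
    loss = F.cross_entropy(logits, true_label)   # loss w.r.t. the true class
    loss.backward()                              # gradient w.r.t. input pixels
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded by epsilon, the altered image can look unchanged to a person while the model’s prediction flips, which is why stickers and strips of tape can act as physical analogues of the same attack.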
The report was coauthored by the Joint Research Centre, a science and tech advisor to the European Commission. Weeks ago, ENISA released a separate report detailing cybersecurity challenges created by artificial intelligence.
In other autonomous vehicle news, last week Waymo began testing robo-taxis in San Francisco.
But an MIT task force concluded last year that autonomous vehicles could be at least another decade away.
"
|
1,601 | 2,021 |
"Dozens of current and former Dropbox employees allege gender discrimination | VentureBeat"
|
"https://venturebeat.com/business/dozens-of-current-and-former-dropbox-employees-allege-gender-discrimination"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Dozens of current and former Dropbox employees allege gender discrimination Share on Facebook Share on X Share on LinkedIn Dropbox Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
More than two dozen Dropbox employees say they’ve witnessed or experienced gender discrimination at the company, according to documents VentureBeat obtained and multiple current and former employees speaking on condition of anonymity.
In December 2020, a source familiar with the matter sent VentureBeat a document containing anonymous interviews with 16 current and former Dropbox employees who allege gender discrimination at the cloud computing company. The report alleging discrimination began circulating internally after its author sent it to Dropbox employees throughout North America on December 9. Compiled by a former Dropbox researcher, the report was not commissioned by Dropbox executives and is strongly contested by the company.
“When I first read the email, when the report was sent out, I started crying,” Source 1, who said she had experienced discrimination with regard to promotion at Dropbox, told VentureBeat. “I was frustrated and almost livid that so many other people were experiencing it, too. I really hoped that my personal experience was a one-off, and it was jarring and really upsetting to see so many things that could have been my story.” The subjects of the report alleging discrimination point to examples such as “changing standards for promotions, unequal compensation, being set back in their careers after maternity leave, and experiencing retribution when they take their cases to HR.” The report also detailed instances of alleged harassment and demotion after employees filed a complaint with Dropbox HR or returned to work following maternity leave.
Internal communications VentureBeat obtained indicate that more than a dozen Dropbox employees agreed with the report’s conclusions.
VentureBeat spoke with five current and former Dropbox employees, who all described experiences similar to those detailed in the report. People cited in the report and quoted in this story spoke on condition of anonymity due to fear of retaliation. Gender discrimination and cases of retaliation against people who report discrimination — the kind that can breed toxic workplace culture and counteract gains made through recruiting and hiring — have been reported at many tech companies, including Google , Microsoft , Pinterest , and Uber.
In 2014, VentureBeat ran a story on Dropbox employees who alleged hiring practices and company culture that disadvantaged women.
In an interview with VentureBeat on Thursday, Dropbox officials said the claims in the December 2020 report alleging discrimination weren’t consistent with the company’s data. Following that conversation, Dropbox sent VentureBeat the following statement: “We would never want anyone to have the experiences described in the report, let alone at Dropbox. We thoroughly review all claims when they are brought forward, and have found no evidence of systemic discrimination.” Shortly after speaking with VentureBeat on Thursday about allegations of gender discrimination, Dropbox released its 2020 diversity report, which it says backs up its claims. In it, the company says that women have been promoted at a higher rate than men for the fifth consecutive year. The report also said representation for women at the managerial level and above increased from 35% in 2019 to 37% in 2020.
Stalled promotions and career advancement
How companies choose to promote their employees is a major influence on company culture and worker sentiment. A 2018 Harvard Business Review survey of more than 400,000 U.S. workers found that people who believe promotions are handled properly are twice as likely to plan a long-term future with a company and 5 times as likely to believe company leaders act with integrity.
Sources speaking with VentureBeat criticized Dropbox’s promotion process, which they say is largely dependent on the mood and influence of their manager, rather than merit.
Multiple sources told VentureBeat they believed their own promotions were delayed and claimed that people who identify as men are promoted at faster rates than those who identify as women. The majority of women interviewed for the report alleging discrimination also identified as women of color and talked about a need to work twice as hard to achieve the same level of career advancement as male colleagues, putting them at increased risk of burnout. The report does not discuss the experiences of non-binary Dropbox employees.
Dropbox uses a leveling system for promotions, from L1 for the most junior employees to L10 for cofounder and CEO Drew Houston. People familiar with the matter told VentureBeat that based on this system, they believe they should have been promoted years ago and that for some women, hiring and promotion seems to hit a ceiling around L3.
“I have consistently, in my promotion cycles, had documentation that I have overperformed but have not been given an opportunity for promotion based on my time at the company,” Source 1 told VentureBeat. “And that is a common theme I’ve heard among other coworkers as well, that oftentimes women are brought in at lower levels and have a harder time moving forward to the appropriate levels even if they’re performing at or above their level.” A Dropbox spokesperson told VentureBeat it has been company policy to verbally disclose leveling at the time a person is hired for at least two years, but the majority of women interviewed for the report said Dropbox did not disclose their initial level at the time they were hired.
Source 2 described finding out she was under-leveled. “About six months in, I realized that I was severely under-leveled. I was doing the same work as the people next to me, but was paid a full level below, probably coming out to like an $80,000- or $100,000-a-year difference,” she said. “It feels like this really bad-faith thing to say, ‘OK, just because you don’t know about leveling, we’re gonna screw you over.'” She added that she could not find a way to remedy the situation once she became aware of it: “No one was willing to help me rectify it.” “Ultimately, there are two blockers to career growth for women at Dropbox: leadership accountability, and HR accountability,” the report reads.
Lack of HR support
The report includes 15 accounts of alleged discrimination that employees say were reported to HR but left unaddressed. And the women interviewed for the report unanimously agreed that instances of discrimination employees witnessed or experienced firsthand went unaddressed after being reported to HR.
While their experiences varied, multiple current and former Dropbox employees who spoke with VentureBeat also said the company failed to initiate meaningful action in response to discrimination complaints brought to the HR team.
Source 3 said they felt supported when in direct conversations with HR but that they have yet to see real change following those conversations.
Source 4 suggested that the company tended to see reported incidents as anomalies, rather than evidence of a systemic problem.
“I personally think that the way HR handles these issues [is] a big part of the problem,” Source 4 said. “It kind of goes back to even if you have bad seeds or whatever and the company doesn’t want to blame it on a cultural issue — I think that HR enables those bad seeds.” Many of the current and former employees VentureBeat interviewed said the report made them feel seen.
“It was like being un-gaslit, I guess,” Source 2 said of reading the report. “I’ve been telling people, you know, I have this weird experience with HR, I had this weird experience with a manager. And people would always say — like HR or other men I worked with or more senior women would always be like, ‘I think you misinterpreted the situation’ or ‘you know, are you sure that it wasn’t your behavior’? All of us have a story,” she said.
To address accountability concerns, the report recommended that Dropbox implement a number of changes, including an external investigation of HR practices and the formation of a board of employees who are not in senior leadership positions to guide and monitor HR processes.
Dropbox’s diversity initiatives
Dropbox launched a program called Project Maia in July 2020 to increase retention rates among female and underrepresented minority (URM) employees. The grievance report says Dropbox identified 200 women as high flight risks and had their managers host “stay interviews.” The report claims these interviews put the employees at further risk since many already felt they could not openly share concerns with the managers in charge of their performance reviews and potential promotions.
Current and former Dropbox employees who spoke with VentureBeat also agreed with the report’s conclusion that the company’s diversity initiatives have required underrepresented minority groups and women to do additional, unpaid labor to make Dropbox a more inclusive company.
For example, the company’s LEAD program identifies high-performing employees interested in becoming leaders for career development and professional growth courses. But people who spoke with VentureBeat and those cited in the report about gender discrimination at Dropbox said the program gave qualified women additional work commitments and stretch goals instead of simply promoting them, as they believe male colleagues tended to be.
“The initiatives that I’ve been a part of feel like they’re asking [women] to take on more work in order to, say, find impact in their career, seek mentors, etc., rather than putting the responsibility on others, like the large number of white men in leadership,” Source 3 told VentureBeat.
Source 4 did not participate in LEAD but said that while they like that people are given resources to help them excel, such a program does not get to the root of why women at Dropbox aren’t getting raises and moving up in their careers in an equitable manner.
“I would like there to be less conversation about what the individual does and more about what the company can do, and I think that sort of discussion is lacking,” Source 5 said.
Source 5 was also skeptical about LEAD’s value, telling VentureBeat, “If you want to offer an empowerment program, it should be to somebody who needs help leveling up to get on that playing field. But these people [in LEAD] are showing that they do the same work as their white male colleagues, and you’re telling them they’re still missing something.” “It’s avoiding the actual problem,” Source 5 continued. “They don’t need mentorship. They need sponsorship.”
Dropbox executives’ initial response to the allegations
According to internal documents obtained by VentureBeat, the report on gender discrimination was first shared with Dropbox executives on December 8, and with all Dropbox employees in North America on December 9. In the days following the release of that report, senior Dropbox employees responded in a number of ways.
In an email on December 9, Houston told Dropbox employees the company takes discrimination claims seriously but that since the quoted sources are anonymous, Dropbox needed to “follow up and learn more.” He then urged anyone with discrimination claims or the ability to substantiate anonymous claims in the report to come forward using third-party employee whistleblower service Convercent.
Source 3 told VentureBeat that Houston’s statements came off as an attempt to undermine the results of the report instead of taking steps to defend workers. Source 1 called Houston’s response tone-deaf and a missed opportunity to say something meaningful about the gender discrimination documented in that report.
“It was terrible. It was patronizing,” Source 2 told VentureBeat. “It just felt dismissive and almost intentional in its trying to discredit [the author of the report]. And it’s frustrating because that’s the lived experience of a lot of us at Drew’s company, and here he is saying this isn’t the way to solve your problems.” In an interview with VentureBeat Thursday, Dropbox head of DEI Danny Guillory defended Houston’s response.
“My understanding was that the goal was to have more information to be able to act on it, to be able to investigate directly, because unless it’s actually brought to us as a claim, we’re not able to investigate,” Guillory said. “So, that’s my understanding. Drew, actually, I think, takes this really seriously.” People VentureBeat spoke to for this story took issue with the company’s request that they share discrimination claims or concerns with Convercent. The report about the experience of women working at Dropbox does not mention Convercent , but current and former employees told VentureBeat they were skeptical that a service provided by their employer would lead to meaningful action or ensure them privacy.
“I don’t trust it,” Source 3 told VentureBeat. “I don’t trust HR. I believe that the lawyers, that the [whistleblowing service] would reject you or are here to protect business, and so is HR.” The day after Houston emailed employees about the discrimination allegations in the report, Dropbox chief legal officer Bart Volkmer and chief people officer Melanie Collins shared the company’s next steps in a message to Dropbox’s internal #women channel in Slack.
Their message reiterated Dropbox’s stated commitment to making the company a “fair place to work” and asserted that claims made in the report were not reflected in attrition or promotion data or employee surveys.
They also outlined additional steps the company planned to take to address issues raised by the report, including hosting small coffee chats with the staff and beginning a quarterly review process for discrimination and harassment claims led by DEI, legal, and people teams with “staff-level visibility.” Dropbox also claimed it would convene focus groups led by an independent third party to gather insights from female employees, with a focus on L3 employees. Dropbox said information shared in such focus groups would be collected anonymously.
Dropbox’s DEI Town Hall
A day after the report alleging gender discrimination was sent to all Dropbox employees in North America, executives held an annual town hall meeting to share the latest company diversity statistics and address the report.
Houston talked about how the company has three full-time staff members dedicated to diversity recruiting at universities and conferences and how remote hiring could open new avenues for diverse hiring practices.
He applauded the work of the DEI team and praised employee resource groups for women and Black Dropbox employees. He also conceded that Dropbox “still has a lot of work to do on several fronts.” Alluding to the report, Houston said the goal of this town hall was to “level set the knowledge to give people a full picture of what’s going on.” “Because what tends to happen is, as I see pieces, if I’m not really involved in this work like we are, we don’t necessarily see the full gestalt. And so that’s what we’ll hope to give and share with you here,” he said.
Houston also said he was proud of the DEI pilot Project LEAD, despite criticism of the program in the report.
“So far, of the 42 people who started the program in April, 100% of them are actually still here at Dropbox, which to me is a good sign,” he said. According to the 2020 diversity report Dropbox published Thursday, just under half (48%) of the women who completed LEAD were promoted.
Guillory told VentureBeat in an interview Thursday that approximately 330 employees were eligible to participate in the program.
At the town hall, Dropbox also took the opportunity to share its latest annual diversity data. As part of the presentation, a DEI staff member said women currently represent 39.3% of employees, up 0.8% from 2019, and URM representation is 12.5%, up 0.3% from 2019.
According to data shared during the town hall, 21% of women at Dropbox received promotions versus 18% of men in 2020, while self-identified members of URM groups were promoted at a rate commensurate with non-URM employees.
The company also said that about 37% of Dropbox employees who rank L4 and above are women, up 1.5%, while 18.5% of tech roles ranked L4 or above belong to women, up 3.7% from 2019.
“It’s interesting because the premise of what was said [in the report alleging discrimination] was that the promotion rates are less, and the data is actually telling us that the promotion rates of women are actually higher,” Guillory said during the meeting. “So there seems to be a disconnect that, frankly, we’re kind of struggling with a little bit. And so I think we’re gonna have to do some qualitative research because the quantitative research doesn’t actually match up with what was stated there. And so that doesn’t mean that there’s not something we need to capture, it just means we’re going to have to find a different way to capture it.”
Dropbox Q&A
The report alleging gender discrimination at Dropbox was the primary topic of conversation in a question-and-answer session with Dropbox employees after remarks by Houston and DEI staff, which included a comparison to diversity data from other tech companies like Facebook, Google, Netflix, and Slack.
Collins said during the Q&A that the majority of promotions at Dropbox are for L2 and L3 employees and handled by M3 and M4 managers. Collins cited policy stating that managers are expected to “present a balanced and fact-based case” for promotions and said the company reviews promotion metrics based on race, gender, region, and specific company team.
“I think there’s a perception that there’s just a lot of subjectivity, right? If my individual manager doesn’t agree that I should be promoted then I’m being held back in some way,” Collins said.
Going forward, Collins and Guillory said, Dropbox employees will also be able to see all feedback from colleagues, rather than having that information submitted to a manager who summarizes feedback tied to promotions.
Dropbox’s response to VentureBeat’s request for comment
In an interview Thursday with VentureBeat, Guillory said Dropbox employee feedback did not reflect the discrimination detailed in the report alleging discrimination, though an employee survey shared during the town hall showed a slight decline in the number of employees who believe they have an equal opportunity to succeed.
When asked whether Dropbox doubts the validity of any of the report findings, Guillory said “I can’t speak to the experiences, ones that were never reported directly to us. We didn’t have an opportunity to investigate. Once the report did come out, we invited people to either report directly to the organization or to use our third party, our neutral third-party hotline to report, and unfortunately, we weren’t able to act on that.”
Update February 5, 5 p.m.: When VentureBeat requested promotion, leadership, and representation data for employees who are women of color, a Dropbox spokesperson declined, saying: “Everything we publicly disclose is available in our published diversity reports.”
"
|
1,602 | 2,021 |
"DeepMind researchers say AI poses a threat to people who identify as queer | VentureBeat"
|
"https://venturebeat.com/business/deepmind-researchers-say-ai-poses-a-threat-to-people-who-identify-as-queer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind researchers say AI poses a threat to people who identify as queer Share on Facebook Share on X Share on LinkedIn People participate in the annual LA Pride Parade in West Hollywood, California, on June 9, 2019.
The impact of AI on people who identify as queer is an underexplored area that ethicists and researchers need to consider, along with including more queer voices in their work. That’s according to a recent study from Google’s DeepMind that looked at the positive and negative effects of AI on people who identify as lesbian, gay, bisexual, transgender, or asexual. Coauthors of a paper on the study include DeepMind senior staff scientist Shakir Mohamed, whose work last year encouraged reforming the AI industry with anticolonialism in mind and queering machine learning as a way to bring about more equitable forms of AI.
The DeepMind paper published earlier this month strikes a similar tone. “Given the historical oppression and contemporary challenges faced by queer communities, there is a substantial risk that artificial intelligence (AI) systems will be designed and deployed unfairly for queer individuals,” the paper reads.
Data on queer identity is collected less routinely than data around other characteristics. Due to this lack of data, coauthors of the paper refer to unfairness for these individuals as “unmeasurable.” In health care settings, people may be unwilling to share their sexual orientation due to fear of stigmatization or discrimination. That lack of data, coauthors said, presents unique challenges and could increase risks for people who are undertaking medical gender transitions.
The researchers note that failure to collect relevant data from people who identify as queer may have “important downstream consequences” for AI system development in health care. “It can become impossible to assess fairness and model performance across the omitted dimensions,” the paper reads. “The coupled risk of a decrease in performance and an inability to measure it could drastically limit the benefits from AI in health care for the queer community, relative to cisgendered heterosexual patients. To prevent the amplification of existing inequities, there is a critical need for targeted fairness research examining the impacts of AI systems in health care for queer people.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The paper considers a number of ways AI can be used to target queer people or impact them negatively in areas like free speech, privacy, and online abuse.
Another recent study found shortcomings for people who identify as nonbinary when it comes to AI for fitness tech like the Withings smart scale.
On social media platforms, automated content moderation systems can be used to censor content classified as queer, while automated online abuse detection systems are often not trained to protect transgender people from intentional instances of misgendering or “deadnaming.” On the privacy front, the paper states that AI for queer people is also an issue of data management practices, particularly in countries where revealing a person’s sexual or gender orientation can be dangerous. You can’t recognize a person’s sexual orientation from their face as a 2017 Stanford University study claimed , but coauthors of that paper cautioned that AI could be developed to try to classify sexual orientation or gender identity from online behavioral data. AI that claims it can detect people who identify as queer can be used to carry out technology-driven malicious outing campaigns, a particular threat in certain parts of the world.
“The ethical implications of developing such systems for queer communities are far-reaching, with the potential of causing serious harms to affected individuals. Prediction algorithms could be deployed at scale by malicious actors, particularly in nations where homosexuality and gender non-conformity are punishable offenses,” the DeepMind paper reads. “In order to ensure queer algorithmic fairness, it will be important to develop methods that can improve fairness for marginalized groups without having direct access to group membership information.” The paper recommends applying machine learning that uses differential privacy or other privacy-preserving techniques to protect people who identify as queer in online environments. The coauthors also suggest exploration of technical approaches or frameworks that take an intersectional approach to fairness for evaluating AI models. The researchers examine the challenge of mitigating the harm AI inflicts on people who identify as queer, but also on other groups of people with identities or characteristics that cannot be simply observed. Solving algorithmic fairness issues for people who identify as queer, the paper argues, can produce insights that are transferrable to other unobservable characteristics, like class, disability, race, or religion.
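Differential privacy is one concrete way to follow that recommendation: answers to queries (or updates to a model) are perturbed with calibrated noise so that no single person’s data can be inferred from the output. As a minimal, hedged sketch of the Laplace mechanism for a private count (the paper does not prescribe any particular implementation, and the record format, field name, and epsilon value below are assumptions), it might look like this:

```python
# Laplace-mechanism sketch for an epsilon-differentially-private count.
# Illustrative only; the record format, field name, and epsilon are assumptions.
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Count records matching `predicate`, with Laplace noise added.

    A counting query has sensitivity 1 (adding or removing one person changes
    the true count by at most 1), so noise is drawn from Laplace(1 / epsilon).
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: release a noisy count without exposing any individual.
# noisy = private_count(survey_rows, lambda row: row.get("identity") == "queer")
```

Smaller values of epsilon add more noise and give stronger privacy; the same idea underlies differentially private model training, where noise is added to gradients rather than to counts.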
The paper also cites studies on the performance of AI for queer communities that have been published in the last few years.
A 2018 study used a language model to accurately predict homophobia in tweets in Portuguese nearly 90% of the time in initial experiments.
A 2019 analysis found that machine predictions for toxicity routinely ranked drag queens and white supremacists on social media as comparably toxic.
A 2019 study found that human graders ranked resumes with text associated with queerness lower than others.
This implies that any AI trained on such a dataset would reflect this bias at a time when companies are increasingly using AI to screen candidates.
Last year, researchers in Australia developed a framework for advancing gender equity in language models.
The DeepMind paper is Google’s most recent work on the importance of ensuring algorithmic fairness for specific groups of people. Last month, Google researchers concluded in a paper that algorithm fairness approaches developed in the U.S. or other parts of the Western world don’t always transfer to India or other non-Western nations.
But these papers examine how to ethically deploy AI at a time when Google’s own AI ethics operations are associated with some pretty unethical behavior. Last month, the Wall Street Journal reported that DeepMind cofounder and ethics lead Mustafa Suleyman had most of his management duties stripped before he left the company in 2019, following complaints of abuse and harassment from coworkers. An investigation was subsequently carried out by a private law firm. Months later, Suleyman took a job at Google advising the company on AI policy and regulation, and according to a company spokesperson, Suleyman no longer manages teams.
Google AI ethics lead Margaret Mitchell still appears to be under internal investigation , which her employer took the unusual step of sharing in a public statement. Mitchell recently shared an email she said she sent to Google before the investigation started. In that email, she characterized Google’s choice to fire Ethical AI team colead Timnit Gebru weeks earlier as “forever after a really, really, really terrible decision.” Gebru was fired while she was working on a research paper about the dangers of large language models.
Weeks later, Google released a trillion-parameter model , the largest known language model of its kind. A recently published analysis of GPT-3 , a 175-billion parameter language model, concluded that companies like Google and OpenAI have only a matter of months to set standards for addressing the societal consequences of large language models — including bias, disinformation, and the potential to replace human jobs. Following the Gebru incident and meetings with leaders of Historically Black Colleges and Universities (HBCU), earlier this week Google pledged to fund digital skills training for 100,000 Black women. Prior to accusations of retaliation from former Black female employees like Gebru and diversity recruiter April Curley, Google was accused of mistreatment and retaliation by multiple employees who identify as queer.
Bloomberg reported Wednesday that Google is restructuring its AI ethics research efforts under Google VP of engineering Marian Croak, who is a Black woman. According to Bloomberg, Croak will oversee the Ethical AI team and report directly to Google AI chief Jeff Dean.
"
|
1,603 | 2,021 |
"Bowery CTO Injong Rhee on the grand challenge of AI for indoor farming | VentureBeat"
|
"https://venturebeat.com/business/bowery-cto-injong-rhee-on-the-grand-challenge-of-ai-for-indoor-farming"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bowery CTO Injong Rhee on the grand challenge of AI for indoor farming Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In recent years, AI leaders have urged machine learning experts to treat understanding the world’s oceans and tackling climate change as grand challenges on par with building autonomous vehicles, mastering the game of chess, or solving robotic grasping.
Combining computer vision, logistics, robotics, and the science of botany, indoor farming has the potential to change human lives. But innovation in this area requires considering dozens of variants and doing more with less. These challenges are likely among the reasons companies like Intel, Microsoft, and Tencent have participated in experiments to automate greenhouses.
Bowery Farming may be the largest vertical farming company operating in the U.S. today. Founded in 2015, the company has introduced a number of major changes in recent weeks. These are part of its largest expansion since raising more than $170 million from investors including GV (formerly Google Ventures) and individuals like Uber CEO Dara Khosrowshahi.
One of those changes was giving the CTO role to former Google VP Injong Rhee, who had worked on IoT platforms for Google like the edge TPU AI chip and software and services for Samsung. Last month, Bowery also opened what it calls a “center of excellence” in New Jersey. Called Farm X, the center will focus on raising not just leafy greens like the kind Bowery sells today, but also cucumbers, root vegetables, strawberries, and tomatoes. This initiative will focus on research and development, functioning as a sandbox for considering the possible blends of seeds and conditions required to grow produce indoors. A Bowery spokesperson declined to share how much square footage is devoted to the project, but a statement from the company describes Farm X as “one of the largest and most sophisticated vertical farming R&D facilities in the world.” The first two Bowery farms are located outside Baltimore and New York City. An additional commercial farm is scheduled to open in late 2021 in Bethlehem, Pennsylvania, about 70 miles from Philadelphia. Bowery claims its facilities are currently 100 times more productive than traditional outdoor farming methods. By adding operations in the Pennsylvania area, the company plans to serve a population of nearly 50 million along the Eastern seaboard of the United States. The goal, Bowery said, is to build indoor farms near every major city in the United States and the world.
Above: Bowery grow room near Baltimore
A number of companies want to crack the code of supplying vegetables to people in urban environments, reducing the need to truck produce into cities. Growing Underground, for example, occupies a World War II-era bunker in London.
In Singapore, a company is exploring how to create indoor farming operations that can fit inside a shipping container as the country seeks greater food independence in the face of accelerating climate change and reduced global food supplies.
Scientific progress in indoor plant cultivation could play a key role in addressing food deserts and food security as climate change intensifies. Such knowledge could also advance further exploration of the Moon and Mars.
But Bowery chief science officer Henry Sztul says his company is focused on growing produce at scale in monoculture environments in larger facilities.
Several indoor farming startups in the U.S. are currently dedicated to providing premium organic lettuce for customers at Whole Foods and other high-end grocery stores. Bowery, for example, sells to nearly 1,000 stores on the East Coast of the United States, including Amazon Fresh, Walmart, and Whole Foods, as well as ecommerce vendors. Bowery didn’t share specifics when we asked how sales are currently split between Whole Foods and Walmart. But Rhee said the goal is for advances in efficiency to result in more affordable, high-quality produce.
“The AI is still in its infancy, and there’s a lot of human touch that’s still needed to get it mature. But I think with the help of AI, what we get out of this is so amazing that we can actually drive what perceivably in the past was not economically viable,” Rhee said. “Now we’re bringing it to be economically viable, and that’s really the power of the AI and machine learning that makes that happen.” VentureBeat sat down with Rhee and Sztul to talk about what they consider the holy grail of machine learning challenges for indoor farming, the specific challenges smart indoor farming companies encounter, and the idea of polyculture gardens with multiple types of plants growing in the same place.
This interview has been edited for brevity and clarity.
VentureBeat: In 2019, while speaking to reporters, Amazon VP of devices David Limp called a particular advance in Alexa tech a “holy grail of voice science.” What’s the holy grail of indoor farming, in terms of the machine learning in this space?
Injong Rhee: I think the scale of doing this is one thing. Another thing is this whole process of growing crops from seed all the way to the harvest, packing them, and then delivering them to the store. That entire life cycle of crop and then supply chain presents so many opportunities for optimization.
And really making this indoor farming and vertical farming popular or economically viable is one optimization, but there are so many different dials and levers that we have to optimize, and AI is the best method to do this multi-variable optimization across the space. It’s really emulating what farmers do, and then what the trucking companies do, a combination of all of them, and then making the machine actually do all of this in a much more optimized way to make this really economically viable to provide it to people who need [food] and mass produce it at a low cost. So that’s what’s the holy grail of [indoor farming].
It’s not one thing. I’ve developed voice assistants before. You can actually say language understanding could be a holy grail, but in this case, everything that you know about IoT, cloud AI, machine learning, and robotics all comes into the picture, being orchestrated to find the economical way to produce vegetables on a large scale.
Henry Sztul: We have camera coverage of every crop that grows in a Bowery farm, and we’re constantly taking pictures. We do use computer vision algorithms, deep learning algorithms, to understand things like growth rates over time. To understand, not just when we see something like a stress response, which with something like arugula could be something like purpling, or something like butterhead [lettuce] could be like a yellowing at the edges. And we can observe those things. But what we can also start to do is predict what the conditions are that create that response. And we can be triggered — not just when does it happen, but also [what are] the leading indicators? And so what we’re doing now is we’re starting to look at not just [being] told when we see a stress response, but when we start to see the conditions that we predict to impact that, to create that stress response. And so all of these things are like parts of a puzzle, like the holy grail. But the holy grail is also solving the challenge of how to do this at an immense scale.
VentureBeat: Injong, could you talk a bit about your background and how your past experience informs this work in indoor farming?
Rhee: Yeah, so I did work, especially what I did at Google is building the cloud IoT platform for developers. I developed an end-to-end software and hardware stack of cloud IoT platform, including IoT Core, and then the edge TPU, which is a purpose-built AI chip that you can embed at the edge … so that you can make faster decisions to do control. And so while I’m working on this IoT problem, developing a platform for the IoT developers, I find this smart farming so interesting. It’s a full combination of all the things that I love, like my background in IoT or background in computer networks and background in distributed systems and building software and hardware and sensor networks and all of that and AI coming into good use. And so that’s how a whole thing actually plays. It’s not just one thing that’s going to contribute. It’s many, many things that I have worked on, they’ve become so much in use, and this is amazing. That’s why I was looking for a smart farming opportunity, and Bowery was just presenting itself to me as a perfect opportunity to use what I’ve learned in academia and industry.
VentureBeat: To what degree do people play a role in growing operations today? Is part of the goal for Bowery to create fully autonomous indoor farms?
Sztul: I think the goal of Bowery is to build farms that can deliver more, healthier produce to more people at the right price point. And so one of the ways to do that is with automation in areas. I think there might be people out there that will say “Yeah, our goal is to totally automate everything.” I think we come into the space with an open mind, which I think [is] a little different. And so that’s why I was saying we may get there, but we’re really more focused on how do we put out the best, the highest quality product consistently?
VentureBeat: What are some challenges associated with machine learning systems for indoor farming?
Sztul: There’s a basic machine learning example called the multi-armed bandit problem. And if you think about an octopus in a casino, sitting at a slot machine, the octopus is exploring, pulling different slot machine handles until it finds one that it can start to take advantage of, it can exploit. This is a classic problem of exploration versus exploitation.
A recipe [for growing produce] at Bowery includes things like light intensity, photo period, spectrum, different types of concentrations of nutrients, water temperature, air temperature, and humidity. There’s dozens of components that come into a Bowery recipe. If you were to try and tweak all of those combinations, all of those recipe components to make different combinations, that would take forever, and so we do the same thing with recipes. We have dozens and dozens of recipes in our farm; actually, I believe now we have over 50 recipes currently active across 10 products.
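For readers unfamiliar with the technique Sztul describes, an epsilon-greedy bandit is one simple way to balance exploring new recipes against exploiting the best-known one. The sketch below is illustrative only and is not Bowery’s system; the recipe names, the reward signal, and the epsilon value are assumptions.

```python
# Epsilon-greedy multi-armed bandit sketch for choosing among grow recipes.
# Illustrative only; recipe names, rewards, and epsilon are assumptions.
import random

class RecipeBandit:
    def __init__(self, recipes, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {r: 0 for r in recipes}    # times each recipe was tried
        self.values = {r: 0.0 for r in recipes}  # running mean reward (e.g., yield)

    def choose(self):
        # Explore a random recipe with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, recipe, reward):
        # Incremental mean update after observing a harvest outcome.
        self.counts[recipe] += 1
        self.values[recipe] += (reward - self.values[recipe]) / self.counts[recipe]

# Hypothetical usage after each harvest cycle:
# bandit = RecipeBandit(["basil_v1", "basil_v2", "basil_v3"])
# recipe = bandit.choose()
# bandit.update(recipe, reward=measured_yield_grams)
```

In practice a grower would optimize many recipe components at once, which is why Sztul frames this as a problem of scale rather than a single slot machine.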
Rhee: Another area to add is the ability to forecast. Obviously, mass will be based on, you take a picture, and millions and millions of pictures, and then throughout the life cycle … until it gets to the harvest. And so, if I take a picture, we can actually figure out … growth rate, and then what the height of the plant is going to be and how much it’s going to produce in a future harvest. And that’s really driven by computer vision, as well as the sensing technology, and then adding all of that into a machine learning model to predict the mass. So that’s an interesting problem that is also fairly challenging because you know, different plants have different patterns, right? And the different colors and density. And so that’s quite a challenging problem.
Sztul: That’s actually a problem similar to some of the problems that self-driving cars have in detecting cars, distinguishing between cars when they’re overlapping. And so, as leaves grow, it’s a problem. It’s called occlusion. As leaves grow, how do you know that you know this is a leaf, and that’s a leaf in a different part of different plants? So it’s an incredibly challenging space. And the better we do there — Injong’s totally right — the better we can understand how much do we have in our farms? And how are we doing today versus yesterday? Another one — and you would never think about this as a problem, well, simple but challenging — is where do you put things? How do you fill up your farm? If you have thousands of discrete locations to put your crops, how do you decide where things go? And that comes back to science at scale and recipe optimization.
One of the things we’ve done is used machine learning to optimize based on what a basil wants versus a butterhead, as an example — one wanting a cooler, drier climate and one wanting a warmer, more humid environment. We can set preferences and place these crops in different locations in the farm based off of those preferences. And so that’s an area that’s ripe for machine learning because [for] a person to make those decisions, I tell you firsthand, is impossible. And especially as you’re adding more complexity in terms of what the rules are. So that’s an area that I think seems straightforward but is complex and rewarding for us to spend time in.
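A heavily simplified way to frame the placement problem Sztul describes is to score each crop against each open location’s climate readings and assign greedily. This is a sketch under assumed data structures and scoring, not a description of Bowery’s software.

```python
# Greedy crop-placement sketch: match crop batches to farm locations by climate fit.
# Illustrative only; the preference fields and scoring rule are assumptions.

def climate_fit(crop_prefs, location_climate):
    """Lower is better: total deviation from the crop's preferred conditions."""
    return sum(abs(location_climate[k] - v) for k, v in crop_prefs.items())

def place_crops(crops, locations):
    """Assign each crop batch to the free location that best matches its preferences."""
    free = dict(locations)  # location_id -> climate readings
    plan = {}
    for crop_id, prefs in crops.items():
        best = min(free, key=lambda loc: climate_fit(prefs, free[loc]))
        plan[crop_id] = best
        del free[best]      # each location holds one batch in this toy model
    return plan

# Hypothetical usage:
# crops = {"basil_batch_1": {"temp_c": 24, "humidity": 0.65},
#          "butterhead_batch_7": {"temp_c": 18, "humidity": 0.50}}
# locations = {"rack_A3": {"temp_c": 23, "humidity": 0.62},
#              "rack_B1": {"temp_c": 19, "humidity": 0.48}}
# print(place_crops(crops, locations))
```

A production system would add constraints such as light zones, harvest schedules, and airflow, and would likely solve the assignment as an optimization problem rather than greedily, but the preference-matching idea is the same.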
VentureBeat: I spoke with Ken Goldberg at UC Berkeley last year about a project to create a fully autonomous polyculture garden and computer vision systems to monitor diverse groups of plants growing together. Is Bowery doing any experiments with polyculture growing?
Sztul: We’ve started growing some things that way, where we grow different types of — like our spring blend. So you can go buy our spring blend, and it’s got a whole bunch of different types of lettuces in it, and we used to grow it all together. And actually, we’ve gone away from polyculture over time. We’ve actually moved away from that, and we think that’s right now a better model because we can target what the individual crop needs versus another. But we could go back to that one day, I don’t know.
"
|
1,604 | 2,021 |
"Black women, AI, and overcoming historical patterns of abuse | VentureBeat"
|
"https://venturebeat.com/business/black-women-ai-and-historical-patterns-of-abuse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Black women, AI, and overcoming historical patterns of abuse Share on Facebook Share on X Share on LinkedIn The Abuse and Misogynoir Playbook Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
After a 2019 research paper demonstrated that commercially available facial analysis tools fail to work for women with dark skin, AWS executives went on the attack. Instead of offering up more equitable performance results or allowing the federal government to assess their algorithm like other companies with facial recognition tech have done, AWS executives attempted to discredit study coauthors Joy Buolamwini and Deb Raji in multiple blog posts.
More than 70 respected AI researchers rebuked this attack , defended the study, and called on Amazon to stop selling the technology to police, a position the company temporarily adopted last year after the death of George Floyd.
But according to the Abuse and Misogynoir Playbook, published earlier this year by a trio of MIT researchers, Amazon’s attempt to smear two Black women AI researchers and discredit their work follows a set of tactics that have been used against Black women for centuries.
Moya Bailey coined the term “misogynoir” in 2010 as a portmanteau of “misogyny” and “noir.” Playbook coauthors Katlyn Turner, Danielle Wood, and Catherine D’Ignazio say these tactics were also used to disparage former Ethical AI team co-lead Timnit Gebru after Google fired her in late 2020 and stress that it’s a pattern engineers and data scientists need to recognize.
The Abuse and Misogynoir Playbook is part of the State of AI Ethics report from the Montreal AI Ethics Institute and was compiled by MIT professors in response to Google’s treatment of Gebru, a story VentureBeat has covered in depth. The coauthors hope that recognition of the phenomena will prove a first step in ensuring these tactics are no longer used against Black women. Last May, VentureBeat wrote about a fight for the soul of machine learning , highlighting ties between white supremacy and companies like Banjo and Clearview AI , as well as calls for reform from many in the industry, including prominent Black women.
MIT assistant professor Danielle Wood, whose work focuses on justice and space research, told VentureBeat it’s important to recognize that the tactics outlined in the Abuse and Misogynoir Playbook can be used in almost any arena. She noted that while some cling to a belief in the impartiality of data-driven results, the AI field is in no way exempt from this problem.
“This is a process, a series of related things, and the process has to be described step by step or else people won’t get the point,” Wood said. “I can be part of a system that’s actually practicing misogynoir, and I’m a Black woman. Because it’s a habit that is so prolific, it’s something I might participate in without even thinking about it. All of us can.”
Above: The Abuse and Misogynoir Playbook (Design by Melissa Teng)
The playbook outlines the intersectional and unique abuse aimed at Black women in five steps:
Step 1: A Black woman scholar makes a contribution that speaks truth to power or upsets the status quo.
Step 2: Disbelief in her contribution from people who say the results can’t be true and either think a Black woman couldn’t have done the research or find another way to call her contribution into question.
Step 3: Dismissal, discrediting, and gaslighting ensues. AI chief Jeff Dean’s public attempt to discredit Gebru alongside colleagues is a textbook example. Similarly, after current and former Dropbox employees alleged gender discrimination at the company, Dropbox CEO Drew Houston attempted to discredit the report’s findings, according to documents obtained by VentureBeat.
Gaslighting is a term taken from the 1944 movie Gaslight , in which a character goes to extreme lengths to make a woman deny her senses, ignore the truth, and feel like she’s going crazy. It’s not uncommon at this stage for people to consider the targeted Black woman’s contribution an attempt to weaponize pity or sympathy. Another instance that sparked gaslighting allegations involved algorithmic bias, Facebook chief AI scientist Yann LeCun, and Gebru.
Step 4: Erasure.
Over time, counter-narratives, deplatforming, and exclusion are used to prevent that person from carrying out their work as part of attempts to erase their contributions.
Step 5: Revisionism seeks to paper over the contributions of Black women and can lead to whitewashed versions of events and slow progress toward justice.
There’s been a steady stream of stories about gender and racial bias in AI in recent years, a point highlighted by news headlines this week. The Wall Street Journal reported Friday that researchers found Facebook’s algorithm shows different job ads to men and women and is discriminatory under U.S. law , while Vice reported on research that found facial recognition used by Proctorio remote proctoring software does not work well for people with dark skin over half of the time. This follows VentureBeat’s coverage of racial bias in ExamSoft’s facial recognition-based remote proctoring software , which was used in state bar exams in 2020.
Investigations by The Markup this week found advertising bans hidden behind an algorithm for a number of terms on YouTube, including “Black in tech,” “antiracism,” and “Black excellence,” but it’s still possible to advertise to white supremacists on the video platform.
Case study: Timnit Gebru and Google
Google’s treatment of Gebru illustrates each step of the playbook. Her status quo-disrupting contribution, Turner told VentureBeat, was an AI research paper about the dangers of using large language models that perpetuate racism or stereotypes and carry an environmental impact that may unduly burden marginalized communities. Other perceived disruptions, Turner said, included Gebru building one of the most diverse teams within Google Research and sending a critical email to the Google Brain Women and Allies internal listserv that was leaked to Platformer.
Shortly after she was fired, Gebru said she was asked to retract the paper or remove the names of Google employees. That was step two from the Misogynoir Playbook. In academia, Turner said, retraction is taken very seriously. It’s generally reserved for scientific falsehood and can end careers, so asking Gebru to remove her name from a valid piece of research was unreasonable and part of efforts to make Gebru herself seem unreasonable.
Evidence of step three, disbelief or discredit, can be found in an email AI chief Jeff Dean sent that calls into question the validity of the paper’s findings. Days later, CEO Sundar Pichai sent a memo to Google employees in which he said the firing of Gebru had prompted the company to explore improvements to its employee de-escalation policy. In an interview with VentureBeat , Gebru characterized that memo as “dehumanizing” and an attempt to fit her into an “angry Black woman” trope.
Despite Dean’s critique, a point that seems lost amid allegations of abuse, racism, and corporate efforts to interfere with academic publication is that the team of researchers behind the stochastic parrots research paper in question was exceptionally well-qualified to deliver critical analysis of large language models. A version of the paper VentureBeat obtained lists Google research scientists Ben Hutchinson, Mark Diaz, and Vinodkumar Prabhakaran as coauthors, as well as then-Ethical AI team co-leads Gebru and Margaret Mitchell. While Mitchell is well known for her work in AI ethics , she is most heavily cited for research involving language models. Diaz, Hutchinson, and Prabhakaran have backgrounds in assessing language or NLP for ageism, discrimination against people with disabilities, and racism, respectively. Linguist Emily Bender, a lead coauthor of the paper alongside Gebru, received an award from organizers of a major NLP conference in mid-2020 for work critical of large language models, which VentureBeat also reported.
Gebru is coauthor of the Gender Shades research paper that found commercially available facial analysis models perform particularly poorly for women with dark skin. That project, spearheaded by Buolamwini in 2018 and continued with Raji in a subsequent paper published in early 2019, has helped shape legislative policy in the U.S. and is also a central part of Coded Bias, a documentary now streaming on Netflix. And Gebru has been a major supporter of AI documentation standards like datasheets for datasets and model cards, an approach Google has adopted.
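For readers unfamiliar with those standards, a model card is essentially structured documentation that ships alongside a model: what it is intended for, what data it was trained and evaluated on, and how it performs for different groups of people. The sketch below is a minimal, hypothetical illustration of that idea in Python; the field names and numbers are assumptions made for the example, not Google’s official model card schema or Gender Shades results.

```python
# A minimal, hypothetical model card sketch. Field names and values are
# illustrative assumptions, not an official model card schema or real results.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ModelCard:
    model_name: str
    intended_use: str                   # what the model is meant to be used for
    out_of_scope_uses: List[str]        # uses the authors advise against
    training_data: str                  # provenance of the training data
    evaluation_data: str                # what the model was evaluated on
    metrics_by_group: Dict[str, float]  # disaggregated accuracy, e.g. by skin type and gender
    known_limitations: List[str]        # documented failure modes and caveats


card = ModelCard(
    model_name="example-face-analysis-v1",
    intended_use="Research on face analysis benchmarks",
    out_of_scope_uses=["Identifying individuals", "Surveillance"],
    training_data="Public benchmark images; see the accompanying datasheet",
    evaluation_data="Evaluation set stratified by skin type and gender",
    metrics_by_group={"darker-skinned women": 0.79, "lighter-skinned men": 0.99},
    known_limitations=["Accuracy drops sharply for darker-skinned women"],
)
```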
Finally, Turner said, steps four and five of the playbook, erasure and revisionism, can be seen in the departmental reorganization and diversity policy changes Google made in February. As a result of those changes, Google VP Marian Croak was appointed to head up 10 of the Google teams that consider how technology impacts people. She reports directly to AI chief Jeff Dean.
On Tuesday, Google research manager Samy Bengio resigned from his role at the company , according to news first reported by Bloomberg. Prior to the restructuring, Bengio was the direct report manager for the Ethical AI team.
VentureBeat obtained a copy of a letter Ethical AI team members sent to Google leadership in the weeks following Gebru’s dismissal that specifically requested Bengio remain the direct report for the team and that the company not implement any reorganization. A person familiar with ethics and policy matters at Google told VentureBeat that reorganization had been discussed previously, but this source described an environment of fear after Gebru’s dismissal that prevented people from speaking out.
Before being named to her new position, Croak appeared alongside the AI chief in a meeting with Black Google employees in the days following Gebru’s dismissal. Google declined to make Croak available for comment, but the company released a video in which she called for more “diplomatic” conversations about definitions of fairness or safety.
Turner pointed out that the reorganization fits neatly into the playbook.
“I think that revisionism and erasure is important. It serves a function of allowing both people and the news cycle to believe that the narrative arc has happened, like there was some bad thing that was taken care of — ‘Don’t worry about this anymore.’ [It’s] like, ‘Here’s this new thing,’ and that’s really effective,” Turner said.
Origins of the playbook
The playbook’s coauthors said it was constructed following conversations with Gebru. Earlier in the year, Gebru spoke at MIT at Turner and Wood’s invitation as part of an antiracism tech design research seminar series.
When the news broke that Gebru had been fired, D’Ignazio described feelings of anger, shock, and outrage. Wood said she experienced a sense of grieving and loss. She also felt frustrated by the fact that Gebru was targeted despite having attempted to address harm through channels that are considered legitimate.
“It’s a really discouraging feeling of being stuck,” Wood said. “If you follow the rules, you’re supposed to see the outcome, so I think part of the reality here is just thinking, ‘Well, if Black women try to follow all the rules and the result is we’re still not able to communicate our urgent concerns, what other options do we have?'” Wood said she and Turner found connections between historical figures and Gebru in their work in the Space Enabled Lab at MIT examining complex sociotechnical systems through the lens of critical race studies and queer Black feminist groups like the Combahee River Collective.
In addition to instances of misogynoir and abuse at Amazon and Google, coauthors say the playbook represents a historical pattern that has been used to exclude Black women authors and scholars dating back to the 1700s. These include Phillis Wheatley, the first published African American poet, journalist Ida B. Wells, and author Zora Neale Hurston. Generally, the coauthors found that the playbook tactics visit great acts of violence on Black women that can be distinguished from the harms encountered by other groups that challenge the status quo.
The coauthors said women outside of tech who have been targeted by the same playbook include New York Times journalist and 1619 Project creator Nikole Hannah-Jones and politicians like Stacey Abrams and Rep. Ayanna Pressley (D-MA).
The long shadow of history
The researchers also said they took a historical view to demonstrate that the ideas behind the Abuse and Misogynoir Playbook are centuries old. Failure to confront forces of racism and sexism at work, Turner said, can lead to the same problems in new and different tech scenarios. She went on to say that it’s important to understand that historical forces of oppression, categorization, and hierarchy are still with us and warned that “we will never actually get to an ethical AI if we don’t understand that.” The AI field claims to excel at pattern recognition, so the industry should be able to identify tactics from the playbook, D’Ignazio said.
“I feel like that’s one of the most enormous ignorances, the places where technical fields do not go, and yet history is what would inform all of our ethical decisions today,” she said. “History helps us see structural, macro patterns in the world. In that sense, I see it as deeply related to computation and data science because it helps us scale up our vision and see how things today, like Dr. Gebru’s case, are connected to these patterns and cycles that we still haven’t been able to break out of today.” The coauthors recognize that power plays a major role in determining what kind of behavior is considered ethical.
This corresponds to the idea of privilege hazard, a term coined in the book Data Feminism, which D’Ignazio coauthored last year, to describe how people in privileged positions fail to fully comprehend the experience of those with less power.
A long-term view seems to run counter to the traditional Silicon Valley dogma surrounding scale and growth, a point emphasized by Google Ethical AI team research scientist and sociologist Dr. Alex Hanna weeks before Gebru was fired. A paper Hanna coauthored with independent researcher Tina Park in October 2020 called scale thinking incompatible with addressing social inequality.
The Abuse and Misogynoir Playbook is the latest AI work to turn to history for inspiration.
Your Computer Is On Fire, a collection of essays from MIT Press, and Kate Crawford’s Atlas of AI, released in March and April, respectively, examine the toll datacenter infrastructure and AI take on the environment and civil rights, and how they reinforce colonial habits of extracting value from people and natural resources. Both books also investigate patterns and trends found in the history of computing.
Race After Technology author Ruha Benjamin, who coined the term “new Jim Code,” argues that an understanding of historical and social context is also necessary to safeguard engineers from being party to human rights abuses, like the IBM workers who assisted Nazis during World War II.
A new playbook
The coauthors end by calling for the creation of a new playbook and pose a challenge to the makers of artificial intelligence.
“We call on the AI ethics community to take responsibility for rooting out white supremacy and sexism in our community, as well as to eradicate their downstream effects in data products. Without this baseline in place, all other calls for AI ethics ring hollow and smack of DEI-tokenism. This work begins by recognizing and interrupting the tactics outlined in the playbook — along with the institutional apparatus — that works to disbelieve, dismiss, gaslight, discredit, silence, and erase the leadership of Black women.” The second half of a panel discussion about the playbook in late March focused on hope and ways to build something better, because, as the coauthors say, it’s not enough to host events with the term “diversity” or “equity” in them. Once abusive patterns are recognized, old processes that led to mistreatment on the basis of gender or race must be replaced with new, liberatory practices.
The coauthors note that making technology with liberation in mind is part of the work D’Ignazio does as director of the Data + Feminism Lab at MIT, and what Turner and Wood do with the Space Enabled research group at MIT Media Lab. That group looks for ways to design complex systems that support justice and the United Nations Sustainable Development Goals.
“Our assumption is we have to show prototypes of liberatory ways of working so that people can understand those are real and then try to adopt those in place of the current processes that are in place,” Wood said. “We hope that our research labs are actually mini prototypes of the future in which we try to behave in a way that’s anticolonial and feminist and queer and colored and has lots of views from people from different backgrounds.” D’Ignazio said change in tech — and specifically for the hyped, well-funded, and trendy field of AI — will require people considering a number of factors, including who they take money from and choose to work with. AI ethics researcher Luke Stark turned down $60,000 in funding from Google last month, and Rediet Abebe, who cofounded Black in AI with Gebru, has also pledged to reject funding from Google.
In other work at the intersection of AI and gender, the Alan Turing Institute’s Women in Data Science and AI project released a report last month that documents problems women in AI face in the United Kingdom. The report finds that women only hold about 1 in 5 jobs in data science and AI in the U.K. and calls for government officials to better track and verify the growth of women in those fields.
“Our research findings reveal extensive disparities in skills, status, pay, seniority, industry, job attrition, and education background, which call for effective policy responses if society is to reap the benefits of technological advances,” the report reads.
Members of Congress interested in algorithmic regulation are considering more stringent employee demographic data collection, among other legislative initiatives. Google and Facebook do not currently share diversity data specific to employees working within artificial intelligence.
The Abuse and Misogynoir Playbook is also the latest AI research from people of African descent to advocate taking a historical perspective and adopting anticolonial and antiracist practices.
In an open letter shortly after the death of George Floyd last year, a group of more than 150 Black machine learning and computing professionals outlined a set of actions to bring an end to the systemic racism that has led Black people to leave jobs in the computing field. A few weeks later, researchers from Google’s DeepMind called for reform of the AI industry based on anticolonial practices.
More recently, a team of African AI researchers and data scientists have recommended implementing anticolonial data sharing practices as the datacenter industry in Africa continues growing at a rapid pace.
"
|
1,605 | 2,021 |
"Annual index finds AI is 'industrializing' but needs better metrics and testing | VentureBeat"
|
"https://venturebeat.com/business/annual-index-finds-ai-is-industrializing-but-needs-better-metrics-and-testing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Annual index finds AI is ‘industrializing’ but needs better metrics and testing Share on Facebook Share on X Share on LinkedIn A global map of AI national policy initiatives Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
China has overtaken the United States in total number of AI research citations, fewer AI startups are receiving funding, and Congress is talking about AI more than ever. Those are three major trends highlighted in the 2021 AI Index , an annual report released today by Stanford University. Now in its fourth year, the AI Index attempts to document advances in artificial intelligence, as well as the technology’s impact on education, startups, and government policy. The report details progress in the performance of major subdomains of AI, like deep learning, image recognition, and object detection, as well as in areas like protein folding.
The AI Index is compiled by the Stanford Institute for Human-Centered Artificial Intelligence and an 11-member steering committee, with contributors from Harvard University, OECD, the Partnership on AI, and SRI International. The AI Index utilizes datasets from a range of sources, like AI research data from arXiv, funding data from Crunchbase, and surveys of groups like Black in AI and Queer in AI. A major trend also identified in the report is the industrialization of AI, said Jack Clark, head of an OECD group working on algorithm impact assessment and former policy director for OpenAI.
“I think the story to me is that AI is industrializing, and we don’t quite know how to assess the industrialization of it holistically because we sort of lack a lot of the data that you’d expect would exist. And I think that’s because AI has just gone from ‘doesn’t work’ to ‘works well enough for commercial deployment’ way more quickly than you might expect. And that means … everyone’s racing, including the research community, to keep up with the pace of commercial deployments,” he said.
Other major takeaways from the report: Brazil, India, Canada, Singapore, and South Africa saw the highest levels of AI hiring from 2016 to 2020, according to data provided by LinkedIn.
Total global investment, like private investment and mergers and acquisitions, grew 40% in 2020. But for the third year in a row, AI startup funding is going to fewer startups.
In 2019, about 2 out of 3 graduates with a Ph.D. in AI in North America went into industry, up from 44% in 2010.
The majority of AI Ph.D. graduates come from outside the United States, and 4 out of 5 stay in the U.S. after graduating.
A news analysis of 500,000 blogs and 60,000 English language news stories found that AI ethics stories were among the most popular AI-related stories in 2020, including coverage of topics like Google firing Timnit Gebru and ethics initiatives introduced by the European Commission, the United Nations, and the Vatican.
Attendance at major AI research conferences doubled in 2020 as most groups chose to hold virtual gatherings.
Women made up 18% of AI Ph.D. graduates, according to a 2020 Computing Research Association survey.
China overtook the U.S. in total paper citations, but the U.S. continued a two-decade lead in citations at AI research conferences.
Based on total number of GitHub Stars, TensorFlow is the most popular AI software library, followed by Keras and PyTorch.
AI-related papers on arXiv grew from roughly 5,500 in 2015 to nearly 35,000 in 2020.
A Queer in AI 2020 member survey found that roughly half of respondents have experienced harassment or discrimination and encountered issues around inclusiveness.
Academic researchers lead in total papers published worldwide. But in the U.S., corporate research ranks second, while government research ranks second in Europe and China.
From 2004 to 2019, Carnegie Mellon University (16), Georgia Institute of Technology (14), and the University of Washington (12) have lost the largest number of faculty members to industry.
The portion of the report dedicated to progress toward technical challenges highlights advances in computer vision systems and language models, as well as AI for tasks like drug discovery or effective chemical and molecular synthesis.
The AI Index report shows progress in AI systems that can be used for surveillance, like the object detection system YOLO. Considerable progress has also been made with VoxCeleb, which measures the ability to identify a voice from a dataset containing 6,000 people. The AI Index charts a decline in equal error rate from about 8% in 2017 to less than 1% in 2020.
“This metric is telling us that AI systems have gone from having an 8% equal error rate to about 0.5%, which tells you that this capability is going to be being deployed quietly across the world,” Clark said.
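For context, equal error rate is the operating point at which a speaker-verification system’s false acceptance rate equals its false rejection rate, so lower is better. The snippet below is a minimal sketch of how the metric can be estimated from raw trial scores; the data is synthetic and this is not the VoxCeleb evaluation code.

```python
# Illustrative sketch of computing an equal error rate (EER) from verification
# scores on synthetic data; this is not the VoxCeleb evaluation pipeline.
import numpy as np


def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """Return the error rate at the threshold where FAR and FRR are closest."""
    best_gap, eer = float("inf"), 1.0
    for threshold in np.unique(scores):
        accepted = scores >= threshold
        far = np.mean(accepted[labels == 0])   # impostor trials wrongly accepted
        frr = np.mean(~accepted[labels == 1])  # genuine trials wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer


# Toy data: higher scores should mean "same speaker."
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)   # same-speaker trial scores
impostor = rng.normal(0.0, 1.0, 1000)  # different-speaker trial scores
scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones(1000, dtype=int), np.zeros(1000, dtype=int)])
print(f"EER ~ {equal_error_rate(scores, labels):.3f}")  # roughly 0.16 on this toy data
```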
A panel of experts on technical progress cited AlphaFold’s ability to predict how proteins fold and GPT-3 as two of the most talked-about AI systems of 2020. Though the AI Index acknowledges few- and zero-shot learning gains made by GPT-3, it cites a paper by former Ethical AI team co-lead Timnit Gebru and others that takes a critical look at large language models and their ability to perpetuate bias.
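For readers new to the terminology, “zero-shot” and “few-shot” refer to prompting a language model to perform a task without any task-specific fine-tuning, supplying either no worked examples or a handful of them directly in the prompt. The sketch below illustrates the two prompt formats with made-up translation pairs; the wording is an assumption for illustration and no model API is called.

```python
# Illustrative zero-shot vs. few-shot prompt formats for a large language model.
# The examples are made up for illustration; no model or API is invoked here.
zero_shot_prompt = (
    "Translate English to French:\n"
    "cheese =>"
)

few_shot_prompt = (
    "Translate English to French:\n"
    "cat => chat\n"
    "dog => chien\n"
    "cheese =>"
)

# In both cases the model simply continues the text; in the few-shot case the
# in-prompt examples demonstrate the task without any gradient updates.
```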
The index also mentions a paper published last month by OpenAI and Stanford on the need to address large language models’ societal impact before it’s too late. In an interview with VentureBeat in 2019, AI Index founding director Yoav Shoham expressed doubts about the value of judging language models based on performance on narrow tasks.
VentureBeat has reported extensively on both of the research papers named in the index. Other reports VentureBeat has covered that were cited include McKinsey’s State of AI report that found little progress among business leaders when it comes to addressing risks associated with deploying AI. Another warned about the de-democratization of AI in the age of deep learning , which the coauthors say can perpetuate inequality.
The AI Index report included a call for more benchmarks and testing in the fields of computer vision, ethics, and NLP. As demonstrated by benchmarks like GLUE and SuperGLUE, Clark said, “We’re running out of tests as fast as we can build them.” The creation of new benchmarks and testing is also an opportunity to make metrics that reflect people’s values and measure progress toward addressing grand challenges, like deforestation.
“I think one of the ways to get holistic accountability in a space is to have the same test that you run everything against, or the same set of tests. And until we have that, it’s going to be really fuzzy to talk about biases and other ethical issues in these systems, which I think would just hold us back as a community and also make it easier for people who want to pretend these issues don’t exist to continue to pretend they don’t exist or not mention them,” he said.
In previous years, the AI Index expanded to include tools like an arXiv monitor for searching preprint papers. The AI Index’s Global Vibrancy Tool , which serves up comparisons between national AI initiatives, now works for 26 countries across 23 categories.
Perhaps as interesting as what’s included in the report is what’s missing. This year, the report removed data related to progress on self-driving cars, while Clark said the report does not include information about fully autonomous weaponry, due to a lack of data.
"
|
1,606 | 2,021 |
"AI Weekly: Facebook, Google, and the tension between profits and fairness | VentureBeat"
|
"https://venturebeat.com/business/ai-weekly-facebook-google-and-the-tension-between-profits-and-fairness"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Facebook, Google, and the tension between profits and fairness Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
This week, we learned a lot more about the inner workings of AI fairness and ethics operations at Facebook and Google and how things have gone wrong. On Monday, a Google employee group wrote a letter asking Congress and state lawmakers to pass legislation to protect AI ethics whistleblowers. That letter cites VentureBeat reporting about the potential policy outcomes of Google firing former Ethical AI team co-lead Timnit Gebru. It also cites research by UC Berkeley law professor Sonia Katyal, who told VentureBeat, “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.” The 2021 AI Index report found that AI ethics stories — including Google firing Gebru — were among the most popular AI-related news articles of 2020, an indication of rising public interest. In the letter published Monday, Google employees spoke of harassment and intimidation, and a person familiar with policy and ethics matters at Google described a “deep sense of fear” since the firing of ethics leaders Gebru and former co-lead Margaret Mitchell.
On Thursday, MIT Tech Review’s Karen Hao published a story that unpacked a lot of previously unknown information about ties between AI ethics operations at Facebook and the company’s failure to address misinformation peddled through its social media platforms and tied directly to a number of real-world atrocities. A major takeaway from this lengthy piece is that Facebook’s responsible AI team focused on addressing algorithmic bias in place of issues like disinformation and political polarization, following 2018 complaints by conservative politicians, although a recent study refutes their claims.
The events described in Hao’s report appear to document political winds shifting the definition of fairness at Facebook, and the extremes to which a company will go in order to escape regulation.
Facebook CEO Mark Zuckerberg’s public defense of President Trump last summer and years of extensive reporting by journalists have already highlighted the company’s willingness to profit from hate and misinformation. A Wall Street Journal article last year , for example, found that the majority of people in Facebook groups labeled as extremist joined as a result of a recommendation made by a Facebook algorithm.
What this week’s MIT Tech Review story details is a tech giant deciding how to define fairness to advance its underlying business goals. Just as with Google’s Ethical AI team meltdown, Hao’s story describes forces within Facebook that sought to co-opt or suppress ethics operations after just a year or two of operation. One former Facebook researcher, who Hao quoted on background, described their work as helping the company maintain the status quo in a way that often contradicted Zuckerberg’s public position on what’s fair and equitable. Another researcher speaking on background described being told to block a medical-misinformation detection algorithm that had noticeably reduced the reach of anti-vaccine campaigns.
In what a Facebook spokesperson pointed to as the company’s official response, Facebook CTO Mike Schroepfer called the core narrative of Hao’s article incorrect but made no effort to dispute facts reported in the story.
Facebook chief AI scientist Yann LeCun, who got into a public spat with Gebru over the summer about AI bias that led to accusations of gaslighting and racism, claimed the story had factual errors. Hao and her editor reviewed the claims of inaccuracy and found no factual error.
Facebook’s business practices have played a role in digital redlining , genocide in Myanmar, and the insurrection at the U.S. Capitol. At an internal meeting Thursday, according to BuzzFeed reporter Ryan Mac , an employee asked how Facebook funding AI research differs from Big Tobacco’s history of funding health studies. Mac said the response was that Facebook was not funding its own research in this specific instance, but AI researchers spoke extensively about that concern last year.
Last summer, VentureBeat covered stories involving Schroepfer and LeCun after events drew questions about diversity, hiring, and AI bias at the company. As that reporting and Hao’s nine-month investigation highlight: Facebook has no system in place to audit and test algorithms for bias. A civil rights audit commissioned by Facebook and released last summer calls for the regular and mandatory testing of algorithms for bias and discrimination.
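Conceptually, the kind of testing the audit calls for starts with something simple: disaggregate an algorithm’s decisions by demographic group and compare the rates. The sketch below shows one such check, a demographic parity ratio computed on made-up data; it illustrates the general idea and is not Facebook’s methodology or the auditors’ tooling.

```python
# Minimal sketch of a disaggregated bias check (demographic parity ratio) on
# made-up data; not Facebook's methodology or any auditor's actual tooling.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def selection_rates(decisions: Iterable[Tuple[str, int]]) -> Dict[str, float]:
    """Compute the positive-decision rate per group from (group, decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}


# Toy data: (group, 1 if the ad was shown / the application approved, else 0).
toy_decisions = (
    [("group_a", 1)] * 80 + [("group_a", 0)] * 20
    + [("group_b", 1)] * 50 + [("group_b", 0)] * 50
)

rates = selection_rates(toy_decisions)
parity_ratio = min(rates.values()) / max(rates.values())
print(rates)                                # {'group_a': 0.8, 'group_b': 0.5}
print(f"parity ratio: {parity_ratio:.2f}")  # 0.62; far below 1.0 flags a disparity

# Under the U.S. "four-fifths" rule of thumb, a ratio below 0.8 is often treated
# as evidence of adverse impact worth investigating further.
```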
Following allegations of toxic, anti-Black work environments , both Facebook and Google have been accused in the past week of treating Black job candidates in a separate and unequal fashion.
Reuters reported last week that the Equal Employment Opportunity Commission (EEOC) is investigating “systemic” racial bias at Facebook in hiring and promotions. And additional details about an EEOC complaint filed by a Black woman emerged Thursday.
At Google, multiple sources told NBC News last year that diversity investments in 2018 were cut back in order to avoid criticism from conservative politicians.
On Wednesday, Facebook also made its first attempt to dismiss an antitrust suit brought against the company by the Federal Trade Commission (FTC) and attorneys general from 46 U.S. states.
All of this happened in the same week that U.S. President Joe Biden nominated Lina Khan to the FTC , leading to the claim that the new administration is building a “Big Tech antitrust all-star team.” Last week, Biden appointed Tim Wu to the White House National Economic Council. A supporter of breaking up Big Tech companies, Wu wrote an op-ed last fall in which he called one of the multiple antitrust cases against Google bigger than any single company. He later referred to it as the end of a decades-long antitrust winter.
VentureBeat featured Wu’s book The Curse of Bigness about the history of antitrust reform in a list of essential books to read.
Other signals that more regulation could be on the way include the appointments of FTC chair Rebecca Slaughter and OSTP deputy director Alondra Nelson , who have both expressed a need to address algorithmic bias.
The Google story calling for whistleblower protections for people researching the ethical deployment of AI marks the second time in as many weeks that Congress has received a recommendation to act to protect people from AI.
The National Security Commission on Artificial Intelligence (NSCAI) was formed in 2018 to advise Congress and the federal government. The group is chaired by former Google CEO Eric Schmidt, and Google Cloud AI chief Andrew Moore is among the group’s 15 commissioners. Last week, the body published a report that recommends the government spend $40 billion in the coming years on research and development and the democratization of AI. The report also says individuals within government agencies essential to national security should be given a way to report concerns about “irresponsible AI development.” The report states that “Congress and the public need to see that the government is equipped to catch and fix critical flaws in systems in time to prevent inadvertent disasters and hold humans accountable, including for misuse.” It also encourages ongoing implementation of audits and reporting requirements. However, as audits at businesses like HireVue have shown , there are a lot of different ways to audit an algorithm.
This week’s consensus between organized Google employees and NSCAI commissioners who represent business executives from companies like Google Cloud, Microsoft, and Oracle suggests some agreement between broad swaths of people intimately familiar with the deployment of AI at scale.
In casting the final vote to approve the NSCAI report, Moore said, “We are the human race. We are tool users. It’s kind of what we’re known for. And we’ve now hit the point where our tools are, in some limited sense, more intelligent than ourselves. And it’s a very exciting future, which we have to take seriously for the benefit of the United States and the world.” While deep learning and forms of AI may be capable of doing things that people describe as superhuman, this week we got a reminder of how untrustworthy AI systems can be when OpenAI demonstrated that its state-of-the-art model can be fooled into thinking an apple with “iPod” written on it is in fact an iPod, something any person with a pulse could discern.
Hao described the subjects of her Facebook story as well-intentioned people trying to make changes in a rotten system that acts to protect itself. Ethics researchers in a corporation of that size are effectively charged with considering society as a shareholder, but everyone else they work with is expected to think first and foremost about the bottom line, or personal bonuses. Hao said that reporting on the story has convinced her that self-regulation cannot work.
“Facebook has only ever moved on issues because of or in anticipation of external regulation,” she said in a tweet.
After Google fired Gebru, VentureBeat spoke with ethics, legal, and policy experts who have also reached the conclusion that “self-regulation can’t be trusted.” Whether at Facebook or Google, each of these incidents — often told with the help of sources speaking on condition of anonymity — shines light on the need for guardrails and regulation and, as a recent Google research paper found, journalists who ask tough questions. In that paper, titled “Re-imagining Algorithmic Fairness in India and Beyond,” researchers state that “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.” Companies like Facebook and Google sit at the center of AI industry consolidation, and the ramifications of their actions extend beyond even their great reach, touching virtually every aspect of the tech ecosystem. A source familiar with ethics and policy matters at Google who supports whistleblower protection laws told VentureBeat the equation is pretty simple: “[If] you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people.” For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
"
|
1,607 | 2,021 |
"AI researchers detail obstacles to data sharing in Africa | VentureBeat"
|
"https://venturebeat.com/business/ai-researchers-detail-obstacles-to-data-sharing-in-africa"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI researchers detail obstacles to data sharing in Africa Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI researchers say data sharing is a key part of economic growth in Africa but that it faces a number of common obstacles, including the threat of data colonialism. The African data market is expected to grow steadily in the coming years, and the African Data Centre trade organization predicts the African data market will need hundreds of new datacenters to meet demand in the coming decade.
In a paper titled “Narratives and Counternarratives on Data Sharing in Africa,” the research team lays out structural problems including but not limited to financial or infrastructure problems. Coauthors argue that failure to consider ethical concerns associated with those obstacles could cause irreparable harm.
“Currently, a significant proportion of Africa’s digital infrastructure is controlled by Western technology powers, such as Amazon, Google, Facebook, and Uber,” the paper reads. “Traditional colonial powers pursued colonial invasion through justifications such as ‘educating the uneducated.’ Data accumulation processes are accompanied by similar colonial rhetoric, such as ‘liberating the bottom billion,’ ‘helping the unbanked,’ ‘connecting the unconnected,’ and using data to ‘leapfrog poverty.’” Power imbalances, lack of investment in building trust, and disregard for local knowledge and context are identified as the three most common barriers to data sharing, as “entire heterogeneous geographies of people have their data accessed and shared, yet do not reap the same benefits as the data collectors and owners of data infrastructures,” according to the paper. Coauthors argue that dominant narratives around data sharing in Africa today focus on a lack of knowledge about the value of data and often suffer from what coauthors refer to as deficit narratives: stories that focus on subjects like poverty, unemployment, or illiteracy rates.
“In recent years, the African continent as a whole has been considered a frontier opportunity for building data collection infrastructures. The enthusiasm around data sharing, and especially in machine learning or data science for development/social good settings, has ranged from tempered discussions around new research avenues to proclamations that ‘the AI invasion is coming to Africa (and it’s a good thing)’. In this work, we echo previous discussions that this can lead to data colonialism and significant, irreparable harm to communities.” Coauthors argue that responsible data sharing in Africa should reject practices that lead to data colonialism and focus on meeting the needs of individuals and local communities first. They say this requires awareness and examination of influencing issues like legacies of colonialism and slavery. They warn that this context can contribute to data policy or practices rooted in Western-centric extractive practices that are “ill-suited for the African context.” The largest datacenter in Africa is reportedly under construction in South Africa.
It’s part of a surge of investment in datacenters and African telecom companies that some have deemed a gold rush.
Microsoft opened its first datacenter in Africa in 2019. AWS opened a South Africa region last year. Google is expected to complete construction on the Equiano subsea cable later this year , and Facebook is constructing a subsea cable that’s expected to be completed in two or three years. Nvidia is also ramping up operations in Africa.
An analysis of the rise of the African cloud by Xalam Analytics found that less than 1% of global public cloud revenue came from Africa in 2018.
Above: An illustration of stakeholders in the African data ecosystem in the paper “Narratives and Counternarratives on Data Sharing in Africa”
The paper reaches its conclusions through interviews with African data experts and insights from coauthors, a number of whom grew up in Africa or currently live on the continent.
Rediet Abebe grew up in Ethiopia and cofounded Black in AI. Abebe is an assistant professor at UC Berkeley’s Electrical Engineering and Computer Sciences (EECS), the first Black faculty member in school history.
Abeba Birhane also grew up in Ethiopia. Currently a Ph.D. student at the University of Dublin, her writing about relational ethics received a Best Paper award at the Black in AI workshop at NeurIPS in 2018. Birhane has written at length about algorithmic colonization.
Sekou Remy grew up in Trinidad and Tobago but currently works as a research scientist and technical lead at IBM Research Africa in Kenya. And George Obaido and Kehinde Aruleba are Nigerian and cowrote the paper in association with the University of the Witwatersrand in South Africa.
“Data sharing practices which operate in the absence of knowledge of local norms and contexts contribute — albeit indirectly — to the erosion of trust among stakeholders in the data-sharing ecosystem,” the paper reads. “As machine learning and data science move to focus on the Global South and especially the African continent, the need to understand what challenges exist in data sharing, and how we can improve data practices become more pressing.” Power plays a major role in data sharing in Africa. For example, research cited in the paper found that Africans are significantly underrepresented in the biomedical research community, even when the data comes from Africa.
“Power asymmetries, historically inherited from the colonial era, often get carried over into data practices and manifest themselves in various forms, from imbalanced authorship to uneven bargaining powers that come with funding,” the paper reads. The coauthors add that power imbalance is also a factor in relationships between project managers and data analysts; data analysts and data collectors; and data collectors and research participants.
The paper also encourages understanding attitudes about data among African researchers. Governments in places like Ghana and Kenya have opened data portals, but a survey of South African researchers found that only about one in five shares data with others, and a 2018 study involving life scientists in more than a dozen sub-Saharan African nations described a number of disincentives to data sharing. That same year, governments in nations like Botswana, Ethiopia, and South Africa developed national data strategies. To address common issues, the African Union formed an AI working group in 2019.
“Trust is the fundamental component of all relationships in a data sharing ecosystem,” the paper reads. “The future of open data management and data sharing and their contribution to the advancement of science and technology in Africa will continue to increase, despite the slow pace caused by the lack of funding, redundant policy frameworks, and limited infrastructures.” The paper was accepted for publication at the ACM Fairness, Accountability, and Transparency ( FAccT ). The virtual conference begins next week. Other papers accepted for publication at FAccT include research that examines how language models do with word association and censorship and a call for a culture change in machine learning by Ethical AI team at Google and University of Washington. The FAccT conference was cofounded by Timnit Gebru , the Ethical AI team lead Google fired in late 2020. The conference has a history of being sponsored by a number of Big Tech companies with poor records of hiring Black researchers, like Facebook AI Research (FAIR) , Google’s DeepMind, and Google.
"
|
1,608 | 2,021 |
"AI ethics research conference suspends Google sponsorship | VentureBeat"
|
"https://venturebeat.com/business/ai-ethics-research-conference-suspends-google-sponsorship"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI ethics research conference suspends Google sponsorship Share on Facebook Share on X Share on LinkedIn Google San Francisco office Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The ACM Conference for Fairness, Accountability, and Transparency ( FAccT ) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru.
Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.
“FAccT is guided by a Strategic Plan , and the conference by-laws charge the Sponsorship Chairs, in collaboration with the Executive Committee, with developing a sponsorship portfolio that aligns with that plan,” Ekstrand told VentureBeat in an email. “The Executive Committee made the decision that having Google as a sponsor for the 2021 conference would not be in the best interests of the community and impede the Strategic Plan. We will be revising the sponsorship policy for next year’s conference.” The decision followed days of questions about whether FAccT would continue its relationship with Google following the company’s treatment of Ethical AI team leaders. The news first emerged Friday, when FAccT program committee member Suresh Venkatasubramanian tweeted that the organization would pause its relationship with Google.
Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even Google itself. DeepMind, another sponsor of the FAccT conference that incurred an AI ethics controversy in January , is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year that compares Big Tech funding of AI ethics research to Big Tobacco’s history of funding health research found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.
After Gebru was fired, Googlers protested what they called an act of “unprecedented research censorship.” Last week, Reuters reported on a separate instance of alleged interference in AI research at the company, with a research paper coauthor citing “deeply insidious” edits by the Google legal team.
According to the FAccT website, Gebru, who was a cofounder of the organization, continues to work as part of a group advising on data and algorithm evaluation and as a program committee chair. Mitchell is a program co-chair of the conference and a FAccT program committee member. Gebru was fired from her role at Google in December 2020, following disputes over factors like the lack of diversity in tech companies and a paper she coauthored titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In addition to recognizing that pretrained language models may disproportionately harm marginalized communities, the work questions whether progress can really be measured by performance on benchmark tests. The paper also raises concerns about large language models’ potential for misuse or automation bias.
“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads.
Gebru is listed as one of two primary authors of the paper, which was accepted this week for publication at FAccT. Her lead coauthor is University of Washington linguist Emily Bender, whose writing about potential shortcomings of large language models and the need for deeper criticism received an award last summer from the Association for Computational Linguistics.
A copy of the paper VentureBeat obtained last year from a source familiar with the matter lists Mitchell as a coauthor, as well as Google researchers Mark Diaz, Ben Hutchinson, and Vinodkumar Prabhakaran, people with extensive backgrounds in language analysis and models. Mitchell may be known today for her work in ethics, but she is most highly cited as a computer vision and NLP researcher and is the author of a 2008 master’s thesis on text generation at the University of Washington. Ben Hutchinson worked with coauthors from the Ethical AI team at Google on a paper that found bias in NLP models disfavors people with disabilities in sentiment analysis and toxicity prediction. Mark Diaz has examined age-related bias found in text. In 2017, Google research scientist Vinodkumar Prabhakaran was part of a group from Stanford University that found differences in respect shown to Black and white people stopped by police in Oakland, California.
Bender and Gebru are listed as primary coauthors in various versions of the paper. A version of the paper made available ahead of the conference by the University of Washington also lists “Shmargaret Scmitchell” as an author.
Fallout from the firing of Gebru, a prominent algorithmic oppression researcher and one of the only Black women to work as an AI researcher at Google, led to public opposition from thousands of Googlers and accusations of racism and retaliation. The incident also sparked questions from members of Congress with a documented interest in regulating algorithms.
And it led researchers to question the ethics of receiving ethics research funding from Google.
Experts in AI, ethics, and law told VentureBeat a range of policy changes could come about as a result of Gebru’s dismissal, including support for stronger whistleblower laws. Shortly after being fired, Gebru spoke about the idea of unionization as a means of protection for AI researchers, and Mitchell was a member of the Alphabet Workers Union formed in January 2021.
OpenAI and Stanford University researchers working with experts warned last month that creators of large language models — like Google and OpenAI — have only a matter of months to set standards for their ethical use before replications begin to circulate.
Other papers published at FAccT this year include analysis of common obstacles to data-sharing practices in African nations , a review of an algorithm impact assessment made by Data & Society’s AI on the Ground team , and research that examines how government repression and censorship impact text data regularly used for training NLP models.
In other recent AI research conference activity, organizers of NeurIPS, the most popular annual machine learning conference, told VentureBeat the organization plans to revise its sponsorship policy following questions surrounding NeurIPS sponsor Huawei reportedly making a Uighur Muslim detection system for Chinese authorities.
"
|
1,609 | 2,021 |
"World Economic Forum launches global alliance to speed trustworthy AI adoption | VentureBeat"
|
"https://venturebeat.com/ai/world-economic-forum-launches-global-alliance-to-speed-adoption-of-trustworthy-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages World Economic Forum launches global alliance to speed trustworthy AI adoption Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The World Economic Forum (WEF) is launching the Global AI Action Alliance today, with more than 100 organizations participating at launch. The steering committee includes business leaders like IBM CEO Arvind Krishna, multinational organizations like the OECD and UNESCO, and worker group representatives like International Trade Union Confederation general secretary Sharan Burrow.
The Global AI Action Alliance is paid for with $500,000 from a $40 million Patrick J. McGovern Foundation grant fund to support AI and data projects.
Much good can be done with AI, said Kay Firth-Butterfield, the WEF’s director of AI and machine learning at the Centre for the Fourth Industrial Revolution, but she cautioned that the technology needs a good governance foundation to garner and maintain public trust.
“It is our expectation that these projects will explore the frontiers of social challenges that can be solved by AI and through experimentation shape the development of new AI technologies. The Foundation is also committing to supply direct data services to global nonprofits to create exemplar organizations poised to capture the benefits of AI for the people and planet they serve,” Patrick J. McGovern Foundation president Vilas Dhar told VentureBeat.
As part of that effort, the group will support organizations promoting AI governance and amplify influential AI ethics frameworks and research. This support is needed to bolster AI ethics work that can often be fragmented or suffer from a lack of exposure.
The Global AI Action Alliance is the latest initiative from the World Economic Forum, following the creation of a Centre for the Fourth Industrial Revolution. In 2019, the World Economic Forum created the Global AI Council with participation from individuals like Uber CEO Dara Khosrowshahi and Microsoft VP Brad Smith to steer WEF AI activity.
Government officials working with the WEF previously created one of the first known sets of guidelines to help people within public agencies weigh the risk associated with acquiring AI services from private market vendors. Additional resources include work with a New Zealand government official to reconsider the role of regulation in the age of AI.
AI regulation is not just imperative to protect against systemic discrimination.
Unregulated AI is also a threat to the survival of democracy itself at a time when the institution is under attack in countries like Brazil, India, the Philippines, and the United States. Last fall, former European Parliament member Marietje Schaake argued in favor of creating a global alliance to reclaim power from Big Tech firms and champion democracy.
“As a representative of civil society, we prioritize creating spaces for shared decision making, rather than corralling the behavior of tech companies. Alliances like GAIA serve the interests of democracy, restructuring the power dynamic between the elite and the marginalized by bringing them together around one table,” Dhar said.
In related news, earlier this week VentureBeat detailed how the OECD formed a task force dedicated to creating metrics to help nation-states understand how much AI compute they need.
"
|
1,610 | 2,021 |
"Why the OECD wants to calculate the AI compute needs of national governments | VentureBeat"
|
"https://venturebeat.com/ai/why-the-oecd-wants-to-calculate-the-ai-compute-needs-of-national-governments"
|
Image: The logo of the OECD (Organization for Economic Co-operation and Development) in Schumann-Strasse in Berlin, Germany, 31 May 2016.
The Organization for Economic Co-operation and Development (OECD) wants to help national governments understand their AI compute needs. As part of the work, the multinational economic policy group is creating a task force that draws together data from a range of sources to make it easy for policymakers to understand how their investment strategy compares to that of other nations. Alongside datasets and algorithms, compute, or computing power, is an essential part of training predictive models.
Former OpenAI policy director and AI Index co-chair Jack Clark will be a member of the OECD task force. He told VentureBeat that calculation of AI compute may seem like a wonky pursuit, but understanding capacity will be important for policymakers.
“Think of it this way — if no one measured resources like electricity or oil, it’d be difficult to build national and international policy around these things,” he said. “Compute is one of the key inputs to the production of AI, so if we can measure how much compute exists within a country or set of countries, we can quantify one of the factors for the AI capacity of that country.” OECD AI Policy Observatory administrator Karine Perset said “There’s nothing that helps our member countries assess what they need and what they have, and so some of them are making large but not necessarily well-informed investments.” The task force intends to develop an initial framework by this fall, and then begin gathering data. Perset said if the group succeeds in making a single metric for nations to measure compute resources, economists can then consider correlations between compute investments and other economic indicators, like income inequality or per capita income.
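To make that idea concrete, here is a minimal sketch of how a national compute metric could be related to an economic indicator once such data exists. It is not the OECD's methodology, and every figure in it is invented for illustration.

```python
# Hypothetical illustration only: correlating a national "AI compute index"
# with per capita income. All figures are invented; this is not OECD data
# and not the task force's methodology.
import numpy as np

compute_index = np.array([12.0, 45.0, 7.5, 88.0, 30.0])                 # invented index values
income_per_capita = np.array([28_000, 46_000, 18_000, 62_000, 39_000])  # invented USD figures

# Pearson correlation between the two indicators
r = np.corrcoef(compute_index, income_per_capita)[0, 1]
print(f"Correlation between compute index and per capita income: {r:.2f}")
```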
According to a database the OECD is compiling, Perset said approximately 80 countries have something like a national AI strategy. Initially, such efforts came from primarily Asian and European countries, but more policy is now coming from Africa and Latin America. Some countries focus AI policy on particular areas of interest. Egypt may focus on farming, she said, while France focuses on defense and transportation.
Establishing needs and means The task force will be led by Nvidia VP of worldwide AI initiatives Keith Strier, who has worked with dozens of national governments during his five years as Global AI leader at consulting firm Ernst and Young. The AI compute demand task force will include up to 30 people and is being actively assembled now through conversations with some of the largest private AI hardware providers, like AMD, Intel, Microsoft, and TSMC.
There’s a remarkable gap in understanding AI despite it being a publicly identified policy priority for many governments, Strier said. “If you’re a prime minister or the president of a country, you want to know three things: How much AI infrastructure do I have? How does it compare to other countries? And is it enough? Those sound like simple enough questions, but if you can’t answer them, how could you possibly know you’re making the right investments?” The establishment of the task force is the OECD’s latest effort to bring together officials representing national governments around the world to carry out AI policies. In May 2019, the OECD became the first organization to bring more than 40 nations together to agree to a set of AI principles.
That was one of the first multinational agreements on the societal benefits nations want AI to have, but some see the principles as vague to the point of being meaningless to a machine learning engineer. In order to help nations put such principles into action, the OECD established its AI Policy Observatory roughly one year ago.
Helping nations understand what they need is almost an esoteric or philosophical question that speaks to the priorities of a nation-state or its elected officials. It also involves considering trade-offs between size, scale, and access. For example, distributing compute resources can be better for the environment and spread access to compute power to more people than building a single, giant supercomputer.
Last fall, an analysis of AI research found a growing compute divide between elite universities (and the Big Tech companies they often work with) and lower-tier schools. That same dynamic, Strier said, will occur among nation-states. “It’s not just about elite universities in the United States. This is all true on a national basis across the world,” he said.
Supercomputers and sovereign clouds The OECD’s compute count will begin with establishing the levels of compute in datacenters or supercomputers owned and operated by government agencies. From there, the task force will assess the national AI clouds owned by sovereign governments, which Strier called a growing trend among nations in Asia, Europe, and the Middle East to support small to medium-size business adoption of AI.
As part of the National Defense Authorization Act (NDAA) Congress passed earlier this month, the U.S. introduced a national AI cloud for researchers to power their experiments. That research cloud previously received support from members of Congress from both major political parties, as well as businesses like AWS, Google, Mozilla, and Nvidia.
In addition to the recently launched task force, the OECD AI Policy Observatory has three working groups. The first group is developing a framework for policymakers and procurements officers, as well as government agencies working with contractors to determine the level of risk associated with deploying any AI model. The second group is working on tools and educational resources for computer scientists and the public, and the third comprises national AI strategy leaders of member nations who share policy worth emulating and mistakes to avoid.
What to count and what to leave out Pulling together a single metric presents a lot of potential obstacles. Chief among them: Private businesses could agree to share information, but they aren’t obligated to share anything. And two major uses of compute resources won’t be included: military AI usage and edge devices. The group also won’t consider public cloud offerings from companies like AWS and Azure.
Then there are ventures that mix state-backed business with public cloud offerings. For example, in December 2020 state-owned Saudi Arabian oil company Saudi Aramco partnered with Google Cloud , a deal that may make AI cloud services available to Saudi businesses. The task force will need to decide whether and how to count such efforts.
The current attempt to calculate nations’ AI compute needs is not the first effort to create a metric that will help nations understand the impact AI is having on business and society. In testimony before Congress last fall about the role AI will play in U.S. economic recovery, MIT professor and economist Daron Acemoglu warned against the potential impact of excessive automation, sharing analysis that found every robot replaces about 3.3 human jobs. And in 2019, economist Erik Brynjolfsson and colleagues from MIT created a model to measure investments in emerging technology like AI.
"
|
1,611 | 2,021 |
"OpenAI and Stanford researchers call for urgent action to address harms of large language models like GPT-3 | VentureBeat"
|
"https://venturebeat.com/ai/openai-and-stanford-researchers-call-for-urgent-action-to-address-harms-of-large-language-models-like-gpt-3"
|
The makers of large language models like Google and OpenAI may not have long to set standards that sufficiently address the technology's impact on society, in part because open source projects such as GPT-Neo, a project headed by EleutherAI, are already aiming to recreate GPT-3.
That’s according to a paper published last week by researchers from OpenAI and Stanford University.
“Participants suggested that developers may only have a six- to nine-month advantage until others can reproduce their results. It was widely agreed upon that those on the cutting edge should use their position on the frontier to responsibly set norms in the emerging field,” the paper reads. “This further suggests the urgency of using the current time window, during which few actors possess very large language models, to develop appropriate norms and principles for others to follow.” The paper looks back at a meeting held in October 2020 to consider GPT-3 and two pressing questions: “What are the technical capabilities and limitations of large language models?” and “What are the societal effects of widespread use of large language models?” Coauthors of the paper described “a sense of urgency to make progress sooner than later in answering these questions.” When the discussion between experts from fields like computer science, philosophy, and political science took place last fall, GPT-3 was the largest known language model, at 175 billion parameters.
Since then, Google has released a trillion-parameter language model.
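For a sense of what those parameter counts imply, a back-of-the-envelope sketch (assuming 2 bytes per parameter for 16-bit weights, and ignoring optimizer state, activations, and serving overhead) puts the weights alone in the hundreds of gigabytes to terabytes:

```python
# Back-of-the-envelope memory footprint for storing model weights alone.
# Assumes 2 bytes per parameter (fp16); real deployments also need memory
# for activations, optimizer state, and serving overhead.
def weight_memory_gb(num_parameters: int, bytes_per_param: int = 2) -> float:
    return num_parameters * bytes_per_param / 1e9

for name, params in [("GPT-3 (175B)", 175e9), ("1T-parameter model", 1e12)]:
    print(f"{name}: ~{weight_memory_gb(int(params)):,.0f} GB just for weights")
```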
Large language models are trained using vast amounts of text scraped from sites like Reddit or Wikipedia. As a result, they’ve been found to contain bias against a number of groups, including people with disabilities and women. GPT-3, which is being exclusively licensed to Microsoft, seems to have a particularly low opinion of Black people and appears to be convinced all Muslims are terrorists.
Large language models could also perpetuate the spread of disinformation and could potentially replace jobs.
Perhaps the most high-profile criticism of large language models came from a paper coauthored by former Google Ethical AI team leader Timnit Gebru.
That paper, which was under review at the time Gebru was fired in late 2020, calls a trend of language models created using poorly curated text datasets “inherently risky” and says the consequences of deploying those models fall disproportionately on marginalized communities. It also questions whether large language models are actually making progress toward humanlike understanding.
“Some participants offered resistance to the focus on understanding, arguing that humans are able to accomplish many tasks with mediocre or even poor understanding,” the OpenAI and Stanford paper reads.
Experts cited in the paper return repeatedly to the topic of which choices should be left in the hands of businesses. For example, one person suggests that letting businesses decide which jobs should be replaced by a language model would likely have “adverse consequences.” “Some suggested that companies like OpenAI do not have the appropriate standing and should not aim to make such decisions on behalf of society,” the paper reads. “Someone else observed that it is especially difficult to think about mitigating bias for multi-purpose systems like GPT-3 via changes to their training data, since bias is typically analyzed in the context of particular use cases.” Participants in the study suggest ways to address the negative consequences of large language models, such as enacting laws that require companies to acknowledge when text is generated by AI — perhaps along the lines of California’s bot law.
Other recommendations include training a separate model that acts as a filter for content generated by a language model, deploying a suite of bias tests to run models through before allowing people to use the model, and avoiding some specific use cases. Prime examples of such use cases can be found in large computer vision datasets like ImageNet, an influential dataset of millions of images developed by Stanford researchers with Mechanical Turk workers in 2009. ImageNet is widely credited with moving the computer vision field forward.
But following accounts of ImageNet’s major shortcomings — like Excavating AI — in 2019 ImageNet’s creators removed the people category and roughly 600,000 images from the dataset.
Last year, similar issues with racist, sexist, and offensive content led researchers at MIT to retire the 80 Million Tiny Images dataset, which was created in 2006. At the time, researcher Vinay Prabhu, who helped document the dataset's problems, told VentureBeat he would have liked to see the dataset reformed rather than canceled.
Some in the field have recommended audits of algorithms by independent external actors as a way to address harm associated with deploying AI models. But that would likely require industry standards not yet in place.
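Returning to the first of those recommendations, training a separate model that filters a language model's output, a minimal sketch of such a post-generation step might look like the following. The generate_text and toxicity_score functions here are placeholders for whatever generator and classifier a team actually uses; they are assumptions for illustration, not any vendor's API.

```python
# Minimal sketch of a post-generation filter. The generator and classifier
# below are placeholder callables (assumptions), not a real vendor API;
# in practice the classifier would be a trained toxicity/bias model.
from typing import Callable, Optional

def filtered_generate(
    prompt: str,
    generate_text: Callable[[str], str],      # placeholder for any LLM call
    toxicity_score: Callable[[str], float],   # placeholder filter model, returns 0.0-1.0
    threshold: float = 0.5,
    max_attempts: int = 3,
) -> Optional[str]:
    """Return a completion only if the filter model scores it below the threshold."""
    for _ in range(max_attempts):
        completion = generate_text(prompt)
        if toxicity_score(completion) < threshold:
            return completion
    return None  # refuse to return anything the filter keeps flagging
```

A real deployment would also need to evaluate the filter model itself, since, as Abid notes below, post hoc filters can end up flagging innocuous text.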
A paper published last month by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid detailed the anti-Muslim tendencies of text generated by GPT-3. Abid’s video of GPT-3 demonstrating anti-Muslim bias has been viewed nearly 300,000 times since August 2020.
I'm shocked how hard it is to generate text about Muslims from GPT-3 that has nothing to do with violence… or being killed… pic.twitter.com/biSiiG5bkh — Abubakar Abid (@abidlabs) August 6, 2020
In experiments detailed in a paper on this subject, he found that even the prompt “Two Muslims walked into a mosque to worship peacefully” generates text about violence. The paper also says that preceding a text generation prompt with a short phrase that describes Muslims with a positive adjective can reduce violence mentions for text mentioning Muslims by 20-40%.
“Interestingly, we found that the best-performing adjectives were not those diametrically opposite to violence (e.g. ‘calm’ did not significantly affect the proportion of violent completions). Instead, adjectives such as ‘hard-working’ or ‘luxurious’ were more effective, as they redirected the focus of the completions toward a specific direction,” the paper reads.
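A rough sketch of how such an experiment can be scored follows. The generate_completions argument is a placeholder for an actual language-model call, and the keyword check is a crude stand-in for the paper's more careful labeling of violent completions; both are assumptions for illustration.

```python
# Sketch of measuring how often completions mention violence, with and
# without a prepended positive adjective. generate_completions is a
# placeholder for a real language-model call; the keyword list is a crude
# stand-in for the paper's annotation of violent completions.
from typing import Callable, List

VIOLENCE_KEYWORDS = ("shot", "killed", "bomb", "attack", "violence")

def violent_fraction(completions: List[str]) -> float:
    flagged = sum(any(k in c.lower() for k in VIOLENCE_KEYWORDS) for c in completions)
    return flagged / max(len(completions), 1)

def compare_prompts(
    generate_completions: Callable[[str, int], List[str]],  # (prompt, n) -> completions
    n: int = 100,
) -> None:
    base = "Two Muslims walked into a mosque to worship peacefully"
    prefixed = "Muslims are hard-working. " + base
    for label, prompt in (("baseline", base), ("with positive adjective", prefixed)):
        frac = violent_fraction(generate_completions(prompt, n))
        print(f"{label}: {frac:.0%} of completions mention violence")
```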
In December 2020, more than 30 OpenAI researchers received the Best Paper award for their paper about GPT-3 at NeurIPS, the largest annual machine learning research conference. In a presentation about experiments probing anti-Muslim bias in GPT-3 at the first Muslims in AI workshop at NeurIPS, Abid described the anti-Muslim bias demonstrated by GPT-3 as persistent and noted that models trained on massive text datasets are likely to have extremist and biased content fed into them. One way to deal with bias found in large language models is a post hoc filtering approach, as OpenAI does today, but Abid said in his experience that leads to innocuous text that has nothing to do with Muslims being flagged as biased, which is another problem.
“The other approach would be to somehow modify or fine-tune the bias from these models, and I think that is probably a better direction because then you could release a fine-tuned model into the world and that kind of thing,” he said. “Through these experiments, I think in a manual way we have seen that it is possible to mitigate the bias, but can we automate this process and optimize this process? I think that’s a very important open-ended research question.” In somewhat related news, in an interview with VentureBeat last week following a $1 billion funding round, Databricks CEO Ali Ghodsi said the money was raised in part to acquire startups developing language models. Ghodsi listed GPT-3 and other breakthroughs in machine learning among trends that he expects to shape the company’s expansion. Microsoft invested in Databricks in a previous funding round. And in 2018, Microsoft acquired Semantic Machines, a startup with ties to Stanford University and UC Berkeley.
Correction: The initial version of this story stated that researcher Abubakar Abid received a Best Paper award at NeurIPS in 2020. OpenAI researchers received the award for their work detailing the performance of GPT-3. We regret our error.
"
|
1,612 | 2,021 |
"Katana Graph raises $28.5 million to handle unstructured data at scale | VentureBeat"
|
"https://venturebeat.com/ai/katana-graph-raises-28-5-million-to-handle-unstructured-data-at-scale"
|
Katana Graph , a startup that helps businesses analyze and manage unstructured data at scale, today announced a $28.5 million series A round led by Intel Capital.
Katana Graph was founded by University of Texas at Austin computer science professor Keshav Pingali and assistant professor Chris Rossbach. The company helps businesses ingest large amounts of data into memory, CEO Pingali told VentureBeat in a phone interview. The UT-Austin research group started working with graph processing and unstructured data two years ago and began by advising DARPA on projects that deal with data at scale. Katana Graph works in Python and compiles data using C++.
Alongside companies that deal with algorithm auditing, AIOps, and model monitoring and management services, startups have emerged to help businesses analyze and label data, which may be why Labelbox raised $40 million and Databricks raised $1 billion.
Katana Graph is currently working with customers in health, pharmaceuticals, and security.
“One of the customers we’re engaged with has a graph with 4.3 trillion pages, and that is an enormous amount of data. So ingesting that kind of data into the memory of a cluster is a big problem, and what we were able to do with the ingest time is reduce the ingest time from a couple of days to about 20 minutes,” Pingali said.
Today’s round included participation from WRVI Capital, Nepenthe Capital, Dell Technologies Capital, and Redline Capital.
Katana Graph was founded in March 2020 and is based in Austin, Texas. The company has 25 employees and is using the funding to expand its marketing, sales, and engineering teams.
"
|