Dataset columns: id (int64, 0 to 17.2k) · year (int64, 2,000 to 2,020) · title (string, 7 to 208 chars) · url (string, 20 to 263 chars) · text (string, 852 to 324k chars)
id: 1213, year: 2018
title: Stepping Into an Amazon Store Helps It Get Inside Your Head | WIRED
url: https://www.wired.com/story/stepping-into-amazon-store-helps-get-inside-your-head
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Stepping Into an Amazon Store Helps It Get Inside Your Head The interior of an Amazon Go store in Seattle. Amazon Save this story Save Save this story Save Infrared light flooded down invisibly as I eyed the pastries in Amazon ’s new convenience store in downtown San Francisco. It helped cameras mounted on the store’s ceiling detect that I picked up a croissant, then put it back. My flirtation with a $3.19 morsel of flaky pastry was recorded during a preview of the Amazon Go store that opened in San Francisco’s financial district this morning. As in the five other such stores in Seattle and Chicago, shoppers gain entry by scanning a QR code in the Amazon Go mobile app to open a subway-style entry gate. Hundreds of cameras on the ceiling, plus sensors in the shelves, then record what each person picks up, so they can walk out without having to visit a checkout. Amazon Go’s design offers shoppers an eerie freedom. Breezing out of a store without breaking stride feels efficient but also a little like shoplifting. Less perceptibly, the cameras and shelf sensors also log data that provides Amazon a remarkable view of what people do in a physical store. “Anything that you pick up, anything that you put back, is kept track of,” says Dilip Kumar, the vice president responsible for the technology behind Amazon Go. Tracking shoppers like that puts Amazon in a position to teleport some of the data-driven tactics from its dominant online shopping business to the physical world. Kumar likened the way his system logged my moment of temptation with the croissant to how the company’s online store logs what people click. “It's the equivalent of interest on an Amazon detail page,” he said. Peter Fader, a professor of marketing at the University of Pennsylvania's Wharton School, compares the data that Amazon Go stores could collect to an earlier retail revolution—the debut of barcode scanners. That technology also streamlined the experience for shoppers while revealing new information to store owners, such as which items were frequently bought together. When someone visits an Amazon Go store, the Seattle company's technology can see not just what they bought but what they picked up and discarded, and also in what order they handled different items. That information could be used to optimize a store’s selection and design, and to power personalized marketing messages, Fader says. “It becomes possible to figure out what’s the bait to attract and retain and build relationships with the most valuable customers,” he says. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Cashier-less systems can collect impressively detailed data on shoppers. Standard Cognition is one of several startups offering similar technology to existing retailers. 
The company boasts that, unlike Amazon Go, its system uses only overhead cameras, not shelf sensors, making it easy to deploy in existing stores. Evan Shiue, who leads strategy at Standard Cognition, says the startup’s technology can log things like how often people picking up a can of Pepsi look at its nutrition panel, or whether they put it down and buy a Coke instead. “You get a lot of great behavioral data in ecommerce—what I added to my basket and then took out and how long I looked at a particular item,” says Shiue. “We can now do this in bricks and mortar through computer vision.”

Visitors to an Amazon Go store are watched closely from above by conventional and depth cameras—aided by infrared illumination—from the moment they scan their app to gain entry. Amazon trained machine-learning algorithms to recognize when items have been picked up using thousands of hours of footage of people grabbing items from shelves, Kumar says. Weight sensors in a store’s shelves help the system confirm what item, and how many, a person has taken.

Kumar wouldn’t discuss in detail how Amazon might use the data it collects in Amazon Go stores. He acknowledged that the company could one day combine information from my visit with my record of Amazon.com purchases—potentially helping customer data analysts at both businesses—but said that wasn’t the project’s “primary purpose.” A company spokesperson said that any sensitive data collected by Amazon Go stores is treated in accordance with Amazon’s existing data security policies, and directed WIRED to a privacy notice in the convenience stores’ app.

The success of other companies that use technology to merge online and offline customer tracking suggests that Amazon could reap considerable benefits. Stacy Smollin Schwartz, a professor at Rutgers Business School, points to Starbucks’ popular mobile app, which uses loyalty points and personalized challenges to lure customers into stores, and Disney’s MagicBand wristbands that track visitors’ perambulations around theme parks. Both have shown that a digital lens on a person’s real-world actions can open up new ways to change their behavior. For Amazon, that might mean sending people targeted promotions through the Amazon Go app—for example, to encourage someone who grabs a daily breakfast sandwich to also drop by for lunch. “Knowing people’s habits and being able to individually try to manipulate those habits to increase the individual loyalty and profitability of each customer is very valuable,” says Smollin Schwartz.

A few minutes after I walked out of the store’s electronic gate Friday, the Amazon Go app buzzed for my attention. It showed an itemized receipt for the Advil, curry paste, and Amazon Go-branded chocolate bar I had grabbed from the shelves and stuffed into my pockets. The price total and tax were noted. Not visible—what Amazon learned about me.
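The pick-and-put-back logging Kumar describes maps naturally onto an event stream. Below is a minimal, hypothetical sketch in Python (the event format and function names are assumptions for illustration, not Amazon's implementation) of how such a stream could yield both a checkout-free receipt and the detail-page-style "interest" signal he mentions:

```python
# Illustrative sketch only: reduce pick/put-back events from cameras
# and shelf sensors to a virtual cart plus an "interest log" that
# resembles ecommerce clickstream data.
from collections import Counter

def summarize_session(events):
    """events: ordered list of (action, item) tuples, where action is
    'pick' or 'putback'. Returns the final cart and every item the
    shopper showed interest in, including items they put back."""
    cart = Counter()
    interest = []
    for action, item in events:
        interest.append((action, item))
        if action == "pick":
            cart[item] += 1
        elif action == "putback" and cart[item] > 0:
            cart[item] -= 1
    final_cart = {item: n for item, n in cart.items() if n > 0}
    return final_cart, interest

# The croissant moment from the story: picked up, then put back.
events = [("pick", "croissant"), ("putback", "croissant"),
          ("pick", "advil"), ("pick", "curry paste"),
          ("pick", "chocolate bar")]
cart, interest = summarize_session(events)
print(cart)      # {'advil': 1, 'curry paste': 1, 'chocolate bar': 1}
print(interest)  # still includes the croissant: the "interest" signal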
id: 1214, year: 2018
title: How Grubhub Analyzed 4,000 Dishes to Predict Your Next Order | WIRED
url: https://www.wired.com/story/how-grubhub-analyzed-4000-dishes-to-predict-your-next-order
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Adam Rogers Business How Grubhub Analyzed 4,000 Dishes to Predict Your Next Order Jonathan Kitchen/Getty Images Save this story Save Save this story Save All Matt Maloney wanted to know was whether Chicago-style deep dish pizza is better than New York-style thin crust. It’s a simple question. If he were anyone else, Maloney would have had to get violently anecdotal. Deep dish, while delicious, is obviously not so much a pizza as a casserole; conversely, if you want to put pizza toppings on a cracker, why not just order a flatbread? (Maloney is from Chicago, so you can guess which side he comes down on.) But no. Maloney felt like he should be able to literally answer the question. Because in addition to being deeply dishian, he’s also the CEO of Grubhub, the biggest online food-delivery service in the US. “Given the volume of transactions I do on a daily basis,” Maloney says, “I should be able to tell you, objectively, which is better.” Don’t let’s fight about whether “popular” equals “better.” Because broadly, Maloney is of course right. With 14.5 million active users ordering from 80,000 restaurants, Grubhub data ought to be able to tell you a lot about food. Maloney wanted to be able to segment, quantify, and compare who was ordering what across neighborhoods and cities. He wanted to algorithmically recommend dishes, help restaurants optimize their food choices, attract new customers with slicker service, and frankly get customers all over the country to act more like New Yorkers, who order from somewhere at least once a week. Today Grubhub does indeed have an algorithm that can look across a country’s worth of take-out orders and tell a user what Indian joint near them delivers the most popular chicken tikka masala. But getting there required solving a seemingly impossible data problem, some high-end machine learning, and a cookbook author from Brooklyn. The problem was the data. Not the orders—the who-orders-what and from-where. Those are easy. It was the menus. Nobody’s dishes matched, each one was unique. A pilaf from one restaurant might be a biryani at another. Japanese curries weren’t Indian curries weren’t Pakistani curries. They worked on it for eight years. “Every time, the product and tech groups came back and said, ‘Matt, this is way too hard. Ultimately, to get what you want, it’s going to be a manual solution and we have 10 other things that are a priority,’” Maloney says. His response: “Guys. We’re a multibillion-dollar company and we can’t tell people what the intrinsic value of these fucking dishes are? We can’t even compare pad thai across the country?” “So I made them do it,” Maloney says. 
Grubhub is only a multibillion-dollar company in the volume of food it moves, not in its revenues, but even so: What Maloney wanted is a tricky problem. That’s because of the unstructured, sui generis nature of restaurant menus. If you don’t have a methodology designed to produce data ready-made for statistical analysis, you’re using “found” data, which is always messy, says Duncan Watts, a social scientist at Microsoft Research. “In data science there’s a trope about how 90 percent of the work involved is cleaning and organizing the data itself,” Watts says. “It’s true for email data, browser data, Twitter data, news media data, and even administrative data that’s supposed to be clean.”

As usual, the whole system would be a lot simpler without people in it. If you’re trying to build a recommendation engine for, say, a vast streaming entertainment service, well, most people don’t watch the same movie over and over. So you get a spread on their behavior. That might be less true when it comes to dinner orders. “I’ve read some papers that say there are explorer types and there are the types who say, ‘this is my favorite restaurant, so why should I go anywhere else?’” says Joel Sokol, director of the Master of Science in Analytics degree at Georgia Tech. So they might not want a new recommendation, no matter how perfect. “That’s really more a business problem than a data problem,” Sokol says.

Most products in ecommerce have agreed-upon metadata, so-called stock-keeping units (or SKUs) that numerically keep track of inventory. As a result, “buying, navigating, discovering, personalizing, and recommending are relatively easy because everything looks the same to everyone,” says Maria Belousova, Grubhub’s CTO. “When it gets to food, it’s completely the opposite. Grubhub and every other company were trading paragraphs of text with a title and a price tag.” A chef who used a regional, nonstandard spelling on the name of a dish rendered that menu incompatible with others that used a standard spelling. Leave out an ingredient and suddenly it’s a different dish. Belousova says the way to reconcile such differences is often through “collaborative filtering, meaning people who like this also like that.” But she says that for hyperlocal businesses like neighborhood restaurants, collaborative filtering doesn’t work well. There aren’t enough people to collaborate and there aren’t enough options to filter. The universe of choices and choosers is too small.

In the parlance of data scientists, food is an unstructured domain. Grubhub had 14 million menu items and the only thing they had in common was that sometimes people ate them. So Belousova’s team set out to build its own taxonomy of food. They realized they had three independent but overlapping datasets.
First they had the menus, full of the unique snowflake language each restaurant used for each dish but with some commonalities. Luckily, since restaurants give their menus to Grubhub and Grubhub translates them for the website, the people making the food are incentivized to give a lot of information. Second, Grubhub had user search logs and reviews. Those could show what people looked for and what they eventually ordered. And the company could limit the production of that data to actual, knowledgeable customers, since the service only gives reviewing rights to those who’ve actually ordered food. That only works on a platform where people are talking about stuff they’ve purchased; someplace like, oh, say, Yelp ends up being more of a free-for-all and can be less useful. And third, they had order history for customers and, maybe more importantly, the volume of orders for each menu item. In this construction, more orders per item tells you that the specific item is of high quality—or at least is popular, which, yes, isn’t necessarily the same thing. But one might be a proxy for the other.

The tech team built an algorithm that could ingest all that data and begin to understand what the menus were actually saying. Almost. Because then they had to define what “is” is. Which is to say, like, what are bagels, really? What if the menu doesn’t call the boiled-dough baked round-with-a-hole bread product served with cream cheese and lox a bagel? It’s still a bagel, right? This is a problem of nomenclature, and the algorithm was supposed to learn not only what a basic food is, from adobo to zaataar, but its characteristics—culinary metadata like spicy-versus-mild or vegetarian or what culture it hails from. Grubhub’s data team learned to extract significant terms from menus and overlay that with search terms, and whether they ended with orders or not. “We were envisioning a graph of dishes in the cloud, connected to each other,” Belousova says. “You need chefs, diner vocabulary, and order vocabulary. Overlay those three datasets together and you get those relationships.” It was a feedback loop innovative enough that they filed a patent on it.

But, yeah, so, it didn’t work. That’s not totally fair. “You can cover maybe 35 to 40 percent of every menu if you have a good algorithm,” Maloney says. “But all the corner cases were unique.” Grubhub went looking for help. It came in the shape of Melissa Schreiber, a culinary school grad and author of two books about the food of Brooklyn. “I came in and they handed me the classifications of all the menu items on our platform, and they weren’t organized into usable categories for search,” Schreiber says. “I basically tuned up what the data had turned up.” Schreiber created a cuisine dictionary for the data team that broke down the ingredients in many of the dishes, an internal document that included names of cuisines, history, sometimes maps to show the geographic relationships. She built decks to explain to the data scientists dishes that didn’t have obvious names. “The taxonomy was obviously data driven, and it needed that human touch, that finesse of somebody that understood food more than data,” Schreiber says.
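To make the pipeline described above concrete, here is a deliberately tiny sketch of the core idea (the taxonomy, vocabulary, and matching rule are invented for illustration; Grubhub's patented system is far richer): map free-text menu items to canonical dishes by overlapping their tokens with vocabulary gathered from menus, searches, and a human-curated dictionary. It also shows why "all the corner cases were unique":

```python
# Minimal sketch (assumed structure, not Grubhub's pipeline): classify
# a free-text menu item by token overlap with a small dish taxonomy.
import re

# Toy taxonomy: canonical dish -> vocabulary seen in menus and searches.
TAXONOMY = {
    "biryani": {"biryani", "biriyani", "pilaf", "pulao"},
    "pad thai": {"pad", "thai", "noodles"},
    "bagel": {"bagel", "lox", "schmear"},
}

def tokenize(name):
    return set(re.findall(r"[a-z]+", name.lower()))

def classify(menu_item):
    """Return the canonical dish whose vocabulary overlaps most."""
    tokens = tokenize(menu_item)
    best, best_score = None, 0
    for dish, vocab in TAXONOMY.items():
        score = len(tokens & vocab)
        if score > best_score:
            best, best_score = dish, score
    return best  # None when nothing overlaps: a "corner case"

print(classify("Chicken Biriyani with raita"))  # biryani
print(classify("Pad Thai Noodles"))             # pad thai
print(classify("Sushiritto"))                   # None -> needs a human
```

A rule this crude tops out quickly, which is roughly Maloney's "35 to 40 percent of every menu" point; the long tail is what required Schreiber's cuisine dictionary.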
She helped the team map dishes to cuisines, drawing lines like the one between Japanese curry rice and Indian curries, let’s say, or how to separate tacos from burritos. “Do you have Sushiritto in San Francisco?” Schreiber asks me. “That was weeks of conversation. Is it sushi? Is it a burrito? Every time someone would go they’d take a picture of it and post it to me.” All that fed back into making search more rational. If you’re looking for fish, do you want Dover sole or chirashi? When you order Chinese, maybe you think about the protein first, whereas with Mexican maybe you’re thinking, torta or combinacion? The data team took Schreiber’s edits and incorporated them into the search and recommendation algorithms.

The result? A taxonomy of about 4,000 dishes, with every item in the menu database classified into multiple categories and subcategories. It’s not as sophisticated as what a data scientist might crave, but it does break into ideas as disparate as appetizers versus mains and healthy versus pizza. “Our system is a vector of preference,” says Belousova, somewhat cryptically. “Now that you understand what every menu item is and what every diner likes, you can tie things together.”

Order from Grubhub a lot, and the system will build a taste profile for you and then suggest restaurants near you that match the profile, via email or a notification. Order one dish from a bunch of places, and the system will tell you where a lot of people order that dish. “If I know there’s a specific banh mi sandwich ordered 30 times by 1,000 people who live within one mile of you, that’s a good indicator that’s an amazing sandwich,” Maloney says. “If I know you’ve had six different chicken vindaloos from six restaurants with no re-orders, I know you’re looking, and I know from other people’s data what the most popular chicken vindaloo is. You better believe I’m putting that front and center for you.”

To be fair, lots of online food delivery businesses work with their data and have some kind of predictive recommendation algorithm. And it’s always challenging. “Some places are just a pizza restaurant. All they serve is pizza, and you don’t get a subcategory of ‘marinara’ or ‘margherita,’” says Enu Herzberg, head of data at Postmates. “And some places—imagine the Cheesecake Factory, with a subclass of every food on Earth.” So Postmates relies on collaborative filtering. Basically, you’ll probably like things that other people like, if they also like some of the things you like. Postmates ingests menus, too, structuring some data itself, then using natural-language processing and other techniques to make distinctions that data scientists like, such as between a “category” and an “item.” “As you’re typing in the word ‘burger,’ we’re dynamically both searching the names of merchants and scanning menus,” Herzberg says. “You always pray for a cleaner dataset, but we’re pragmatic as well.” And Postmates is also learning about timing—about the kinds of things people generally order at a given time of afternoon, or more toward the beginning of a week for lunch (salad) versus the end (fried carbohydrates).
That helps with recommendations for users, and it helps with optimizing where and when to send the people doing the deliveries. Another leading company, DoorDash, uses its data for that kind of optimization as well—for its users and maybe more interestingly for the delivery runners, which the company calls dashers. “You want to make sure the customer gets the food at the time they expect. You want to get it at the best quality from the merchant,” says Rajat Shroff, DoorDash’s VP of product. “And we want to make sure the dashers don’t waste their time waiting around.” So its algorithms do load balancing based on dasher location, delivery address, and restaurant speed. “Zero wait time. That’s what the prediction algorithms are trying to do,” Shroff says.

All of which is why it was worth it to Maloney to build the artisanal menu database. Everyone is using collaborative filters to deliver recommendations. He’d like Grubhub to offer more. It cut data-sharing deals with Yelp and Foursquare; partnered with the company that owns KFC, Pizza Hut, and Taco Bell; and it’s buying up competitors like Yelp’s Eat24 delivery directory to increase to 80,000 the number of restaurants on the list. That’s big. But the business is only going to get more competitive. A report from McKinsey says that in 2016, 30 percent of food-delivery orders came online, a figure it expects to increase to 65 percent by 2020. Morgan Stanley thinks online delivery could be a $220 billion market in 2020, 40 percent of total restaurant sales. But McKinsey says Grubhub, which connects diners to restaurants that actually handle the deliveries, will face more competition from “new delivery companies” that provide their own vehicles and logistics, giving those companies access to higher-end restaurants that want to reach customers without running their own deliveries. The Wall Street Journal points out that DoorDash just got funding to expand to 1,600 North American cities. And then, as is customary to say at this point in this kind of story, there is Amazon. In this case, logistical legerdemain that combines the Grubhub-like Amazon Restaurants with delivery from the Amazon-owned Whole Foods grocery stores could upend the whole business.

That’s why it was worth it to Maloney to tell his data team to figure out recommendations and search. That McKinsey report says that once people decide which online delivery platform to use, 80 percent of them stick with it. “Anything we can do to increase personalization and more accurately predict what you are more likely to eat is going to increase conversion rate, frequency rate, and your affinity for my platform,” Maloney says.

And that does suggest a problem with Maloney’s original pizza question. This data can tell you what people order the most, but it still can’t tell you, objectively, what kind of pizza is the best. So all I can tell you is that, according to Grubhub, Chicagoans order deep dish pizza 722 percent more than in any other place in the United States. Data doesn’t lie, but you probably could have guessed that one. The fact that every other part of the country avoids deep dish?
That’s what data scientists call “suggestive.” As a pizza scientist would say—especially one who also liked shrimp on her pie: correlation is not crustacean.
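For readers who want the mechanics of the collaborative filtering that Postmates (and, per Belousova, much of the industry) leans on, here is a bare-bones sketch on synthetic data: score a diner's unseen dishes by what similar diners ordered. The tiny matrix also illustrates Belousova's objection that hyperlocal data is often too sparse for this to work well.

```python
# Bare-bones user-based collaborative filtering on invented data; not
# any company's actual model. "You'll probably like things that other
# people like, if they also like some of the things you like."
import math

# Rows: diners. Columns: dishes. 1 = ordered, 0 = not.
dishes = ["banh mi", "pho", "deep dish", "thin crust"]
orders = {
    "ana":   [1, 1, 0, 0],
    "ben":   [1, 1, 0, 1],
    "chloe": [0, 0, 1, 0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user):
    """Score dishes the user hasn't tried by similarity-weighted
    orders from the other diners."""
    scores = [0.0] * len(dishes)
    for other, vec in orders.items():
        if other == user:
            continue
        w = cosine(orders[user], vec)
        for i, ordered in enumerate(vec):
            if ordered and not orders[user][i]:
                scores[i] += w
    ranked = sorted(zip(dishes, scores), key=lambda x: -x[1])
    return [d for d, s in ranked if s > 0]

print(recommend("ana"))  # ['thin crust']: ben is similar and orders it
```

With only three diners and four dishes, one shared order swings everything; that sparsity is exactly why Grubhub wanted a dish taxonomy rather than filtering alone.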
id: 1215, year: 2018
title: The Limits of Artificial Intelligence and Deep Learning | WIRED
url: https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jason Pontin Ideas Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning Save this story Save Save this story Save End User Research Sector IT Research Technology Machine learning Machine vision Natural language processing Neural Network Sundar Pichai, the chief executive of Google, has said that AI “is more profound than … electricity or fire.” Andrew Ng, who founded Google Brain and now invests in AI startups, wrote that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” Their enthusiasm is pardonable. There have been remarkable advances in AI , after decades of frustration. Today we can tell a voice-activated personal assistant like Alexa to “Play the band Television ,” or count on Facebook to tag our photographs; Google Translate is often almost as accurate as a human translator. Over the last half decade, billions of dollars in research funding and venture capital have flowed towards AI; it is the hottest course in computer science programs at MIT and Stanford. In Silicon Valley, newly minted AI specialists command half a million dollars in salary and stock. But there are many things that people can do quickly that smart machines cannot. Natural language is beyond deep learning; new situations baffle artificial intelligences, like cows brought up short at a cattle grid. None of these shortcomings is likely to be solved soon. Once you’ve seen you’ve seen it, you can’t un-see it: deep learning, now the dominant technique in artificial intelligence, will not lead to an AI that abstractly reasons and generalizes about the world. By itself, it is unlikely to automate ordinary human activities. Jason Pontin ( @jason_pontin ) is an Ideas contributor for WIRED. He is a senior partner at Flagship Pioneering, a firm in Boston that creates, builds, and funds companies that solve problems in health, food, and sustainability. From 2004 to 2017 he was the editor in chief and publisher of MIT Technology Review. Before that he was the editor of Red Herring magazine, a business magazine that was popular during the dot-com boom. To see why modern AI is good at a few things but bad at everything else, it helps to understand how deep learning works. Deep learning is math: a statistical method where computers learn to classify patterns using neural networks. Such networks possess inputs and outputs, a little like the neurons in our own brains; they are said to be “deep” when they possess multiple hidden layers that contain many nodes, with a blooming multitude of connections. Deep learning employs an algorithm called backpropagation, or backprop, that adjusts the mathematical weights between nodes, so that an input leads to the right output. 
In speech recognition, the phonemes c-a-t should spell the word “cat;” in image recognition, a photograph of a cat must not be labeled “a dog;” in translation, qui canem et faelem ut deos colunt should spit out “who worship dogs and cats as gods.” Deep learning is “supervised” when neural nets are trained to recognize phonemes, photographs, or the relation of Latin to English using millions or billions of prior, laboriously labeled examples.

Deep learning’s advances are the product of pattern recognition: neural networks memorize classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren’t classification problems at all. “People naively believe that if you take deep learning and scale it 100 times more layers, and add 1000 times more data, a neural net will be able to do anything a human being can do,” says François Chollet, a researcher at Google. “But that’s just not true.”

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber’s AI lab, recently published a remarkable trilogy of essays, offering a critical appraisal of deep learning. Marcus believes that deep learning is not “a universal solvent, but one tool among many.” And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve. His views are quietly shared with varying degrees of intensity by most leaders in the field, with the exceptions of Yann LeCun, the director of AI research at Facebook, who curtly dismissed the argument as “all wrong,” and Geoffrey Hinton, a professor emeritus at the University of Toronto and the grandfather of backpropagation, who sees “no evidence” of a looming obstacle.

According to skeptics like Marcus, deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge sets of training data. Brittle because when a neural net is given a “transfer test”—confronted with scenarios that differ from the examples used in training—it cannot contextualize the situation and frequently breaks. They are opaque because, unlike traditional programs with their formal, debuggable code, the parameters of neural networks can only be interpreted in terms of their weights within a mathematical geography. Consequently, they are black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.

These limitations mean that a lot of automation will prove more elusive than AI hyperbolists imagine. “A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience,” explains Pedro Domingos, the author of The Master Algorithm and a professor of computer science at the University of Washington.
“Or consider robot control: A robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch.” In January, Facebook abandoned M, a text-based virtual assistant that used humans to supplement and train a deep learning system, but never offered useful suggestions or employed language naturally. What’s wrong? “It must be that we have a better learning algorithm in our heads than anything we’ve come up with for machines,” Domingos says.

We need to invent better methods of machine learning, skeptics aver. The remedy for artificial intelligence, according to Marcus, is syncretism: combining deep learning with unsupervised learning techniques that don’t depend so much on labeled training data, as well as the old-fashioned description of the world with logical rules that dominated AI before the rise of deep learning. Marcus claims that our best model for intelligence is ourselves, and humans think in many different ways. His young children could learn general rules about language, and without many examples, but they were also born with innate capacities. “We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time,” he says. “No machine ever learned any of that stuff using backprop.”

Other researchers have different ideas. “We’ve used the same basic paradigms [for machine learning] since the 1950s,” says Pedro Domingos, “and at the end of the day, we’re going to need some new ideas.” Chollet looks for inspiration in program synthesis, programs that automatically create other programs. Hinton’s current research explores an idea he calls “capsules,” which preserves backpropagation, the algorithm for deep learning, but addresses some of its limitations. “There are a lot of core questions in AI that are completely unsolved,” says Chollet, “and even largely unasked.”

We must answer these questions because there are tasks that a lot of humans don’t want to do, such as cleaning toilets and classifying pornography, or which intelligent machines would do better, such as discovering drugs to treat diseases. More: there are things that we can’t do at all, most of which we cannot yet imagine.
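Pontin's description of backprop (adjusting the mathematical weights between nodes so that an input leads to the right output) can be demonstrated end to end in a few lines. The following is an illustrative toy, a one-hidden-layer network learning XOR, not any production system; real deep learning stacks many more layers and vastly more labeled data:

```python
# Toy supervised learning with backpropagation: four labeled examples,
# one hidden layer, weights nudged until inputs map to right outputs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # the labels

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: input -> hidden nodes -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and adjust the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

Note what the toy also shows about the critique: the network only interpolates among the examples it was trained on, which is the "brittle" and "shallow" behavior Marcus describes.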
id: 1216, year: 2018
title: How Google Pixel 3's Camera Works Wonders With Just One Rear Lens | WIRED
url: https://www.wired.com/story/google-pixel-3-camera-features
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Lauren Goode Gear How Google Pixel 3's Camera Works Wonders With Just One Rear Lens Wired Save this story Save Save this story Save When Samsung revealed the Galaxy Note 9 back in August, it showed off new AI-powered camera features, like flaw detection and a scene optimizer to tune the exposure and color of a shot before you’ve captured it. When Apple launched the iPhone XS and XS Max last month, it talked a lot about how the new phone’s AI-specific neural processor enabled better photos, especially Portrait pics. Now, it’s Google’s turn to boast about its AI-enhanced smartphone camera—and show how its software smarts and access to vast networks of data give it a leg up on the competition. Earlier today Google announced its new Google Pixel 3 and Pixel 3 XL smartphones. The new phones were expected (and had been leaked weeks beforehand), but since Google makes the vast majority of its revenue from digital advertising, any new hardware launch from the company piques a particular kind of interest. Google may not sell nearly as many phones as its flagship competitors do, but it knows that if it’s going to compete at all in the high-end smartphone market, it has to have a killer camera. The cameras on last year’s Pixel 2 and Pixel 2 XL phones were widely acknowledged to be excellent cameras. How was it going to make this year’s phones exceptional? The answer, for Google, was clear: Anything you can do in AI, we can do better. The challenge was “not to launch gimmicky features, but to be very thoughtful about them, with the intent to let Google do things for you on the phone,” said Mario Queiroz, vice president of product management at Google. At the same time, being thoughtful about using AI in photography also means being careful not to insert biases. This is something that Google has had to reckon with in the past, when its image-labeling technology made a terrible mistake ; underscoring the challenges of using software to categorize photos. Google doing more things for you, as Queiroz put it, means it’s making more decisions around what a “good” photo looks like. The company’s work on the Pixel 3 camera started before the Pixel 2 phone even launched, according to Isaac Reynolds, a product manager on the Google Pixel camera team. “If the phone starts somewhere between 12 to 24 months in advance [of shipping], the camera starts six to eight months before that,” he says. “We’ve been thinking about the Pixel 3 camera for a long time, certainly more than a year.” During that time period, the Pixel camera team identified several features—as many as 10, though not all would make it into the phone—that Google’s computational photography researchers were working on. “It’s not, ‘Hey let’s assign a team to this particular project.’ We have a whole team that’s already researching these things,” says Sabrina Ellis, director of product management for Pixel. “For example, low light is an entire area of research for us. 
And the question becomes, ‘Is this something that would be a great feature for users or not?’” Eventually, the Pixel team narrowed down the list to include the camera features that were both technically possible and actually useful. For example, new features called Top Shot, Photobooth, Super Res Zoom, and Motion Auto Focus all use artificial intelligence and machine learning to either identify or compensate for all our human fallibility. (Turns out, we’re not very good at standing still while taking photos.)

To be sure, some of the improvements to the Google Pixel 3 camera come from hardware upgrades. The front-facing camera now consists of two wide-angle, 12-megapixel camera lenses, better for wide-angle selfies. A slider tool below the viewfinder lets you adjust how wide you want the shot to go. The 12.2-megapixel rear camera has been improved, and the camera sensor is a “newer generation sensor,” though Reynolds conceded that it “has a lot of the same features.” The Pixel 3 also has a flicker sensor, which is supposed to mitigate the flicker effect you get when you’re shooting a photo or video under certain indoor lighting.

Some of the “new” features might not seem all that new, at least in the broader smartphone market. You can now adjust the depth effect on a Portrait photo after it’s been captured on the Pixel 3, something that Apple and Samsung already offer on their flagship phones. A synthetic fill flash brightens selfies snapped in the dark; Apple has done this for a while too. The Pixel’s dynamic range has been improved again, but these days, HDR-done-right is a baseline feature on flagship phones—not a standout one.

There’s also the fact that the Google Pixel 3 still has a single-lens rear camera, while all of its high-end smartphone competitors have gone with double or even triple the number of lenses. Google argues it doesn’t really need another lens—“we found it was unnecessary,” Queiroz says—because of the company’s expertise in machine learning technology. Pixel phones extract enough depth information already from the camera’s dual-pixel sensor, and then run machine learning algorithms, trained on over a million photos, to produce the desired photo effect. It’s exactly the kind of answer you’d expect from a company that specializes in software. It’s also a convenient answer when camera components are some of the key parts that are driving up the cost of fancy smartphones.

But there are some features launching with the Pixel 3 that do appear to be the clear beneficiaries of Google’s AI prowess—specifically, Google’s Visual Core, a co-processor that Google developed with Intel. It serves as a dedicated AI chip for the Pixel camera. The Visual Core was first rolled out with the Pixel 2 smartphone, a signal that Google was willing to invest in and customize its own chips to make something better than an off-the-shelf component. It’s what powers the Pixel’s commendable HDR+ mode. This year, the Visual Core has been updated, and it has more camera-related tasks.

Top Shot is one of those features. It captures a Motion Photo, and then automatically selects the best still image from the bunch.
It’s looking for open eyes and big smiles, and rejecting shots with windswept hair or faces blurred from too much movement. Photobooth is another one. The new feature is based on technology from the Google Clips camera, a tiny static camera that automatically captures moments throughout your day, or during an event, like a birthday party. Photobooth only takes front-facing photos, but it works a little bit like Clips: You select that mode, raise the camera, and once the camera sees your face in the frame and sees you make an expression, it starts auto-snapping a bunch of photos.

If you’re trying to take a picture in the dark—so dark that your smartphone photos would normally look like garbage, as one Google product manager described it to me—the Pixel 3’s camera will suggest something called Night Sight. This isn’t launching with the phone, but is expected to come later this year. Night Sight requires a steady hand because it uses a longer exposure, but it fuses together a bunch of photos to create a nighttime photo that doesn’t look, well, like garbage. All of this without using the phone’s flash, too.

Super Res Zoom, another feature new to Pixel 3, isn’t just a software tweak. It requires a lens that’s a little bit sharper than the camera’s sensor, so that the resolution isn’t limited by the sensor. But it enhances the resolution on a photo that you’ve zoomed way in on by using machine learning to adjust for the movement of your hand. (If you have the smartphone on a tripod or stable surface, you can actually see the frame moving slightly, as the camera mimics your hand movement.)

There are almost too many new camera features to take full advantage of. It’s hard to know without having really used the Pixel 3 yet which of these actually are useful and which are gimmicks, the thing Queiroz said Google was trying to avoid. This relatively new trend in computational photography, the use of AI and machine learning to compensate for a lack of hardware or for human imperfection, raises some questions about the existence of bias in the machine learning models that Google is using. Google’s photo data sets have already been shown to have bias, as have others.

One thing that stood out to me as I got a sneak peek at Google’s new Pixel cameras: There were an awful lot of references to photos with smiling, happy faces. Top Shot looks for photos that would be considered decent by any photo standards, but it also looks for that group shot where you’re all smiling. Photobooth won’t start auto-snapping photos until you’ve made some sort of expression, like a smile or a goofy face. Google uses AI to make photos look better overall, for sure—but in doing that it’s also making subtle determinations around what a good photo is. “If AI is just being used to make photos look better, then everyone likes it,” said Venkatesh Saligrama, a professor at Boston University’s school of engineering who has researched gender biases in machine learning.
“On the other hand, if it’s using information more broadly, to say this is what they like and what they don’t like and altering your photography that way, then it might not be something you want out of the system.” “It could be applying broader culture influences, and in some cases that may not be good,” Saligrama added.

Reynolds, the Pixel camera product manager, says his team likens some of the new features to building a “shot list” of what photos most people would want to take in a given situation—say, at a wedding. “Everyone goes into a wedding with a shot list, and when we built Top Shot, we had those sorts of lists in mind,” he said. “And somewhere on that shot list is also a very serious pose, a dramatic photo. But I think we decided to focus on that group photo where everyone is smiling at the same time.” Google also has specific machine learning models that can detect surprise, or amusement, in certain scenarios, Reynolds said. It has annotated over 100 million faces. It knows these things.

For the most part, this technology may very well translate into wow-worthy photos on the Google Pixel 3. It may surpass the already-impressive Google Pixel 2 camera. Or it may just nudge the future of smartphone photography forward slightly, in a year when every major smartphone camera is pretty darn good. One thing’s certain: Google’s doing it the Google way.
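As a rough illustration of the kind of selection Top Shot performs, consider this hypothetical scoring sketch (the detector outputs, feature names, and weights are invented; Google has not published its scoring function): given per-frame estimates from face and blur models, prefer smiles and open eyes and penalize motion blur.

```python
# Hypothetical Top Shot-style frame selection, illustration only:
# rank the frames of a burst by assumed per-frame detector outputs.
def frame_score(frame):
    """frame: dict of probabilities from (assumed) face/blur models."""
    return (2.0 * frame["smile"]          # favor big smiles
            + 1.0 * frame["eyes_open"]    # favor open eyes
            - 3.0 * frame["motion_blur"]) # reject blurred faces

def top_shot(frames):
    return max(range(len(frames)), key=lambda i: frame_score(frames[i]))

burst = [
    {"smile": 0.2, "eyes_open": 0.9, "motion_blur": 0.1},   # straight face
    {"smile": 0.9, "eyes_open": 0.2, "motion_blur": 0.1},   # mid-blink
    {"smile": 0.8, "eyes_open": 0.9, "motion_blur": 0.05},  # keeper
    {"smile": 0.9, "eyes_open": 0.9, "motion_blur": 0.8},   # blurred
]
print(top_shot(burst))  # 2: smiling, eyes open, sharp
```

The hand-set weights are exactly where the article's bias question lives: whoever chooses them is deciding what a "good" photo is.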
id: 1217, year: 2019
title: OpenAI Wants to Make Ultra-Powerful AI. But Not in a Bad Way | WIRED
url: https://www.wired.com/story/company-wants-billions-make-ai-safe-humanity
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way Shiho Fukada/Bloomberg/Getty Images Save this story Save Save this story Save Application Games Company Open AI Sector Research Technology Machine learning One Saturday last month, five men ages 19 through 26 strode confidently out of a cloud of magenta smoke in a converted auto showroom in San Francisco. They sat at a line of computer keyboards to loud cheers from a crowd of a few hundred. Ninety minutes of intense mouse-clicking later, the five’s smiles had turned sheepish and the applause consolatory. Team OG, champions at the world’s most lucrative videogame, Dota 2 , had lost two consecutive games to a collective of artificial intelligence bots. The result was notable because complex videogames are mathematically more challenging than cerebral-seeming board games like chess or Go. Yet leaning against a wall backstage, Sam Altman, CEO of OpenAI, the research institute that created the bots, was as relieved as he was celebratory. “We were all pretty nervous this morning—I thought we had like a 60-40 chance,” said Altman, a compact figure in a white T-shirt and whiter, showy sneakers. He became OpenAI’s CEO in March after stepping down as president of influential startup incubator YCombinator and had reason to be measured about the day’s win. To succeed in his new job, Altman needs bots to do more than beat humans at videogames—he needs them to be better than people at everything. OpenAI’s stated mission is to ensure that all of humanity benefits from any future AI that’s capable of outperforming “humans at most economically valuable work.” Such technology, dubbed artificial general intelligence, or AGI, does not seem close , but OpenAI says it and others are making progress. The organization has shown it can produce research on par with the best in the world. It has also been accused of hype and fearmongering by AI experts critical of its fixation on AGI and AI technology’s potential hazards. Under Altman’s plans, OpenAI’s research—and provocations—would accelerate. Previously chair of the organization, he took over as CEO after helping flip most of the nonprofit’s staff into a new for-profit company , in hopes of tapping investors for the billions he claims he needs to shape the destiny of AI and humanity. Altman says the big tech labs at Alphabet and elsewhere need to be pressured by a peer not driven to maximize shareholder value. “I don’t want a world where a single tech company creates AGI and captures all of the value and makes all of the decisions,” he says. At an MIT event in late 2014, Tesla CEO Elon Musk described AI research as like “summoning the demon.” In the summer of 2015, he got talking with Altman and a few others over dinner about creating a research lab independent of the tech industry to steer AI in a positive direction. OpenAI was announced late that year , with Altman and Musk as cochairs. Musk left the board early in 2018, citing potential conflicts with his other roles. 
In its short life, OpenAI has established itself as a serious venue for AI research. Ilya Sutskever, a cofounder of the organization who left a plum position in Google’s AI group to lead its research, oversees a staff that includes fellow ex-Googlers and alumni of Facebook, Microsoft, and Intel. Their work on topics such as robotics and machine learning has appeared at top peer-reviewed conferences. The group has teamed up with Google parent Alphabet to research AI safety; beating Team OG in Dota 2 earned respect from experts in AI and gaming.

OpenAI’s metamorphosis into a for-profit corporation was driven by a feeling that keeping pace with giants such as Alphabet will require access to ever-growing computing resources. In 2015, OpenAI said it had $1 billion in committed funding from Altman, Musk, LinkedIn cofounder Reid Hoffman, early Facebook investor Peter Thiel, and Amazon. Altman now says a single billion won’t be enough. “The amount of money we needed to be successful in the mission is much more gigantic than I originally thought,” he says.

[Photo: OpenAI CTO Greg Brockman, center, shakes hands with members of professional e-gaming team OG after they lost two games of Dota 2 to his researchers’ artificial intelligence bots. OpenAI]

IRS filings show that in 2017, when OpenAI showed its first Dota-playing bot, it spent $8 million on cloud computing. Its outlay has likely grown significantly since. In 2018, OpenAI disclosed that a precursor to the system that defeated Team OG tied up more than 120,000 processors rented from Google’s cloud division for weeks. The champion-beating version trained for 10 months, playing the equivalent of 45,000 years of Dota against versions of itself. Asked how much that cost, Greg Brockman, OpenAI’s chief technology officer, says the project required “millions of dollars” but declined to elaborate. Altman isn’t sure if OpenAI will continue to rely on the cloud services of rivals—he remains open to buying or even designing AI hardware. The organization is keeping close tabs on new chips being developed by Google and a raft of startups to put more punch behind machine learning algorithms.

To raise the funds needed to ensure access to future hardware, Altman has been trying to sell investors on a scheme wild even for Silicon Valley. Sink money into OpenAI, the pitch goes, and the company will pay you back 100-fold—once it invents bots that outperform humans at most economically valuable work. Altman says delivering that pitch has been “the most interesting fundraising experience of my life—it doesn’t fit anyone’s model.” The strongest interest comes from AI-curious wealthy individuals, he says. Hoffman and VC firm Khosla Ventures have invested in the new, for-profit OpenAI but didn’t respond to requests for comment.
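A little arithmetic makes the scale of those training figures explicit. This is a back-of-envelope sketch; the 45-minute average game length is an assumption for illustration, not an OpenAI disclosure.

```python
# Implied parallelism of the quoted training run: 10 months of wall
# time equal to ~45,000 years of self-play experience.
years_of_play = 45_000
training_months = 10

training_years = training_months / 12
speedup = years_of_play / training_years
print(f"{speedup:,.0f}x real time")  # 54,000x real time

# Assuming one Dota game lasts ~45 minutes, that experience is roughly:
games = years_of_play * 365.25 * 24 * 60 / 45
print(f"{games:,.0f} games")  # ~526 million games
```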
No one is told when to expect returns, but betting on OpenAI is not for the impatient. VC firms are informed they’ll have to extend the duration of their funds beyond the industry standard decade. “We tell them upfront, you're not going to get a return in 10 years,” Altman says. Even as it tries to line up funding, OpenAI is drawing criticism from some leading AI researchers. In February, OpenAI published details of language processing software that could also generate remarkably fluid text. It let some news outlets—including WIRED—try out the software but said the full package and specifications would be kept private out of concern they could be used maliciously, for example to pollute social networks. That annoyed some prominent names in AI research, including Facebook's chief AI scientist Yann LeCun. In public Facebook posts, he defended open publication of AI research and joked that people should stop having babies, since they could one day create fake news. Mark Zuckerberg clicked “like” on the baby joke; LeCun did not respond to a request for comment. For some, the episode highlighted how OpenAI’s mission leads it to put an ominous spin on work that isn’t radically different from that at other corporate or academic labs. “They’re doing more or less identical research to everyone else but want to raise billions of dollars on it,” says Zachary Lipton, a professor who works on machine learning at Carnegie Mellon University and also says OpenAI has produced some good results. “The only way to do that is to be a little disingenuous.” Altman concedes that OpenAI may have sounded the alarm too early—but says that’s better than being too late. “The tech industry has not done a good enough job trying to be proactive about how things may be abused,” he says. A Google cloud executive who helps implement the company’s internal AI ethics rules recently spoke in support of OpenAI’s self-censorship. After the defeated Team OG departed the stage last month to sympathetic acclaim, OpenAI cued up a second experiment designed to demonstrate the congenial side of superhuman AI. Dota experts—and a few novices, including WIRED—played on teams alongside bots. The AI software unlucky enough to get WIRED as a teammate mostly evinced superhuman indifference to helping a rookie player. It focused instead on winning the game, following instincts honed by months of expensive training. Narrow hyper-competence is a hallmark of existing AI systems. A WIRED reporter could play Dota badly while taking occasional notes and talking with an OpenAI researcher, before riding a bicycle home in city traffic. Despite millions spent on training, the Dota bots could only play the specific version of the game they were designed for. There’s little consensus on how to make AI software more flexible, or what components might be needed to make AGI more than a technological fantasy. Even Altman is daunted by the scale of the challenge. “I have days where I’m convinced it’s all going to happen and others where it all feels like a pipe dream,” he says.
"
1,218
2,017
"An Old Technique Could Put Artificial Intelligence in Your Hearing Aid | WIRED"
"https://www.wired.com/story/an-old-technique-could-put-artificial-intelligence-in-your-hearing-aid"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business An Old Technique Could Put Artificial Intelligence in Your Hearing Aid Getty Images Save this story Save Save this story Save End User Consumer Small company Sector IT Semiconductors Technology Chips Neural Network Dag Spicer is expecting a special package soon, but it’s not a Black Friday impulse buy. The fist-sized motor, greened by corrosion, is from a historic room-sized computer intended to ape the human brain. It may also point toward artificial intelligence's future. Spicer is senior curator at the Computer History Museum in Mountain View, California. The motor in the mail is from the Mark 1 Perceptron, built by Cornell researcher Frank Rosenblatt in 1958. Rosenblatt's machine learned to distinguish shapes such as triangles and squares seen through its camera. When shown examples of different shapes, it built “knowledge” using its 512 motors to turn knobs and tune its connections. "It was a major milestone," says Spicer. Computers today don’t log their experiences---or ours---using analog parts like the Perceptron’s self-turning knobs. They store and crunch data digitally, using the 1s and 0s of binary numbers. But 11 miles away from the Computer History Museum, a Redwood City, California, startup called Mythic is trying to revive analog computing for artificial intelligence. CEO and cofounder Mike Henry says it’s necessary if we’re to get the full benefits of artificial intelligence in compact devices like phones, cameras, and hearing aids. Mythic's analog chips are designed to run artificial neural networks in small devices. Mythic Mythic uses analog chips to run artificial neural networks, or deep-learning software, which drive the recent excitement about AI. The technique requires large volumes of mathematical and memory operations that are taxing for computers---and particularly challenging for small devices with limited chips and battery power. It’s why the most powerful AI systems reside on beefy cloud servers. That’s limiting, because some places AI could be useful have privacy, time, or energy constraints that mean handing off data to a distant computer is impractical. You might say Mythic’s project is an exercise in time travel. “By the time I went to college analog computers were gone,” says Eli Yablonovitch, a professor at University of California Berkeley who got his first degree in 1967. “This brings back something that had been soundly rejected." Analog circuits have long been relegated to certain niches, such as radio signal processing. Henry says internal tests indicate Mythic chips make it possible to run more powerful neural networks in a compact device than a conventional smartphone chip. "This can help deploy deep learning to billions of devices like robots, cars, drones, and phones," he says. Related Stories Chips Tom Simonite Wunderkind Tom Simonite Artificial Intelligence Tom Simonite Henry likes to show the difference his chips could make with a demo in which simulations of his chip and a smartphone chip marketed as tuned for AI run software that spots pedestrians in video from a camera mounted on a car. 
The chips Mythic has made so far are too small to run a full video processing system. In the demo, Mythic’s chip can spot people from a greater distance, because it doesn’t have to scale down the video to process it. The suggestion is clear: you’ll be more comfortable sharing streets with autonomous vehicles that boast analog inside. Digital computers work by crunching binary numbers through clockwork-like sequences of arithmetic. Analog computers operate more like a plumbing system, with electrical current in place of water. Electrons flow through a maze of components like amplifiers and resistors that do the work of mathematical operations by changing the current or combining it with others. Measuring the current that emerges from the pipeline reveals the answer. That approach burns less energy than an equivalent digital device on some tasks because it requires fewer circuits. A Mythic chip can also do all the work of running a neural network without having to tap a device's memory, which can interfere with other functions. The analog approach isn't great for everything, not least because it's more difficult to control noise, which can affect the precision of numbers. But that's not a problem for running neural networks, which are prized for their ability to make sense of noisy data like images or sound. "Analog math is great for neural networks, but I wouldn't balance my check book with it," Henry says. If analog comes back, it won't be the first aspect of the Mark 1 Perceptron to get a second life. The machine was one of the earliest examples of a neural network, but the idea was mostly out of favor until the current AI boom started in 2012. Objects identified in video by a simulation of a conventional smartphone chip tuned for artificial intelligence. A simulation of Mythic's chip can identify more objects from a greater distance because it doesn't have to scale down the video to process it. Mythic's analog plumbing is more compact than the Perceptron Mark 1's motorized knobs. The company's chips are repurposed flash memory chips like those inside a thumb drive—a hack that turns digital storage into an analog computer. The hack involves writing out the web of a neural network for a task such as processing video onto the memory chip's transistors. Data is passed through the network by flowing analog signals around the chip. Those signals are converted back into digital to complete the processing and allow the chip to work inside a conventional digital device. Mythic has a partnership with Fujitsu, which makes flash memory, and aims to get final chip designs to customers to test next year. The company will initially target the camera market, where applications include consumer gadgets, cars, and surveillance systems. Mythic hopes its raise-the-dead strategy will keep it alive in a crowded field of companies working on custom silicon for neural networks. Apple and Google have added custom silicon to power neural networks in their latest smartphones. Yablonovitch of Berkeley guesses that Mythic won't be the last company that tries to revive analog.
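The plumbing arithmetic is easy to mimic numerically. Here is a minimal sketch (illustrative array sizes and a guessed 1 percent noise figure, not Mythic's actual design) of the multiply-accumulate step a flash crossbar performs: weights sit in the array as conductances, inputs arrive as voltages, and each output is the summed current on a column wire, slightly corrupted by analog noise that a neural network can shrug off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 8 inputs feeding 4 neurons, one crossbar column per neuron.
weights = rng.uniform(-1.0, 1.0, size=(8, 4))  # trained weights, stored as cell conductances
inputs = rng.uniform(0.0, 1.0, size=8)         # activations, applied as voltages on the rows

# Digital reference: the exact multiply-accumulate a CPU or GPU would compute.
exact = inputs @ weights

# Analog version: by Ohm's law each cell passes current V * G, and the column
# wire sums those currents for free (Kirchhoff's current law). Imperfect analog
# components show up as noise on the stored conductances; 1% is an assumption.
read_noise = rng.normal(0.0, 0.01, size=weights.shape)
analog = inputs @ (weights + read_noise)

print("digital:", np.round(exact, 3))
print("analog :", np.round(analog, 3))
print("error  :", np.round(analog - exact, 4))  # small enough for a neural net,
                                                # too sloppy for a checkbook
```

The column sums cost no arithmetic circuitry at all, which is where the energy savings over a digital chip, and the freedom from constant trips to memory, come from.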
Yablonovitch gave a talk this month highlighting the opportune match between analog computing and some of today's toughest, and most lucrative, computing problems. “The full potential is even bigger than deep learning,” he says. He says there is evidence analog computers might also help with the notorious traveling-salesman problem, which limits computers planning delivery routes, and in other areas, including pharmaceuticals and investing. Something that hasn’t changed over the decades since analog computers went out of style is engineers’ fondness for dreaming big. Rosenblatt told the New York Times in 1958 that “perceptrons might be fired to the planets as mechanical space explorers.” Henry has extraterrestrial hopes, too, saying his chips could help satellites understand what they see. He may be on track to finally prove Rosenblatt right. "
1,219
2,019
"Amazon Alexa and the Search for the One Perfect Answer | WIRED"
"https://www.wired.com/story/amazon-alexa-search-for-the-one-perfect-answer"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons James Vlahos Business Amazon Alexa and the Search for the One Perfect Answer Play/Pause Button Pause Voice computing seeks to deliver a single correct response to any query. That's why it's going to upend our relationship with information. Jacob Burge Save this story Save Save this story Save Application Personal assistant Company Amazon End User Consumer Sector Consumer services Source Data Speech Technology Natural language processing If you had visited the Cambridge University Library in the late 1990s, you might have observed a skinny young man, his face illuminated by the glow of a laptop screen, camping out in the stacks. William Tunstall-­Pedoe had wrapped up his studies in computer science several years earlier, but he still relished the musty aroma of old paper, the feeling of books pressing in from every side. The library received a copy of nearly everything published in the United Kingdom, and the sheer volume of information—5 million books and 1.2 million periodicals—inspired him. It was around this time, of course, that another vast repository of knowledge— the internet —was taking shape. Google , with its famous mission statement “to organize the world’s information and make it universally accessible and useful,” was proudly stepping into its role as librarian to the planet. But as much as Tunstall-­Pedoe adored lingering in the stacks, he felt that computers shouldn’t require people to laboriously track down information the way that libraries did. Yes, there was great pleasure to be had in browsing through search results, stumbling upon new sources, and discovering adjacent facts. But what most users really wanted was answers, not the thrill of a hunt. This article is adapted from Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think, by James Vlahos, to be published in March by Houghton Mifflin Harcourt. Houghton Mifflin Harcourt As tools for achieving this end, search engines were almost as cumbersome as their book-stuffed predecessors. First, you had to think of just the right keywords. From the long list of links that Google or Yahoo produced, you had to guess which one was best. Then you had to click on it, go to a web page, and hope that it contained the information you sought. Tunstall-­Pedoe thought the technology should work more like the ship’s computer on Star Trek : Ask a question in everyday language, get an “instant, perfect answer.” Search engines as helpful librarians, he believed, must eventually yield to AIs as omniscient oracles. This was a technological fantasy on par with flying cars, but Tunstall-­Pedoe set about making it a reality. He had been earning money as a programmer since the age of 13 and had always been particularly fascinated by the quest to teach natural language to machines. As an undergraduate, he had written a piece of software called Anagram Genius, which, when supplied with names or phrases, cleverly rearranged the letters. 
“Margaret Hilda Thatcher,” for instance, became “A girl, the arch mad-hatter.” (Years later, author Dan Brown used Anagram Genius to generate the plot-critical puzzles in The Da Vinci Code.) Now, sequestered in the library, Tunstall-Pedoe began building a prototype that could answer a few hundred questions. Two decades later, with the rise of voice computing platforms such as Amazon Alexa and Google Assistant, the world’s biggest tech companies are suddenly, precipitously moving in Tunstall-Pedoe’s direction. Voice-enabled smart speakers have become some of the industry’s best-selling products; in 2018 alone, according to a report by NPR and Edison Research, their prevalence in American households grew by 78 percent. According to one market survey, people ask their smart speakers to answer questions more often than they do anything else with them. Tunstall-Pedoe’s vision of computers responding to our queries in a single pass—providing one-shot answers, as they are known in the search community—has gone mainstream. The internet and the multibillion-dollar business ecosystems it supports are changing irrevocably. So, too, is the creation, distribution, and control of information—the very nature of how we know what we know. In 2007, having weathered the dotcom crash and its aftermath, Tunstall-Pedoe and a few colleagues were close to launching their first product—a website called True Knowledge that would offer one-shot answers to all kinds of questions. At the time, theirs was still a heterodox goal. “There were people in Google who were completely allergic to what we were doing,” Tunstall-Pedoe says. “The idea of a one-shot answer to a search was taboo.” He recalls arguing with one senior Google employee who rejected the notion of there even being such a thing as a single correct reply. The big search engines, despite having indexed billions of web pages, did not possess a deep understanding of user queries. Rather, they engaged in glorified guesswork: You typed a few keywords into the Google search bar, and the company’s PageRank system returned a long list of statistically backed conjectures about what you wanted to know. To demonstrate that True Knowledge’s one-shot ambition was possible, Tunstall-Pedoe and his small team in Cambridge had developed a digital brain consisting of three primary components. The first was a natural-language-processing system that tried to robustly interpret questions. For instance, “How many people live in,” “What is the population of,” and “How big is” would all be represented as queries about the number of inhabitants of a place. The second component of the system amassed facts. Unlike a search engine, which simply pointed users toward websites, True Knowledge aspired to supply the answers itself. It needed to know that the population of London is 8.8 million, that LeBron James is 6'8", that George Washington’s last words were “ ’Tis well,” and so on. The great majority of these facts were not manually keyed into the system; that would have been too arduous. Instead, they were automatically retrieved from sources of structured data, where information is listed in a computer-readable format.
Finally, the system had to encode how all of these facts related to one another. The programmers created a knowledge graph, which can be pictured as a giant treelike structure. At its base was the category “object,” which encompassed every single fact. Moving upward, the “object” category branched into the classes “conceptual object” (for social and mental constructs) and “physical object” (for everything else). The higher up the tree you went, the more refined the categorizations got. The “track” category, for instance, split into groupings that included “route,” “railway,” and “road.” Building the ontology was a grueling task, and it swelled to tens of thousands of categories, comprising hundreds of millions of facts. But the structure it provided allowed new information to be sorted like laundry into dresser drawers. The knowledge graph encoded relationships in a taxonomic sense: A Douglas fir is a type of conifer, a conifer is a type of plant, and so on. But beyond simply expressing that there was a connection between two entities, the system also characterized the nature of each connection: Big Ben is located in England. Emmanuel Macron is the president of France. This meant that True Knowledge effectively learned some commonsense rules about the world that, while blazingly obvious to humans, typically elude computers: A landmark can exist only in a single place. France can have only one sitting president. Most exciting for Tunstall-Pedoe, True Knowledge could handle questions whose answers were not explicitly spelled out beforehand. Imagine somebody asking, “Is a bat a bird?” Because the ontology had bats sorted into a subgroup under “mammals” and birds were located elsewhere, the system could correctly reason that bats are not birds. True Knowledge was getting smart, and in pitches to investors, Tunstall-Pedoe liked to thumb his nose at the competition. For instance, he’d Google “Is Madonna single?” The search engine’s shallow understanding was obvious when it returned the link “Unreleased Madonna single slips onto Net.” True Knowledge, meanwhile, knew from the way the question was phrased that “single” was being used as an adjective, not a noun, and that it was defined as an absence of romantic connections. So, seeing that Madonna and Guy Ritchie were connected (at the time) by an is married to link, the system more helpfully answered that, no, Madonna was not single.
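The reasoning in the preceding paragraphs is simple enough to sketch. The toy Python below (a hand-built ontology of my own invention; True Knowledge's graph ran to tens of thousands of categories) normalizes two phrasings of a question into canonical queries, walks is-a links up the taxonomy to settle membership questions like the bat-versus-bird case, and uses a single is-married-to fact to answer the Madonna question.

```python
# Tiny ontology: child -> parent ("is a") links, plus single-valued relation facts.
ISA = {
    "bat": "mammal", "sparrow": "bird",
    "mammal": "animal", "bird": "animal", "animal": "physical object",
}
FACTS = {("madonna", "married_to"): "guy ritchie"}  # circa 2008

def is_a(entity, category):
    """Walk up the taxonomy to test membership."""
    while entity is not None:
        if entity == category:
            return True
        entity = ISA.get(entity)
    return False

def normalize(question):
    """Map different surface phrasings onto one canonical query (toy patterns)."""
    q = question.lower().rstrip("?")
    if q.startswith("is a ") and " a " in q[5:]:
        entity, category = q[5:].split(" a ", 1)
        return ("is_a", entity.strip(), category.strip())
    if q.startswith("is ") and q.endswith(" single"):
        return ("single", q[3:-7].strip())
    return ("unknown",)

def answer(question):
    query = normalize(question)
    if query[0] == "is_a":
        return "Yes" if is_a(query[1], query[2]) else "No"
    if query[0] == "single":
        # A person with a married_to fact cannot be single: one fact rules it out.
        return "No" if (query[1], "married_to") in FACTS else "Unknown"
    return "Unknown"

print(answer("Is a bat a bird?"))      # No: bats sit under mammal, not bird
print(answer("Is a bat a mammal?"))    # Yes
print(answer("Is Madonna single?"))    # No: the married_to link settles it
```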
Liking what they saw, investors cranked open the venture capital spigot in 2008. True Knowledge expanded to around 30 employees and moved to a larger office in Cambridge. But the technology didn’t initially catch on with consumers, in part because its user interface was “an ugly baby,” Tunstall-Pedoe says. So he relaunched True Knowledge as a cleanly designed smartphone app, one available on both iPhones and Android devices. It had a cute logo—a smiley face with one eye—and a catchy new name, Evi (pronounced EE-vee). Best of all, you could speak your questions to Evi and hear the replies. Evi debuted in January 2012, a few months after Apple launched its Siri voice assistant, and shot to No. 1 in the company’s app store, quickly racking up more than half a million downloads. (Apple, apparently piqued by headlines such as “introducing evi: siri’s new worst enemy,” at one point threatened to pull the app.) Tunstall-Pedoe was swamped with acquisition interest. After a series of meetings with suitors, True Knowledge agreed to be bought out. Nearly everyone would get to keep their jobs and stay in Cambridge, and Tunstall-Pedoe would become a senior member of the product team for a not-yet-released voice computing device. When that device came out in 2014, its question-answering abilities would be significantly powered by Evi. The buyer was Amazon, and the device was the Echo. One-shot answers were unfashionable back when Tunstall-Pedoe started programming at Cambridge. But that was no longer the case by the time the Echo came out. In the era of voice computing, offering a single answer is not merely a nice-to-have feature; it’s a need-to-have one. “You can’t provide 10 blue links by voice,” Tunstall-Pedoe says, echoing prevailing industry sentiment. “That’s a terrible user experience.” As the world’s largest tech firms wised up, they began retracing many of True Knowledge’s steps. In 2010, Google acquired Metaweb, a startup that was creating an ontology called Freebase. Two years later, the company unveiled the Knowledge Graph, which boasted 3.5 billion facts. That same year, Microsoft launched what would become known as the Concept Graph, which grew to contain 5 million entities. In 2017, Facebook, Amazon, and Apple all acquired knowledge-graph-building companies. Lately, many researchers have begun designing autonomous systems that crawl the web for answers, stocking ontologies with new facts far quicker than any human could. The bull rush makes sense. Market analysts estimate that, by 2020, up to half of all internet searches will be spoken aloud. Lately, even the trusty old librarians of onscreen search have been quietly switching to oracle mode. Google has been steadily boosting the prevalence of featured snippets, a type of one-shot answer, in the desktop and mobile versions of its search engine. They get pride of place above the other results. Let’s say you search for “What is the rarest element in the universe?” Right there, under the query box, is the response: “The radioactive element astatine.” According to the marketing agency Stone Temple, Google served up instant answers for more than a third of all searches in July 2015. Eighteen months later, it did so more than half the time. The move toward one-shot answers has been just slow enough to obscure its own most important consequence: killing off the internet as we know it. The conventional web, with all of its tedious pages and links, is giving way to the conversational web, in which chatty AIs reign supreme. The payoff, we are told, is increased convenience and efficiency. But for everyone who has economic interests tied to traditional web search—businesses, advertisers, authors, publishers, the tech giants—the situation is perilous.
To understand why, it helps to quickly review the economics of the online world, where attention is everything. Companies want to be found; they want their ads to be seen. So, since the earliest days of the internet, they have worked to master the mysterious art of search engine optimization, or SEO—tweaking keywords and other elements of sites to make them appear higher in the search rankings. To guarantee a prime location, companies also fork over money directly to the search services for paid discovery, purchasing small ads that run atop or beside the results. When desktop search was the only game around, companies jockeyed to be one of the top 10 links listed; people often don’t scroll any lower than that. Since the rise of mobile, they’ve raced to get into the top five. With voice search, companies face an even more daunting challenge. They want to grab what’s known as position zero—to supply the one-shot answer that appears above all the other results. Position zero is critical because it is most often what gets read aloud. And it is often the only thing that gets read, according to Greg Hedges, a VP at the marketing agency RAIN, which advises brands on their conversational AI strategy. “If you want to be visible in a few years, you have to make sure that your website is optimized for voice search,” he says. Suppose you run a sushi restaurant and have many competitors nearby. A user asks his voice device, “What’s a good sushi place near me?” If your restaurant isn’t the one the AI regularly chooses first, you’re in trouble. There is, of course, a verbal equivalent to scrolling down: After hearing the top option, the customer might say, “I don’t like the sound of that. What else is nearby?” But that requires work, which people avoid when they can. Reaching position zero requires a wholly different strategy than conventional SEO. The importance of putting just the right keywords on a web page, for instance, is declining. Instead, SEO gurus try to think of the natural-language phrases that users might say—like “What are the top-rated hybrid cars?”—and incorporate them, along with concise answers, on sites. The hope is to produce the perfect bit of content that the AI will extract and read aloud. For now, there is no paid discovery for voice search. But when it inevitably arrives, the internet’s ad economy will be turned upside down. Because voice oracles dispense answers one at a time, they offer less real estate for advertisers. “There’s going to be a battle for shelf space, and each slot should theoretically be more expensive,” Jared Belsky, the current CEO of the digital marketing agency 360i, told Adweek in 2017. “It’s the same amount of interest funneling into a smaller landscape.” This may prove especially true in retail environments such as Amazon, where a purchase-ready consumer is right on the other end of the smart speaker. With voice, the goal is to summit Everest—to get the top result—or die trying.
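In practice, one common vehicle for the tactic Hedges describes is structured markup that pairs an exact conversational question with a short, extractable answer. The sketch below (an invented restaurant and wording; schema.org's FAQPage vocabulary is real, but nothing about using it guarantees position zero) builds such markup as a Python dict and prints the JSON-LD a page would embed.

```python
import json

# A question phrased the way a voice user would ask it, paired with a short,
# self-contained answer an assistant could read aloud in a few seconds.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What's a good sushi place near the financial district?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Hypothetical Sushi Ko, at 1 Example St, is open 11am-10pm "
                    "and is rated 4.8 stars for its omakase.",
        },
    }],
}

# Embedded in the page as <script type="application/ld+json">...</script>
print(json.dumps(faq_markup, indent=2))
```

The answer text is kept deliberately short and self-contained, since whatever wins position zero is what the assistant reads aloud.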
What if your product isn’t a hybrid car or a spicy tuna roll but knowledge itself? Publishers are already uncomfortably dependent on the big tech companies for most of their traffic, and thus much of their advertising income. According to the analytics company Parse.ly, Google searches currently account for about half of all referrals to publishers’ sites; shared links on Facebook account for a quarter. One-shot answers could seriously restrict this traffic. For instance: I am an Oregon Ducks fan. In the past, I’d go to ESPN.com the morning after a game to find out who won. Once there, I might click on another story or two, giving the site a few fractions of a cent in ad revenue. If I were feeling especially generous, I might even sign up for a monthly subscription. But now I can simply ask my phone, “Who won the Ducks game?” I get my answer, and ESPN never sees my traffic. Maybe you care about ESPN, a major business in its own right, having its traffic siphoned off; maybe you don’t. The point is that a similar dynamic could affect a huge number of content creators, from the whales to the minnows. Consider the story of Brian Warner, who runs a website called Celebrity Net Worth. On the site, curious visitors can punch in the name of, say, Jay-Z and find out—thanks to research by Warner’s employees—that the rapper is worth an estimated $930 million. Warner claims that Google started harvesting answers from his site even after he explicitly denied the search giant’s request for access to his company’s database. Once this started, he says, the amount of traffic that actually reached Celebrity Net Worth plummeted by 80 percent, and he had to lay off half of his staff. “How many thousands of other websites and businesses has Google paved over?” he asks. (A Google spokesperson declined to comment specifically on Warner’s version of events; she noted, however, that site administrators can use the company’s developer tools to prevent their pages from appearing in featured snippets.) When voice AIs read an extracted bit of content, they often do credit the source. They may offer a verbal attribution or, if the device in question has a screen, a visual one. But name-dropping doesn’t pay the bills; publishers need traffic. With a typical smart speaker, the chances that a user would somehow supply that traffic are slim. Google and Amazon’s workarounds are clumsy: A user can go to the smartphone companion app for her Home or Echo, find the result of the search, and click a link to go to the content creator’s site. A user could go to that trouble. But why bother when she already has the answer she sought? As Asher Elran, a web traffic expert and CEO of Dynamic Search, put it in a blog post back in 2013, one-shot answers rig the game in Google’s favor.
“As websites, we expect to compete for those ranks by using SEO and providing interesting content,” he wrote. “What we do not expect is the answer to the questions appearing to the searcher before we get a chance to impress them with our hard work.” When Tunstall-Pedoe began working on what would become True Knowledge, he got the impression that Google opposed providing one-shot answers. Although some employees undoubtedly felt that way at the time, statements from the company’s leaders make clear that the long-term plan was always to build an oracle. “When you use Google, do you get more than one answer?” Eric Schmidt asked in a 2005 interview, more than a decade before he stepped down as chair. “Of course you do. Well, that’s a bug … We should be able to give you the right answer just once.” For years, technological obstacles kept Schmidt’s goal at a safe remove. This came with certain advantages. Under Section 230 of the Communications Decency Act, a 1996 law that governs freedom of expression on the internet, online intermediaries cannot be held responsible for content supplied by others. As long as Google remained a mere conduit for information, rather than a creator of that information—a neutral librarian rather than an all-knowing oracle—it could likely avoid a blizzard of legal liabilities and moral responsibilities. “Part of the reason why Google liked 10 blue links was because they weren’t determining what was true or false,” Tunstall-Pedoe says. But the company’s don’t-kill-the-messenger positioning is much harder to accept in the voice era. Say you click on a search result and end up reading an article from the San Francisco Chronicle. Google is clearly not responsible for the content of that article. But when the company’s Assistant delivers an answer to one of your questions, the distinction becomes murkier. Even though the information may have been extracted from a third-party source, it feels as though it’s coming straight from Google. As such, the companies serving up replies to voice searches gain great power to decree what is true. They become overlords of epistemology. Danny Sullivan, Google’s public liaison for search, touched on this hazard last year in a blog post about featured snippets. Until recently, he explained, users who asked “How did the Romans tell time at night?” had been getting an absurd one-shot answer: sundials. This was a no-consequence mistake, and Sullivan assured the public that Google was working to prevent such gaffes in the future. But it isn’t difficult to imagine a similar blunder with bigger ramifications, particularly as more and more Americans embrace voice search and the notion of the infallible AI oracle. Past one-shot answers have falsely claimed that Barack Obama was declaring martial law, that Woodrow Wilson was a member of the Ku Klux Klan, that MSG causes brain damage, and that women are evil. Google willingly fixed these whoppers, explaining that it had not authored them—that the mistakes had been automatically extracted from shoddy websites. Giving people a way to check sourcing offers some insulation against misinformation run amok.
But it is difficult to imagine a user of Echo or Home going to the trouble of regularly logging into the companion app; the extra effort goes against the whole hands-free, no-look ethos of voice computing. And the verbal attributions, when they exist, are typically vague. A user might be told that an answer came from Yahoo or Wolfram Alpha. That’s akin to saying, “Our tech company got this information from another tech company.” It lacks the specificity of seeing the name of a reporter or media outlet; it also omits mention of the evidence used to arrive at a conclusion. When the source is a company’s own knowledge graph or other internal resource, the derivation becomes even more opaque: “Our tech company got this information from itself. Trust us.” The strategy of delivering one-shot answers also implies that we live in a world in which facts are simple and absolute. Sure, many questions do have a single correct answer: Is Earth a sphere? What is the population of India? For other questions, though, there are multiple legitimate perspectives, which puts voice oracles in an awkward position. Recognizing this, Microsoft’s Cortana sometimes gives two competing answers to contested questions rather than just one. Google is considering doing a version of the same. Whether or not these companies wish to play the role of Fact-Checker to the World, they’re backing themselves into it. The command that big tech companies have over the dissemination of information, particularly in the era of voice computing, raises the specter of Orwellian control of knowledge. In places such as China, where the government heavily censors the internet, this is not just an academic concern. In democratic countries, the more pressing question is whether companies are manipulating facts in ways that benefit their corporate interests or the personal agendas of their leaders. The control of knowledge is a potent power, and never have so few companies attained such dominance as the portals through which the vast majority of the world’s information flows. The rest of us, meanwhile, may be losing the very skills that allow us to hold these gatekeepers to account. Once we become accustomed to placing our faith in the handy oracle on the kitchen counter, we may lose patience with the laborious—and curiosity-stoking, and thought-provoking—hunt for facts, expecting them to come to us instead. Why pump water from a well if it pours effortlessly from your faucet? Tunstall-Pedoe, who left Amazon in 2016, acknowledges that voice oracles introduce new risks, or at least worsen existing ones. But he has the typical engineer’s view that the problems caused by technology can be solved by—you guessed it—more and better technology, such as AIs that learn to suppress factually incorrect information. If online oracles one day get good enough to make a place like the Cambridge University Library obsolete, he imagines that he would feel nostalgic. But only up to a certain point. “I might miss it,” Tunstall-Pedoe says, “but I’m not sure that I would go back there if I didn’t need to.” James Vlahos (@jamesvlahos) wrote about the Alexa Prize, a chatbot competition sponsored by Amazon, in issue 26.03.
This article appears in the March issue. "
1,220
2,019
"Alphabet’s AI Might Be Able to Predict Kidney Disease | WIRED"
"https://www.wired.com/story/alphabets-ai-predict-kidney-disease"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Gregory Barber Business Alphabet’s AI Might Be Able to Predict Kidney Disease Play/Pause Button Pause Casey Chin; Getty Images Save this story Save Save this story Save Application Prediction Company Alphabet Google End User Research Sector Health care Google has a solution for the creaking inefficiencies of modern healthcare: push notifications. No, not those annoying reminders to practice your Arabic lesson on Duolingo or subscribe to a new Lyft deal. Google is betting its alerts can save your life. The company is building an artificial-intelligence-driven system that promises to give doctors an early warning of dangerous medical conditions arise, part of its ongoing efforts to break into healthcare. On Wednesday, Alphabet’s artificial intelligence lab DeepMind showed progress toward that kind of disease prediction, starting with a condition called acute kidney injury. Using software developed with the Department of Veterans Affairs, researchers were able to predict the condition in patients up to 48 hours before it occurred. The machine learning software was trained using medical records from more than 700,000 VA patients, and could anticipate 90 percent of cases where the damage was severe enough that a patient required dialysis. The results, published in the journal Nature, suggest doctors could one day get early warnings in time to prevent some patients suffering kidney damage, says Eric Topol , a professor at Scripps Research who wasn’t involved in the research. “This is remarkable work,” he says. “You could potentially mitigate the need for dialysis or kidney transplant, or prevent a patient’s death.” More than half of adults admitted to an ICU end up with acute kidney injury, which can be lethal. But if detected early, the condition is often easy to treat or prevent by increasing fluids or removing a risky medication. Alphabet has a ready-made vehicle to help commercialize its research. Kidney-protecting algorithms would be a perfect upgrade to a mobile app called Streams being tested by DeepMind in some British hospitals, Topol says. On Wednesday, DeepMind and its collaborators separately published results showing that using Streams, doctors missed only 3 percent of cases of kidney deterioration, compared with 12 percent missed without the app. That version of Streams doesn’t use DeepMind’s specialty, machine learning; it alerts staff based on results from a single blood test. But the plan is to merge the two threads of research. Using Streams, physicians could be alerted to predictions of acute kidney injury, says Dominic King, a former surgeon who leads DeepMind’s health effort---and eventually other conditions as well, like sepsis or pancreatitis. “We want to move care from reactive firefighting, which is how you spend most of your life as a physician, to proactive and preventive care,” he says. That kind of shift is difficult in a hospital setting, with its entrenched rules and warrenous chains of command. DeepMind has previously recognized that any AI software it designs for health care needs to integrate with existing hospital workflows. 
One potential challenge is notification fatigue. An inevitable side effect of making predictions is false positives—the algorithm sees signs of a disease that never develops. Even if that sparked unnecessary care, says DeepMind researcher Nenad Tomasev, the algorithm would still on balance likely save medical staff time and money by avoiding serious complications and interventions like dialysis. The question, though, is how to account for human behavior. False positives increase the risk that alerts become annoying and eventually are ignored. Topol of Scripps notes that while the algorithm performed well on historical data from the VA, DeepMind needs to validate that it truly predicts kidney disease in patients. Such studies are more complex, lengthy, and expensive than testing an idea using a pile of existing data, and Topol says few have been done for medical applications of AI. When they have, such as in trials of software that reads retinal images, their performance has been less impressive than in studies using past data. Another potential hurdle: The algorithm relies heavily on localized demographic data to make its predictions, meaning the system developed for the VA won’t generate good predictions for other hospitals. Even in the study, the algorithm was less accurate at predicting kidney deterioration in women, because they represented only 6 percent of the patients in the dataset. Alphabet has launched numerous experiments in healthcare, though it doesn’t have much to show for it in its financial results—more than 80 percent of the company’s revenue still comes from ad clicks. An effort to offer electronic medical records was shut down in 2011. More recently the company has spun up experiments using AI to read medical images, and is testing software in India that screens for eye problems caused by diabetes. Alphabet’s Verily arm has focused on ambitious projects like nanoparticles that deliver drugs and smart contact lenses. Two job ads posted by Google this month underline its commitment to its health division and the challenges the new effort faces. One seeks a head of marketing to create a “brand identity” for Google Health. The other asks for an experienced executive to lead work on deploying Google’s health technology in the US. The ad notes that Google has been “exploring applications in health for more than a decade.” Alphabet’s predilection for big data could prove an advantage in healthcare. (People type around 1 billion health-related queries into Google’s search engine each day, Google Health VP David Feinberg said at the SXSW conference in Austin this year.) But it also brings challenges. The company has vast and lightly regulated stocks of information on online behavior. For health projects, it must negotiate access to medical records by finding partners in health care, as it did with the VA, whose use of data is bound by strict privacy rules. Alphabet’s health experiments have already run into regulatory and legal troubles.
In 2017 the UK data regulator said one of DeepMind’s hospital collaborators had breached the law by giving the company patient data without patient consent, and access to more information than was justified. That background alarmed some privacy experts when Google said in November that it would absorb the Streams project from DeepMind, as part of an effort to unify its health care projects under new hire David Feinberg, previously CEO of Pennsylvania health system Geisinger. Google acquired DeepMind in 2014. In June, a Chicago man filed a lawsuit against Google, the University of Chicago, and the University of Chicago Medical Center, alleging that personal data was not properly protected in a project using data analysis to predict future health problems. Google and the medical center have said they followed applicable best practices and regulations. "
1,221
2,019
"The Pentagon Doubles Down on AI–and Wants Help from Big Tech | WIRED"
"https://www.wired.com/story/pentagon-doubles-down-ai-wants-help-big-tech"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business The Pentagon Doubles Down on AI–and Wants Help from Big Tech “AI will not only increase the prosperity of the nation but enhance our national security,” says Dana Deasy, the Pentagon's chief information officer. Brian Murphy/DVIDS Save this story Save Save this story Save Application Regulation Company Alphabet Google End User Government Sector Defense In the 1960s, the Department of Defense began shoveling money toward a small group of researchers with a then-fringe idea: making machines intelligent. Military money played a central role in establishing a new science— artificial intelligence. Sixty years later, the Pentagon believes AI has matured enough to become a central plank of America’s national security. On Tuesday, the department released an unclassified version of its AI strategy, which calls for rapid adoption of AI in all aspects of the US military. The plan depends on the Pentagon working closely with the tech industry to source the algorithms and cloud computing power needed to run AI projects. Federal contracting records indicate that Google, Oracle, IBM, and SAP have signaled interest in working on future Defense Department AI projects. “AI will not only increase the prosperity of the nation but enhance our national security,” said Dana Deasy, the department’s chief information officer, at a news briefing Tuesday. He said Russian and Chinese investments in military AI technology heighten the need for US forces to use more AI, too. “We must adopt AI to maintain our strategic position and prevail on future battlefields,” Deasy said. Previous Defense Department efforts to tap into the tech industry’s AI expertise haven’t all gone smoothly. Last year thousands of Google employees protested against the company’s work on Project Maven , which was intended to demonstrate how the US military could benefit from tapping commercially available AI technology. The pushback against Google’s work on a program using algorithms to identify objects in video from drones prompted the company to decide not to renew the contract. CEO Sundar Pichai also released new guidelines on its use of AI that forbid work on weapons , but permit other military work. The heart of the Pentagon AI strategy published Tuesday is a unit established in June last year called the Joint Artificial Intelligence Center, known as the JAIC. It will function as a hub of AI expertise to support military branches, and vet all Defense Department AI projects larger than $15 million. The JAIC will also develop its own AI projects in a similar vein to Project Maven, including by tapping tech company algorithms and AI tools. The JAIC was initially proposed in 2017 by the department’s Defense Innovation Board of tech industry experts, chaired by Eric Schmidt, previously chairman and CEO of Google. It was structured in large part by Brendan McCord, previously head of machine learning at the Defense Innovation Unit, a kind of Silicon Valley embassy for the Pentagon. McCord is also the primary author of the department’s AI strategy. 
Lt. Gen. John "Jack" Shanahan, who leads the JAIC, said the unit will focus on rapidly deploying existing AI algorithms and tools, often contracted from technology companies, in military scenarios. “Commercial solutions are available for most of the problems we’ve discovered in the past and will discover in the future,” he said. “That is where some of the world’s best talent resides right now.” Shanahan also led Project Maven, which is being integrated into the JAIC, and appears to be popular with US commanders. In budget requests last summer, the Air Force and Marine Corps described plans to make wider use of Maven algorithms, including putting them “on multiple unmanned aerial vehicles,” and using them to identify targets based on data from drones carrying special cameras that can monitor up to 40 square miles of territory at a time. William Carter, deputy director of the technology policy program at the Center for Strategic and International Studies, says Project Maven has won respect for showing that Pentagon AI projects could be quick and efficient. “One of the most remarkable things about Maven was that it was so cheap relative to the power of the system that was developed,” he says. Defense Department CIO Dana Deasy, left, and Lt. Gen. John "Jack" Shanahan, head of the Joint Artificial Intelligence Center. The JAIC will also work on new cloud infrastructure to provide the data storage and computing power needed for AI projects. That likely means more contracts like JEDI, a cloud contract worth more than $10 billion that is expected to be announced in coming months, with Amazon and Oracle among the leading bidders. Oracle, Google, IBM, and SAP were listed among the “interested vendors” for an “AI industry day” the JAIC co-hosted in late November to discuss future AI projects. A Google spokesperson confirmed the company participated in the industry day. An IBM spokesperson pointed to the company’s existing $135 million Army contract that involves applying AI to predict equipment faults, and said IBM hopes to work more with the Defense Department on the technology. Brian Roach, managing director for regulated industries at SAP North America, said the company works with all branches of the armed services and is interested in supporting all kinds of future government AI programs. Oracle did not respond to a request for comment. Pichai has said Google’s AI principles still permit military projects, although the company claimed in October that they prevented Google from bidding for the JEDI cloud contract. Oracle has challenged the JEDI bidding process, claiming it is skewed in favor of Amazon. Amazon and the Pentagon deny this. At Tuesday’s briefing, Shanahan played down the suggestion that the protests by Google employees reflected widespread resistance to working with the Pentagon.
“Our experience has been, with very few exceptions, an enthusiasm with working with Department of Defense,” he said. Bob Work, who established Project Maven while serving as deputy secretary of defense before leaving the government in 2017, agrees. “I think the department was concerned that Google might be the canary in the coal mine but that’s not what happened,” he says. The department’s overtures to tech companies—including Google—will continue, he says. “They’re going to try to reach out and convince as many companies as possible to work with the department on some of those issues.” The JAIC could soon have plenty of money to hand out to tech companies, and may even set up shop in Silicon Valley. Shanahan said Tuesday that the JAIC’s future budget isn’t finalized, but a Pentagon budget request document from June forecast the center’s budget at $89 million in 2019 and $414 million in 2020. The JAIC is currently based at the Pentagon and nearby Crystal City, but Deasy said it may add outposts near hubs of academic and industry AI talent. Rasha Abdul Rahim, deputy director for Amnesty International’s work on technology, says companies that work on government AI contracts should be wary. She says their algorithms may contribute to or magnify human rights violations from US military projects, such as the drone strike program, which has needlessly killed civilians. “Tech companies need to take steps to make sure they don’t cause or contribute to human rights abuses,” Abdul Rahim says. The JAIC is already working on several projects. One is training algorithms to predict the maintenance needed on H-60 special operations helicopters, a model used across the US services. The program is expected to save money and help meet a 2017 directive by then-defense secretary James Mattis to improve the readiness of US equipment for deployment. Another project underway might earn the Pentagon’s AI programs some positive publicity. It will use a Project Maven-style approach to help humanitarian responses to disasters such as fires and floods. Shanahan said algorithms will be trained to identify fire lines in video shot from planes flying over wildfires, extracting data that can speed up efforts to fight the fires. The humanitarian program will also work on new ways to process satellite photos, drawing on a dataset created for a Pentagon competition offering $100,000 for algorithms that could identify structures such as damaged buildings and utility trucks. Shanahan said the JAIC is also thinking about applying AI to cybersecurity, an area where the Pentagon’s research agency DARPA has already spent considerable research funds. In 2016, it staged an odd $55 million contest in a Las Vegas ballroom in which bots competed for a $2 million prize by hacking one another while patching their own flaws. The Pentagon AI plan unveiled Tuesday also acknowledges ethical challenges the technology may cause. The department’s Defense Innovation Board of Silicon Valley advisors is developing a set of ethical principles for the use of AI. Shanahan said that ethical questions are likely to come up more often as the JAIC becomes more established. 
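The H-60 effort described above is, at bottom, a familiar supervised-learning setup: learn from past flight and sensor records which aircraft are likely to need unscheduled maintenance, then rank airframes by risk. Below is a minimal sketch of that framing. Everything in it (the feature names, the synthetic data, the choice of a gradient-boosted classifier) is a hypothetical illustration; nothing about the JAIC's actual model or data is public.

```python
# Illustrative only: a hypothetical predictive-maintenance model.
# Feature names and data are synthetic stand-ins, not JAIC data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend per-flight summaries: engine temperature, vibration level,
# and hours since last overhaul (all synthetic).
X = rng.normal(size=(5000, 3))
# Synthetic label: did the aircraft need unscheduled maintenance afterward?
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank airframes by predicted risk so maintainers inspect the riskiest first.
risk = model.predict_proba(X_test)[:, 1]
print("highest-risk scores:", np.sort(risk)[-5:])
```

The promised savings come from the ranking step: inspections and spare parts go first to the airframes the model flags, rather than being scheduled on fixed intervals.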
Right now the center is not working on autonomous weapons systems, but Deasy said it might in the future. “JAIC is being established to support all aspects of what we do here,” he said. "
1,222
2,018
"In Project Maven's Wake, the Pentagon Seeks AI Tech Talent | WIRED"
"https://www.wired.com/story/inside-the-pentagons-plan-to-win-over-silicon-valleys-ai-experts"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Zachary Fryer-Biggs Backchannel Inside the Pentagon’s Plan to Win Over Silicon Valley's AI Experts Play/Pause Button Pause Elena Lacey; Getty Images Save this story Save Save this story Save The American military is desperately trying to get a leg up in the field of artificial intelligence, which top officials are convinced will deliver victory in future warfare. But internal Pentagon documents and interviews with senior officials make clear that the Defense Department is reeling from being spurned by a tech giant and struggling to develop a plan that might work in a new sort of battle—for hearts and minds in Silicon Valley. The battle began with an unexpected loss. In June, Google announced it was pulling out of a Pentagon program—the much-discussed Project Maven —that used the tech giant’s artificial intelligence software. Thousands of the company’s employees had signed a petition two months earlier calling for an end to its work on the project, an effort to create algorithms that could help intelligence analysts pick out military targets from video footage. Inside the Pentagon, Google’s withdrawal brought a combination of frustration and distress—even anger—that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military’s first big effort to utilize AI in warfare. This article was produced in partnership with the Center for Public Integrity , a nonprofit, nonpartisan news organization. “We have stumbled unprepared into a contest over the strategic narrative,” said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military’s artificial intelligence development plans. “We will not compete effectively against our adversaries if we do not win the ‘hearts and minds’ of the key supporters,” it warned. Maven was actually far from complete and cost only about $70 million in 2017, a molecule of water in the Pentagon’s oceanic $600 billion budget that year. But Google’s announcement exemplified a larger public relations and scientific challenge the department is still wrestling with. It has responded so far by trying to create a new public image for its AI work and by seeking a review of the department’s AI policy by an advisory board of top executives from tech companies. The reason for the Pentagon’s anxiety is clear: It wants a smooth path to use artificial intelligence in weaponry of the future, a desire already backed by the promise of several billion dollars to try to ensure such systems are trusted and accepted by military commanders, plus billions more in expenditures on the technologies themselves. The exact role that AI will wind up playing in warfare remains unclear. Many weapons with AI will not involve decision-making by machine algorithms, but the potential for them to do so will exist. 
As a Pentagon strategy document said in August: “Technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force.” Developing artificial intelligence, officials say, is unlike creating other military technologies. While the military can easily turn to big defense contractors for cutting-edge work on fighter jets and bombs, the heart of innovation in AI and machine learning resides among the non-defense tech giants of Silicon Valley. Without their help, officials worry, they could lose an escalating global arms race in which AI will play an increasingly important role, something top officials say they are unwilling to accept. “If you decide not to work on Maven, you’re not actually having a discussion on if artificial intelligence or machine learning are going to be used for military operations,” Chris Lynch, a former tech entrepreneur who now runs the Pentagon’s Defense Digital Service, said in an interview. AI is coming to warfare, he says, so the question is, which American technologists are going to engineer it? Lynch, who recruits technical experts to spend several years working on Pentagon problems before returning to the private sector, said that AI technology is too important for the department to pass up, and that it will proceed even if it has to rely on lesser experts. But without the help of the industry’s best minds, Lynch added, “we’re going to pay somebody who is far less capable to go build a far less capable product that may put young men and women in dangerous positions, and there may be mistakes because of it.” Google isn’t likely to shift gears soon. Less than a week after announcing in June that the company would not seek to renew the Maven contract, Google released a set of AI principles that specified the company would not use AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Some defense officials have complained since then that Google was being unpatriotic, noting that the company was still pursuing work with the Chinese government, the top US competitor in artificial intelligence technology. “I have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don't want to work for the US military,” General Joe Dunford, chairman of the Joint Chiefs of Staff, commented while speaking at a conference in November. 
In December testimony before Congress, Google CEO Sundar Pichai acknowledged that Google had experimented with a program involving China, Project Dragonfly, aimed at developing a model of what government-censored search results would look like in China. However, Pichai testified that Google currently “has no plans to launch in China.” Project Maven’s aim was to simplify work for intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus on potential targets, according to sources familiar with the partly classified program. But the algorithms did not select the targets or order strikes, a longtime fear of those worried about the intersection of advanced computing and new forms of lethal violence. Many at Google nonetheless saw the program in alarming terms. “They immediately heard drones and then they thought machine learning and automatic target recognition, and I think it escalated for them pretty quickly about enabling targeted killing, enabling targeted warfare,” said a former Google employee familiar with the internal discussions. Google is just one of the tech giants that the Pentagon has sought to enlist in its effort to inject AI into modern warfare technology. Among the others: Microsoft and Amazon. After Google’s announcement in June, more than a dozen large defense firms approached defense officials, offering to take over the work, according to current and former Pentagon officials. But Silicon Valley activists also say the industry cannot easily ignore the ethical qualms of tech workers. “There’s a division between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don’t agree with,” the former Google employee said. In an effort to bridge this gulf and dampen hard-edged opposition from AI engineers, the Defense Department has so far undertaken two initiatives. The first, formally begun in late June, was to create a Joint Artificial Intelligence Center meant to oversee and manage all of the military’s AI efforts, with an initial focus on PR-friendly humanitarian missions. It’s set to be run by Lieutenant General Jack Shanahan, whose last major assignment was running Project Maven. In a politically shrewd decision, its first major initiative is to figure out a way to use AI to help organize the military’s search and rescue response to natural disasters. “Our goal is to save lives,” Brendan McCord, one of the chief architects of the Pentagon’s AI strategy, said while speaking at a technical conference in October. “Our military’s fundamental role, its mission, is to keep the peace. It is to deter war and protect our country. 
It is to improve global stability, and it’s to ultimately protect the set of values that came out of the Enlightenment.” The second initiative is to order a new review of AI ethics by an advisory panel of tech experts, the Defense Innovation Board, which includes former Google CEO Eric Schmidt and LinkedIn cofounder Reid Hoffman. That review, designed to develop principles for the use of AI by the military, is being managed by Joshua Marcuse, a former adviser to the secretary of defense on innovation issues who is now executive director of the board. The review is set to take about nine months: the advisory panel will hold public meetings with AI experts while an internal Pentagon group considers the same questions, then forward recommendations to secretary of defense James Mattis about the ways that AI should or should not be injected into weapons programs. “This has got to be about actually looking in the mirror and being willing to impose some constraints on what we will do, on what we won’t do, knowing what the boundaries are,” Marcuse said in an interview. To make sure the debate is robust, Marcuse said that the board is seeking out critics of the military’s role in AI. “They have a set of concerns, I think really valid and legitimate concerns, about how the Department of Defense is going to apply these technologies, because we have legal authority to invade people’s privacy in certain circumstances, we have legal authority to commit violence, we have legal authority to wage war,” he said. Resolving those concerns is critical, officials say, because of the difference in how Washington and Beijing manage AI talent. China can conscript experts to work on military problems, whereas the United States has to find a way to interest and attract outside experts. “They have to choose to work with us, so we need to offer them a meaningful, verifiable commitment that there are real opportunities to work with us where they can feel confident that they’re the good guys,” Marcuse said. Despite his willingness to discuss potential future constraints on AI usage, Marcuse said he didn’t think the board would try to change the Pentagon’s existing policy on autonomous weapons that depend on AI, which was put in place by the Obama administration in 2012. That policy, which underwent a minor technical revision by the Trump administration in May 2017, doesn’t prevent the military from using artificial intelligence in any of its weapons systems. It mandates that commanders have “appropriate levels of human judgment” over any AI-infused weapons systems, although the phrase isn’t further defined and remains a source of confusion within the Pentagon, according to multiple officials there. It does, however, require that before a computer could be programmed to initiate deadly action, the weapons system that contains it must undergo special review by three senior Pentagon officials—in advance of its purchase. To date, that special review hasn’t been undertaken. 
In late 2016, during the waning days of the Obama administration, the Pentagon took a new look at the 2012 policy and decided in a classified report that no major change was needed, according to a former defense official familiar with the details. “There was nothing that was held up, there was no one who thought, ‘Oh we have to update the directives,’” the former official said. The Trump administration nonetheless has internally discussed making it clearer to weapons engineers within the military—who it fears have been reluctant to inject AI into their designs—that the policy doesn’t ban the use of autonomy in weapons systems. The contretemps in Silicon Valley over Project Maven at least temporarily halted that discussion, prompting the department’s leaders to try first to win the support of the Defense Innovation Board. But one way or another, the Pentagon intends to integrate more AI into its weaponry. “We’re not going to sit on the sidelines as a new technology revolutionizes the battlefield,” Marcuse said. “It’s not fair to the American people, it’s not fair to our service members who we send into harm’s way, and it’s not fair to our allies who depend on us.” The Center for Public Integrity is a nonprofit, nonpartisan, investigative newsroom in Washington, DC. More of its national security reporting can be found here. "
1,223
2,018
"Preparing for malicious uses of AI"
"https://openai.com/blog/preparing-for-malicious-uses-of-ai"
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Research Preparing for malicious uses of AI February 20, 2018 More resources Read paper Safety & Alignment , Responsible AI , Publication AI challenges global security because it lowers the cost of conducting many existing attacks, creates new threats and vulnerabilities, and further complicates the attribution of specific attacks. Given the changes to the threat landscape that AI seems to bring, the report makes some high-level recommendations that companies, research organizations, individual practitioners, and governments can take to ensure a safer world: Acknowledge AI’s dual-use nature : AI is a technology capable of immensely positive and immensely negative applications. We should take steps as a community to better evaluate research projects for perversion by malicious actors, and engage with policymakers to understand areas of particular sensitivity. As we write in the paper: “Surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm.” Some potential solutions to these problems include pre-publication risk assessments for certain bits of research, selectively sharing some types of research with a significant safety or security component among a small set of trusted organizations, and exploring how to embed norms into the scientific community that are responsive to dual-use concerns. Learn from cybersecurity : The computer security community has developed various practices that are relevant to AI researchers, which we should consider implementing in our own research. These range from “red teaming” by intentionally trying to break or subvert systems, to investing in tech forecasting to spot threats before they arrive, to conventions around the confidential reporting of vulnerabilities discovered in AI systems, and so on. Broaden the discussion : AI is going to alter the global threat landscape, so we should involve a broader cross-section of society in discussions. Parties could include those involved in the civil society, national security experts, businesses, ethicists, the general public, and other researchers. Like our work on concrete problems in AI safety , we’ve grounded some of the problems motivated by the malicious use of AI in concrete scenarios, such as: persuasive ads generated by AI systems being used to target the administrator of a security systems; cybercriminals using neural networks and “fuzzing” techniques to create computer viruses with automatic exploit generation capabilities; malicious actors hacking a cleaning robot so that it delivers an explosives payload to a VIP; and rogue states using omniprescent AI-augmented surveillance systems to pre-emptively arrest people who fit a predictive risk profile. 
We’re excited to start having this discussion with our peers, policymakers, and the general public; we’ve spent the last two years researching and solidifying our internal policies at OpenAI and are going to begin engaging a wider audience on these issues. We’re especially keen to work with more researchers who see themselves contributing to the policy debates around AI as well as making research breakthroughs. Authors: Jack Clark, Michael Page, Dario Amodei "
1,224
2,019
"Open Source Software: The Complete Wired Guide | WIRED"
"https://www.wired.com/story/wired-guide-open-source-software"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Klint Finley Business The WIRED Guide to Open Source Software Play/Pause Button Pause Radio Save this story Save Save this story Save When someone buys a new smartphone, often they're preoccupied with the camera specs or the size of the screen or its storage capabilities. It's easy to overlook one of the most foundational aspects of these sleek consumer gadgets: their operating systems. The world's most popular mobile operating system is Google's Android. It powers more than 86 percent of smartphones in the world. What's even more remarkable is that Android is based on the open source Linux operating system. That means anyone can view the code at the heart of the vast majority of smartphones, modify it, and, more important, share it with anyone else. This openness enables collaboration. Unlike, say, Microsoft Windows, which was developed and is maintained by a single company, Linux is developed and maintained by more than 15,000 programmers around the world. These programmers might work for companies that compete with each other, or they might volunteer to create something new that’s then given away. For free. Gratis. As crazy as that might sound, the open source way of building software is now embraced by the likes of IBM, which plans to pay $34 billion for open source company Red Hat , Microsoft, which paid $7.5 billion to acquire the code hosting and collaboration platform GitHub, and Walmart , which released its own open source software. Open source is even seeing applications in the next iteration of technology: AI. Google open sourced its artificial intelligence engine, TensorFlow, in 2015, enabling companies and researchers to build applications using some of the same software the search giant used to create tools that search photos, recognize spoken words, and translate languages. Since then, Dropbox has used TensorFlow to recognize text in scanned documents and photographs, Airbnb has used it to help categorize photos in its listings, and a company called Connecterra has used it to help dairy farmers analyze their cows' health. Why would Google give away something so central to its business? Because it hoped outside developers would make the software better as they adapted it to their own needs. And they have: Google says more than 1,300 outsiders have worked on TensorFlow. By making it open source, Google helped TensorFlow become one of the standard frameworks for developing AI applications, which could bolster its cloud-hosted AI services. In addition to garnering outside help for a project, open source can provide valuable marketing, helping companies attract and retain technical talent. Keep in mind that Google didn't give away the data that powers its AI applications. Just using TensorFlow won't magically allow you to build a search engine and advertising business that can compete with Google. 
So Google stands to benefit, but why would an outsider contribute improvements to TensorFlow? Let's say a company makes its own version of TensorFlow with unique elements, but keeps those elements private. Over time, as Google made its own changes to TensorFlow, it might become harder for that other company to integrate its changes with the official version; also, the second company would miss out on improvements contributed by others. In short, open source provides a way for companies to collaborate on technology that’s mutually beneficial. The open source software movement grew out of the related, but separate, "free software" movement. In 1983, Richard Stallman, at the time a programmer at the MIT Artificial Intelligence Laboratory, said he would create a free alternative to the Unix operating system, then owned by AT&T; Stallman dubbed his alternative GNU, a recursive acronym for "GNU's Not Unix." For Stallman, the idea of "free" software was about more than giving software away. It was about ensuring that users were free to use software as they saw fit, free to study its source code, free to modify it for their own purposes, and free to share it with others. Stallman released his code under a license known as the GNU General Public License, or GPL, which guarantees users those four software freedoms. The GPL is a "viral" license, meaning that anyone who creates software based on code licensed under the GPL must also release that derivative code under a GPL license.
Source code: The human-readable code that is translated, or "compiled," into the binary code that machines can read. When you buy software like Microsoft Office, you typically only get the binary code, which makes it hard to understand or modify the software.
Open source software: Software distributed with a license that allows anyone to use, view, modify, and share the software's source code.
GPL: The GNU General Public License, a software license that allows anyone to use, view, modify, and share a project's source code; but anyone who uses the code to create a derivative work must also provide the source code for that work under the GPL.
Apache: An open source web server, a software foundation, and a permissive license that, unlike the GPL, allows source code to be mixed into non-open source, commercial code.
Open core software: Commercial software built on open source software that also includes non-open source code.
Library: Usually smaller collections of code that can be used as building blocks for larger projects, saving developers from having to write common features, such as password authentication, from scratch.
Fork: A copy of a code base that serves as the basis for a distinct version of the software. Often forks are used by individuals or companies to customize software for their own needs. Other times, they become the foundations of separate projects. LibreOffice, for example, is a fork of OpenOffice.
GitHub: A popular service, now owned by Microsoft, for hosting code. Offers the ability to fork code bases with one click.
Importantly, the license doesn't forbid companies from selling copies of GNU software. 
As long as you allow your customers to share your code, you can charge as much as you want for your software. The phrase "free as in free speech, not free as in free beer" is often used to help explain this apparent contradiction. Other programmers soon followed Stallman's example. One of the most important was Linus Torvalds, the vitriolic Finnish programmer who created the Linux operating system in 1991. Linux is a "kernel," the core of an operating system that talks to the hardware and translates the basic input from your keyboard, mouse, or touchscreen into something the software can understand. GNU lacked a finished kernel at the time, so many GNU users combined GNU and Linux into a functional operating system. Bundles of the GNU operating system, Linux kernel, and other tools became known as GNU/Linux distributions; some purists still refer to Linux-based operating systems as "GNU/Linux." Soon, companies like Red Hat were making money selling support for open source technologies like Linux. Linux---or GNU/Linux if you prefer---became especially popular for running web servers and now runs 69.4 percent of web servers, according to data compiled by W3Techs. Alongside the rise of Linux and the web came several other free tools, including the Apache web server, MySQL database, and programming languages like Perl and PHP. Many used the GPL license, but others adopted more permissive licenses that, unlike the GPL, allowed companies to create proprietary products using their code. In time, tensions grew between those, like Stallman, who believed that all software should be free on ethical grounds, and more business-oriented developers who thought that freely sharing code was a better way to build software but not an ethical imperative. In 1998, a group met to discuss how to promote the idea of shared code and open collaboration. Worried that the term “free software” and Stallman’s more absolutist philosophy would make their ideas less palatable to businesses that wanted to keep some of their code proprietary, the group settled on the label "open source," coined by Christine Peterson, to distinguish its aims. During the 2000s, open source went truly mainstream. In 2004, programmer David Heinemeier Hansson released his web application programming framework Ruby on Rails, which quickly became one of the world’s most important web development tools, as well as the foundation for services like Twitter and Kickstarter. Meanwhile, Yahoo was funding the development of the open source data-crunching system Hadoop. After its release in 2006, other companies, including Facebook, Twitter, and eBay, began contributing to the project, helping demonstrate the value of inter-company collaboration. Sun Microsystems' $1 billion acquisition of MySQL in 2008 proved open source could be big business. That same year Google released its first Android phones, moving open source from the server to your pocket. Now open source is practically everywhere. Walmart uses open source software like the development platform Node, and it has opened up the code of its cloud management tool OneOps and its development platform Electrode. 
JP Morgan Chase open sourced its blockchain platform Quorum, on which its employees collaborated with the creators of the privacy-focused bitcoin alternative Zcash. Even Microsoft, whose former CEO once called Linux a "cancer," now uses and releases open source software such as its popular .NET programming framework. It even uses Linux to run parts of its cloud service Azure and has shared its own Linux tools with the community. Open source isn’t counterculture anymore. It’s the establishment. The rise of open source hasn't been without glitches. Despite the corporate world's embrace of open source software, many independent or startup-based projects still haven't figured out how to make money. Even the developers of software that’s widely used by major companies can struggle to raise funds to cover their costs or hire others. That can have serious consequences.
August 1969: Ken Thompson and Dennis Ritchie create the Unix operating system at AT&T's Bell Labs. It's not open source, but they make the source code available.
September 1983: Richard Stallman announces that he's working on a free alternative to Unix called GNU that won't require a license from AT&T.
August 1991: Linus Torvalds announces that he is "doing a (free) operating system (just a hobby, won't be big and professional like gnu)." That operating system would become known as Linux.
April 1995: Former WIRED web developer Brian Behlendorf and eight others release the first version of Apache web server---with bandwidth sponsored by WIRED. The project's permissive licensing helped win big corporations over to open source. Apache is still the most popular web server today.
February 1998: Christine Peterson introduces the term "open source" at a summit for promoting code sharing and collaboration.
August 1999: Red Hat, which sells support for Linux to companies, goes public with a successful IPO. It would go on to become the first open source company to rake in $1 billion in annual revenue. But its big payday was yet to come.
June 2001: Then-Microsoft CEO Steve Ballmer calls Linux a "cancer" in an interview with the Chicago Sun-Times.
July 2004: The first release of Ruby on Rails, the open source development platform used by countless startups, including Twitter during its early days.
January 2008: Sun acquires open source database maker MySQL for $1 billion.
October 2008: The first Android phone, the T-Mobile G-1, goes on sale, bringing the Linux operating system to the masses.
June 2012: As part of its long effort to rehabilitate relations with the open source world, Microsoft announces support for Linux on its cloud service Azure.
November 2014: Microsoft announces an open source version of its .NET programming framework.
October 2018: Database company MongoDB adopts a new license that restricts how cloud services can use its software amid a growing controversy over commercial licensing for open source software.
October 2018: IBM announces plans to buy Red Hat for $34 billion.
For example, in 2014, security researchers revealed serious vulnerabilities in two crucial open source projects: OpenSSL and Bash, which are part of many major operating systems. No software is free of potential security problems, but the fact that these issues went undetected for so long highlighted a big problem for open source: Many big-name open source projects rely on lesser-known open source components run by volunteers who have little time to fix problems and no money to hire security auditors. 
Some companies that have built businesses around open source products are adopting controversial new licensing schemes. In an effort to keep cloud computing services from selling competing services based on its code, MongoDB created a new license in 2018 that restricts how other companies can use its MongoDB Community Server. Other open source companies have adopted the Fair Source license, which requires companies with more than 15 employees to pay a fee to use software that uses the license, or the newer Commons Clause, which restricts how companies can commercialize the software. You can still view the source code from software released under these licenses, but they break with the free and open source software tradition of allowing users to do whatever they want with the code. Startups, meanwhile, are working on novel ways to turn a profit on open source. Red Hat makes money by selling support for its open source products, but that’s not feasible for every open source project. A company called Tidelift aims to sell support through a single subscription fee for a package of open source projects. Think of it as “Netflix for open source.” Solving these funding problems is crucial to the future of open source. But money isn’t the only problem. The open source workforce is even less diverse than the tech industry as a whole, according to a survey conducted in 2017 by GitHub. Half of the respondents had witnessed bad behavior—such as rudeness, name calling, or harassment—and said it was enough to keep them away from a particular project or community. Around 18 percent of survey respondents had experienced such bad behavior firsthand. That's a problem because working on open source projects is now an important part of landing a job in technology. If women and minorities are shut out of open source, then the technology industry as a whole becomes that much less diverse. One way many open source projects are trying to address the issue is through a code of conduct called the Contributor Covenant, which warns participants against personal attacks, harassment, or "other conduct which could reasonably be considered inappropriate in a professional setting." As common sense as these guidelines might sound, they've proved controversial among open source coders used to being judged solely on their code, not their professionalism—or lack thereof. The author of the Contributor Covenant is still periodically harassed. Still, there are signs of progress. In 2018, Torvalds, long accused of creating a toxic environment in the Linux community, apologized for his past behavior, and the Linux project adopted the Contributor Covenant. Inclusion isn’t just an ethical issue for open source. Diverse teams build better products. And making better software is what open source is all about.
Is Stallman Stalled? WIRED profiled Richard Stallman and the free software movement in our first issue in 1993.
Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine: Google has a long history of releasing open source code, including the AI code that’s part of its software empire. This wasn't an entirely altruistic decision: Google expects to benefit from other companies advancing the state of AI.
Microsoft Says It's in Love With Linux. Now It's Finally Proving It: How Microsoft went from being the poster child of proprietary software to an open source proponent by releasing one of its flagship developer-centric products as open source. 
The Internet Is Broken, and Shellshock Is Just the Start of Our Woes: How the massive security bug called Shellshock lay undiscovered for more than two decades in the open source program Bash, which is included with MacOS and most Linux-powered operating systems---and why it matters for the internet.
Open Source Won. Now What? Red Hat rakes in billions in revenue every year, but many other open source companies have struggled. Meanwhile, volunteer developers burn out, and serious bugs go unaddressed.
Giving Open Source Projects Life After a Developer's Death: When the developers of open source projects pass away or burn out, it can have ripple effects across many projects that rely on those developers' code. Here's how the community is learning to handle these situations.
The Woman Bringing Civility to Open Source Projects: Coraline Ada Ehmke wrote the Contributor Covenant, a code of conduct for open source projects, in 2014. She has faced harassment ever since, but many of the largest open source projects have adopted either her covenant or a similar code of conduct.
Last updated April 23, 2019. Enjoyed this deep dive? Check out more WIRED Guides. "
1,225
2,016
"Google's Hardware Endgame? Making Its Very Own Chips | WIRED"
"https://www.wired.com/2016/02/googles-hardware-endgame-making-its-very-own-chips"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Google's Hardware Endgame? Making Its Very Own Chips Getty Images Save this story Save Save this story Save Google made some notable news yesterday---by doing nothing. Company representatives did not appear on stage in a lecture hall at the corporate headquarters of chipmaker Qualcomm. Typically, this would be no big deal. Google isn't in the habit of doing such things. But earlier this month, Bloomberg reported that Google would somehow make its presence felt at Qualcomm HQ, in front of Qualcomm stockholders, offering its "stamp of approval" for a new kind of Qualcomm chip. That would have been a big deal. Qualcomm is the world's largest smartphone chip maker, but this wasn't about smartphone chips. This was about chips for computer servers, the machines that deliver all those services and emails and other data to your phone from across the Internet. And if we're talking about Google---the world's largest Internet company---server chips are no small thing. But Google didn't show. According to one person familiar with the matter who asked not to be identified because he wasn't authorized to speak to the press, Google was indeed slated to deliver some sort of message, but pulled out. It's a back-and-forth that shows how important the chip market is to Google, how important Google is to the world's chipmakers, and how both Google and all those chipmakers are so carefully working behind the scenes to tip the scales in their favor. That includes the chip industry's behemoth: Intel. In delivering its many services, from Search to Gmail to Maps, to tens of millions of people across the globe, Google operates a network of data centers stretching from Oregon to Finland to Taiwan. Google engineers custom design the tens of thousands of servers that drive these computing hives. And Google buys all the chips for these servers directly from the companies making them. Right now, that means Google buys an enormous number of chips from Intel, the chipmaker that dwarfs all others. And we mean enormous. In late 2012, Intel bigwig Diane Bryant told us that Google bought more server chips than all but five companies on earth. That's remarkable when you consider that everyone else on that list actually sells servers , including Dell and HP. Google builds servers only for itself. According to Shane Rau, an analyst with research firm IDC, Google now accounts for 5 percent of all server chips sold worldwide. Over the course of a recent year-long period, he says, Google bought about 1.2 million chips. So, if it looks like Google may start buying chips from someone other than Intel, industry insiders sit up and pay attention. Now that Google is serious about cloud computing--- inviting the world's business to run all their software on its state-of-the-art infrastructure ---its already massive slice of the chip market will only grow. As more companies move onto the Google cloud, they'll buy fewer servers from the likes of HP and Dell. 
Intel wants to keep that massive Googly slice of the market---while newer rivals like Qualcomm and Applied Micro and Cavium are looking to take it away. But in the end, Google may take things in yet another direction. It may design its own chips. Google certainly has an interest in seeing other chipmakers challenge Intel's dominance. Today, according to IDC, Intel controls 99 percent of the server chip market. If Google can buy chips from more companies, prices are bound to drop. Simply by flirting with other players, Google can encourage Intel to keep prices down. But in addition to lower prices, Google may have an interest in server chips that are more like those that Qualcomm makes for smartphones---that is, chips that consume extremely small amounts of power. You may think Google would need big beefy chips to run its sweeping Internet empire, but the trick to running so vast an operation is finding ways of breaking tasks into tiny pieces and spreading them across many modest pieces of hardware. That way, one failure doesn't really matter. The others can pick up the slack. Plus, this distributed model is more efficient. If you run an empire as big as Google's network, you must keep costs---meaning power usage---as low as possible. That's why Qualcomm is making server chips more like energy-sipping smartphone chips than the old-school energy-gulping variety. It knows Google is interested. It also knows that other Internet giants like Amazon and Facebook are interested. Facebook has made no secret of this. That's why Qualcomm wanted Google at its event: to show the rest of the Internet that the big dog was interested. This isn't the first time Google has flirted with alternative chip architectures. And for years, rumors have swirled indicating that Google may end up designing its own chips. That isn't beyond the realm of possibility. After all, Google designs so much other hardware, from servers to storage systems to networking gear. And unlike Intel, chipmaker ARM will license its basic processor designs to anyone so they can further customize them. Qualcomm makes custom ARM designs. Google could, too. Just this week, the rumors started flying again. Google contributions to an open source software project indicated that it has built its own chip, and some people got very excited. As it turns out, this is merely a chip for network interface cards---cards that connect servers to a larger network. Even if Google did design such a chip, it's most likely a small development. It's "unlikely to be much in the way of revolutionary architecture," says JR Rivers, who once helped design networking gear at Google and now runs a networking startup called Cumulus. These less sophisticated chips are quite different from a CPU, the chip that represents the brain of a server. But it shows how Google thinks about data center hardware: It's always looking to gain any advantage. At the moment, server chips based on the ARM design are still maturing. Rau, the IDC analyst, says they account for less than 1 percent of the market, and at this point, he suspects, companies like Google are buying these chips only to experiment with them. 
"I don't think these chips are in major volume in big cloud companies---yet," he says. So, Google is still very much dependent upon Intel for its processors---as much as Intel is dependent upon Google. That means Google must carefully balance the situation. It wants the Qualcomms of the world to succeed, to build ARM chips it can use in bulk. But it doesn't necessarily want to tick off Intel. Thus, this week's no-show. Right now, it's status quo. But the balance may shift. Senior Writer X Topics data Enterprise Google Intel microchips qualcomm Will Knight Gregory Barber Will Knight Steven Levy Kari McMahon Will Knight Will Knight Will Knight Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "
1,226
2,015
"Dell. EMC. HP. Cisco. These Tech Giants Are the Walking Dead | WIRED"
"https://www.wired.com/2015/10/meet-walking-dead-hp-cisco-dell-emc-ibm-oracle"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Dell. EMC. HP. Cisco. These Tech Giants Are the Walking Dead Getty Images Save this story Save Save this story Save HP. Cisco. Dell. EMC. IBM. Oracle. Think of them as the walking dead. Oh, sure, they'll shuffle along for some time. They'll sell some stuff. They'll make some money. They'll command some headlines. They may even do some new things. But as tech giants, they're dead. This was driven home in wonderfully complete fashion this past Wednesday, thanks to a trio of events. If you don't follow the seemingly uninteresting, enormously lucrative, and, in fact, endlessly fascinating world of enterprise computing---computing that helps run big businesses---you may have missed them all. But they were big news in the enterprise world. And together, they show just how dead those giants really are. First, Pure Storage, a Silicon Valley startup that sells a new kind of hardware for storing large amounts of digital data, made its Wall Street debut. Later in the day, The Wall Street Journal reported that big-name computer tech company Dell was in talks to buy EMC , a storage outfit that's much older and much larger than Pure Storage ( the deal was announced this morning ). And during an event in Las Vegas, Amazon introduced a sweeping collection of new cloud computing services that let you juggle vast amounts of data without setting up your own hardware. That may seem like a lot to wrap your head around, but the story is really quite simple. For decades, if you were building a business and you needed to store lots o' data, EMC was your main option. You gave the company lots o' money, and it gave you some hefty machines packed with hard disks and some software for storing data on those hard disks. The trick was that you could only get that software from EMC. So, anytime you wanted to store more data, you gave EMC more money. This made the company very rich. But then little companies like Pure Storage came along and sold storage gear built around flash, a much faster alternative to hard drives, letting you juggle more data more quickly and, potentially, for less money. But more importantly, cloud computing companies like Amazon came along, letting you store data on their machines. These machines sat on the other side of the Internet, but you could access them from anywhere, at any time. That meant you didn't have to buy hardware from EMC or anyone else. That's the subtext as EMC, once a giant of the tech world, merges with Dell, a company that isn't exactly on the rise. Dell, in fact, suffers from the same conundrum as EMC---a conundrum that grew so onerous, Dell went private. This conundrum also plagues HP. And IBM. And Cisco. And Oracle. As Bloomberg Business feature writer, Elon Musk biographer, and unparalleled Silicon Valley hack Ashlee Vance puts it : "Why don't IBM, HP, EMC, Dell and Cisco all merge and get this thing over with?" 
What is this conundrum? Well, we'll let Vance explain that too. When someone asked what we should call that IBM-HP-EMC-Dell-Cisco merger, his response was wonderfully descriptive. He suggested we call the company Fucked By The Cloud. The Cloud. The term has taken on so many meanings in recent years. But keep in mind: most of these meanings come from IBM, HP, EMC, Dell, Cisco, and other companies that don't want to be fucked by it. The best way to think about The Cloud is this: It's the way that the giants of the Internet---aka Amazon, Google, and Facebook---build their businesses. These companies built Internet businesses so large---businesses that ran atop hundreds, thousands, even tens of thousands of computers---they eventually realized they couldn't build them with hardware and software from established vendors. They couldn't use traditional storage gear from EMC. They couldn't use servers from Dell and HP and IBM. They couldn't use networking gear from Cisco. They couldn't use databases from Oracle. It was too expensive. And it couldn't scale. That's another buzzword. It means "helping an online operation achieve world domination." So, Amazon and Google and Facebook built a new breed of hardware and software that would scale quite nicely. They built their own servers, their own storage gear, their own networking gear, their own databases and other software for juggling information across all this hardware. They streamlined their hardware to make it less expensive, and in some cases, they sped it up, moving from hard disks to flash drives. They built databases that juggled data using the memory subsystems of dozens, hundreds, or even thousands of machines---subsystems that can operate even faster than flash. But they didn't keep this stuff to themselves. They shared it. Now, all the stuff that Amazon and Google and Facebook built is trickling down to the rest of the world. That's important, because, as time goes on and the Internet expands, so many other businesses will scale like Amazon and Google and Facebook. Many already are. Amazon is now offering up its own infrastructure to this world of businesses. Literally. That's what a cloud computing service is. Google is doing the same. And Facebook, more than anyone, has released both its software and its hardware designs to the world at large, so that others can build their own operations in much the same way. This is called open source. With help from these open source designs and the general example of the Internet giants, an army of up-and-coming enterprise vendors is offering hardware and software that operates a lot like the stuff Amazon and Google and Facebook have built. This includes not only storage vendors like Pure Storage, but server makers like Quanta and networking outfits like Cumulus Networks and Big Switch.
Myriad software makers, such as MemSQL and MongoDB, sell databases based on designs from Facebook and Google and Amazon. All this is why IBM, HP, EMC, Dell, and Cisco are fucked. Yes, they can offer their own cloud computing services. They can offer software and hardware that works like the stuff Facebook has open sourced. And to a certain extent, they have. But the competition now stretches far and wide. And if they go too far with new cloud services and products, they'll cannibalize their existing businesses. This is called the innovator's dilemma. Yes, this conundrum plagues Oracle too. The Oracle empire is funded by expensive databases that don't scale. The difference is that Oracle has built a sales team that can force businesses into buying anything---even if it makes no economic sense. This is called The Iron Fist of Larry Ellison. Oh, and it plagues another venerable tech company: Microsoft. The difference here is that Microsoft has more quickly and adeptly moved into the world of cloud computing. Like Amazon and Google and Facebook, it runs its own massive Internet services, including Bing. That means it too has been forced to build its own data center hardware and software. And it has done an unusually good job of challenging Amazon with its own cloud computing services. This is called Microsoft Azure. Of course, Microsoft suffers from other problems too. One of its biggest money makers is the Windows operating system, for instance, and a relatively small number of people use Windows on smartphones, tablets, and other devices of the future. This is called Fucked By Mobile. Who's not fucked? Well, Pure Storage is looking better than EMC. That said, its IPO wasn't exactly a home run. And it still sells stuff that you have to install in your own data center. Gear like this will always have a place in the world. But the future of enterprise computing, it has become increasingly clear, lies with cloud computing services. And that means it lies with Amazon. Amazon is by far the world's largest cloud computing operation. Its cloud services are where so many businesses and coders go to run software and store data. And last week, the company continued its efforts to take this model still further---to offer up not just raw processing power and raw storage but also its own databases and data analytics tools and other software services. If you use Amazon, you don't need servers and other hardware from Dell and HP and EMC and Cisco---and you don't need databases from Oracle and IBM. Luckily, Amazon has some competition in the cloud computing world. That would be Google and Microsoft. The others are also-rans. HP and Oracle and IBM and the rest will imitate Amazon. But they're too far behind---and carry too much baggage---to catch up. Google and Microsoft can put some heat on Amazon. In fact, Microsoft is further along than Google. So, in short, we're really pulling for Fucked By Mobile. Update: This story has been updated with the news that Dell and EMC have indeed merged.
"
1,227
2,015
"Microsoft Knows Exactly Where Intel's Future Is | WIRED"
"https://www.wired.com/2015/06/microsoft-knows-exactly-intels-future"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Microsoft Knows Exactly Where Intel's Future Is Getty Images Save this story Save Save this story Save This week, Microsoft researcher Doug Burger received more than his usual share of email. On Monday, Intel told the world it was spending $16.7 billion in cash to acquire a company called Altera. And perhaps more than anyone, Burger understands why this deal makes sense for the world's largest chip maker. At Microsoft, he cooked up a new way of powering the company's Bing search engine using the low-power programmable chips sold by Altera, pairing them with traditional microprocessors from Intel. Asked how he views the Intel acquisition, Burger is understandably coy. "For me," he says, "it meant I answered a lot of extra mail." But he sees a very real future for the unexpectedly effective chips offered by Altera. As he points out, Microsoft has publicly said that Altera's chips can boost the speed of Bing in big ways, and that they're suited for use inside the "deep learning" systems that are rapidly giving online services the ability to identify faces in photos and recognize speech. In acquiring Altera, Intel is embracing a movement poised to reshape so many of the data centers that underpin the world's online services. IBM is also exploring the use of Altera chips , as are researchers at the University of Michigan. On the whole, these chips aren't as powerful as the CPUs that traditional drive computer servers, but engineers can program them to perform specific tasks with a new level of efficiency. "People are doing so many experiments," Burger says, "trying to figure out the right combination of algorithm and platform that gives you the best results." Altera makes what are called field programmable gate arrays, or FPGAs. Such chips have been around for years. Typically, engineers use them as a way of prototyping new processors---trying out new designs. Companies also put them into specialized hardware, including gear that routes data across computer networks. But a few years ago, Burger realized that if Microsoft put them into servers, they could also streamline the operation of a sweeping online service like Bing. Last summer, Microsoft unveiled a pilot project---Project Catapult---where it tested a network of about 1,600 servers that pair Intel CPUs with Altera FPGAs. According to Burger , the Altera chips could process certain Bing algorithms about 40 times faster than traditional CPUs, providing a boost that could make Microsoft's search engine twice as fast on the whole. Basically, chips help decide how to rank the items that turn up on Bing's results page. At the time, Burger told WIRED that Microsoft would move Bing to this kind of hardware in early 2015. He now says the company has yet to make the transition, but it will. "My team is really heads-down right now," he says, "trying to make that happen for live customer traffic in the data center." The move is just one way that the giants of the Internet are changing the hardware ---including the chips---that drive their services. 
Google, Facebook, Chinese search company Baidu, and others are now using GPUs---chips originally built for rendering graphics---to power the latest in artificial intelligence tools, including speech recognition, image recognition, and natural language processing. And like Microsoft, various companies are exploring other low-power chips that can reduce costs and perhaps boost speeds in the data centers. Google and Facebook are looking at silicon based on the ARM chips that power most smartphones. According to a recent study from the University of Michigan, if you operate a voice recognition service akin to Apple Siri on traditional hardware, it requires about 168 times more machines, space, and power than a text-based search engine along the lines of Google Search. GPUs and FPGAs, the study shows, can shrink that gap. "It's going to be absolutely critical that future data center designs include GPUs or FPGAs," Michigan professor Jason Mars recently told WIRED. "You can get at least an order-of-magnitude improvement." Though Microsoft initially described Project Catapult as a way of boosting the performance of Bing---a good old text-based search engine---the company is also exploring FPGAs as a way of running voice recognition tools and other artificially intelligent systems, as it described in a recent research paper. Like GPUs, the company believes, FPGAs are suited for use with "neural nets"---networks of machines that approximate the networks of neurons in the human brain---used to power services that recognize images and spoken words. "The results in that paper were pretty compelling," Burger says. According to Burger, FPGAs are better suited to a much wider variety of tasks than GPUs. "The FPGA is less efficient for the stuff that's exactly suited to GPUs," he explains, "but for everything else, they can be more efficient, because you can customize the pipeline." He points out, however, that programming an FPGA may require more work and new engineering skills. "The FPGA solutions are harder to change than software," he says. "A really, really good FPGA designer can bring up some of these stacks pretty quickly. But there tend to be, I think, fewer people that do that really well than there are GPU programmers." As the Microsofts and the Googles began to explore these new types of chips, Intel was very much on the outside looking in. GPUs are made by companies like nVidia. Burger calls Altera and its rival Xilinx "the Coke and the Pepsi" of FPGAs. But things are changing. Intel sees where the market is headed, and it wants to be there. The company has already built experimental motherboards that include both Intel CPUs and Altera FPGAs, and after agreeing to acquire Altera, it intends to offer such boards "as highly customized, integrated products," even as it works to "enhance Altera's products through design and manufacturing improvements." Burger declines to discuss the Intel-Altera deal, except to say that with Intel controlling a majority of the worldwide market for server chips, this is sure to create a "very interesting dynamic" in the world of FPGAs. Intel says it can better integrate CPUs with FPGAs.
But in a way, the deal also creates less competition in the marketplace. As Burger puts it, it's no longer Altera's Coke to Xilinx's Pepsi. Citing various analyst reports---"I'm not sharing my opinion," he says---Burger does indicate that the deal is a sign of an even larger transformation in the chip world. As we approach the demise of Moore's Law---the notion that raw chip power will double every 18 months---chip manufacturers like Intel must find new ways of moving their business forward. If they can't build faster CPUs, they can at least offer a new breed of chip. As Microsoft is using it, that's what the FPGA amounts to. "
1,228
2,016
"Concrete AI safety problems"
"https://openai.com/blog/concrete-ai-safety-problems"
"Close Search Skip to main content Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Search Navigation quick links Log in Try ChatGPT Menu Mobile Navigation Close Site Navigation Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Safety Company About Blog Careers Residency Charter Security Customer stories Quick Links Log in Try ChatGPT Search Research Concrete AI safety problems We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around ensuring that modern machine learning systems operate as intended. June 21, 2016 More resources Read paper Safety & Alignment , Robustness , Publication We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around ensuring that modern machine learning systems operate as intended. (The problems are very practical, and we’ve already seen some being integrated into OpenAI Gym. ) Advancing AI requires making AI systems smarter, but it also requires preventing accidents—that is, ensuring that AI systems do what people actually want them to do. There’s been an increasing focus on safety research from the machine learning community, such as a recent paper from DeepMind and FHI. Still, many machine learning researchers have wondered just how much safety research can be done today. The authors discuss five areas: Safe exploration. Can reinforcement learning (RL) agents learn about their environment without executing catastrophic actions? For example, can an RL agent learn to navigate an environment without ever falling off a ledge? Robustness to distributional shift. Can machine learning systems be robust to changes in the data distribution, or at least fail gracefully? For example, can we build image classifiers that indicate appropriate uncertainty when shown new kinds of images, instead of confidently trying to use its potentially inapplicable learned model? Avoiding negative side effects. Can we transform an RL agent’s reward function to avoid undesired effects on the environment? For example, can we build a robot that will move an object while avoiding knocking anything over or breaking anything, without manually programming a separate penalty for each possible bad behavior? Avoiding “reward hacking” and “ wireheading ”. Can we prevent agents from “gaming” their reward functions, such as by distorting their observations? For example, can we train an RL agent to minimize the number of dirty surfaces in a building, without causing it to avoid looking for dirty surfaces or to create new dirty surfaces to clean up? Scalable oversight. Can RL agents efficiently achieve goals for which feedback is very expensive? For example, can we build an agent that tries to clean a room in the way the user would be happiest with, even though feedback from the user is very rare and we have to use cheap approximations (like the presence of visible dirt) during training? The divergence between cheap approximations and what we actually care about is an important source of accident risk. 
Many of the problems are not new, but the paper explores them in the context of cutting-edge systems. We hope they'll inspire more people to work on AI safety research, whether at OpenAI or elsewhere. We're particularly excited to have participated in this paper as a cross-institutional collaboration. We think that broad AI safety collaborations will enable everyone to build better machine learning systems. Let us know if you have a future paper you'd like to collaborate on! Authors: Paul Christiano, Greg Brockman. "
1,229
2,014
"AI star Andrew Ng announces departure from Chinese tech giant Baidu - The Verge"
"https://www.theverge.com/2017/3/22/15020064/baidu-ai-andrew-ng-resigns"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech / Artificial Intelligence / Business AI star Andrew Ng announces departure from Chinese tech giant Baidu AI star Andrew Ng announces departure from Chinese tech giant Baidu / Ng founded Google Brain By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge. | Share this story The AI ambitions of Chinese tech giant Baidu have received a setback this morning, with the news that the company’s chief scientist, Andrew Ng, is leaving the firm in April. Ng moved to Baidu in 2014 from online education platform Coursera after previously founding the Google Brain project, and is one of the leading voices in the field of artificial intelligence. He announced his forthcoming departure from Baidu today, writing in a blog post on Medium that he will continue his work in AI “to shepherd in this important societal change.” “Baidu’s AI is incredibly strong, and the team is stacked up and down with talent; I am confident AI at Baidu will continue to flourish,” writes Ng. “After Baidu, I am excited to continue working toward the AI transformation of our society and the use of AI to make life better for everyone.” Baidu has emerged as one of the world’s leading AI companies, with CEO Robin Li declaring in March that “the era of mobile internet has ended” and that the firm will now “aggressively invest in AI.” According to a report from Bloomberg , Baidu has a research budget of 20 billion yuan or $2.9 billion, with most of this going toward AI. The company’s machine learning group numbers more than 1,300 engineers, and produces both research and commercial products in a wide range of areas, including self-driving cars , facial recognition , and medical chatbots. It’s not known where Ng’s work will take him next, but it’s clear he’s still heavily invested in the AI industry. He writes: “Just as electricity transformed many industries roughly 100 years ago, AI will also now change nearly every major industry — healthcare, transportation, entertainment, manufacturing — enriching the lives of countless people. I am more excited than ever about where AI can take us.” Sam Altman fired as CEO of OpenAI OpenAI board in discussions with Sam Altman to return as CEO Windows is now an app for iPhones, iPads, Macs, and PCs Screens are good, actually What happened to Sam Altman? Verge Deals / Sign up for Verge Deals to get deals on products we've tested sent to your inbox daily. From our sponsor Advertiser Content From More from Tech The latest AI copyright lawsuit involves Mike Huckabee and his books Amazon, Microsoft, and India crack down on tech support scams Amazon eliminated plastic packaging at one of its warehouses Amazon has renewed Gen V for a sophomore season Advertiser Content From Terms of Use Privacy Notice Cookie Policy Do Not Sell Or Share My Personal Info Licensing FAQ Accessibility Platform Status How We Rate and Review Products Contact Tip Us Community Guidelines About Ethics Statement The Verge is a vox media network Advertise with us Jobs @ Vox Media © 2023 Vox Media , LLC. All Rights Reserved "
1,230
2,019
"Scientists Discover Nearly 200,000 Kinds of Ocean Viruses | WIRED"
"https://www.wired.com/story/scientists-discover-nearly-200000-kinds-of-ocean-viruses"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jonathan Lambert Science Scientists Discover Nearly 200,000 Kinds of Ocean Viruses Michelle Yun/Quanta Magazine; source: brgfx Save this story Save Save this story Save Every time you swallow a mouthful of seawater while swimming at the beach, you’re downing about as many viruses as there are people in North America. However, despite the staggering abundance of marine viruses—and the key role that these infectious agents seem to play in global processes like the carbon cycle —scientists still know relatively little about the variety of viruses that are out there. In 2015 a team documented 5,476 distinct kinds of viruses in the ocean. In 2016 the same team updated its count to 15,222. But in a study published this week in Cell , that number skyrockets to 195,728 distinct viral populations, a more than twelvefold increase. “This is a pretty amazing study,” said Louis-Marie Bobay , a microbial genomicist from the University of North Carolina-Greensboro, who was not involved in the work. “We know so little about viral ecology in much of the ocean, and this is some of the most impressive, and global, data ever collected.” The twelvefold leap was enabled by an ambitious global sampling expedition and more sophisticated genomic analysis. Although the oceans cover 70 percent of our planet, until a few years ago most knowledge of marine viral diversity came from only a few well-studied locations. That changed with the Tara Oceans project, which sought a more complete inventory of marine microbial and viral diversity by sampling all over the globe. The schooner Tara has made its way around the ocean, collecting samples from the surface to the depths and from pole to pole. The new study included samples from 43 locations in the Arctic that weren’t used in the 2015 and 2016 studies. About 40 percent of the novel virus populations came from the new Arctic samples. The rest came from reanalysis of Tara samples used for the earlier studies. “The algorithms we use to assemble viral genomes out of chunks of DNA got much, much better,” said Ann Gregory , a microbial ecologist at the Catholic University of Leuven in Belgium and one of the lead authors of the study. As well as piecing together strands of DNA out of fragments, Gregory and her colleagues had to settle on a way to classify the variety of virus genomes they were seeing. Defining a viral “species” is controversial, as viruses reproduce asexually and frequently swap DNA with one another and their hosts. 
Because viruses don't contain the necessary machinery to replicate independently, some biologists do not consider viruses even fully "alive." Instead of species, Gregory classified the viruses into "populations" in which "there's more gene flow within a group than between groups of viruses." If sequenced viruses shared at least 95 percent of their DNA, she called them members of the same discrete population. This method yielded nearly 200,000 populations. About 90 percent of them couldn't be mapped onto any known viral taxonomy, making them totally new to science. And, though viruses aren't traditionally classified into genera, like Homo for humans or Staphylococcus for staph bacteria, Gregory concluded that the diversity of the populations they sampled was on the order of many new genera. Moreover, the researchers inferred the existence of five community-level groups of viruses that mapped onto distinct marine ecological zones based on temperature and depth: Arctic, Antarctic, temperate and tropical surface, temperate and tropical subsurface, and deep ocean. Within the genomes of these communities, the researchers found evidence of genetic adaptation to each ecological zone. "Temperature was the biggest predictor of community structure," said Ahmed Zayed, a graduate student at Ohio State University who co-led the analysis. Varying temperatures support different kinds of microbial host communities, Zayed explained, and viruses adapt accordingly. [Photo: The schooner Tara in the Arctic. Anna Deniaud/Fondation Tara Océan] Globally, the observed patterns of biodiversity among viruses clash somewhat with established ecological trends. "There's this paradigm that diversity is highest at the equator, and lessens as you move towards the poles," Zayed said. The researchers did find increased diversity at the equator, but they also found a surprising amount of diversity in the Arctic. "We were surprised to see the Arctic as a biodiversity hotspot, which is particularly relevant since these waters are among the fastest-changing on the planet due to climate change," said Matthew Sullivan, a microbiologist at Ohio State and the senior author of the study. Gregory said more research needs to be done to understand why the Arctic is so diverse, but she thinks it might have to do with the smaller host cells that live in these chilly waters. "Smaller hosts means more hosts, which might mean more opportunity for viruses to diversify." As for whether the researchers expect another huge jump in varieties a few years from now, Sullivan thinks not. "Do I think there is more to discover?
Sure, but I'm hopeful at this point that we've largely captured the abundant viruses that we can with this method," he said, adding, "at least until we get into totally new environments with totally different selective pressures." According to Curtis Suttle, a microbial ecologist at the University of British Columbia, viruses play a major role in global biogeochemical cycles, including the carbon cycle, whereby carbon moves between Earth's biosphere and atmosphere. "I've been trying to make the case that marine viruses are crucially important for a long time," said Suttle, who was not involved in the new study. "Getting this kind of data out into the community is hugely important to understanding the role of viruses in global processes." Suttle explained that the oceans currently absorb approximately half of the carbon emissions caused by humans, and the amount of carbon dioxide absorbed continues to rise. Viruses affect the level of saturation: According to Suttle, anywhere from 20 to 40 percent of the global bacterial population is killed every day by viruses. When a bacterium is killed by a viral infection, its cell wall explodes. "All the carbon that made that bacteria gets released into the oceans," he said, and some of the carbon ends up being sequestered deep in the ocean. Some scientists have speculated that viruses could someday be used to tweak the carbon cycle and reduce the amount of carbon dioxide in the atmosphere, according to Suttle. Zayed, who became interested in viruses while studying phage therapy as an alternative to antibiotics for treating infections, calls this potentially risky geoengineering scheme "phage therapy for the environment." Whether the viral discovery has practical applications or not, Melissa Duhaime, a microbial ecologist at the University of Michigan, is excited by the sheer "cool factor" of the new study. "When you first begin looking at new data like this, it's like landing on Mars and looking around for the first time," Duhaime said, "but a Mars with little critters never described before staring back at you." Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
"
1,231
2,017
"The Lone Star Tick That Gives People Meat Allergies May Be Spreading | WIRED"
"https://www.wired.com/story/lone-star-tick-that-gives-people-meat-allergies-may-be-spreading"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Megan Molteni Science Oh, Lovely: The Tick That Gives People Meat Allergies Is Spreading Alamy Save this story Save Save this story Save First comes the unscratchable itching, and the angry blossoming of hives. Then stomach cramping, and—for the unluckiest few—difficulty breathing, passing out, and even death. In the last decade and a half, thousands of previously protein-loving Americans have developed a dangerous allergy to meat. And they all have one thing in common: the lone star tick. Red meat, you might be surprised to know, isn’t totally sugar-free. It contains a few protein-linked saccharides, including one called galactose- alpha -1,3-galactose, or alpha-gal, for short. More and more people are learning this the hard way, when they suddenly develop a life-threatening allergy to that pesky sugar molecule after a tick bite. Yep, one bite from the lone star tick—which gets its name from the Texas-shaped splash of white on its back—is enough to reprogram your immune system to forever reject even the smallest nibble of perfectly crisped bacon. For years, physicians and researchers only reported the allergy in places the lone star tick calls home, namely the southeastern United States. But recently it’s started to spread. The newest hot spots? Duluth, Minnesota, Hanover, New Hampshire, and the eastern tip of Long Island, where at least 100 cases have been reported in the last year. Scientists are racing to trace its spread, to understand if the lone star tick is expanding into new territories, or if other species of ticks are now causing the allergy. The University of Virginia is deep in the heart of lone star tick country. It’s also home to a world-class allergy research division, headed up by immunologist Thomas Platts-Mills. He’d been hearing tales of the meat allergy since the ’90s—people waking up in the middle of the night after a big meal, sweating and breaking out in hives. But he didn’t give it much thought until 2004, when he heard about another group of patients all suffering from the same symptoms. This time, it wasn’t a plate of pork chops they shared; it was a new cancer drug called cetuximab. The drug worked, but curiously, patients that lived in the southeast were 10 times as likely to report side effects of itching, swelling, and a dangerous drop in blood pressure. Related Stories public health Megan Molteni public health Roxanne Khamsi Health Lizzie Wade Platts-Mills teamed up with cetuximab’s distributor, Bristol-Myers Squibb, and began comparing patient blood samples. He discovered that all the patients who experienced an allergic reaction had pre-existing antibodies to alpha-gal, and cetuximab was full of the stuff, thanks to the genetically modified mice from which it was derived. With that mystery solved, Platts-Mills turned to figuring out what made patients so sensitive to alpha-gal. The best hint he had was the geographic overlap between the cetuximab patients and previously reported meat allergies. The area perfectly matched where people came down with Rocky Mountain spotted fever—a disease carried by the lone star tick. 
But it wasn't until Platts-Mills and two of his lab members came down with tick-induced meat allergies of their own that they made the connection. Over the next few years Platts-Mills and his colleague Scott Commins screened more meat allergy patients and discovered that 80 percent reported being bitten by a tick. What's more, they showed that tick bites led to a 20-fold increase in alpha-gal antibodies. Since ethics standards prevented them from attaching ticks to randomized groups of patients, this data was the best they could do to guess how meat allergy arises. Something in the tick's saliva hijacks humans' immune systems, red-flagging alpha-gal, and triggering the massive release of histamines whenever red meat is consumed. Researchers are still trying to find what that something is. Commins has since moved to the University of North Carolina, where he's injecting mice with lone star tick extracts to try to understand which molecules are setting off the alpha-gal bomb. It's tricky: Tick saliva is packed with tons of bioactive compounds to help the parasite feed without detection. One of them might be an alpha-gal analogue—something similar-but-different-enough in shape that it sets off the human immune system. But it could also be a microbe—like a bacteria or virus—that triggers the response. Some have even suggested that residual proteins from the ticks' earlier blood meals could be the culprit. Whatever it is, allergy researchers will be paying attention. Because, as far as anyone can tell, alpha-gal syndrome seems to be the only allergy that affects all people, regardless of genetic makeup. "There's something really special about this tick," says Jeff Wilson, an asthma, allergy, and immunology fellow in Platts-Mills' group. Usually a mix of genes and environmental factors combine to create allergies. But when it comes to the lone star tick it doesn't matter if you're predisposed or not. "Just a few bites and you can render anyone really, really allergic," he says. In the meantime, Platts-Mills, Commins, and Wilson are busy communicating the scale of the public health problem. Every day they check local news headlines to log new cases of catastrophic hamburger aversion, and spend hours on the phone gathering the latest intel from allergy clinics and academic centers around the country. They're building the first real red meat allergy incidence map of the US—because state health departments aren't required to report alpha-gal syndrome to the Centers for Disease Control and Prevention. And it's still rare enough outside the southeastern US that many doctors don't correctly diagnose it. Wilson is trying to get blood samples from all the new outbreaks, to figure out if the patients' antibodies correspond to the saliva of lone star ticks or a different tick species. That will tell him if the increases in the allergy are the result of changing range patterns, or if other ticks have developed the capacity to rewire human immune systems in the same way. That information would also provide further clues to the mechanism itself. As for a cure? There's not much science has to offer on that front, besides Epipens and veggie burgers.
"
1,232
2,021
"These Doctors Are Using AI to Screen for Breast Cancer | WIRED"
"https://www.wired.com/story/doctors-using-ai-screen-breast-cancer"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Will Knight Business These Doctors Are Using AI to Screen for Breast Cancer Images from a mammogram of a patient whom the algorithm identified as high risk four years before cancer was diagnosed. Courtesy of MIT Save this story Save Save this story Save Application Prediction End User Research Sector Health care Research Source Data Images Technology Machine learning Machine vision When Covid came to Massachusetts, it forced Constance Lehman to change how Massachusetts General Hospital screens women for breast cancer. Many people were skipping regular checkups and scans due to worries about the virus. So the center Lehman codirects began using an artificial intelligence algorithm to predict who is at most risk of developing cancer. Since the outbreak began, Lehman says, around 20,000 women have skipped routine screening. Normally five of every 1,000 women screened shows signs of cancer. “That’s 100 cancers that we haven’t diagnosed,” she says. Lehman says the AI approach has helped identify a number of women who, when persuaded to come in for routine screening, turn out to have early signs of cancer. The women flagged by the algorithm were three times as likely to develop cancer; previous statistical techniques were no better than random. The algorithm analyzes prior mammograms, and seems to work even when physicians did not see warning signs in those earlier scans. “What the AI tools are doing is they're extracting information that my eye and my brain can't,” she says. Courtesy of MIT Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Researchers have long touted the potential for AI analysis in medical imaging, and some tools have found their way into medical care. Lehman has been working with researchers at MIT for several years on ways to apply AI to cancer screening. But AI is potentially even more useful as a way to more accurately predict risk. Breast cancer screening sometimes involves not just examining a mammogram for precursors of cancer, but collecting patient information and feeding both into a statistical model to determine the need for follow-up screening. Adam Yala , a PhD student at MIT, began developing the algorithm Lehman is using, called Mirai, before Covid. He says the goal of using AI is to improve early detection and to reduce the stress and cost of false positives. “What the AI tools are doing is they're extracting information that my eye and my brain can't.” Constance Lehman, radiologist, Massachusetts General Hospital To create Mirai, Yala had to overcome problems that have bedeviled other efforts to use AI in radiology. He used an adversarial machine learning approach, where one algorithm tries to deceive another, to account for differences among radiology machines, which could mean that patients that face the same risk of breast cancer get different scores. 
The model was also designed to aggregate data from several years, making it more accurate than previous efforts that include less data. The MIT algorithm analyzes the standard four views in a mammogram, from which it then infers information about a patient that is often not collected, such as history of surgery or hormone factors such as menopause. This can help if that data has not been collected by a doctor already. Details of the work are outlined in a paper published today in the journal Science Translational Medicine. Mirai was found to be more accurate than the statistical models normally used to judge a woman's breast cancer risk. When compared using historical patient data, 42 percent of people who went on to develop cancer in five years were flagged as high risk by the algorithm, compared with 23 percent for the best existing model. The algorithm also worked on patient data from Taiwan and Sweden, suggesting it is effective for a broad range of patients. Yala says the model seems to generalize well because of the large, sufficiently diverse dataset used, but he notes that it is always important to validate algorithms in different settings. Judy Wawira Gichoya, an assistant professor of radiology at Emory University School of Medicine, who plans to test the MIT algorithm, says the work shows the importance of AI experts working together with doctors. But she plans to validate the algorithm carefully on her own patients' data before using it. Charles Kahn, a professor of radiology at the University of Pennsylvania and editor of a radiology journal, says Covid has had a huge impact on routine medical care. "It's not just haircuts that people are missing during the pandemic," he says. "And it has a serious impact on their health." Kahn says the potential of the approach being tested at MGH is that it could help personalize treatment, with individual patients ideally receiving a clearer picture of their risk as well as a custom screening plan. But he worries that algorithmic approaches can lead to biased care. "It can creep in in ways you never envisioned," he says. Covid has changed medical care in other ways. It has accelerated adoption of telemedicine, for instance, which benefits some communities more than others. Lehman says she hopes that the AI methods she's testing can benefit people who typically receive less medical attention. "A lot of people have lived their whole lives in our health care system as if we were in a pandemic," she says. "They do not have access to quality care, and they aren't being screened."
"
1,233
2,020
"When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile' | WIRED"
"https://www.wired.com/story/ai-sees-man-thinks-official-woman-smile"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile' Illustration: Sam Whitney; Getty Images Save this story Save Save this story Save Application Content moderation Ethics Face recognition Prediction Company Alphabet Amazon Google Microsoft End User Big company Research Sector Research Source Data Images Technology Machine learning Machine vision Men often judge women by their appearance. Turns out, computers do too. When US and European researchers fed pictures of congressmembers to Google ’s cloud image recognition service, the service applied three times as many annotations related to physical appearance to photos of women as it did to men. The top labels applied to men were “official” and “businessperson”; for women they were “smile” and “chin.” “It results in women receiving a lower status stereotype: that women are there to look pretty and men are business leaders,” says Carsten Schwemmer, a postdoctoral researcher at GESIS Leibniz Institute for the Social Sciences in Köln, Germany. He worked on the study, published last week , with researchers from New York University, American University, University College Dublin, University of Michigan, and nonprofit California YIMBY. The researchers administered their machine vision test to Google’s artificial intelligence image service and those of rivals Amazon and Microsoft. Crowdworkers were paid to review the annotations those services applied to official photos of lawmakers and images those lawmakers tweeted. Google's AI image recognition service tended to see men like senator Steve Daines as businesspeople, but tagged women lawmakers like Lucille Roybal-Allard with terms related to their appearance. Courtesy of Carsten Schwemmer Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The AI services generally saw things human reviewers could also see in the photos. But they tended to notice different things about women and men, with women much more likely to be characterized by their appearance. Women lawmakers were often tagged with “girl” and “beauty.” The services had a tendency not to see women at all, failing to detect them more often than they failed to see men. The study adds to evidence that algorithms do not see the world with mathematical detachment but instead tend to replicate or even amplify historical cultural biases. It was inspired in part by a 2018 project called Gender Shades that showed that Microsoft’s and IBM’s AI cloud services were very accurate at identifying the gender of white men, but very inaccurate at identifying the gender of Black women. The new study was published last week, but the researchers had gathered data from the AI services in 2018. 
Experiments by WIRED using the official photos of 10 men and 10 women from the California State Senate suggest the study's findings still hold. [Image: Amazon's image processing service Rekognition tagged images of some women California state senators, including Ling Ling Chang, a Republican, as "girl" or "kid" but didn't apply similar labels to men lawmakers. WIRED Staff via Amazon] All 20 lawmakers are smiling in their official photos. Google's top suggested labels noted a smile for only one of the men, but for seven of the women. The company's AI vision service labeled all 10 of the men as "businessperson," often also with "official" or "white collar worker." Only five of the women senators received one or more of those terms. Women also received appearance-related tags, such as "skin," "hairstyle," and "neck," that were not applied to men. Amazon and Microsoft's services appeared to show less obvious bias, although Amazon reported being more than 99 percent sure that two of the 10 women senators were either a "girl" or "kid." It didn't suggest any of the 10 men were minors. Microsoft's service identified the gender of all the men, but only eight of the women, calling one a man and not tagging a gender for another. Google switched off its AI vision service's gender detection earlier this year, saying that gender cannot be inferred from a person's appearance. Tracy Frey, managing director of responsible AI at Google's cloud division, says the company continues to work on reducing bias and welcomes outside input. "We always strive to be better and continue to collaborate with outside stakeholders—like academic researchers—to further our work in this space," she says. Amazon and Microsoft declined to comment; both companies' services recognize gender only as binary. The US-European study was inspired in part by what happened when the researchers fed Google's vision service a striking, award-winning image from Texas showing a Honduran toddler in tears as a US Border Patrol officer detained her mother. Google's AI suggested labels including "fun," with a score of 77 percent, higher than the 52 percent score it assigned the label "child." WIRED got the same suggestion after uploading the image to Google's service Wednesday. Schwemmer and his colleagues began playing with Google's service in hopes it could help them measure patterns in how people use images to talk about politics online. What he subsequently helped uncover about gender bias in the image services has convinced him the technology isn't ready to be used by researchers that way, and that companies using such services could suffer unsavory consequences. "You could get a completely false image of reality," he says. A company that used a skewed AI service to organize a large photo collection might inadvertently end up obscuring women businesspeople, indexing them instead by their smiles.
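A replication along WIRED's lines then reduces to bookkeeping: gather the labels returned for each portrait and compare how often appearance terms show up for each group. A sketch, with an illustrative term list drawn from labels the study mentions rather than the researchers' actual coding scheme:

from collections import Counter

# Illustrative term list; not the study's coding scheme.
APPEARANCE_TERMS = {"smile", "chin", "beauty", "girl", "skin", "hairstyle", "neck"}

def appearance_share(labels_by_photo):
    """labels_by_photo maps a photo id to the list of label strings
    returned for it, e.g. collected with the sketch above."""
    counts = Counter(label.lower()
                     for labels in labels_by_photo.values()
                     for label in labels)
    total = sum(counts.values())
    hits = sum(n for term, n in counts.items() if term in APPEARANCE_TERMS)
    return hits / total if total else 0.0

# Compare appearance_share(labels_for_women) against
# appearance_share(labels_for_men) across the two sets of portraits.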
When this image won World Press Photo of the Year in 2019, one judge remarked that it showed "violence that is psychological." Google's image algorithms detected "fun." WIRED Staff via Google

Prior research has found that prominent datasets of labeled photos used to train vision algorithms showed significant gender biases, for example showing women cooking and men shooting. The skew appeared to come in part from researchers collecting their images online, where the available photos reflect societal biases, for example by providing many more examples of businessmen than businesswomen. Machine learning software trained on those datasets was found to amplify the bias in the underlying photo collections. Schwemmer believes biased training data may explain the bias the new study found in the tech giants' AI services, but it's impossible to know without full access to their systems.

Diagnosing and fixing shortcomings and biases in AI systems has become a hot research topic in recent years. The way humans can instantly absorb subtle context in an image, while AI software remains narrowly focused on patterns of pixels, creates much potential for misunderstanding. The problem has become more pressing as algorithms get better at processing images. “Now they're being deployed all over the place,” says Olga Russakovsky, an assistant professor at Princeton. “So we'd better make sure they're doing the right things in the world and there are no unintended downstream consequences.”

An academic study and tests by WIRED found that Google's image recognition service often tags women lawmakers like California state senator Cathleen Galgiani with labels related to their appearance, but sees men lawmakers like her colleague Jim Beall as businesspeople and elders. WIRED Staff via Google

One approach to the problem is to work on improving the training data that can be the root cause of biased machine learning systems. Russakovsky is part of a Princeton project working on a tool called REVISE that can automatically flag some biases baked into a collection of images, including along geographic and gender lines. When the researchers applied the tool to the Open Images collection of 9 million photos maintained by Google, they found that men were more often tagged in outdoor scenes and sports fields than women. And men tagged with “sports uniform” were mostly outdoors playing sports like baseball, while women were indoors playing basketball or in a swimsuit.
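The core statistic behind findings like those is easy to illustrate: how often each tag co-occurs with images of men versus images of women. The toy sketch below fabricates a few records to show the idea; REVISE itself analyzes far richer signals than this.

```python
# A toy version of a dataset-auditing statistic: per-tag frequency by gender,
# flagging large gaps. The records are fabricated for illustration only.
from collections import Counter, defaultdict

records = [
    {"gender": "man", "tags": {"sports uniform", "outdoor", "baseball"}},
    {"gender": "man", "tags": {"official", "indoor"}},
    {"gender": "woman", "tags": {"sports uniform", "indoor", "basketball"}},
    {"gender": "woman", "tags": {"smile", "indoor"}},
]

totals = Counter(r["gender"] for r in records)
tag_counts = defaultdict(Counter)
for r in records:
    for tag in r["tags"]:
        tag_counts[tag][r["gender"]] += 1

for tag, c in sorted(tag_counts.items()):
    rates = {g: c[g] / totals[g] for g in totals}
    if abs(rates["man"] - rates["woman"]) >= 0.5:  # arbitrary toy threshold
        print(f"{tag!r}: man={rates['man']:.0%}, woman={rates['woman']:.0%}")
```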
The Princeton team suggested adding more images showing women outdoors, including playing sports.

Google and its competitors in AI are themselves major contributors to research on fairness and bias in AI. That includes working on the idea of creating standardized ways to communicate the limitations and contents of AI software and datasets to developers—something like an AI nutrition label. Google has developed a format called “model cards” and published cards for the face and object detection components of its cloud vision service. One claims Google's face detector works more or less the same for different genders, but doesn't mention other possible forms that AI gender bias might take. "
1,234
2,020
"Deepfake Putin is here to warn Americans about their self-inflicted doom | MIT Technology Review"
"https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Deepfake Putin is here to warn Americans about their self-inflicted doom By Karen Hao archive page Deepfake Putin Mischief/RepresentUs The news: Two political ads will broadcast on social media today, featuring deepfake versions of Russian president Vladimir Putin and North Korean leader Kim Jong-un. Both deepfake leaders will be giving the same message: that America doesn’t need any election interference from them; it will ruin its democracy by itself. What are they for? Yes, the ads sound creepy, but they’re meant for a good cause. They’re part of a campaign from the nonpartisan advocacy group RepresentUs to protect voting rights during the upcoming US presidential election, amid president Trump’s attacks on mail-in voting and suggestions that he may refuse a peaceful transition. The goal is to shock Americans into understanding the fragility of democracy as well as provoke them to take various actions, including checking their voter registration and volunteering for the polls. It flips the script on the typical narrative of political deepfakes, which experts often worry could be abused to confuse voters and disrupt elections. How they were made: RepresentUs worked with the creative agency Mischief at No Fixed Address, which came up with the idea of using dictators to deliver the message. They filmed two actors with the right face shape and authentic accents to recite the script. They then worked with a deepfake artist who used an open-source algorithm to swap in Putin’s and Kim’s faces. A post-production crew cleaned up the leftover artifacts of the algorithm to make the video look more realistic. All in all the process took only 10 days. Attempting the equivalent with CGI likely would have taken months, the team says. It also could have been prohibitively expensive. Are we ready? The ads were supposed to broadcast on Fox, CNN, and MSNBC in their Washington, DC, markets, but the stations pulled them last-minute from airing. A spokesperson for the campaign said they were still waiting on an explanation. The ads include a disclaimer at the end, stating: “The footage is not real, but the threat is.” But given the sensitive nature of using deepfakes in a political context, it’s possible the networks felt the American public just wasn’t ready. hide by Karen Hao Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Policy Three things to know about the White House’s executive order on AI Experts say its emphasis on content labeling, watermarking, and transparency represents important steps forward. By Tate Ryan-Mosley archive page Melissa Heikkilä archive page How generative AI is boosting the spread of disinformation and propaganda In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship. By Tate Ryan-Mosley archive page Government technology is famously bad. It doesn’t have to be. New York City is fixing the relationship between government and technology–and not in the ways you’d expect. 
By Tate Ryan-Mosley archive page It’s shockingly easy to buy sensitive data about US military personnel A new report exposes the privacy and national security concerns created by data brokers. US senators tell MIT Technology Review the industry needs to be regulated. By Tate Ryan-Mosley archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more. Enter your email Thank you for submitting your email! It looks like something went wrong. We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive. The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window "
1,235
2,020
"The owner of WeChat thinks deepfakes could actually be good | MIT Technology Review"
"https://www.technologyreview.com/2020/07/28/1005692/china-tencent-wechat-ai-plan-says-deepfakes-good"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The owner of WeChat thinks deepfakes could actually be good By Karen Hao archive page Ms Tech | Artbreeder, Pixabay The news: In a new white paper about its plans for AI, translated by China scholars Jeffrey Ding and Caroline Meinhardt, Tencent, the owner of WeChat and one of China’s three largest tech giants, emphasizes that deepfake technology is “not just about ‘faking’ and ‘deceiving,’ but a highly creative and groundbreaking technology.” It urges regulators to “be prudent” and to avoid clamping down on its potential benefits to society. The examples: Tencent listed five examples of what it perceives as beneficial applications of deepfake technology that already exist or could soon exist today: For enhancing TV and film production. The technology has already been used to let deceased actors appear in new movies, such as Fast and Furious 7 , and could be further developed to create body doubles for stunts and other purposes. It could also be used to automatically generate voice-overs in different languages to increase the global distribution of movies. For personalizing entertainment. As the viral app Zao showed last year, deepfake technology can be used to face-swap users into movies or video games. It could create a new genre of hyper-personalized entertainment. For improving e-commerce. The technology is already being used to generate virtual models of different body types and ethnicities, as well as to let users digitally try on clothes for a more interactive online shopping experience. For creating realistic virtual avatars. It has already been used to generate three-dimensional digital humans to perform as virtual pop stars and TV anchors , and to bring historical figures into virtual reality. It could also be combined with computer vision and natural-language understanding to create smart digital assistants capable of natural interactions. For helping patients. Finally, the technology has shown potential for helping those affected by chronic illness. For example, it has allowed people who have lost their voice to ALS to communicate by using a deepfake of it. Why it matters: Tencent says it’s already working to advance some of these applications. This will likely spur its competitors to do the same if they haven’t yet, and influence the direction of Chinese startups eager to be acquired. As a member of China’s “AI national team,” which the government created as part of its overall AI strategy, the company also has significant sway among regulators who want to help foster the industry’s growth. Any concerns? Tencent acknowledges that deepfake technology can cause harm, particularly in its use for face-swapping people into pornography. But the company is forcefully optimistic that it “will not topple society’s truths, much less pose a threat to the world order.” Of course, that’s easy to say for a company that stands to benefit significantly from the technology’s commercialization. 
hide by Karen Hao Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Artificial intelligence This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. By Melissa Heikkilä archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products. By Zeyi Yang archive page Driving companywide efficiencies with AI Advanced AI and ML capabilities revolutionize how administrative and operations tasks are done. By MIT Technology Review Insights archive page Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work. By Will Douglas Heaven archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more. Enter your email Thank you for submitting your email! It looks like something went wrong. We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive. The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window "
1,236
2,020
"An Indian politician is using deepfake technology to win new voters | MIT Technology Review"
"https://www.technologyreview.com/2020/02/19/868173/an-indian-politician-is-using-deepfakes-to-try-and-win-voters"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts An Indian politician is using deepfake technology to win new voters By Charlotte Jee archive page A still of a deepfake video of Indian politician Manoj Tiwari YouTube | BJP The news: A deepfake of the president of India’s ruling Bharatiya Janata Party (BJP), Manoj Tiwari, went viral on WhatsApp in the country earlier this month, ahead of legislative assembly elections in Delhi, according to Vice. It’s the first time a political party anywhere has used a deepfake for campaigning purposes. In the original video Tiwari speaks in English, criticizing his political opponent Arvind Kejriwal and encouraging voters to vote for the BJP. The second video has been manipulated using deepfake technology so his mouth moves convincingly as he speaks in Haryanvi, the Hindi dialect spoken by the target voters for the BJP. The purpose: The BJP has partnered with political communications firm The Ideaz Factory to create deepfakes that let it target voters across the over 20 different languages used in India. The party told Vice that the Tiwari deepfake reached approximately 15 million people in 5,800 WhatsApp groups. Causing alarm: This isn’t the first time deepfakes have popped up during a political campaign. For example, last December, researchers made a fake video of the two candidates in the UK’s general election endorsing each other. It wasn’t supposed to sway the vote, however—merely to raise awareness about deepfake technology. This case in India seems to be the first time deepfakes have been used for a political campaign. The big risk is that we reach a point where people can no longer trust what they see or hear. In that scenario, a video wouldn’t even need to be digitally altered for people to denounce it as fake. It’s not hard to imagine the corrosive impact that would have on an already fragile political landscape. Sign up here to our daily newsletter The Download to get your dose of the latest must-read news from the world of emerging tech. hide by Charlotte Jee Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Artificial intelligence This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. By Melissa Heikkilä archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products. By Zeyi Yang archive page Driving companywide efficiencies with AI Advanced AI and ML capabilities revolutionize how administrative and operations tasks are done. By MIT Technology Review Insights archive page Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work. 
By Will Douglas Heaven archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more. Enter your email Thank you for submitting your email! It looks like something went wrong. We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive. The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window "
1,237
2,019
"Generative adversarial networks: What GANs are and how they've evolved | VentureBeat"
"https://venturebeat.com/2019/12/26/gan-generative-adversarial-network-explainer-ai-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative adversarial networks: What GANs are and how they’ve evolved Share on Facebook Share on X Share on LinkedIn Synthetic images produced by StyleGAN, a GAN created by Nvidia researchers. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Perhaps you’ve read about AI capable of producing humanlike speech or generating images of people that are difficult to distinguish from real-life photographs. More often than not, these systems build upon generative adversarial networks (GANs), which are two-part AI models consisting of a generator that creates samples and a discriminator that attempts to differentiate between the generated samples and real-world samples. This unique arrangement enables GANs to achieve impressive feats of media synthesis, from composing melodies and swapping sheep for giraffes to hallucinating footage of ice skaters and soccer players. In point of fact, it’s because of this prowess that GANs have been used to produce problematic content like deepfakes, which is media that takes a person in existing media and replaces them with someone else’s likeness. The evolution of GANs — which Facebook AI research director Yann LeCun has called the most interesting idea of the decade — is somewhat long and winding, and very much continues to this day. They have their deficiencies, but GANs remain one of the most versatile neural network architectures in use today. History of GANs The idea of pitting two algorithms against each other originated with Arthur Samuel, a prominent researcher in the field of computer science who’s credited with popularized the term “machine learning.” While at IBM, he devised a checkers game — the Samuel Checkers-playing Program — that was among the first to successfully self-learn, in part by estimating the chance of each side’s victory at a given position. But if Samuel is the grandfather of GANs, Ian Goodfellow, former Google Brain research scientist and director of machine learning at Apple’s Special Projects Group, might be their father. In a seminal 2014 research paper simply titled “ Generative Adversarial Nets ,” Goodfellow and colleagues describe the first working implementation of a generative model based on adversarial networks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Goodfellow has often stated that he was inspired by noise-contrastive estimation, a way of learning a data distribution by comparing it against a defined noise distribution (i.e., a mathematical function representing corrupted or distorted data). Noise-contrastive estimation uses the same loss functions as GANs — in other words, the same measure of performance with respect to a model's ability to anticipate expected outcomes.

Of course, Goodfellow wasn't the only one to pursue an adversarial AI model design. Dalle Molle Institute for Artificial Intelligence Research co-director Juergen Schmidhuber advocated predictability minimization, a technique that models distributions through an encoder that maximizes the objective function (the function that specifies the problem to be solved by the system) minimized by a predictor. It adopts what's known as a minimax decision rule, where the possible loss for a worst-case (maximum loss) scenario is minimized as much as possible. And this is the paradigm upon which GANs are built.

GAN architecture

Again, GANs consist of two parts: generators and discriminators. The generator model produces synthetic examples (e.g., images) from random noise sampled using a distribution, which along with real examples from a training data set are fed to the discriminator, which attempts to distinguish between the two. Both the generator and discriminator improve in their respective abilities until the discriminator is unable to tell the real examples from the synthesized examples with better than the 50% accuracy expected of chance. GANs train in an unsupervised fashion, meaning that they infer the patterns within data sets without reference to known, labeled, or annotated outcomes. Interestingly, the discriminator's work informs that of the generator — every time the discriminator correctly identifies a synthesized work, it tells the generator how to tweak its output so that it might be more realistic in the future.

In practice, GANs suffer from a number of shortcomings owing to their architecture. The simultaneous training of generator and discriminator models is inherently unstable. Sometimes the parameters — the configuration values internal to the models — oscillate or destabilize, which isn't surprising given that after every parameter update, the nature of the optimization problem being solved changes. Alternatively, the generator collapses, and it begins to produce data samples that are largely homogeneous in appearance.

Above: The architecture of a generative adversarial network (GAN).

The generator and discriminator also run the risk of overpowering each other. If the generator becomes too accurate, it'll exploit weaknesses in the discriminator that lead to undesirable results, whereas if the discriminator becomes too accurate, it'll impede the generator's progress toward convergence. A lack of training data also threatens to impede GANs' progress in the semantic realm, which in this context refers to the relationships among objects. Today's best GANs struggle to reconcile the difference between palming and holding an object, for example — a differentiation most humans make in seconds.

But as Hanlin Tang, senior director of Intel's AI laboratory, explained to VentureBeat in a phone interview, emerging techniques get around these limitations. One entails building multiple discriminators into a model and fine-tuning them on specific data. Another involves feeding discriminators dense embedding representations, or numerical representations of data, so that they have more information from which to draw. “There [aren't] that many well-curated data sets to start … applying GANs to,” Tang said. “GANs just follow where the data sets are going.”
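To make the two-player setup above concrete, here is a minimal training-loop sketch in PyTorch. It is an illustration, not any system named in this article: the "real" data is just a shifted Gaussian, the tiny fully connected networks stand in for deep convolutional models, and the generator uses the common non-saturating loss (maximizing log D(G(z))) rather than literally minimizing the minimax objective min_G max_D E[log D(x)] + E[log(1 - D(G(z)))].

```python
# A minimal generator/discriminator training loop in PyTorch, for illustration.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 1.0  # stand-in "real" samples
    fake = G(torch.randn(batch, latent_dim))         # generator's forgeries

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    loss_d = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update (non-saturating loss): make D call the fakes real.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

At convergence, the discriminator's accuracy on this toy data drifts toward the 50% chance level described above; the instabilities the article mentions (oscillation, mode collapse) show up when the two losses stop balancing.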
On the subject of compute, Youssef Mroueh, a research staff member in the IBM multi-modal algorithms and engines group, is working with colleagues to develop lightweight models dubbed “small GANs” that reduce training time and memory usage. The bulk of their research is concentrated in the MIT-IBM Watson AI Lab, a joint AI research effort between the Massachusetts Institute of Technology and IBM. “[It's a] challenging business question: How can we change [the] modeling without all the computation and hassle?” Mroueh said. “That's what we're working toward.”

GAN applications

Image and video synthesis

GANs are perhaps best known for their contributions to image synthesis. StyleGAN, a model Nvidia developed, has generated high-resolution head shots of fictional people by learning attributes like facial pose, freckles, and hair. A newly released version — StyleGAN 2 — makes improvements with respect to both architecture and training methods, redefining the state of the art in terms of perceived quality. In June 2019, Microsoft researchers detailed ObjGAN, a novel GAN that could understand captions, sketch layouts, and refine the details based on the wording. The coauthors of a related study proposed a system — StoryGAN — that synthesizes storyboards from paragraphs.

Such models have made their way into production. Startup Vue.ai's GAN susses out clothing characteristics and learns to produce realistic poses, skin colors, and other features. From snapshots of apparel, it can generate model images in every size up to five times faster than a traditional photo shoot.

Elsewhere, GANs have been applied to the problems of super-resolution (image upsampling) and pose estimation (object transformation). Tang says one of his teams used GANs to train a model to upscale 200-by-200-pixel satellite imagery to 1,000 by 1,000 pixels, and to produce images that appear as though they were captured from alternate angles.

Above: Examples of edits performed by GAN Paint Studio.

Scientists at Carnegie Mellon last year demoed Recycle-GAN, a data-driven approach for transferring the content of one video or photo to another. When trained on footage of human subjects, the GAN generated clips that captured subtle expressions like dimples and lines that formed when subjects smiled and moved their mouths. More recently, researchers at Seoul-based Hyperconnect published MarioNETte, which synthesizes a reenacted face animated by a person's movement while preserving the face's appearance.

On the object synthesis side of the equation, Google and MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a GAN that can generate images of 3D models with realistic lighting and reflections and enables shape and texture editing, as well as viewpoint shifts.

Video

Predicting future events from only a few video frames — a task once considered impossible — is nearly within grasp thanks to state-of-the-art approaches involving GANs and novel data sets. One of the newest papers on the subject, from DeepMind, details recent advances in the budding field of AI clip generation.
Thanks to “computationally efficient” components and techniques and a new custom-tailored data set, the researchers say their best-performing model — Dual Video Discriminator GAN (DVD-GAN) — can generate coherent 256 x 256-pixel videos of “notable fidelity” up to 48 frames in length. In a twist on the video synthesis formula, Cambridge Consultants last year demoed a model called DeepRay that invents video frames to mitigate distortion caused by rain, dirt, smoke, and other debris.

Artwork

GANs are capable of more than generating images and video footage. When trained on the right data sets, they're able to produce de novo works of art. Researchers at the Indian Institute of Technology Hyderabad and the Sri Sathya Sai Institute of Higher Learning devised a GAN, dubbed SkeGAN, that generates stroke-based vector sketches of cats, firetrucks, mosquitoes, and yoga poses. Scientists at Maastricht University in the Netherlands created a GAN that produces logos in one of 12 different colors. Victor Dibia, a human-computer interaction researcher and Carnegie Mellon graduate, trained a GAN to synthesize African tribal masks. Meanwhile, a team at the University of Edinburgh's Institute for Perception and Institute for Astronomy designed a model that generates images of fictional galaxies that closely follow the distributions of real galaxies.

In March, during its GPU Technology Conference (GTC) in San Jose, California, Nvidia took the wraps off GauGAN, a generative adversarial AI system that lets users create lifelike landscape images that never existed. GauGAN — whose name comes from post-Impressionist painter Paul Gauguin — improves upon Nvidia's Pix2PixHD system introduced last year, which was similarly capable of rendering synthetic worlds but left artifacts in its images. The machine learning model underpinning GauGAN was trained on more than one million images from Flickr, imbuing it with an understanding of the relationships among over 180 objects, including snow, trees, water, flowers, bushes, hills, and mountains. In practice, trees next to water have reflections, for instance, and the type of precipitation changes depending on the season depicted.

Music

GANs are architecturally well-suited to generating media, and that includes music. In a paper published in August, researchers hailing from the National Institute of Informatics in Tokyo describe a system that's able to generate “lyrics-conditioned” melodies from learned relationships between syllables and notes. Not to be outdone, in December, Amazon Web Services detailed DeepComposer, a cloud-based service that taps a GAN to fill in compositional gaps in songs. “For a long time, [GANs research] has been about improving the training instabilities whatever the modality is — text, images, sentences, et cetera. Engineering is one thing, but it's also [about] coming up with [the right] architecture,” said Mroueh. “It's a combination of lots of things.”

Speech

Google and Imperial College London researchers recently set out to create a GAN-based text-to-speech system capable of matching (or besting) state-of-the-art methods. Their proposed system — GAN-TTS — consists of a neural network that learned to produce raw audio by training on a corpus of speech with 567 pieces of encoded phonetic, duration, and pitch data. To enable the model to generate sentences of arbitrary length, the coauthors sampled 44 hours' worth of two-second snippets together with the corresponding linguistic features computed for five-millisecond snippets.
An ensemble of 10 discriminators — some of which assess linguistic conditioning, while others assess general realism — attempts to distinguish between real and synthetic speech.

Medicine

In the medical field, GANs have been used to produce data on which other AI models — in some cases, other GANs — might train, and to invent treatments for rare diseases that to date haven't received much attention. In April, Imperial College London, the University of Augsburg, and the Technical University of Munich sought to synthesize data to fill in gaps in real data with a model dubbed Snore-GAN. In a similar vein, researchers from Nvidia, the Mayo Clinic, and the MGH and BWH Center for Clinical Data Science proposed a model that generates synthetic magnetic resonance images (MRIs) of brains with cancerous tumors. Baltimore-based Insilico Medicine pioneered the use of GANs in molecular structure creation for diseases with a known ligand (a complex biomolecule) but no target (a protein associated with a disease process). Its team of researchers is actively working on drug discovery programs in cancer, dermatological diseases, fibrosis, Parkinson's, Alzheimer's, ALS, diabetes, sarcopenia, and aging.

Robotics

The field of robotics has a lot to gain from GANs, as it turns out. A tuned discriminator can determine whether a machine's trajectory has been drawn from a distribution of human demonstrations or from synthesized examples. In that way, it's able to train agents to complete tasks accurately, even when it has access only to the robot's positional information. (Normally, training robot-directing AI requires both positional and action data; the latter indicates which motors moved over time.)

“The idea of using adversarial loss for training agent trajectories is not new, but what's new is allowing it to work with a lot less data,” Tang said. “The trick to applying these adversarial learning approaches is figuring out which inputs the discriminator has access to — what information is available to avoid being tricked [by the discriminator] … [In state-of-the-art approaches], discriminators need access to [positional] data alone, allowing us to train with expert demonstrations where all we have are the state data.”

Tang says this enables the training of much more robust models than was previously possible — models that require only about two dozen human demonstrations. “If you reduce the amount of data that the discriminator has access to, you're reducing the complexity of the data set that you have to provide to the model. These types of adversarial learning methods actually work pretty well in low-data regimes,” he added.
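A hedged sketch of that idea, in the spirit of adversarial imitation learning: a discriminator is trained to separate state-only expert trajectories from policy rollouts, and its score can then serve as a learned reward. Every dimension, data point, and architectural choice below is invented for illustration; this is not the specific system Tang describes.

```python
# Toy state-only trajectory discriminator for adversarial imitation.
import torch
import torch.nn as nn

state_dim, horizon = 8, 50

# The discriminator scores individual states; a trajectory's logit is the mean.
D = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(D.parameters(), lr=1e-3)

def trajectory_logit(traj):              # traj: (horizon, state_dim)
    return D(traj).mean().unsqueeze(0)   # shape (1,)

expert_demo = torch.randn(horizon, state_dim)         # stand-in human demonstration
policy_rollout = torch.randn(horizon, state_dim) + 2  # stand-in agent behavior

for _ in range(200):
    loss = (bce(trajectory_logit(expert_demo), torch.ones(1))
            + bce(trajectory_logit(policy_rollout), torch.zeros(1)))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained discriminator's logit on agent states can act as a reward signal
# for a policy optimizer (omitted here), pushing the agent toward trajectories
# the discriminator mistakes for human demonstrations.
print(trajectory_logit(expert_demo).item(), trajectory_logit(policy_rollout).item())
```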
Deepfake detection

GANs' ability to generate convincing photos and videos of people makes them ripe targets for abuse. Already, malicious actors have used models to generate fake celebrity pornography. But preliminary research suggests GANs could root out deepfakes just as effectively as they produce them. A paper published on the preprint server Arxiv.org in March describes spamGAN, which learns from a limited corpus of annotated and unannotated data. In experiments, the researchers say that spamGAN outperformed existing spam detection techniques with limited labeled data, achieving accuracy of between 71% and 86% when trained on as little as 10% of labeled data.

Future directions

What might the future hold with respect to GANs? Despite the leaps and bounds brought by this past decade of research, Tang cautions that it's still early days. “GANs are still [missing] very fine-grained control,” he said. “[That's] a big challenge.”

For his part, Mroueh believes that GAN-generated content will become increasingly difficult to distinguish from real content. “My feeling is that the field will improve,” he said. “Comparing image generation in 2014 to today, I wouldn't have expected the quality to become that good. If the progress continues like this, [GANs] will remain a very important research project.” "
1,238
2,019
"Facebook's AI extracts playable characters from real-world videos | VentureBeat"
"https://venturebeat.com/2019/04/18/facebooks-ai-extracts-playable-characters-from-real-world-videos"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook’s AI extracts playable characters from real-world videos Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Remember those FMV games from the ’90s — the ones that blended prerecorded clips with animated sprites and 3D models? Facebook is bringing them back in style, and improved tenfold. In a newly published preprint paper on Arxiv.org (“ Vid2Game: Controllable Characters Extracted from Real-World Videos “), scientists at Facebook AI Research describe a system capable of extracting controllable characters from real-world videos. “Our method extracts a character from an uncontrolled video and enables us to control its motion,” the paper’s coauthors explain. “The model generates novel image sequences of that person … [and the] generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person.” The team’s approach relies on two neural networks, or layers of mathematical functions modeled after biological neurons: Pose2Pose, a framework that maps a current pose and a single-instance control signal to the next post, and Pose2Frame, which plops the current pose and new pose (along with a given background) on an output frame. The reanimation can be controlled by any “low-dimensional” signal, such as one from a joystick or keyboard, and the researchers say that the system is robust enough to position extracted characters in dynamic backgrounds. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! So how’s it work? First, an input video containing one or more characters is fed into a Pose2Pose network trained for a specific domain (e.g., dancing), which isolates them (plus estimated foreground spatial masks) and their motion — the latter of which is taken as a trajectory of their centers of mass. (The masks are used to determine which regions of the background are replaced by synthesized image information.) Using these and combined pose data, Pose2Frame separates between character-dependent changes in the scene like shadows, held items, and reflections and those that are character-independent, and returns a pair of outputs that are linearly blended with any desired background. To train the AI system, the researchers sourced three videos, each between five and eight minutes long, of a tennis player outdoors, a person swinging a sword indoors, and a person walking. 
Facebook isn't the only company investigating AI systems that might aid in game design. Startup Promethean AI employs machine learning to help human artists create art for video games, and Nvidia researchers recently demonstrated a generative model that can create virtual environments using video snippets. Machine learning has also been used to rescue old game textures in retro titles like Final Fantasy VII and The Legend of Zelda: Twilight Princess, and to generate thousands of levels in games like Doom from scratch. "
1,239
2,019
"Promethean AI automatically generates game scenes, like a bedroom, for human artists | VentureBeat"
"https://venturebeat.com/2019/04/07/promethean-ai-automatically-generates-game-scenes-like-a-bedroom-for-human-artists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Promethean AI automatically generates game scenes, like a bedroom, for human artists Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Promethean AI uses artificial intelligence to help human artists create art for video games. It can, for instance, automatically generate a bedroom when a human artist says, “Make a bedroom.” Then the artist can take that scene and customize it as needed. I saw this in action as Andrew Maximov, founder of Promethean AI and former technical art director at Naughty Dog, showed it to me in a demo at the recent Game Developers Conference. “We’re an AI company that helps people build virtual worlds for video games or movies,” Maximov said in an interview with GamesBeat. “We’ve got an integration with Unreal” that allows human artists to take assets that were created by others and reuse them in an AI-generated 3D space. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Maximov showed it to me, saying, “Build a bedroom.” The tool created a 3D scene that showed a typical bedroom. He added, “Add a desk,” and the program automatically did so. It also added the appropriate shadows, reflections, and other details that made the desk fit in the room in a physically accurate way. Promethean AI did this in three seconds. As this example suggests, Promethean AI is semantically aware. You click a button and it starts listening. You tell it to build a nerdy teen’s bedroom. You don’t have to tell it what a bedroom is. It already knows. You can then go into that bedroom and move objects around. Rather than issue programming commands, you simply grab objects with a mouse and move them to where you want them. “If you say, ‘Remove the desk,’ it disappears and so do all of the other details like the shadows,” Maximov said. If you say, “Add a typewriter,” then Promethean will add a desk to put the typewriter on, since that makes sense in the context of the command. When necessary, Promethean will recalculate the scene, aligning a dresser against the wall and putting the shadows in the correct position. You can change the perspective and the whole scene moves with your viewpoint. You can add new objects to the scene like a newspaper, and the software simply layers on top of other objects. In 10 minutes, you can have a bedroom that looks like a scene from Ferris Bueller’s Day Off. 
Maximov worked with hundreds of artists on games in the Uncharted series, where the amount of work could be overwhelming. He started his Los Angeles company to help artists build out their virtual worlds. Publishers will like this tool because it can cut the costs of making a Triple-A game, which have gone from $40 million to $100 million and could easily go to $200 million in the future. Such games could require 7 million or more copies sold to make a profit. But human game artists aren't necessarily going to be put out of work by this tool. A single artist could become far more productive while dedicating less time to the boring stuff and more time to the unique nuances that can set their work apart.

Above: Andrew Maximov is CEO of Promethean AI.

Promethean AI's applications programming interface (API) is a tool set powered by patent-pending technology. It helps artists solve the problem of filling out vast spaces without being too formulaic or repetitive. It essentially visualizes the presentation of data. With the growing costs of Triple-A game development getting close to potentially unsustainable levels, Maximov believes that the substantial production efficiency gains made possible by technology such as Promethean AI will allow game developers to bring true next-generation experiences to market in an economically sound manner, at the quality their audiences deserve.

Maximov believes it's important to take care of creative people, to empower them to create things that would otherwise be impossible, and to give every artist the power of an army. For years, he has been fighting to democratize the creative process, supporting artists and empowering creativity within every single person. “We are not building technology that will replace artists,” Maximov said. “We are a power amplification tool. Everything we do assumes there will be an artist on top. We take you 80% or 90% of the way. Then you have the creative freedom to dedicate the time where you want to.”

It has a parallel with the movies. A scene artist could create an entire scene from scratch on a sound stage. Or they could travel to a location and get that scene in the real world. They could then modify the lighting and dress up other details. The interesting thing is that finding a mouse in the real world and capturing its image is pretty cheap, but creating the same thing in a virtual world is pretty expensive. “We're actively optimizing the process,” he said. “We take high-level creative intent and convert it into actionable 3D content.”

Above: Promethean AI created this when asked to create a “nerdy, messy, '80s, teenager's bedroom.”

For indie game companies, this could be a boon. They could create a game that is set in a castle and make that castle look realistic using Promethean AI. Currently, procedural (or automated) solutions can fill out artwork. But such art can look fake, and it removes the artists from the loop. An algorithm will populate trees and rocks into a forest road. The Promethean AI solution, by contrast, can be trained by artists to imitate an artist's style. It trains and learns through machine learning techniques. It evaluates a space and makes suggestions. And it supports new graphics technologies, like real-time ray tracing. It's not a black box that does only what it is trained to do, either: artists can train Promethean AI to build their idea of what a bedroom or any other virtual space should look like. In other words, Promethean AI learns.
Above: Did a human artist create this scene?

“You are never locked into a particular mode of operation with this technology,” Maximov said. You can tell Promethean AI to “find something soft” and it will do that. You can tell it to make a messy room “more tidy” and it will do that. For a more complex scene, you could tell it to create a “post-apocalyptic” scene, and you'll see things like mildew, wildlife, and overgrown greenery.

Promethean AI already has a minimum viable product, and the team is currently working closely with game studios on deploying the AI in production as part of its early adoption program. Maximov said the company is talking with game outsourcing companies about using the tool.

Above: Here's an example of the kinds of assets that Promethean AI could insert into a scene.

The tech is likely to go well beyond game companies. A movie company in Los Angeles is using it to convert movie scripts into visual scenes, so the moviemakers can quickly get a sense of what a movie set might look like for a particular script. They can ask Promethean for “asset variations” for a desk, and it will come up with the choices. “Everyone who builds virtual worlds can be a user and trainer of this AI,” Maximov said.

Artists have a lot of objects to choose from in the Unity and Unreal asset stores, where game artists can purchase objects created by others. But there's nothing that automates the process of taking that art and populating it into a massive game, Maximov said. Promethean AI is building more demos, like how to populate scenes in virtual reality. “Our focus was to make sure that all the companies that create their own custom technology are not forced to redo the same thing over and over again,” Maximov said. “It's all about creativity and the creative flow.” "
1,240
2,023
"Artificial intelligence (AI) - CNET"
"https://www.cnet.com/tags/artificial-intelligence"
"Tech Money Home Wellness Energy Home Internet Deals & Reviews New! Price Finder More Close Join/Login Tech Home Entertainment Mobile Computing Services & Software Gaming Money Credit Cards Mortgages Insurance Loans Cryptocurrency Banking Investing Taxes Home Home Internet Home Security Kitchen & Household Smart Home Energy & Utilities Yard & Outdoors Wellness Sleep Nutrition Fitness Personal Care Parenting Medical Mental Health News & Culture Politics Social Media Privacy Misinformation Culture Internet Culture Entertainment Sports Videos Science Climate Space Biology Deals & Reviews Reviews Best Products Versus Gift Guide Shopping Extension Cars Best Cars Car Accessories Car Reviews Car News Car Prices Coupons Vistaprint Coupons StubHub Discount Codes H&M Coupons ExpressVPN Coupons Home Depot Coupons Office Depot Coupons Ashley Furniture Coupons Samsung Promo Codes NordVPN Coupons Surfshark Coupons Shutterfly Promo Codes Zenni Optical Promo Codes Hotels.com Coupons Walmart Promo Codes Booking.com Promo Codes Hotwire Promo Codes Want CNET to notify you of price drops and the latest stories? No, thank you Accept CNET Tags Artificial intelligence (AI) Artificial intelligence (AI) Latest News AI and You: OpenAI CEO Sam Altman Is Fired, the Rise of Synthetic Performers Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments. Article by Connie Guglielmo How Close Is That Photo to the Truth? What to Know in the Age of AI AI can let you lie with photos. But you don't want a photo untouched by digital processing. Article by Stephen Shankland Google Opens Up Its Bard AI Tool to Teenagers Around the World The company wants you to know that it's "continuing to be responsible" as it brings its generative AI tools to more people. Article by Gael Fashingbauer Cooper Qualcomm's Next Chip Brings ChatGPT-Like AI to More Affordable Phones The Snapdragon 7 Gen 3 brings performance upgrades and on-device AI to phones just below the top tier. Article by David Lumb Travel Planning With AI: I Tested It for a City I Know Inside and Out Thinking of asking AI to plan your next trip? Read this first. Article by Katie Collins AI or Not AI: Can You Spot the Real Photos? Take a look at these images and see if you can tell which are "fake." Gallery by James Martin CNET's Pro Photographers React to AI Photos Just as photography changed drawings and paintings, AI will forever change our photos. Video by Patrick Holland AI Assistants Need to Know a Lot About You to Work Best. Is That OK? To get the best of new generative AI assistants, you'll need to contend with these chatbots learning more about you. Article by Lisa Eadicicco How Qualcomm Plans to Bring Apple's Ecosystem Perks to Windows and Android Snapdragon Seamless could ease connectivity between devices from different companies for better functionality if brands get onboard. Article by David Lumb AI and You: ChatGPT Lets You Roll Your Own 'GPTs,' Wearable AI May Be the Next Big Thing Get up to speed on the rapidly evolving world of AI with our roundup of the past week's developments. Article by Connie Guglielmo The New Beatles Video: How AI Is Helping and Hindering the Music Industry A new Beatles song and video, released 50 years after the band broke up, is just one note in a crescendo of AI and music. Here's the latest on how AI's push into the industry is turning up the volume on the debate. 
Article by Gael Fashingbauer Cooper Qualcomm's PC Chip Could Mean Windows PCs as Good as Apple MacBooks Apple's M3-based MacBooks have launched, but Windows users still want their own efficient yet powerful laptops. Article by David Lumb See More More From CNET Deals Reviews Best Products Gift Guide Shopping Extension Videos Software Downloads About About CNET Newsletter Sitemap Careers Policies Help Center Terms of Use Privacy Policy Licensing Cookie Settings Do Not Sell or Share My Personal Information "
1241
2023
"Google reportedly collects health data on millions of Americans without informing patients - CNET"
"https://www.cnet.com/news/google-reportedly-collecting-health-data-on-millions-of-americans-without-informing-patients"
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech Tech Industry Google reportedly collects health data on millions of Americans without informing patients The initiative, called Project Nightingale, is a partnership with Ascension, the second-largest health care system in the US. Richard Nieva Former senior reporter Richard Nieva Nov. 11, 2019 5:37 p.m. PT 3 min read Angela Lang/CNET Google is collecting detailed health data on millions of Americans through a partnership with Ascension, the nation's second-largest health care system, according to a report Monday by The Wall Street Journal. The initiative, called Project Nightingale, collects information from people across 21 states, including data on lab results, diagnoses and hospitalization records, and also includes patient names and birthdates. The purpose of the project is reportedly to design health software that could home in on a patient's medical history. Patients and doctors haven't been informed of the Google partnership, and Ascension employees have raised concerns over the project, the Journal said. After the Journal report was published, Ascension issued a press release announcing the partnership. Ascension said the deal involves its infrastructure being moved onto Google's cloud platform, as well as the company adopting Google's G Suite productivity tools. The company said the deal is compliant with HIPAA, the federal law regulating the security and privacy of certain medical information. "As the health care environment continues to rapidly evolve, we must transform to better meet the needs and expectations of those we serve as well as our own caregivers and health care providers," said Eduardo Conrado, Ascension's executive vice president of strategy and innovations. Google also released a statement late Monday, calling the agreement with Ascension "standard practice in health care." "To be clear: under this arrangement, Ascension's data cannot be used for any other purpose than for providing these services we're offering under the agreement," Tariq Shaukat, president of Google Cloud, said in a blog post. "And patient data cannot and will not be combined with any Google consumer data." "By working in partnership with leading health care systems like Ascension, we hope to transform the delivery of health care through the power of the cloud, data analytics, machine learning, and modern productivity tools," Shaukat said. The project announcement comes as Google makes a bigger push into health care. Earlier this month, the search giant said it's buying Fitbit, a fitness tracker company, for $2.1 billion, signaling a deeper investment in health services. 09:59 Google, though, has received blowback for its treatment of medical information in the past. Two years ago, Google, the University of Chicago and an affiliated medical center struck a partnership that allowed the search giant to use patient data and health records in an attempt to improve predictive analysis. But in July, Google, the university and the medical center were hit with a lawsuit after the medical center allegedly shared records with Google without stripping away identifiable information. That data included doctors' notes and date stamps for "hundreds of thousands" of patients. At the time, Google said it acted in accordance with the law. 
The University of Chicago said the claims were "without merit." In another project, DeepMind, a Google artificial intelligence unit in the UK, got into hot water for the way it used data obtained through partnerships with hospitals. In 2016, DeepMind unveiled a pact with the Royal Free Hospital in London to build an app that would identify patients with acute kidney damage. But not every patient was aware that his or her data was being given to Google to test the app. Google's parent company, Alphabet, also has a robust operation around medical research. Alphabet's health tech arm, called Verily, has developed medical-focused wearables, including a smart contact lens for people with age-related farsightedness and a sensor-packed watch to collect data for clinical studies. Another Alphabet company, Calico, is trying to expand the length of the average human lifespan. On Monday, news of Project Nightingale riled up lawmakers. "Blatant disregard for privacy, public well-being, & basic norms is now core to Google's business model," Sen. Richard Blumenthal, a Democrat from Connecticut, said in a tweet. "This abuse is beyond shameful." Originally published Nov. 11, 11:18 a.m. PT. Update, 3:08 p.m. PT: Adds information from Ascension's press release; 4:37 p.m. PT: Adds comment from Sen. Richard Blumenthal; and 5:37 p.m. PT: Adds further comment from Google. "
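The Chicago dispute turned on records being shared without direct identifiers stripped out. As a rough illustration of what that stripping involves, here is a minimal Swift sketch; the field names are invented, and real de-identification (HIPAA's Safe Harbor rule lists 18 categories of identifiers) goes far beyond this.

```swift
import Foundation

// Illustrative only: drop direct identifiers from a record before sharing
// it for analytics. Field names are invented; this is nowhere near a full
// HIPAA Safe Harbor implementation.
struct PatientRecord {
    let name: String
    let birthDate: Date
    let labResults: [String]
    let diagnoses: [String]
}

struct DeidentifiedRecord {
    let birthYear: Int        // generalized from the full birth date
    let labResults: [String]
    let diagnoses: [String]
}

func deidentify(_ record: PatientRecord) -> DeidentifiedRecord {
    // Drop the name entirely; coarsen the date of birth to a year.
    let year = Calendar.current.component(.year, from: record.birthDate)
    return DeidentifiedRecord(birthYear: year,
                              labResults: record.labResults,
                              diagnoses: record.diagnoses)
}
```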
1242
2023
"Google pushes further into health care with Fitbit, raising new privacy concerns - CNET"
"https://www.cnet.com/news/google-pushes-further-into-health-care-with-fitbit-raising-new-privacy-concerns"
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech Tech Industry Google pushes further into health care with Fitbit, raising new privacy concerns The search giant has faced blowback for its health care projects in the past. Richard Nieva Former senior reporter Richard Nieva Nov. 2, 2019 5:00 a.m. PT 4 min read Google announced it will buy Fitbit. Stephen Shankland/CNET Google's $2.1 billion purchase of Fitbit signals that the search giant is intent on burrowing deeper into our lives, giving it access to some of our most personal health information. If the company's earlier efforts in health care are any indication, Fitbit owners may want to consider where that information might end up. Over the past three years, Google's experiments in using artificial intelligence and medical data have resulted in government complaints and lawsuits because critics say patient privacy wasn't protected. In one case, Google was provided with identifiable patient information that included doctors' notes and date stamps. Google didn't respond to a request for comment. Google's push into health care comes as lawmakers and consumers increasingly express concerns about the amount of personal information big tech companies collect about users, much of which is used to target ads. On Friday, when it unveiled the deal, Google pledged that Fitbit's health and wellness data won't be used for its massive ad business. Still, analysts say Google's relationship with Fitbit, the most popular step counter on the market, could be even more invasive. Health data could factor into other projects. For example, it could be used for medical apps or deepen the company's relationships with health insurance providers, said Carolina Milanesi, an analyst at Creative Strategies. "The data doesn't have to be for advertising," she said. "When it comes to health, there's a hell of a lot of money to be made over services." That could include health insurance tie-ins or broader medical apps, Milanesi said. For example, the company could follow Apple's lead. The iPhone maker has a deal with insurance company Aetna that would let people earn points to subsidize the cost of an Apple Watch. Google tried to downplay concerns, saying Fitbit would slot into its growing consumer device business, which includes its flagship Pixel 4 phone, Nest Mini smart speaker and mesh Wi-Fi router. The search giant also knew privacy issues would be top of mind and tried to pacify any concerns. "Similar to our other products, with wearables, we will be transparent about the data we collect and why," Rick Osterloh, Google's hardware chief, said in a blog post. "We will never sell personal information to anyone." Founded in 2007, Fitbit is a pioneer in wearable tech. It helped usher in the age of step counters. Though the company has struggled financially, Fitbit still had a market share of 10.1% in the second quarter of 2019, shipping 3.5 million devices, according to IDC's latest wearables report. Fitbit is also seeking FDA clearance on sleep and heart rate measurements. Google and Alphabet, its parent company, already have a robust operation around medical research. Alphabet's health tech arm, called Verily, has developed its own wearables, including a smart contact lens for people with age-related farsightedness and a sensor-packed watch to collect data for clinical studies. 
Another Alphabet company, Calico, is trying to expand the length of the average human life span. But Google has faced blowback for other health care projects that involved sensitive, personal information about patients. Two years ago, Google, the University of Chicago and an affiliated medical center struck a partnership that allowed the search giant to use patient data and health records in an attempt to improve predictive analysis. But in July, the search giant, the university and the medical center were hit with a lawsuit after the medical center allegedly shared records with Google without stripping away identifiable information. That data included doctors' notes and date stamps for "hundreds of thousands" of patients. At the time, Google said it acted in accordance with the law. The University of Chicago said the claims were "without merit." The incident in Chicago wasn't isolated. DeepMind, a Google artificial intelligence unit in the UK, got into hot water for the way it used data obtained through partnerships with hospitals. In 2016, DeepMind unveiled a pact with the Royal Free Hospital in London to build an app that would identify patients with acute kidney damage. But not every patient was aware that his or her data was being given to Google to test the app. Fiona Caldicott, National Data Guardian at the UK's Department of Health, called the project legally "inappropriate," according to The Guardian. The search giant is already facing trust issues when it comes to data and privacy. Google CEO Sundar Pichai in May published an op-ed in The New York Times called "Privacy Should Not Be a Luxury Good." In the article, Pichai vows that the company will try to do more with less data. When it comes to health data, those concerns are amplified -- especially if that information is carelessly shared with third parties, breached or exploited. Google will have to convince Fitbit consumers that it's up to the task of protecting their most sensitive information. "Privacy breaches in the hands of the wrong people are devastating," said Brian Solis, an author and analyst who's written about how data is changing traditional industries. "In this case, it's terribly personal data." "
1243
2019
"Nvidia wants AI Clara to be the AI platform for radiologists | VentureBeat"
"https://venturebeat.com/2019/04/08/nvidia-wants-ai-clara-to-be-the-ai-platform-for-radiologists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia wants AI Clara to be the AI platform for radiologists Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia’s Clara AI toolkit for health care is being incorporated into ACR AI-LAB , an American College of Radiology (ACR) Data Science Institute platform in development now. Once it’s released later this year, the platform will be provided free to 38,000 radiologists in the United States so they can use more machine learning into their work. The move follows a three-month pilot assessment program at Ohio State University (OSU), Massachusetts General Hospital, and Brigham and Women’s Hospital’s Center for Clinical Data Science (CCDS). The Clara software development kit (SDK) became generally available last fall in conjunction with the Radiological Society of North America (RSNA) conference. At that time, Nvidia also announced the launch of toolkits for AI-assisted annotation and transfer learning, as well as increased efforts to create workstations and servers for major health care companies and institutions. GE Healthcare and Nuance are also participating in the creation of the ACR AI-LAB for building and sharing AI models for radiologists. The ACR AI-LAB will make its public debut in May and will include the ability to manually adapt or train systems or incorporate pretrained models or publicly available datasets. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tech companies have shown an increasing interest in medical imaging like X-rays and computed tomography (CT) scans. The use of pattern recognition and image processing in health care in recent years has led to increased research and product deployment from companies like Google, Nvidia, and Baidu in a number of specialty fields like radiology, pathology, drug discovery, and oncology, the branch of medicine that specializes in the detection and treatment of cancer. Advances in machines able to diagnose disease as well as or better than medical imaging professionals is noted in the 2017 AI Index report. Examples of steps forward in computer vision in the past year include Baidu Research’s cancer detection model that can outperform human pathologists and fastMRI, a system developed by Facebook AI Research with the NYU School of Medicine that speeds the performance of MRI scans. 
In recent months, Alphabet’s Verily began to deploy AI for detection of diabetic retinopathy in India, and Google’s DeepMind introduced AI to diagnose and recommend treatment for more than 50 eye diseases. In addition to computer vision for pattern recognition, last September, Nvidia, together with the Mayo Clinic and the MGH and BWH Center for Clinical Data Science, introduced methods to create synthetic training data for AI systems to detect brain tumors. "
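Transfer learning, one of the toolkits mentioned above, means training a small task-specific classifier on top of a network pretrained on general images. As a generic illustration of the technique — using Apple's Create ML on macOS rather than Nvidia's Clara SDK, so the code below is not Clara's API — a sketch might look like this. The directory paths are placeholders; each labeled subfolder holds images of one class.

```swift
import CreateML
import Foundation

// Transfer learning sketch: MLImageClassifier trains a small classifier on
// top of a pretrained feature extractor, the same basic idea Clara's
// transfer-learning toolkit applies to medical images. Paths are placeholders.
let trainingDir = URL(fileURLWithPath: "/data/scans/train") // one subfolder per label
let testingDir  = URL(fileURLWithPath: "/data/scans/test")

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on held-out images and persist the trained model.
let metrics = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Classification error:", metrics.classificationError)
try classifier.write(to: URL(fileURLWithPath: "/tmp/ScanClassifier.mlmodel"))
```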
1244
2019
"Everything Apple announced at its 2019 iPhone event | VentureBeat"
"https://venturebeat.com/2019/09/11/everything-apple-announced-at-its-2019-iphone-event"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Everything Apple announced at its 2019 iPhone event Share on Facebook Share on X Share on LinkedIn Tim Cook at Apple's By Innovation Only iPhone event, September 10 (2019) Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Apple CEO Tim Cook took to the stage yesterday for the much-hyped By Innovation Only iPhone event — and, as expected, the Cupertino company unveiled more than a bunch of new smartphones. Here’s a quick recap of everything Apple announced at the Steve Jobs Theater in Apple Park on Tuesday. iPhones iPhone 11 After months of leaks and rumors, Apple finally unveiled its new baseline iPhone model — the iPhone 11. Similar to its predecessor, the iPhone XR , the iPhone 11 retains the the same 6.1-inch (1792 x 828-pixel) display, replete with a black notch and bezel. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! There are some notable changes, of course, perhaps most notably the new dual-lens rear camera setup. Above: iPhone 11 With a $699 price tag (about $50 less than the corresponding device last year), it seems Apple is also adopting a more competitive pricing structure to combat falling iPhone revenues. Read more : Apple launches iPhone 11. iPhone 11 Pro / iPhone 11 Pro Max As is now standard at Apple events, the Cupertino company unveiled not one but three new iPhones yesterday. This is also the first time Apple has split its smartphones into two categories (with the addition of the Pro series), not unlike the way its Mac and iPad lineups are arranged. The iPhone 11 Pro sports a 5.7-inch display, while the iPhone 11 Pro Max comes with a 6.5-inch screen. A number of key differences separate these devices from their predecessors (and the iPhone 11), but the most obvious is that they both now have a triple-lens setup that will prove particularly useful with low-light photography. Above: iPhone 11 Pro devices have 3 camera lenses In terms of price, the iPhone 11 Pro / iPhone 11 Pro Max start at $999 and $1,099, respectively. Read more : Apple launches iPhone 11 Pro and iPhone 11 Pro Max and iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max: What Apple changed. A13 Bionic chip Above: Apple A13 chip Underpinning its new iPhone 11 lineup, Apple also unveiled its next-gen A13 Bionic processor, which should make the new models run faster — this is particularly good news for gamers, in terms of graphics capabilities. 
Kaiann Drance, Apple’s senior director of iPhone marketing, said the A13 “is the fastest CPU ever in a smartphone.” Read more: Apple announces A13 Bionic chip for iPhone 11. Out with the old As is now standard with the arrival of new iPhones, Apple cut the prices of the older models — the iPhone 8 and iPhone XR — by $150. Moreover, Apple also discontinued the iPhone 7 and iPhone XS. Read more: Apple cuts iPhone 8 and iPhone XR prices by $150, kills iPhone 7 and iPhone XS. Other hardware iPad The buildup to yesterday’s event sparked rumors that Apple was planning to shoehorn a new iPad into its keynote — and it did. The new entry-level seventh-generation iPad, which starts at $329, comes with a bigger screen — for the first time in its history, Apple’s new base-level tablet will ship with a 10.2-inch display, around a half-inch bigger than its predecessor. Read more: Apple unveils entry-level 2019 iPad with a 10.2-inch screen. Watch As expected, Apple introduced the refreshed Watch Series 5 yesterday, with several external and internal tweaks encased within similar (40mm) and larger (44mm) versions. Above: Apple Watch Series 5 The main draw this time around is an always-on display that promises the best part of a full day on a single charge, and the watch also has a built-in compass. Prices start at $399 and go all the way up to $1,299, depending on the model and material. In related news, Apple also discontinued the Watch Series 4 while simultaneously dropping the Series 3 price to start at $199. Read more: Apple launches Watch Series 5. Services / subscriptions Apple TV+ Apple had previously announced its subscription TV and movie service, which it calls Apple TV+, but until yesterday’s event we didn’t have many details with regard to pricing and availability. Apple TV+, which will feature ad-free original content from such big names as Steven Spielberg, Ron Howard, Sofia Coppola, Reese Witherspoon, and J.J. Abrams, will cost $4.99 per month when it launches globally on November 1. Anyone who buys a new Apple device this year will receive a year’s free access to the service. Read more: Apple TV+ costs $4.99 per month and launches on November 1. Apple Arcade Apple announced its Apple Arcade subscription gaming service back in March, but it wasn’t until yesterday’s event that we learned how much it would cost and when it would be available. We now know that Apple Arcade will launch in 150 markets on September 19 for $4.99 per month and will allow up to six family members to play ad-free games on iPhone, iPad, Mac, and Apple TV offline. A bunch of Apple Arcade games were also announced and demoed at Apple’s event yesterday, including Rayman, Pac-Man, and Steven Universe games. Read more: Apple Arcade will cost $4.99 per month, debuts September 19. Other news Research Apple unveiled the Apple Research app for iPhone and Apple Watch users who wish to share their health data. Related to this, Apple also introduced a series of studies with major health research organizations. Read more: Apple unveils Research app with heart and women’s health studies. Operating systems Finally, Apple confirmed the launch dates for its various operating systems, with iOS 13 and watchOS 6 arriving on September 19 and iPadOS 13 on September 30. The next macOS installment — Catalina — will arrive sometime in October. Read more: Apple announces OS launch dates. "
1245
2019
"Apple will add HomePod multi-user, radio, and noise features after iOS 13 | VentureBeat"
"https://venturebeat.com/2019/09/11/apple-will-add-homepod-multi-user-radio-and-noise-features-after-ios-13"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple will add HomePod multi-user, radio, and noise features after iOS 13 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Apple’s smart speaker HomePod hasn’t exactly taken off with consumers since it launched in February 2018, so relatively few people will be disappointed to learn that Apple has pushed back some of its anticipated software updates to two separate dates after the release of iOS 13. According to an updated HomePod product page , some new features will now arrive on September 30, while others are coming at an unspecified point “later this fall.” On September 30, Apple will update the HomePod with one new feature: support for streaming 100,000 internet radio stations. The addition effectively harnesses the collected offerings of independent services iHeartRadio, Radio.com, and TuneIn, enabling HomePod users to start playing live channels with simple voice commands. Bigger features were held back for the “later” fall update. One will enable iOS and iPadOS devices to hand off calls, songs, and podcasts to the HomePod just by holding their devices next to the speaker. Another promises to recognize up to six different people by their voices, delivering personalized music, messages, reminders, and phone calls through the HomePod based on the current user. This so-called “Personal Requests” feature will apparently only be available in English to start. One fall surprise is a new feature called Ambient Sounds, which will let the HomePod serve as a background noise generator with samples of “ocean waves, forest birds, rainstorms, and more.” While third-party apps have offered similar functionality on iOS devices for over a decade, and can be sent to the HomePod over AirPlay, the integrated feature promises to make accessing the listening experience easier for users. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! HomePod software updates were originally pushed simultaneously with iOS, as the speaker uses Apple’s A8 processor and runs a version of the mobile operating system. While iOS 13 will launch on September 19 , it will be followed by iOS 13.1 and iPadOS 13 on September 30, along with the first aforementioned HomePod update. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1246
2019
"Apple will release iOS 13 and watchOS 6 on September 19, macOS Catalina in October | VentureBeat"
"https://venturebeat.com/2019/09/10/apple-will-release-ios-13-and-watchos-6-on-september-19-macos-catalina-in-october"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple will release iOS 13 and watchOS 6 on September 19, macOS Catalina in October Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Three months after giving developers access to beta versions of its new Apple TV, Apple Watch, iPad, and iPhone operating systems, Apple has officially set a release date for the final releases of iOS 13 and watchOS 6 for certain devices: September 19, 2019. Other final releases have been pushed back into October and the fall. Today’s announcement notably covers version 13.0 of iOS and watchOS 6 for Apple Watch Series 3 and 4, but not Series 1 and 2, which are now set to be released at an unspecified time later this fall. Additionally, Apple unexpectedly debuted beta versions of iOS and iPadOS 13.1 last month, and now plans to release that iOS update on September 30, with iPadOS 13 shipping the same day. Version 13.0 of the new phone and tablet operating systems bring system-wide Dark Mode support, a large number of customization tweaks to Apple’s Memoji, and quality-of-life improvements to apps such as Maps and Health, but held back one feature — simultaneous audio streaming to two pairs of AirPods or PowerBeats Pro wireless earphones — for version 13.1. As noted in our hands-on reports, tvOS 13 and watchOS 6 include some major visual and functional tweaks for Apple TV and Apple Watch users. The new tvOS brings yet another refresh to the device’s Home screen, enabling full-screen video previews of content while icons float in the foreground, as well as compatibility with Microsoft and Sony Bluetooth game controllers, and adding a Control Center with multi-user support. tvOS 13 also supports Apple’s new subscription services, Apple Arcade and Apple TV+, and adds oceanic screensavers. watchOS 6 adds a handful of new watch faces to the mix, including a beautiful new solar dial and more readable large numeral options. A handful of new first-party apps are bundled in, including Voice Memos, a calculator, and a Cycles period tracker, along with an on-device App Store for direct-to-Watch downloads. Apple is also releasing the final 10.15.0 version of its Mac operating system, macOS Catalina , in October; there’s no specific date, so it could be early, the middle, or late in the month. Catalina finally deprecates the all-in-one media management and playback application iTunes into a series of smaller apps for Music, TV, and Podcasts, as well as rolling iPhone and iPad device management directly into the Finder. 
Catalina also includes Sidecar, a wired or wireless way to turn an iPad into a second Mac display, and under-the-hood enhancements to enable iPadOS apps to run — with modifications — on the Mac. "
1247
2019
"iOS 13 will stop VoIP app background data collection, impacting Facebook | VentureBeat"
"https://venturebeat.com/2019/08/06/ios-13-will-stop-voip-app-background-data-collection-impacting-facebook"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages iOS 13 will stop VoIP app background data collection, impacting Facebook Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Though third-party developers have been quietly gathering data on iOS users as their apps run in the background, Apple is reportedly stopping the practice in iOS 13, forcing Facebook and other companies to significantly change their apps. The change was reported today by The Information , but has been awaited in Apple’s mobile operating system for quite some time. In prior versions of iOS, third-party communications applications have relied on PushKit, a background VoIP process that enabled them to detect incoming calls without the app being reopened — a user convenience that some developers exploited to collect data even when their apps weren’t actively in use. According to the report, iOS 13 will restrict the background process so it can only be used for internet-based calls, and will cut off background data collection, a change that is expected to heavily affect Facebook’s WhatsApp , as well as requiring rebuilding of other apps including Facebook Messenger , Snapchat , and WeChat. Users have cited problems with iOS background tasks for years, with numerous early reports pointing to Facebook’s core app as the source of unexpected battery drain. Facebook subsequently split off its own live communications features into Facebook Messenger, while other apps — including the aforementioned WhatsApp, Snapchat, and WeChat — have continued to grow in popularity, in part because of the ease of sharing encrypted content with friends and business contacts. WhatsApp apparently relies in part on the PushKit feature for its end-to-end encryption. For its part, Facebook has denied that it’s using PushKit for collecting data, but suggests that it’s working to address the issue. “The changes to the upcoming iOS releases are not insignificant,” a spokesperson told The Information, “but we are in conversations with Apple on how best to address” them. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Apple is implementing the change in iOS 13, which is currently in beta with an expected September release date, but will apparently give developers until April 2020 to update their software for compliance. 
Once major apps have been updated, users will enjoy improved privacy, as well as potentially major battery life improvements, depending on how much energy the communications apps were using in the background. "
1248
2019
"Apple Maps hands-on: Look Around and folders bring depth to iOS and iPadOS 13 | VentureBeat"
"https://venturebeat.com/2019/06/12/apple-maps-hands-on-look-around-and-folders-bring-depth-to-ios-and-ipados-13"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple Maps hands-on: Look Around and folders bring depth to iOS and iPadOS 13 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Apple’s Maps application has come a long way since its inauspicious 2012 launch, which blackened the company’s eye and cost it a key executive. Over the next six years, Apple refined the app with new 2D and 3D maps, much-improved driving directions, and tools to navigate transit systems, all in the name of improving its reliability and utility. Apple Maps may not have eclipsed Google Maps , but it’s become a true peer, excelling in some ways while remaining behind in others. It’s no exaggeration to say that Maps is going to take another major leap forward this year, thanks in part to iOS 13 and iPadOS 13. The new iPhone and iPad operating systems share a couple of major improvements that will increase Maps’ depth, while related server-side updates will bring dramatically improved 2D cartography to U.S. users by year’s end. Look Around: Google Street View, reinvented The most stunning addition to Maps is the photographic product of mapping vehicles Apple has been sending out across the world, apparently starting in California: Look Around. Currently available for testing in Northern California, the feature provides better than 360-degree street level views of major roads and destinations: You’re able to pan left, right, up, and down, even seeing (limited) 3D depth in each image. When the feature was announced at WWDC, it was quickly derided in some quarters as “Apple inventing Google Street View,” but the truth is that it’s a reinvention of Street View with features Google will be hard-pressed to mimic without re-mapping every street in the world. During hands-on testing, it becomes obvious that objects such as cars, light poles, and trees are separate objects from other parts of the backgrounds, a cool pseudo-3D effect that makes each image look more realistic. Tapping anywhere on the screen shifts your perspective, as well, so you can move up and down streets just like Street View. The difference here is that you move through what feels more like 3D space than just transitioning between heavily distorted 2D pictures. Once you’ve seen Apple’s alternative, you’ll find it hard to look at Street View the same way. 
Collections: Folders for Maps “Wait, there are folders in Maps now?” Even if you watched Apple’s WWDC keynote where Collections were first discussed, you might have missed the point of the feature: Yes, Apple has added location folders to Maps. Collections enable you to group together lists of locations (“Places”) for whatever purpose, name the folder, and sort the contents by name, distance, or date added, as shown below. Above: Maps on iPadOS 13. This is different from iOS 12 (below), which maintained a single list of Favorites across the globe for later examination. The newer version of Maps really enables you to focus on the specific types of destinations that matter to you at a given time, simply by selecting a folder with a quick tap. Above: Maps on iOS 12. Using the Collections system, you can create a “Daily Commute” folder with destinations that might be useful on your way to or from work, a “Visit Washington” folder with your hotel and preferred tourism locations, and so on. Each one opens the map to show the specific geography covered by your locations, letting you drill down to color-coded, high-contrast dropped pins with pinch and expand gestures. A related change redesignates “Favorites” to actually mean “locations you visit the most.” When you open Maps in iOS and iPadOS 13, the first thing you’ll see is a display of large circular “Favorites” icons for your home and work, guiding you to register preferred addresses for each within your “Me” Contact Card. Maps also lets you add additional large icons for one-tap reference whenever you open the app. If your car was recently parked, you’ll also see a Siri Suggestion that enables you to quickly find it nearby. That’s not a new feature, but it’s automatically surfaced above your Favorites when you’re not at home. Higher-detail 2D maps The other key change coming to Maps this year (and beyond) is a dramatic upgrade to the flat-shaded “Map” images that are rendered by the device rather than using photographs or textured 3D models. As noted in a report last year, Apple’s goal with these maps is to increase the level and accuracy of ground-level detail available without photography, enabling better views of forests, walkways, sporting areas, and swimming pools, among other areas that aren’t just roads and buildings. (Apple also said last year that it will be using anonymous crowdsourced data to provide better traffic guidance alongside the improved maps.) While the improvements have been rolling out over the last year and should be visible to iOS 12 users as well, they again started in Northern California and are making their way across the country by the end of 2019. That should mean that you’ll begin to notice things like course-level details at local golf clubs (rather than big green areas) across many U.S. cities and towns around the time iOS 13 and the first iOS 13 devices become available. The improved flat-shaded maps should be visible on macOS Mojave and Catalina. So far, it doesn’t appear that Look Around or Collections will be included in Catalina’s version of Maps, which still has a 2018 date and is labeled version 2.1. Early thoughts Apple’s 2019 improvements to Maps aren’t so much flashy as functional, making the app easier to quickly consult for lists of related destinations, and properly ascertain ground-level details from either surprisingly robust Look Around photography or more precise flat-shaded cartography. 
Thus far, everything works as expected, albeit within a very limited geographic area — the new Look Around feature doesn’t even work in Los Angeles yet, from what I could see. Improving a mapping app clearly isn’t easy, especially when Apple has to go up against Google, which has added everything from offline maps to business messaging and robust data about individual businesses to its own app over the years. But iOS and iPadOS 13 will soon come at least a little closer to supplanting Google Maps as a daily navigation system, and for some people may fully eliminate the need to rely upon Google’s tools (and frequent personal tracking) going forward. "
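As a footnote to the Collections feature described above, here is a small Swift sketch of the data model it implies — a named folder of places sortable by name, distance, or date added. All types here are invented for illustration; Apple has not published Maps' internal API.

```swift
import CoreLocation
import Foundation

// Invented model of a Maps-style "Collection": a named folder of places
// that can be sorted by name, distance from a point, or date added.
struct Place {
    let name: String
    let coordinate: CLLocationCoordinate2D
    let dateAdded: Date
}

struct PlaceCollection {
    let title: String
    var places: [Place]

    enum Sort { case name, distance(from: CLLocation), dateAdded }

    func sorted(by sort: Sort) -> [Place] {
        switch sort {
        case .name:
            return places.sorted { $0.name < $1.name }
        case .distance(let origin):
            return places.sorted {
                let a = CLLocation(latitude: $0.coordinate.latitude,
                                   longitude: $0.coordinate.longitude)
                let b = CLLocation(latitude: $1.coordinate.latitude,
                                   longitude: $1.coordinate.longitude)
                return a.distance(from: origin) < b.distance(from: origin)
            }
        case .dateAdded:
            return places.sorted { $0.dateAdded < $1.dateAdded }
        }
    }
}
```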
1249
2019
"Hands-on with Find My: Track Apple devices, people, and probably Tags | VentureBeat"
"https://venturebeat.com/2019/06/06/hands-on-with-find-my-track-apple-devices-people-and-probably-tags"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hands-on with Find My: Track Apple devices, people, and probably Tags Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Many years after Apple added both GPS and location services to iPhones, I’m still a little uneasy about having family and friends track my whereabouts, and repulsed that strangers might gain access to my current or historic location data. It’s not that I have anything to hide — my existence is regrettably quite sedentary — but rather the principle that I should be able to move from place to place without fear of being stalked by anyone. I’ll admit that part of my concern comes from Apple having stirred the pot on privacy issues for the last few years ; there’s no doubt that numerous companies, including tech giants , are harvesting data for various reasons, innocuous and otherwise. Apple’s new iOS, iPadOS, and macOS application “Find My” is an effort to thread all of the company’s location-specific privacy considerations through the needle of location tracking. While it gives you the ability to load one app that tracks both Apple devices and Apple device users — plus the potential for tracking rumored standalone Apple Tags — it takes a thoroughly consent-based approach to location services. In a world where tracking so often takes place in the background without your awareness that it’s happening, Find My users may feel that they’re specifically agreeing to and managing the minutiae of … well, almost everything. Let’s take a look at what Find My brings to the table for iOS 13 , iPadOS 13, and macOS Catalina users — and what’s likely to come later this year. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Devices: Find your iPhone, iPad, Mac, or other Apple product One of the two prior iOS apps Find My subsumes is Find My iPhone , which for years has enabled users to remotely evoke location services on their own devices, pinpointing each on a map. The app’s Devices tab displays a large map scaled to show the varied locations of all devices linked to your iCloud account — including those used by family members — plus an expandable list that lets you drill down into the details for any specific device. 
An Apple Watch, for instance, will be listed at a specific address with its current battery level — assuming it has recently connected via iPhone or its own cellular connection to the internet — plus the ability to make it play a sound for easy location, mark it as lost to lock its functionality and trigger a display of your contact information, or instantly get directions using Apple Maps. It’s worth noting that macOS Catalina will add the “lock” feature for Macs with T2 security chips, including recent laptops and desktops, joining the many smaller iOS and watchOS devices that previously had the feature. Thoughtfully, Apple’s locking procedure not only prevents the device from being erased and reused by someone else, but also disables Apple Pay, so it can’t be used to make purchases until you retrieve it. While AirPods, iPod touches, and iPads are also on the list of tracked devices, Apple TVs and HomePods registered to your iCloud account won’t appear. Yet. Apple Tags: Possibly track otherwise untrackable items using Offline Finding Prior to this year’s WWDC, there was a rumor that Apple was working on its own competitor to Tile — the standalone device that attaches to a keychain or goes inside luggage, enabling anyone nearby with a Bluetooth connection and Tile app to help locate the item. Developers have already found apparent references to Apple’s version, “Tag,” in the Find My beta code, so it looks like it’s happening — the only questions are “how,” “when,” and “how much?” Without confirming Tag’s existence, Apple appears to be answering the “how” part at WWDC by disclosing a cryptography solution that enables a new mode, Offline Finding. Even when not connected to a network, Apple devices running iOS 13, iPadOS 13, or macOS Catalina will use Bluetooth to emit a rotating public key that is passively picked up by all other Apple devices nearby, and relayed — with encrypted, anonymous location data — to Apple’s servers. Only the owner of the device will have the ability to see the device and decrypt the location. The more devices that run the latest Apple OSes, the bigger the location network becomes. If you stop and think about that for a moment, the amount of device location data being shared by Apple devices is going to increase exponentially, and in the wrong hands, it could be at least as concerning as all the location data that Facebook, Google, and others have been harvesting. But if you trust Apple, the power of its network to find people and things could be huge, and all it would take to add tracking to a “Tag” would be Bluetooth hardware and a long-lasting battery. I strongly suspect Tags are coming, quite possibly as soon as this fall. If so, they’ll probably be very small and thin, enabling all sorts of additional items to be tracked. Rumors suggest that you’ll be able to share a tagged item’s location with friends or family to help locate and retrieve it. Whether the batteries will be rechargeable or replaceable remains an open but very important question. Pricing could, of course, be a problem. On the rare occasion Apple has marketed $20-$30 items in its stores, even if they’re iPod Socks, they tend to take off with fans of the company. At that sort of per-unit price, Apple Tags could become a runaway hit, but since we’re talking about Apple, there’s surely some way they could be sold for $50 or $100 a piece, at which point many people will just shrug them off. 
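The Offline Finding design described above — a rotating public key broadcast over Bluetooth, with finders uploading location reports only the owner can decrypt — maps onto a standard public-key pattern. Below is a rough CryptoKit sketch of that pattern; it is a generic ECIES-style construction for illustration, not Apple's actual protocol, and the key-rotation step is omitted.

```swift
import CryptoKit
import Foundation

// Generic sketch: a finder encrypts a location report to a lost device's
// broadcast public key, so only the owner (holding the private key) can
// read it. Illustrative only; not Apple's real Offline Finding protocol.
let ownerKey = Curve25519.KeyAgreement.PrivateKey()  // stays with the owner
let broadcastPublicKey = ownerKey.publicKey          // emitted over Bluetooth

// Finder side: derive a symmetric key and encrypt an observed location.
let finderEphemeral = Curve25519.KeyAgreement.PrivateKey()
let finderSecret = try finderEphemeral.sharedSecretFromKeyAgreement(
    with: broadcastPublicKey)
let reportKey = finderSecret.hkdfDerivedSymmetricKey(
    using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)
let location = Data("37.3349,-122.0090".utf8)
let sealedReport = try AES.GCM.seal(location, using: reportKey)
// What gets uploaded: finderEphemeral.publicKey plus sealedReport.combined.

// Owner side: recompute the same symmetric key and decrypt the report.
let ownerSecret = try ownerKey.sharedSecretFromKeyAgreement(
    with: finderEphemeral.publicKey)
let ownerReportKey = ownerSecret.hkdfDerivedSymmetricKey(
    using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)
let decrypted = try AES.GCM.open(sealedReport, using: ownerReportKey)
print(String(decoding: decrypted, as: UTF8.self)) // "37.3349,-122.0090"
```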
People: Find friends and family, let them find you The other key feature in Find My is the People tracker, based on the old app Find My Friends. Selecting the People tab lets you see the locations of family and friends who have specifically opted into location sharing — something parents can do for young children on the same iCloud account, and adults with their own accounts can authorize or deauthorize at will. That last point is important: If you’re concerned about someone you know having continuing access to your location, you can disable sharing to that person, or turn location sharing off altogether. Otherwise, a person is tracked by the device he or she specifies, such as an iPhone, iPad, or cellular Apple Watch, and if you select a person, you can easily see them on a map, use contact information to reach out to them, or get directions to their current location with Apple Maps. You can also set up notifications specific to that person — either for yourself about them, or for them about you. With this feature, you can let a family member know when you arrive at or leave a specific location, or receive a notification when your contact arrives at or leaves a location. If the concept creeps you out for a specific person or generally, don’t share your location with that person, or at all. There’s also a “Me” feature that enables you to easily switch location sharing on and off for yourself, allow friend requests, receive location updates, and switch the map from photorealistic to shaded rendering. At the moment, Me doesn’t let you toggle which devices are being used to share location data; that choice is buried elsewhere in the device’s settings menus. Early thoughts Find My is still in the earliest beta stages, and though the iOS and iPadOS versions are fairly stable, the macOS Catalina version I tried keeps crashing rather than loading. (Update: A system-wide iCloud login issue was preventing the app from running; fixing it enabled the Catalina app to load, as shown above.) If Offline Finding works as expected come fall, Apple’s service is going to be better than ever. I personally love that these features have been integrated into a single app, even though I think they could (and should) be integrated directly into Apple Maps — something Apple may not want to do, if only to increase the visibility of Find My as a standalone feature of its operating systems. Regardless, Mac users who have been stuck loading a web browser with iCloud to use Find features will be way better off with the dedicated app, and users of Apple’s tablets and pocket devices will find the new solution at least a little more convenient and powerful than before. If Tags appear later this year, Apple’s going to have another big hit on its hands, assuming they’re reasonably priced and as easy to use as they should be. "
1250
2019
"macOS Catalina and iPadOS 13 hands-on: Lots of good reasons to upgrade | VentureBeat"
"https://venturebeat.com/2019/06/05/macos-catalina-and-ipados-13-hands-on-lots-of-good-reasons-to-upgrade"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis macOS Catalina and iPadOS 13 hands-on: Lots of good reasons to upgrade Share on Facebook Share on X Share on LinkedIn Sidecar screen sharing is one of the tentpole features of macOS Catalina and iPadOS 13. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. At this year’s Worldwide Developers Conference in San Jose, Apple introduced a record five operating systems — expected updates to iOS , macOS , tvOS , and watchOS , plus the unexpected iPadOS , a newly forked version of iOS specific to Apple’s tablets. Collectively, these five operating systems contain so many changes that they’re impractical to address in a single article, so we’re breaking them up into two pieces. Welcome to our combined look at macOS Catalina and iPadOS 13, which comes after actually living with the new OSes for a solid day. There’s one piece of good news up front: Apple’s betas have earned a reputation for being as stable as some companies’ final releases, and I’ve found these two to be pretty good even in their earliest forms. That said, the company posted an unusual warning on the download page for the iPadOS 13 beta. Important Note for Thrill Seekers : If you’re interested in living on the edge and trying out the great new features in iPadOS 13, we strongly advise waiting for the many bug fixes and refinements coming to the public beta next month. As nice as it would be to wait until next month to try these new OSes (and install them on devices we use every day), we didn’t want to wait that long to offer you this preview. So here’s what stands out in the Catalina and iPadOS betas. macOS Catalina: Breakups and getting back together When Apple began to discuss macOS 10.15, it unusually started by focusing on iTunes — an app that’s been cross-platform for years, and certainly isn’t a “Mac” feature. It turns out that there were two reasons for this: Critically, macOS Catalina has absorbed iTunes’ device synchronization and backup features. Secondarily, it has split iTunes’ remaining features into separate Music, Apple TV, and Podcasts apps. The result is that macOS Catalina itself has become even more important as a hub for iPads, iPhones, and other iOS devices, while eliminating the need to use iTunes as a conduit. Catalina’s device backup and sync features already work seamlessly — presently the same as they did in iTunes — and if anything, my sense is that these long-ignored features will get the attention and future refinements they deserve from the macOS team. 
As for the Music, Apple TV, and Podcasts apps, each listed as a 1.0 release, all I can say is that they’re pretty much exactly as would be expected — similar to prior iPad apps, but with several Mac-specific tweaks. Music (above) and Podcasts look and feel like stripped-down versions of their counterparts in iTunes, but Apple TV is an expansion. It gives the Mac access to the standalone Apple TV device’s Watch Now feature, and notably makes the Mac the second device type to receive 4K video streams, something previously reserved for the Apple TV. Collectively, these are major, tangible improvements for Mac users who frequently consume content or sync their other Apple devices to their desktop or laptop machines. I didn’t think I’d care as much about these changes as I ultimately did once I started using them — they wind up making more of a difference in macOS Catalina than one might imagine.

Above: The iPad-specific side of Sidecar can be used as a wireless Mac screen mirror, or as added space to hold apps, with the Mac’s cursor moving from screen to screen.

If there’s any single feature that made me pull the trigger on installing Catalina and iPadOS early, it’s Sidecar, Apple’s new “use your iPad as an external monitor for your Mac” feature. And boy, is Sidecar impressive. With nothing more than a click on your iPad’s name in a “Connect to:” list in a new System Preferences panel, Catalina creates a wireless connection to an iPad running iPadOS 13, letting you extend the Mac’s screen and use an Apple Pencil for cursor movement and input. You can use the iPad over Sidecar within Bluetooth range, or connect it via a USB cable to your Mac if you prefer. I’m not going to judge the performance of Sidecar at this stage, but I have to say that the feature will be so cool to a huge number of Mac/iPad owners that I’d expect sales of “second display” iPads to take off once Catalina is released. This is precisely the kind of optional but super useful feature that helps Apple sell multiple devices to individual users, and — to some developers’ dismay — obviates the need for third-party apps and accessories. In this particular case, leveraging the iPad’s display, touch, and stylus hardware for the Mac feels long overdue, and Catalina is much better for its inclusion.

There are two other big Catalina features I haven’t been able to test for myself, but find extremely exciting. Catalyst is the renamed version of Marzipan, Apple’s architecture for bringing iOS — initially only iPadOS — apps to the Mac. Most developers are getting their first look at Catalyst during WWDC, and numerous iPad apps are expected to make their way over to macOS as a result, which should be great for Mac users (and developers). The other is a dramatically enhanced version of the classic iOS feature Voice Control, now billed as an accessibility option that lets users navigate all of macOS using voice commands. I’m looking forward to digging into this in greater detail soon.

By comparison with the aforementioned changes, other Catalina features feel medium-sized to small, but they’re still welcome. For instance, Photos (above) has added an immediately visible Days browser view that turns your library into the digital equivalent of Apple’s old printed photo books, with an uneven grid spotlighting certain images and autoplaying select videos. Even in beta 1 stage, it’s smooth and beautiful — a great facelift for an app that emerged in the wake of Apple’s former iPhoto and Aperture apps, and always felt like it lacked a unique identity. 
In addition to visual and organizational reworks for Notes and Reminders, Catalina also finally gets Find My, an app that combines the “Find My iPhone” and “Find My Friends” features, though it keeps crashing in this early beta. Apple has also included the latest implementation of iOS’s Screen Time features — usage, downtime, and app limits that may have some value to workaholics, as well as kids whose parents gave them Macs instead of iPads. And there are numerous under-the-hood changes, including movement of the macOS system onto a dedicated read-only volume.

One of my favorite things about Catalina, though, is how fast it feels. Unlocking the Mac with an Apple Watch or Touch ID scan happens almost instantly. Initiating Sidecar takes much less time than I would have imagined, given the sophistication of the wireless screen sharing it’s doing. And everything seems just a little peppier on my nearly three-year-old MacBook Pro. Since this is just the first beta, which is generally the slowest and buggiest of all Apple releases, it’s almost certain to get better with time.

iPadOS 13: Forking fantastic news for iPad users

Apple’s decision to split the iPad off into its own version of iOS, called iPadOS, is really interesting. It’s hard to know exactly how Apple’s operating system development teams work, but the suggestion here is that the iPad is now — after almost a decade on the market — being treated as a “truly distinct experience” rather than sharing virtually all of its features and paradigms with the iPhone and iPod touch. If Apple handles this correctly, making iPadOS basically “iOS Plus,” this could lead to some really big innovations for its tablets while its pocket-sized devices continue to improve more iteratively. Or it could lead to new confusion as to which devices have new features. We’ll just have to see.

Like macOS Catalina, iPadOS is noticeably faster than its predecessor at the things users notice every day, including biometric unlocking. Face ID goes from “slightly sluggish” to nearly instant, and transitions into apps, between multitasking apps, and back to the Home screen all feel quicker. Again, this is just beta 1, so if all goes well, iPadOS 13 should be a lot faster than iOS 12 by the time it’s finished.

On the “finally” front, iPadOS brings one particularly long-overdue feature to Apple’s tablets — a more customizable Home screen. On an 11-inch iPad Pro, there’s now an opportunity to have 30 apps per Home screen, and either keep or temporarily add a widget panel to the left side of the first page. While the feature isn’t as useful or attractive as a more customizable widget system (see: Android) would be, it’s a step forward for iOS — err, iPadOS — and sorta-kinda gets rid of the copious amounts of wasted space in the prior iPad app interface. There’s also a system-wide Dark Mode, inherited from macOS Mojave, which works exactly as one would expect, flipping whites to blacks and light grays to dark grays. It looks really nice, seems to work properly across all of iPadOS’s integrated apps, and can be set to turn on and off automatically at sunset/sunrise or on a custom schedule.

Another huge change is the addition of a small (iPhone-sized) “floating” keyboard as an alternative to the full-width versions that for nine years have forced every iPad to lose between a third and a half of its screen space whenever you type. 
While the keyboard also has another feature — Android-like swipe-to-type ability — the real benefit here for iPad users is the gigantic savings in screen space, which will be highly useful when multitasking. You can place the keyboard wherever you want, and revert it to full size at the bottom of the screen if needed. I’m looking forward to testing this more to see how much of a difference it makes in my daily usage, but what’s here is a good start.

If these were the only changes in iPadOS, I’d be pretty happy. But there are others. You can now have multiple Slide Over apps in a stack at the edge of the screen, selecting the one you want to use by dragging a handle at the bottom of the window. Multiple instances of the same app can be opened for separate purposes. And the long-frustrating cursor movement interface has been overhauled to become more responsive to taps and movement, making it easier to place a cursor and select text for insertion and removal. There are also new copy, paste, and undo gestures, though I’m still trying to get the hang of using them.

Other “finally” tweaks are less visible but still important. iPadOS finally gains the ability to natively support external storage — flash drives, card readers, and external hard drives — which will be a game-changer for importing and managing photos, videos, and other large collections of files. Photos gets the facelift mentioned in the macOS Catalina section, plus access to many of the image editing parameters that were previously exclusive to the Mac, though the parameter selection UI currently isn’t as straightforward as the Mac’s. Videos can now be edited with the same effects and filters without the need to open iMovie.

There are a few “cute” and quality-of-life improvements that frequent Messages users will appreciate. Apple has added three new Animoji (Cow, Octopus, and Mouse) to the collection, dramatically increased the number (but sadly not the realism) of Memoji customizations, and added a new feature called Memoji Stickers. This feature adds basic Memoji support to iPads without TrueDepth cameras, automatically producing collections of stickers based on common facial expressions and emoji concepts. Kids will particularly appreciate gaining access to Memoji, and adults who have used the feature will be glad not to need to always pose and record 1-second video clips to share images with friends. Messages also lets you store a name and photo for sharing with selected contacts, a welcome feature that I’d hope to see expand into proper profiles in the future.

Under the hood, the same Voice Control feature found in macOS Catalina is coming to iPadOS 13, as are tweaks to Mail, Reminders, and Maps. Safari is receiving what Apple calls a “desktop-class” improvement, but in practice, the app looks and feels almost identical to its predecessor — the key change appears to be in forcing the iPad Safari to receive true “desktop” versions of finicky web pages such as Google Docs, which previously wouldn’t allow in-browser editing on the iPad. That’s expected to change with iPadOS. One other interesting change I noticed when taking my iPad on the road with iPhone tethering is a new feature called Low Data mode, which automatically recognizes that you’re on a hotspot and commensurately reduces app data usage to avoid killing your monthly plan. 
This is an interesting counterpoint to Android’s new 5G network awareness feature, which is designed to let users and developers unleash app data usage when connecting to fast and/or unlimited 5G data connections: While Google is preparing for its users to consume more data, Apple is enabling users to consume less.

Early thoughts

Taken as a whole, this year is bringing truly game-changing additions to macOS and iOS/iPadOS — ones that Apple’s rushed-through (but still long!) WWDC keynote could only barely do justice to. Even after a relatively short period of hands-on use of these two new operating systems, I’m legitimately enamored with both in a way that I don’t often feel with new Apple OS releases; it wouldn’t just be hard to go back to earlier versions for cosmetic reasons, as there are some fundamental “way it works” improvements in each release. For me, the best part of macOS Catalina and iPadOS 13 may well wind up being Sidecar, which will certainly expand my ability to use both a Mac and an iPad together. Time will tell whether it’s something I use daily or solely as needed, but I’m excited to have the option, and can’t wait to see how the rest of the features evolve throughout the next few months of the beta cycle. "
1,251
2,019
"iOS 13 and watchOS 6 hands-on: As 5G looms, Apple takes small steps forward | VentureBeat"
"https://venturebeat.com/2019/06/05/ios-13-and-watchos-6-hands-on-as-5g-looms-apple-takes-small-steps-forward"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages iOS 13 and watchOS 6 hands-on: As 5G looms, Apple takes small steps forward Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Earlier today, I posted a hands-on look at Apple’s macOS Catalina and iPadOS 13 , the unexpected stars of this year’s Worldwide Developers Conference in San Jose. But there were three other major operating system releases during the same (long) keynote, and though they frankly weren’t as exciting as the Mac and iPad announcements, they’re worth digging into as well. iOS has been split into two versions: iOS 13 for iPhones, and iPadOS 13 for iPads. This isn’t the first time iOS has been forked — it’s the basis of tvOS for Apple TVs, and was stripped down to become watchOS for Apple Watches — but this time, the move feels like a demotion. Apple says that iPadOS 13 gets all the same new features of iOS 13, plus some, which means that the iPhone might not be the only star of Apple’s fall show this year. It’s about time. But realistically, Apple is probably going to sell a lot more iPhones and Apple Watches than iPads this year — even though Apple is not going to have 5G phone or watch hardware to offer customers. That makes new iOS 13 and watchOS 6 software features quite important to keeping demand going. So let’s take a look at the five biggest changes on each platform. iOS 13: Darker paint, cooler Messages, better Health While the upgrade from iOS 12 to iPadOS 13 felt like a big change on my iPad, the switch from iOS 12 to iOS 13 on my iPhone feels … well, a lot less transformative. 1. Speed. Apple says that Face ID is 30% faster on both devices, and apps will launch up to twice as fast as before. But while both differences were pretty obvious on my 11-inch iPad Pro, only the app speed increase was apparent on the iPhone XS. So your mileage may vary depending on the devices you’re using. 2. Dark Mode. Without downplaying the relevance of this feature, which enables the entire iPhone UI to go from white and light gray to black and dark gray, there’s not much more to say about it than “it works,” though presently only in first-party apps — something that will change during the beta period as developers add support. Apple has introduced four new wallpapers that shift in tone based on whether you’re in Light or Dark mode, as well as the ability to make the phone automatically transition between them on a schedule of your choice. 
I’m looking forward to seeing battery test results for how OLED-equipped iPhone X, XS, and XS Max models perform in both modes.

3. Messages. Just as noted in the iPadOS 13 discussion, iOS 13 includes several messaging improvements that are going to be very popular with users: new Cow, Octopus, and Mouse Animoji; a huge number of (regrettably simple) Memoji customizations; and Memoji Stickers, creating Message-insertable images based on a customized Memoji, using common facial expressions and emoji concepts. For the first time, Messages lets you store a name and photo that can be shared (or not) with selected contacts, rather than relying on your personal phone number or email address, as has been done in the past. It’s a welcome feature that I’d hope to see expand into proper profiles in the future. Android switchers may also prefer typing on a newly swipe-friendly version of the iOS keyboard, which isn’t exclusive to Messages. One addition that will be huge for some people leverages AirPods to let you instantly hear and verbally respond to incoming text messages. Imagine walking and being able to carry on a responsive text conversation with someone without ever looking at your screen; hopefully this feature will come to CarPlay, as well.

4. Health. In addition to adding menstrual cycle tracking — a feature that will especially benefit women seeking to maximize or minimize fertility — the Health app has received a visual overhaul and streamlining. Virtually everything now links out from a Summary screen, which begins with records, favorite data types, and highlights; continues with a snapshot of activity data; and adds health-related promotions: a “Register as an Organ Donor” ad, an “Update Your Medical ID” reminder, and explanations about hearing health and hearing loss. This ties into the Apple Watch’s new Noise app, which samples ambient noise and alerts you if the level goes above 90 decibels, a potentially hearing-damaging level.

5. Maps. As mildly interesting as Apple’s updates to Maps sounded on stage, you’ll need to see one of them — “Look Around” — to believe it. What seemed like a knockoff of Google Street View turns out to actually contain some 3D data, such that when you find an area (hint: try San Francisco) with Look Around support, you will not just see a flat 2D image wrapped around a sphere, but rather actual parallaxing objects and people in scenes. It’s not complete 3D — imagine how much data that would take at this photographic level of detail — but it’s undeniably cool. Maps also lets you easily create and access folder-style Collections of favorite places, so you can keep all the locations for an entire trip (or daily collection of activities) in one or more folders for easy future reference.

There are tons of other improvements, such as a mildly improved portrait mode in the Camera app, and very welcome new image layouts in Photos, plus a load of parameter editing tools brought over from the macOS version of the app. Again, most of these types of changes won’t feel like a huge deal unless you’ve been waiting for years to see them appear, in which case you’ll be pleased if not thrilled by how Apple has integrated them into iOS.

watchOS 6: More faces, more apps, more freedom

Just like iOS 13, watchOS 6 isn’t so much a breakthrough release as another step down the road to making the Apple Watch a handy device for more types of users.

1. Faces. 
If you love analog watches, you may be happy to know that Apple has added five — California, Gradient, Modular Compact, Numerals Mono, and Solar Dial — to watchOS 6, each a mild spin on something that’s come before. Solar Dial is the most unique, spotlighting the current solar or lunar position, with an analog/digital inset. The digital Numerals Duo adds Western Arabic, Eastern Arabic, and Devanagari options for its futuristic extra-large numbers, with color and outline customizations to change up the look. As with most prior watchOS face upgrades, the new designs largely aren’t fantastic, but some people will like them, and they’re better than nothing.

2. Apple Calculator and tips. Rather than just porting the iOS Calculator app directly to the Watch, Apple has added a simple tip calculation button to the familiar UI, enabling you to use the Digital Crown to dial in the tip percentage for your bill, then divide the total by the number of people splitting the check. While tip calculation isn’t new — people have been making apps like this for years — having it built into the Watch will enable more people to take advantage of it, and could make the Watch (rather than the more popular smartphone tip calculator) the one seen in public settings.

3. Other new pre-installed apps. Apple has added the Audiobooks player app, the menstrual tracking app Cycle Tracking, the ambient noise sampling app Noise, the conversation recorder Voice Memos, and a Now Playing icon for easier access to music stored on the device itself. You can easily understand why any of them might be useful in specific situations, though we’re rapidly reaching the point where users should be able to turn off individual first-party apps they don’t want to see on the Watch’s increasingly cluttered Home screen.

4. App Store. watchOS apps have generally been a bust for a lot of different reasons, but Apple’s taking another swing at the concept by building the Watch its own on-device app store, giving developers the opportunity to craft (and sell) apps with no dependence on the iPhone for processing or data. Apps can now be downloaded directly from the Watch through an interface that’s as easy to use and navigate as the iOS App Store, which means this could be absolutely huge for Watch development … or another disappointment, like the Apple TV and Mac App Stores. Time will tell.

5. Freedom. There’s a sense that the Apple Watch won’t need to depend on the iPhone as much going forward; in addition to the App Store and apps, which will be able to update over the air without assistance, you may be able to update watchOS itself without using the iPhone’s Watch app — one of the most tedious and annoying elements of using the Watch today. This isn’t possible in the first developer beta, and there won’t be any public beta, so this mightn’t be fully apparent until watchOS 6.0.1 or 6.1 releases later this year, but if it happens, hallelujah.

Early thoughts

As a daily iPhone and Apple Watch user, I’m pleased rather than excited by the changes that are coming to the two devices I have on hand at all times. Once again, Apple is refining what’s good rather than really making any huge changes, and I’m not wholly thrilled with that — I’d honestly like some bona fide excitement from at least one of these platforms this year. Perhaps there will be some new hardware changes that will see the iPhone 11 and Apple Watch Series 5 really stand out in the fall. 
Or Apple might hold off on the big tweaks until next year’s iPhones, which are expected to add 5G support, a feature I wouldn’t hold my breath on for the Apple Watch. That said, if you’re capable of being content with something that’s being polished to a piano-like gloss, you’ll enjoy iOS 13 and watchOS 6. So far, they’re shaping up to be solid updates, and like iPadOS 13 and macOS Catalina, it will quickly become hard to imagine going back to their predecessors once you’ve become familiar with all the new features. "
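For the curious, the tip-splitting math described in the watchOS 6 hands-on above is simple enough to sketch in a few lines. This is a hypothetical toy function with made-up figures, not Apple's implementation:

```python
# Toy sketch of the Watch Calculator's tip flow: dial in a tip percentage,
# then divide the grand total by the number of people splitting the check.
# Hypothetical function and sample figures -- not Apple's code.
def split_check(bill: float, tip_percent: float, people: int) -> float:
    total = bill * (1 + tip_percent / 100)
    return round(total / people, 2)

print(split_check(84.00, 18, 4))  # -> 24.78 per person
```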
1,252
2,019
"watchOS 6 adds App Store, Voice Memos, and new faces to Apple Watch | VentureBeat"
"https://venturebeat.com/2019/06/03/watchos-6-adds-app-store-voice-memos-and-new-faces-to-apple-watch"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages watchOS 6 adds App Store, Voice Memos, and new faces to Apple Watch Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. If there’s any consistent narrative for the four-year-old Apple Watch , it’s been a slow but steady march from iPhone dependence to independence — a process aided as much by chip improvements as new operating system releases. Today at WWDC 2019, Apple introduced watchOS 6 , a major update that could help some Apple Watch users cut the iPhone cord for apps, while building on most of the key features users have come to know and like or love. Apple’s own Apple Watch app selection hasn’t been bad since even before watchOS 5 , and it’s growing today with the addition of a Voice Memos app for recording conversations, a Calculator app for doing quick math problems, and an Audiobooks app for audiobook playback. Within the OS itself, Siri will gain the ability to display full webpage results in response to user queries. The company is also adding a handful of mildly interesting new watch faces to watchOS 6, though it’s regrettably still not allowing third-party developers to roll their own — one of the platform’s biggest and longest-running omissions. The new Apple faces include a sundial-styled Solar analog watch, a space-maximizing analog option called Modular Compact, an analog mix of Roman and Arabic numerals called California, new extra-large digital options, and Gradient, which places a colored gradient in the background, either in full screen or as a circle with four complications. Perhaps the biggest new watchOS 6 feature is support for apps that run independently of an iPhone — including the addition of an App Store app directly on the device. In addition to operating fully without iPhone help, users will be able to download Watch software without relying on a nearby iPhone. Though third-party Apple Watch apps haven’t quite taken off to the extent Apple (or users) might have hoped, the convenience of being able to add new features anywhere might help. Apple is adding Activity Trends to the watch, allowing you to see how your activity and workouts are doing over time, and providing coaching for long-term success. A Noise app will let users measure ambient noise levels, while new complications will let you quickly check hearing aid battery life and rain data. On the health side of things, Apple is adding Cycle Tracking, which will let women track menstrual cycles, to both watchOS and iOS. 
Additionally, the company noted that a new iOS 13 Health app will use machine learning to surface “highlights” from your Watch’s health data that will actually be valuable to you. The company also announced “summer” bands, including a Pride Edition Sport Loop that matches the latest Pride watch face. Apple is also adding several new single-colored Sport Band and Sport Loop options.

watchOS 6 will support Apple Watch Series 1 and later devices, though the new full-screen watch faces and other features may require the Apple Watch Series 4 or later. The beta is available for registered developers today from Apple’s developer portal, with a public release coming in the fall; no public beta is expected to be offered. "
1,253
2,019
"Apple debuts iOS 13 for iPhones and iPods, splits off iPadOS for iPads | VentureBeat"
"https://venturebeat.com/2019/06/03/apple-debuts-ios-13-for-iphones-and-ipods-splits-off-ipados-for-ipads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple debuts iOS 13 for iPhones and iPods, splits off iPadOS for iPads Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Apple’s annual operating system cadence has guaranteed a steady stream of June betas and September final releases for years of WWDCs, so it’s no surprise that the company is today announcing iOS 13 , the latest major version of its core software for mobile devices. The new release promises dramatic performance improvements under the hood, improving Face ID recognition times by 30%, reducing app sizes, and introducing Dark Mode. But there is a surprise: Starting today, iOS 13 will be for iPhones and iPod touches, while a separately tailored version for iPads will be known as iPadOS. If the new iPadOS name strikes you as unusual, bear in mind that iOS’s original name, iPhone OS, lasted through mid-2010, even as the original iPad was getting started. Apple switched it to “iOS” in June 2010 to acknowledge that the same software was running on iPhones, iPods, and iPads. Having previously debuted on macOS Mojave last year, iOS 13’s new Dark Mode effectively inverts user interface elements across all apps, making backgrounds black and dark gray rather than white and light gray, while turning black text white. Dark Mode could save power on iPhones with OLED screens, and may also make all compatible iOS devices easier on the eyes in dimly lit environments. There are numerous small changes, such as Time-Synced Lyrics in Music, a new QuickPath keyboard supporting swipes as well as taps, and rich text composition in Mail. Apple is also adding significant privacy features, including an “allow just once” sharing of locations for an app, Wi-Fi/Bluetooth protections, and a tracking-free Apple Sign In option — complete with the option of automatically generated forwarding email addresses, which are separately issued for individual apps and can be turned on or off as necessary. The Reminders app, which previously occupied its full main screen with one of multiple (and possibly empty) to-do lists, will make better use of the screen by segmenting it into today’s tasks, scheduled tasks, flagged tasks, and all tasks, then letting users drill down into each screen to add more items. Smart lists and deeper links are being added to the app, as well. 
In addition to Apple’s ongoing improvements to its own maps’ details, expected to appear across the United States by the end of 2019, the Apple Maps app is also receiving a series of quality-of-life improvements designed to make navigation easier. The new launch screen features a simplified interface for flagging key or frequently visited destinations, and for creating groups of locations like “favorite restaurants.” Apple is also offering its own version of Google’s Street View, called Look Around, which smoothly transitions between 2D photos of scenes using 3D previews.

Apple’s Health app is receiving a refresh, including a new version of the main Today page that better emphasizes daily activity — previously a small sliver of multiple colored bars below a week-long calendar at the top of the screen. Health is improving support for menstrual cycle tracking and adding hearing health measures, including information on environmental noise and the volume of headphones.

Messages, Apple’s unified app for text, audio, photo, video, and AR messaging, is making a major change this year with the option to associate a display name and photo with your phone number and email address, plus controls over who will see those details. Also, Animoji and the more detailed Memoji (cartoony AR representations of emoji characters and people) can now be easily shared as one-frame stickers rather than data-eating videos. Stickers are created instantly for your use, not only in Messages but also in other apps. And users can create Memoji Stickers on any iOS 13 device with an A9 chip or later.

Camera’s Portrait Lighting feature is being updated with a High-Key Mono effect and facial smoothing options, while the Photos app’s editing is adding vibrance, white balance, sharpen, definition, vignette, and noise reduction adjustments — as well as the ability to apply filters and effects to videos, not just photographs. The new Photos app will use machine learning to reduce duplicates within your library, and create a better Days view tab that presents images in a deliberately uneven grid that emphasizes some images over others.

On the Siri and AirPods front, iOS 13 also enables AirPods to instantly convey new incoming text messages and allow you to respond to them. Users will be able to use device proximity to instantly share songs they’re listening to, including Handoff of a currently playing song from an iPhone to a HomePod. And HomePod will be able to recognize different users’ voices, customizing responses to each of them. CarPlay in iOS 13 is getting a dashboard with redesigned calendar and music apps, plus a persistent Siri bar that works with third-party apps including Pandora and Waze. Siri Shortcuts is now built directly into iOS 13 rather than separately downloaded, and arrives with “suggested automations” so you can get started quickly. Siri’s updated voice uses neural text-to-speech and is generated entirely by software rather than assembled from tiny clips of human voices.

iPadOS

While all of the above mobile OS improvements will be shared across iOS devices, several are specific to iPadOS. There’s a new home screen design, updating the classic grid and dock that have remained virtually unaltered since the first iPad debuted in 2010. The new home screen lets you see the grid alone, or add widgets to the left side of the screen. 
iPadOS also includes an improved multitasking interface, building on Split View mode with the ability to flip through apps within the Split View bar, open multiple instances of the same app, and show App Exposé on the iPad, so you can see all open apps on the screen at once. The iPad will also gain the ability to function as a second screen for a Mac — complete with cursor-style input via an Apple Pencil.

The Files app is adding Column view, a Mac-like way to dig through lists of files, including file previews and metadata. SMB file sharing and iCloud Drive folder sharing have been added, as has support for plugging in and reading thumb drives, external disk drives, and SD cards. Direct import into individual apps such as Lightroom, plus archive zipping and unzipping, has also been added.

Safari on iPad is getting a desktop-class browsing mode, with desktop sites, Google Docs, and other key web apps getting full automatic touch support on the iPad. Additional quality-of-life UI improvements are being made across iPadOS as well: Custom fonts will be available from all the major font providers, the cursor selection tool is being made simpler — just point — and three-finger swipes to the left and right will be used for cutting and pasting across multiple apps. Latency on the Apple Pencil is being dropped from 20ms down to 9ms through optimizations, and third-party apps will be able to use a new PencilKit API to add markup and drawing tools — with the ability to move and pin that collection of tools to the sides or bottom of the screen. A new floating iPhone-style mini keyboard will become available on larger full-screen displays, so iPad users needn’t give up the entire bottom of the screen for a large keyboard anymore.

The new iOS and iPadOS releases both carry the number 13, and are available today to registered developers through Apple’s portal. Public betas are due in July, with final releases in the fall. iOS 13 will run on the iPod touch 7 and iPhone 6S and later, while iPadOS 13 will run on the iPad Air 2 and later, iPad 5 and later, iPad mini 4 and later, and all iPad Pro models. "
1,254
2,018
"Apple releases macOS Mojave with Dark Mode, Apple News, and HomeKit | VentureBeat"
"https://venturebeat.com/2018/09/24/apple-releases-macos-mojave-with-dark-mode-apple-news-and-homekit"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Apple releases macOS Mojave with Dark Mode, Apple News, and HomeKit Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As promised during September 12’s Gather Round event in Cupertino, Apple today released macOS Mojave — aka macOS 10.14 — to the general public. Mojave was originally revealed in beta form during Apple’s Worldwide Developers Conference in June 2018, and has since evolved through 11 developer betas into today’s final release. Though Mojave is substantially focused on under-the-hood improvements, it includes several major changes to the Mac’s Finder, as well as a small collection of apps that were ported from iOS. On the Finder side, Apple has introduced a system-wide Dark Mode, which optionally reskins the entire user interface with black or dark gray elements. Dark Mode pairs up with Dynamic Desktop, which can automatically adjust certain desktop images in sync with time of day (morning, afternoon, and evening) changes. Mojave includes macOS-optimized versions of Apple News, Home, Stocks, and Voice Memos, which now look and work nearly the same across iPads and Macs. From a consumer perspective, these previously iPhone- or iPad-specific apps add extra features to the Mac, notably including the platform’s first official support for HomeKit accessories through the Home app. On the developer side, these apps served as internal Apple tests of an initiative called Marzipan, which over the next few years will enable iOS apps to be ported to the Mac with minimal UI changes. Smaller quality-of-life additions make Mojave a nicer OS to use every day. A new feature called Desktop Stacks auto-arranges common desktop items into stacks based on content, date, or tag, so screenshots all appear as a single-icon pile rather than filling your desktop with clutter. Additionally, Gallery View in the Finder lets you preview and make minor photo adjustments to items in folders without opening the Photos app. Less conspicuously, Mojave also includes a wide variety of new security enhancements, including lockdowns of camera and microphone access, enhanced protection of private data, and new Safari features to stop cross-site tracking and user fingerprinting. It also features an updated version of the Mac App Store, using the “editorially curated” design and related features pioneered in the iOS App Store. 
Mojave can be installed on Macs “introduced mid-2012 or later,” including the following models:

- MacBook (Early 2015 or newer)
- MacBook Air (Mid 2012 or newer)
- MacBook Pro (Mid 2012 or newer)
- Mac mini (Late 2012 or newer)
- iMac (Late 2012 or newer)
- iMac Pro (2017)
- Mac Pro (Late 2013)
- Mac Pro (Mid 2010 and Mid 2012, if equipped with a Metal-capable GPU)

It’s unclear whether today’s release of Mojave will include support for Group FaceTime, an update to the FaceTime app designed to permit dozens of people to participate simultaneously in voice or video calls. The feature was yanked from iOS 12.0 and Mojave betas at a relatively late stage of development, and is expected to be added back at some point in the fall — potentially at a rumored October Mac and iPad event.

macOS Mojave can be downloaded now from the Mac App Store. The 5.7GB operating system is available worldwide as a free upgrade for Mac users. "
1,255
2,019
"Mozilla updates Common Voice dataset with 1,400 hours of speech across 18 languages | VentureBeat"
"https://venturebeat.com/2019/02/28/mozilla-updates-common-voice-dataset-with-1400-hours-of-speech-across-19-languages"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mozilla updates Common Voice dataset with 1,400 hours of speech across 18 languages Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Mozilla wants to make it easier for startups, researchers, and hobbyists to build voice-enabled apps, services, and devices. Toward that end, it’s today releasing the latest version of Common Voice, its open source collection of transcribed voice data that now comprises over 1,400 hours of voice samples from 42,000 contributors across 18 languages, including English, French, German, Dutch, Hakha-Chin, Esperanto, Farsi, Basque, Spanish, Mandarin Chinese, Welsh, and Kabyle. It’s one of the largest multi-language dataset of its kind, Mozilla claims — substantially larger than the Common Voice corpus it made publicly available eight months ago, which contained 500 hours (400,000 recordings) from 20,000 volunteers in English — and the corpus will soon grow larger still. The organization says that data collection efforts in 70 languages are actively underway via the Common Voice website and mobile apps. “From the onset, our vision for Common Voice has been to build the world’s most diverse voice dataset, optimized for building voice technologies,” the company wrote in a blog post. “Since we enabled multi-language support … Common Voice has grown to be more global and more inclusive.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Common Voice — which can be can be integrated into DeepSpeech, a suite of open-source speech-to-text, text-to-speech engines, and trained models maintained by Mozilla’s Machine Learning Group — consists not only of voice snippets, but of voluntarily contributed metadata useful for training speech engines, like speakers’ ages, sex, and accents. Collecting it — and the snippets themselves — requires a lot of legwork: the speech prompts on the Common Voice website have to be translated into each target language. In an effort to streamline the process, Mozilla’s this week rolling out an improved Common Voice web tool with simplified prompts that vary clip-to-clip, plus new controls for reviewing, re-recording, and skipping clips; a toggle that quickly switches between the dashboard’s “speak” and “listen” modes; and an option to opt-out of speech sessions. Additionally, it’s debuting new profile functionality that allows users to keep track of their progress and metrics across languages and add demographic information. 
Mozilla says that in the coming months, it’ll experiment with different approaches to “increase the quantity and quality of data [collected],” both through community efforts and “new partnerships.” And it says that eventually, it plans to use some of the recordings to develop voice-enabled products. (It’s already demonstrated that DeepSpeech, when trained on Common Voice data supplemented with other sources, can transcribe lectures, phone conversations, television programs, radio shows, and other live streams with “human accuracy.”) But the company contends that the ultimate goal is to provide “more and better [speech] data” to those who seek to “build and use voice technology.”

“Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem,” it added. “The Common Voice Website is one of our main vehicles for building voice data sets that are useful for voice-interaction technology. The way it looks today is the result of an ongoing process of iteration. We listened to community feedback about the pain points of contributing while also conducting usability research to make contribution easier, more engaging, and fun.” "
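As a rough illustration of the DeepSpeech side of that pipeline, the sketch below transcribes a single clip. It assumes the deepspeech Python package's later (0.7-era) API, in which Model takes just a model path (earlier releases used different constructor arguments), plus a hypothetical local model file and a 16 kHz mono WAV:

```python
# Sketch: transcribing one clip with Mozilla's DeepSpeech engine.
# Assumes the deepspeech package's 0.7-era API and a hypothetical model file.
import wave
import numpy as np
from deepspeech import Model

model = Model("deepspeech-models.pbmm")  # hypothetical local model path

with wave.open("clip.wav", "rb") as wav:  # expects 16-bit, 16 kHz mono audio
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))  # -> the transcript as a plain string
```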
1,256
2,019
"Google launches TensorFlow 2.0 alpha with fewer APIs | VentureBeat"
"https://venturebeat.com/2019/03/06/google-launches-tensorflow-2-0-alpha-with-fewer-apis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches TensorFlow 2.0 alpha with fewer APIs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The world’s most popular open source framework for machine learning is getting a major upgrade today with the alpha release of TensorFlow 2.0. Created by the Google Brain team, the framework is used by developers, researchers, and businesses to train and deploy machine learning models that make inferences about data. A full release is scheduled to take place in Q2 2019. The news was announced today at the TensorFlow Dev Summit being held at the Google Event Center in Sunnyvale, California. Since the launch of TensorFlow in November 2015 , the framework has been downloaded over 41 million times and now has over 1,800 contributors from around the world, said TensorFlow engineering director Rajat Monga. TensorFlow maintains open source projects with the largest number of contributors on GitHub, according to the 2018 Octoverse report. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! TensorFlow 2.0 will rely on tf.keras as its central high-level APIs to simplify use of the framework. Integration with the Keras deep learning library began with the release of TensorFlow 1.0 in February 2017. A number of APIs seen as redundant — such as the Slim and Layers APIs — will be eliminated. “In 2.0, we just sort of decided OK, we’re just going to stick to Keras — not have two different APIs that you can do almost the same things [with]. And so Keras is front and center, and all the other APIs go away,” he said. Improvements to runtime for Eager Execution , a platform for experimentation and research with machine learning, are also on the way with TensorFlow 2.0. Eager Execution was first introduced last year. TensorFlow 2.0 is “Eager-first,” meaning it uses Eager execution by default, so ops run immediately when they’re called. “We used to work with only graphs, and then about a year ago we launched Eager execution, in addition to graphs. So with 2.0, we’ve really put that front and center and said, OK, you can combine these two, which gives you the flexibility and ease of use of Python, along with really nice APIs,” Monga said. TensorFlow Federated for training models in different locations, the TensorFlow Privacy library with privacy guarantees, and Coral Board for edge computing made their debut today as well. 
Also introduced today: TensorFlow Lite 1.0 for mobile developers, TensorFlow with Swift version 0.2 for Apple programmers, and TensorFlow.js 1.0 for JavaScript. TensorFlow.js has seen 300,000 downloads and 100 contributors, Google announced today. Support for JavaScript and Apple’s Swift programming language were announced at TensorFlow Dev Summit one year ago. To help developers and people interested in learning how to use TensorFlow 2.0, training courses from Sebastian Thrun’s Udacity and Andrew Ng’s deeplearning.ai are being launched today. Thrun and Ng teach popular online learning courses for machine learning that have attracted hundreds of thousands of users. A Fast.ai course was also introduced today for TensorFlow with Swift.

The evolution of TensorFlow

It’s been more than two years since Google first made TensorFlow 1.0 publicly available, and many changes have taken place to support the work of AI practitioners in that time. The most recent major addition may be TensorFlow Datasets, a collection of ready-to-use public research datasets, which was released last week. Roughly 30 popular datasets are available at launch.

Happy 3rd birthday TensorFlow! We’ve come a long way since the first release in 2015 & TensorFlow wouldn’t be the framework it is today without you. As we work on #TensorFlow20, look at all the features we’ve added over the years to make TensorFlow easier to use. #HappyBirthdayTF pic.twitter.com/hLoHQnQLkn — TensorFlow (@TensorFlow) November 9, 2018

Monga said that the most significant changes made since the release of 1.0 include TensorFlow Lite; TensorFlow Hub, a central repository for reusable machine learning modules; and the Tensor2Tensor library of deep learning models for researchers. The TensorFlow Probability Python library for researchers using machine learning was also an important step forward, he said. A number of libraries and frameworks built on top of TensorFlow have also been introduced, like Agents for reinforcement learning and TFGAN for generative adversarial networks.

Google has also gradually opened up access to TensorFlow Extended, a tool used internally at Google that developers can use to manage models, preprocess their data, and better understand what’s happening with their models while training. “Over the last year, we’ve slowly been putting out pieces, and now we’re actually releasing that entire thing as a way to orchestrate that and [let you] really manage your entire ML pipeline together. It really shows the extension to the full platform in being able to do whatever you want with ML,” Monga said.

Introduced in September 2017, TensorBoard allows developers to observe visualizations of their AI models while they’re training. "
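The TensorFlow Datasets collection mentioned above is similarly compact to use. A minimal sketch, assuming the tensorflow-datasets package and its 'mnist' entry:

```python
# Minimal sketch: pulling one of the ready-to-use TensorFlow Datasets.
import tensorflow_datasets as tfds

ds = tfds.load("mnist", split="train", shuffle_files=True)
for example in ds.take(1):  # examples arrive as dicts of tensors
    print(example["image"].shape, example["label"])  # (28, 28, 1) and a digit
```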
1,257
2,018
"Intel open-sources HE-Transformer, a tool that allows AI models to operate on encrypted data | VentureBeat"
"https://venturebeat.com/2018/12/03/intel-open-sources-he-transformer-a-tool-that-allows-ai-models-to-operate-on-encrypted-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel open-sources HE-Transformer, a tool that allows AI models to operate on encrypted data Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As any data scientist will tell you, datasets are the lifeblood of artificial intelligence (AI). That poses an inherent challenge to industries dealing in personally identifiable information (e.g., health care), but encouraging progress has been made toward an anonymized, encrypted approach to model training. Today at the NeurIPS 2018 conference in Montreal, Canada, Intel announced that it has open-sourced HE-Transformer , a tool that allows AI systems to operate on sensitive data. It’s a backend for nGraph , Intel’s neural network compiler, and based on the Simple Encrypted Arithmetic Library ( SEAL ), an encryption library Microsoft Research also released in open source this week. The two companies characterized HE-Transformer as an example of “privacy-preserving” machine learning. “HE allows computation on encrypted data. This capability, when applied to machine learning, allows data owners to gain valuable insights without exposing the underlying data; alternatively, it can enable model owners to protect their models by deploying them in encrypted form,” Fabian Boemer, a research scientist at Intel, and Casimir Wierzynski, Intel’s senior director of research, wrote in a blog post. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The “HE” in HE-Transformer is short for homomorphic encryption, a form of cryptography that enables computation on ciphertexts — plaintext (file contents) encrypted using an algorithm. It generates an encrypted result that, when decrypted, exactly matches the result of operations that would have been performed on unencrypted text. HE is a relatively new technique — IBM researcher Craig Gentry developed the first fully HE scheme in 2009. And as Boemer and Wierzynski note, designing AI models that use it requires expertise in not only machine learning but encryption and software engineering. HE-Transformer aids in the development process by providing an abstraction layer that can be applied to neural networks on open source frameworks such as Google’s TensorFlow, Facebook’s PyTorch, and MXNet. It effectively eliminates the need to manually integrate models into HE cryptographic libraries. 
HE-Transformer incorporates the Cheon-Kim-Kim-Song (CKKS) encryption scheme and supports addition and multiplication operations, such as add, broadcast, constant, convolution, dot, multiply, negate, pad, reshape, result, slice, and subtract. Additionally, it supports HE-specific techniques, like plaintext value bypass, SIMD packing, OpenMP parallelization, and plaintext operations. Thanks to those and other optimizations, Intel claims that HE-Transformer delivers state-of-the-art performance on cryptonets — learned neural networks that can be applied to encrypted data — using a floating-point model trained in TensorFlow.

“We are excited to work with Intel to help bring homomorphic encryption to a wider audience of data scientists and developers of privacy-protecting machine learning systems,” said Kristin Lauter, principal researcher and research manager of cryptography at Microsoft Research.

Currently, HE-Transformer directly integrates with the nGraph compiler and runtime for TensorFlow, with support for PyTorch forthcoming. Deep learning frameworks that are able to export neural networks to ONNX — such as PyTorch, CNTK, and MXNet — can be used by importing models into nGraph via ONNX and exporting them in a serialized format.

Boemer and Wierzynski said that future versions of HE-Transformer will support a wider variety of neural network models. “Recent advances in the field have now made HE viable for deep learning,” they wrote. “Researchers can leverage TensorFlow to rapidly develop new HE-friendly deep learning topologies.”"
1,258
2,018
"Intel unveils Nervana Neural Net L-1000 for accelerated AI training | VentureBeat"
"https://venturebeat.com/2018/05/23/intel-unveils-nervana-neural-net-l-1000-for-accelerated-ai-training"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel unveils Nervana Neural Net L-1000 for accelerated AI training Share on Facebook Share on X Share on LinkedIn Intel VP and general manager of the AI product group Naveen Rao announces plans to release the Neural Net Processor L-1000 in 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Intel today announced plans to release Nervana Neural Net L-1000, code named Spring Crest, to make it easier for developers to test and deploy AI models. Intel first introduced the Neural Network Processor (NNP) family of chips last fall. Spring Crest will be 3-4 times faster than Lake Crest , its first NNP chip, said Intel VP and general manager of the AI product group Naveen Rao. The Nervana Neural Net L-1000 will be Intel’s first commercial NNP chip and will be made broadly available in late 2019. The news was announced today at Intel’s first-ever AI Dev Con being held at the Palace of Fine Arts in San Francisco. “We also will support bfloat16, a numerical format being adopted industrywide for neural networks, in the Intel Nervana NNP-L1000. Over time, Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs. This is part of a cohesive and comprehensive strategy to bring leading AI training capabilities to our silicon portfolio,” Rao said in a statement. The new addition to the Neural Network Processor family of chips follows the rollout of AI Core , a circuit board with Movidius Myriad 2 Vision Processing Unit to give manufacturers on-device machine learning. This follows the release of the Neural Compute Stick with similar power. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In recent weeks Intel has taken a series of steps to grow its presence among customers interested in the proliferating number of applications of AI. Building upon its Computer Vision SDK, last week Intel released OpenVINO , a framework for visual AI at the edge, and Movidius, a computer vision startup acquired by Intel in 2016 , will be used in 8 million autonomous cars. Earlier this month, Microsoft announced Project Brainwave in preview for acceleration of deep neural network training and deployment powered by Intel’s Stratix 10, a field programmable gate array (FPGA) chip. 
As companies like Nvidia and ARM garner reputations for graphics processing units (GPUs) optimized for image processing, and companies like Google create specialized chips for AI, Intel has been said to have fallen behind with slower general-purpose CPU chips. Intel executives and partners spent much of the morning highlighting improvements to the Xeon CPU line — like a 3x performance boost when working with TensorFlow — and arguing that, since much of the world’s data center capacity runs on Intel processors, Xeon still carries out the majority of the world’s AI training and deployment.

Also announced today: The Intel AI Lab plans to open-source its natural language processing library."
1,259
2,022
"Nautilus Labs raises $34M to optimize ship routes while reducing emissions | VentureBeat"
"https://venturebeat.com/transportation/nautilus-labs-raises-34m-to-optimize-ship-routes-while-reducing-emissions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nautilus Labs raises $34M to optimize ship routes while reducing emissions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s estimated that sea-based shipping accounts for 3% of all human-produced greenhouse gas emissions. If left unchecked, shipping will account for 17% of all emissions by 2050. Some experts peg the inefficiency on legacy structures that impact the entire supply chain. In ocean shipping, ships leave port at high speeds only to slow down before reaching their destination, resulting in fuel waste and excess emissions as well as lost capital for shipowners and charterers. Aiming to tackle the problem with technology, shippers are investing in startups offering software that’s designed to optimize routes in a way that minimizes carbon emissions. For example, Mitsui, one of the largest shipping companies in the world, is collaborating with Silicon Valley-based Bearing.ai on a “smart” routing engine for ships. Another startup, Windward, has shipping customers that use its AI platform to monitor fuel and emissions. Another vendor in the over $2.5 billion route optimization segment is Nautilus Labs , which today announced that it raised $34 million in a series B round co-led by Microsoft Climate Innovation Fund and M12 (Microsoft’s venture arm). The capital, which brings the startup’s total raised to over $48 million, will be put toward developing new product capabilities, hiring talent, and expanding offices in “key shipping hubs” worldwide. Optimizing ship routing New York-based Nautilus was founded in 2016 by Anthony DiMare and Brian O’Clair. O’Clair was previously a software engineer at Google, where he worked on the tech giant’s ad-serving Doubleclick platform. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Nautilus claims its software considers a client’s commercial goals in addition to internet of things sensor readings, arrival and departure times, and weather forecasts, capitalizing on the weather to generate fast routes. Its AI-informed recommendations — which include suggested on-ship generator settings — are sent daily and updated as factors change during a voyage, with the goal of improving fuel efficiency and reducing engine hours. Nautilus CEO Matt Heider says he sees a large addressable market for Nautilus’ technology. Around 90% of traded goods are carried over the waves, and in the U.S. 
“Economic efficiency and environmental efficiency are best solved in unison. Today, we’re able to empower ocean shipping companies with a path to creating the most profitable business — that at the same time helps them reduce carbon intensity immediately,” Heider said in a press release. “The firms winning in the market are mobilizing resources now to adopt a collaborative, data-driven approach to transforming their voyages. By focusing on the underlying economics, they’re stripping wasted fuel and time out of their operations.”

Future growth

The International Maritime Organization, the United Nations agency responsible for regulating shipping, has set in place carbon intensity standards that go into effect in 2023. Meanwhile, the European Union (EU) Emissions Trading Scheme — which aims to incentivize firms to reduce their emissions — will cover ocean commerce starting next year. Both could motivate shippers to invest in solutions like Nautilus’ routing software.

As AIMultiple’s Cem Dilmegani notes, AI models help businesses to analyze existing routing and track route optimization, using “shortest path” algorithms to identify the most efficient route for vessels. “AI systems can schedule the transportation, organize pipelines for cargo, assign and manage various employees to particular stations, and track packages in the warehouse,” he wrote in a blog post. “Route optimizers are also effective tools for reducing corporate carbon footprint.”

Even before the pandemic, ocean freight was undergoing something of a digital transformation. A 2018 iContainers survey found that 75% of shippers believe that digital ocean freight services will become the norm within the next five years, while 67% said that they’ve begun to see signs that the sector has started its technological evolution.

NSS Advisors, SystemIQ, Root Ventures, Quiet Capital, TMV, and Amplifier also participated in Nautilus’ series B. The startup partners with companies including TotalEnergies, Eastern Pacific Shipping, and Emirates Shipping Line, and recently announced the addition of London to its hubs in Singapore and Paris."
1,260
2,022
"Decentriq raises $15M to expand its data clean rooms platform | VentureBeat"
"https://venturebeat.com/security/decentriq-raises-15m-to-expand-its-data-clean-rooms-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Decentriq raises $15M to expand its data clean rooms platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data clean rooms, a buzzy term that’s gained currency among the enterprise lately, refers to a secure environment where customer data is anonymized , processed, normalized, and stored. The concept is designed to balance privacy with utility, helping to aggregate data from different sources and combine it with first-party data to provide insights. For example, in the advertising domain, a data clean room can allow brands to collect data about ad performance on a given platform and use that data to evaluate their campaigns. As data clean rooms rise in popularity — Gartner predicts that by 2023, 80% of advertisers with media budgets of $1 billion or more will utilize data clean rooms — tech giants and startups alike are jumping to make waves in the nascent market. For example, Amazon and Google offer clean room services that enable brands to connect ad campaign performance data to their own data for analysis. Other vendors include Habu and BlueConic. “Privacy laws and tech platform policies are putting new limits on consumer data collection and exchange,” Gartner senior research director Eric Schmitt told VentureBeat via email. “Companies have responded by putting new emphasis on consented, first-party data capture from their prospects and customers. Clean rooms provide a mechanism for companies to make that data available in privacy-safe fashion, by connecting datasets that are sourced from more than one company. Clean room uses vary, but commonly include ad-hoc query and insight generation, as well as ad targeting and measurement.” Zürich, Switzerland-based Decentriq is a newer entrant to the space, having launched in 2019. Founded by Maximilian Groth and Stefan Deml, the company today announced that it raised $15 million in a series A funding round led by Eclipse Ventures with participation from Atlantic Labs, btov Partners, and Paladin Capital Group. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We’re proud of the Decentriq platform that our incredible team has been building for the past year,” Groth said in a statement. 
“Our data clean rooms enable companies to collaborate on the most restricted and sensitive data in a secure … manner to unlock new value and deeper insights.” Securing customer data Groth, the CEO of Decentriq, previously led business development at big data analytics company Teralytics. Deml also worked at Teralytics, where he was VP of product development and a tech lead, and contributed to the Ethereum Foundation — the nonprofit organization dedicated to supporting the cryptocurrency Ethereum — as a cryptography engineer. “Through our previous roles, [Deml and I] identified a clear market need for enterprises to share and combine their data sets, to enable collaboration and innovation,” Groth told VentureBeat via email. “The advantages that could be gained for all parties in the ability to more freely share data are self-evident, but there was no solution at the time for how this could be done safely, without breaching data privacy regulations or putting sensitive information at risk. This is the challenge Stefan and I sought to solve when we launched Decentriq in 2019, by creating a tech platform that is the equivalent of a ‘Switzerland for data,’ where security and privacy is guaranteed.” Decentriq claims to use “encryption-in-use” technology including confidential computing to “ensure that no one but the data owner can access their raw data uploaded onto the platform.” This technology, the startup avers, can enable users across companies in banking, insurance, health care and life sciences, market research, media and advertising, and retail and consumer packaged goods to work with sensitive data while preserving privacy and adhering to regulatory standards like the European Union’s General Data Protection Regulation and the California Consumer Privacy Act. Confidential computing is a cloud computing technology that isolates sensitive data in an encrypted CPU enclave during processing. The contents of the enclave — including the data being processed and the techniques used to process it — are, in theory, accessible only to authorized programming code. “We use AI in combination with our state-of-the-art encryption and privacy-enhancing technologies. It plays an enabling role, helping us to preserve privacy at scale,” Groth explained. “There is a big mental shift taking place today on the tools for creating a trusted partnership. Historically, this has always been done on paper — i.e., with contracts. Going forward, technology will be used as the safer, more efficient and more reliable way to create a trusted collaboration between two partnerships. Technological mechanisms for guaranteeing trust are simply far more reliable than promises, and they are also much quicker to establish worldwide, and at scale.” Companies using Decentriq can connect existing data analytics tools to the startup’s data clean room platform. Then, they can upload data from sources including static files, local databases, and cloud storage repositories. For Swisscom, the Swiss telecommunications provider, Decentriq created a product for data sharing, benchmarking, and “sensitive” user surveys. Another customer, medical tech manufacturer LynxCare, is using Decentriq to combine and analyze data across multiple hospitals on the public cloud, according to Decentriq. Expanding confidential computing Beyond data clean rooms, Decentriq is riding a broader confidential computing wave along with players including Intel, IBM, Meta (formerly Facebook), and Microsoft. 
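Whatever the underlying enclave technology, the policy-level guarantee of a clean room is easy to see in miniature. The sketch below is purely illustrative (it is not Decentriq’s implementation and ignores the encryption layer entirely): two parties’ records are joined on a shared key inside the “clean room,” and only aggregates over sufficiently large cohorts are ever released.

```python
from collections import defaultdict

# Party A: an advertiser's first-party purchase data; Party B: a publisher's ad exposures.
purchases = {"u1": 120.0, "u2": 45.0, "u3": 60.0}   # user_id -> spend
exposures = {"u1": "campaign_x", "u2": "campaign_x", "u3": "campaign_y"}

def clean_room_report(purchases, exposures, min_cohort=2):
    """Join on user_id inside the clean room; release only per-campaign aggregates."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user, campaign in exposures.items():
        if user in purchases:
            totals[campaign] += purchases[user]
            counts[campaign] += 1
    # Suppress small cohorts so no individual row can be inferred from the output.
    return {c: totals[c] for c in totals if counts[c] >= min_cohort}

print(clean_room_report(purchases, exposures))  # {'campaign_x': 165.0}; campaign_y suppressed
```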
The startup was one of several to join the Confidential Computing Consortium, an industry group established by the Linux Foundation in 2019 to improve security for data in use. A recently published Everest Group report found that the confidential computing market could grow to $54 billion by 2026, fueled by enterprise cloud security initiatives and expanding regulations, especially in privacy-sensitive industries such as healthcare and financial services.

“[S]haring sensitive data among organizations and combining it with public information has the potential to become a multi-trillion dollar industry. We have observed the early adopters in this space, in particular social media and advertising companies, capitalize on this market,” Groth said. “However, increasingly we are seeing organizations across all industries explore the opportunities in their sector, whether they are a health care company or a financial institution. Very soon, those who don’t take the opportunity to securely collaborate on data sets will be at a disadvantage.”

Twenty-employee Decentriq claims that 20 of the largest pharmaceutical companies in the world have been collaborating in a Decentriq-built data clean room, and counts organizations like the Swiss Army and Mobiliar Insurance among its customers. The capital from the latest funding round, which brings Decentriq’s total sum raised to over $18 million, will be used to expand the types of analytical tools available on the platform, introduce features including support for synthetic data and differential privacy, expand to new markets, and grow Decentriq’s team, Groth says.

“There are companies operating in the same sphere as us, such as InfoSum and Duality Technologies. However, what sets us apart is the combination of confidential computing and privacy-enhancing technologies. By combining encryption of data in use through the latest confidential computing technologies, while protecting output data privacy with differential privacy and synthetic data generation, we can guarantee the highest level of security and confidentiality,” Groth said. “This means that our customers don’t need to trust each other, or even Decentriq itself; the data is completely obscured — not even we can see it. This unlocks use cases for collaboration and analysis that have simply never been possible before. At the same time, our platform does not compromise on usability, data utility and scale — the analysis would be useless without the ability for organizations to easily extract meaningful insights.”"
1,261
2,022
"Brew, which develops AI-powered marketing analytics software, raises $12M | VentureBeat"
"https://venturebeat.com/marketing/brew-which-develops-ai-powered-marketing-analytics-software-raises-12m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Brew, which develops AI-powered marketing analytics software, raises $12M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Companies developing artificial intelligence (AI)-powered marketing tools typically claim that their solutions drive strategic decision-making better than software without an algorithmic component. But — as is often the case — the reality is more complicated. AI learns to make predictions from large amounts of high-quality data , and so can be hamstrung (e.g., make mistakes) if that data is not available. The complex nature of marketing stacks, which sprawl across disparate, disconnected systems, can put up logistical roadblocks to implementation. Brew, a Tel Aviv, Israel-based strategic marketing platform, claims its approach is different from the rest in that it’s more holistic. The company says that it uses AI to automatically map marketing activities, providing “customer-specific” strategic views of a market and a given company’s position in it. In a show of investor enthusiasm, Brew recently raised $12 million in an oversubscribed seed round led by Aleph and MizMaa with participation from Gefen Capital. With the investment, which was announced today. Brew says that the new capital allows the company to expand the platform across the North American, European, and Middle East and North Africa markets while growing Brew’s R&D and go-to-market teams. Automating marketing activities Maayan Levy, the CEO of Brew, founded the company in 2019 with Raviv Ventura, Ronen Idrisov and Gabriel Amram. Ventura previously cofounded Zoliro, a startup that allowed event organizers to create digital “conference bags” filled with promotions from sponsors. Amram served over six years in the Israel Defense Forces before becoming the vice president of R&D at Zoliro. As for Idrisov, he led product development and data efforts at online advertising tech companies Sizmek and Innovid. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “More companies are adopting an agile, digitally led mindset, but transformation needs to be sustainable as companies try to keep up with business, market, and customer needs in an ever- (and faster-) changing market landscape,” Levy told VentureBeat via email. “[T]he focus has [historically] been around building and optimizing the siloed effort, [but] this has now been commoditized. 
The market makers and category leaders will be those who make all these different aspects, from analytical to creative efforts, work in sync and complete alignment to meet the changing business priorities and create lasting commercial impact.”

The idea behind Brew is to help brands gauge the big picture of their go-to-market progress while identifying gaps and opportunities in ongoing efforts, according to Levy. He says that marketers are increasingly shifting away from focusing on “data-driven” approaches to more targeted, long-term forms of outreach. Pointing to a recent survey from Deloitte, Duke University, and the American Marketing Association, Levy asserts that most marketers feel pressure to prove impact quantitatively in the short term but qualitatively with regard to long-term strategic impact.

Levy claims that the algorithms powering Brew were trained on data from “billions” of marketing initiatives from “millions” of sources, enabling users to explore different markets, audiences, topics, and verticals to see which marketing approaches worked best in specific circumstances. Brew also lets marketers compare strategies against competitors and the broader industry, measuring aspects of campaigns including messaging and brand value.

“Brew looks at the entire World Wide Web and builds a graph containing all entities and activities in any vertical and geography,” Levy explains. “This includes proprietary entity and topic extraction algorithms … The graph also works as the training set for the rest of the models, and as a layer of verification to prevent data skews and biases. Brew has built a model that [sorts] any activity — be it news, content, campaigns, website, PR, [or] live events — into six core dimensions — target vertical, audience and geography across topic, company, and channel — creating a shared strategic language that is the basis of all marketing and sales activities and is key to exploring, ideating and measuring market progress from the strategic and unified perspective.”

Predicting marketing success with AI

Can AI predict the success of a marketing campaign? Levy claims it can, but not everyone believes so. As a recent Harvard Business Review piece points out, an AI prediction system believed to be accurate could be detrimental if, for example, it unreliably forecasts the sales of low-volume products while reliably forecasting sales of low-margin products. AI systems can also give false positives (for instance, identifying customers who actually stay as probable defectors) or false negatives (identifying customers who subsequently leave as unlikely defectors).

Forbes contributor David Gal, who also works as a professor of marketing at the University of Illinois at Chicago, points to studies showing that AI’s ability to predict who’s likely to buy a product remains low. One recent Facebook campaign only managed to increase the likelihood that someone who saw an ad would buy the advertised product from about 1 in 10,000 to about 1.5 in 10,000. Another paper implied that the use of more sophisticated models yielded only very slight improvements over a simple model in the ability to predict people’s credit card choices — so slight that it was likely a waste of effort.

Levy acknowledges that businesses must have clear expectations — and plans — before adopting and deploying predictive software for marketing. Still, he avers that 22-employee Brew not only is accurate in its predictions, but stands above rival products (like Alembic) in this regard.
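Brew’s extraction models are proprietary, but the flavor of sorting raw marketing activity into a fixed set of strategic dimensions can be sketched with simple keyword rules. The snippet below is a hypothetical stand-in for learned entity and topic extraction, not Brew’s actual approach:

```python
# Hypothetical keyword rules standing in for learned entity/topic extraction models.
DIMENSION_RULES = {
    "channel":  {"webinar": "events", "blog": "content", "ad": "paid media"},
    "audience": {"cio": "executive", "developer": "practitioner"},
    "vertical": {"bank": "financial services", "hospital": "health care"},
}

def tag_activity(description: str) -> dict:
    """Sort one marketing activity into coarse strategic dimensions via keyword matching."""
    text = description.lower()
    tags = {}
    for dimension, rules in DIMENSION_RULES.items():
        for keyword, label in rules.items():
            if keyword in text:
                tags[dimension] = label
                break
    return tags

print(tag_activity("Webinar for CIOs at regional banks"))
# {'channel': 'events', 'audience': 'executive', 'vertical': 'financial services'}
```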
There’s certainly no shortage of potential customers, in any case, with interest in AI-powered marketing products continuing to climb. According to BrightEdge, 60% of marketers intended to use AI to develop a content marketing strategy in 2018.

“[We have] over 30 customers, all with multiple users, from chief marketing officers to individual stakeholders in sales and marketing. [Our] customers are hyper growth companies to Fortune 500 across multiple verticals and geographies, from enterprise software-as-a-service and cybersecurity, to law firms and investment banking,” Levy said. “The funding will allow us to accelerate the speed at which we bring forward the planned infrastructure developments to expand the platform’s coverage of broader business challenges, based on the core … technological infrastructure already in place.”"
1,262
2,022
"Top AI execs, including Richard Socher, launch AIX Ventures | VentureBeat"
"https://venturebeat.com/entrepreneur/top-ai-execs-including-richard-socher-launch-aix-ventures"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top AI execs, including Richard Socher, launch AIX Ventures Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a sign that the coffers of AI startups need little replenishing, Richard Socher, former chief scientist at Salesforce and the CEO of You.com , today announced the launch of a $50 million “AI-focused” venture fund called AIX Ventures. Speaking to VentureBeat via email, Shaun Johnson, a cofounder of AIX alongside Pieter Abbeel, Anthony Goldbloom, and Chris Manning, said that the goal is to make AIX a “household name” for AI-focused venture capital. “There is an opportunity to launch a new venture firm with some of the world’s foremost thought leaders, people who have made fundamental contributions to the state-of-the-art,” Johnson said. “These thought leaders are now behind AIX with the mission to fund generations of AI-focused entrepreneurs.” Socher’s fund comes four years after the ramp-up of Google Brain founder Andrew Ng’s own AI tranche — a $175 million fund focused on building new companies — and weeks after ex-Google CEO Eric Schmidt pledged to invest $125 million into AI research projects through his philanthropic venture. Certainly, there’s no shortage of capital in the AI segment, with a recent report out of Stanford’s Institute for Human-Centered AI (HAI) showing that private investment in AI last year more than doubled from 2020 to around $93.5 billion. But Socher and his cofounders believe that they bring differentiating — and deep — experience to the table. Goldbloom is the CEO of Google-backed Kaggle, one of the world’s largest data science communities. Abbeel, a robotics professor at UC Berkeley, previously was a researcher at OpenAI before cofounding assignment grading platform Gradescope (acquired by Turnitin in 2018) and industrial robotics company Covariant. Chris Manning directs the Stanford Artificial Intelligence Laboratory. And Johnson most recently headed up product development at natural language processing (NLP) startup Lilt. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AIX’s first tranche — Fund I — closed in October 2021 and included Socher’s angel portfolio. AIX Fund I already has around 40 portfolio companies, Johnson claims, including NLP startup Hugging Face , as well as Athelas, Weights & Biases and Time by Ping. Fund II is already in the works. 
“We believe [that], to be a top AI practitioner, you have to be practicing at the top of the field … Socher, Goldbloom, Abbeel, and Manning have individually proven they can build impressive portfolios. Joining together as a venture firm, and moving on from their angel days, takes their potential for impact to the next level,” Johnson said. “The AIX investing partners will continue their current roles and will be supported by the full-time AIX team [and] I.”

As for which startups AIX plans to pursue, Johnson says that the fund will focus on companies in the seed and pre-seed phases and “verticals across the AI spectrum,” including NLP, computer vision, and robotics. With an eye toward AI applications in manufacturing, warehousing, health care, software-as-a-service, MLOps, data, and consumer, AIX will provide capital as well as business and technical guidance, help with hiring and strategy, and follow-on fundraising.

“[W]e see the magnitude of the impact AI is going to have on humanity. At the same time, the tech is just getting started,” Johnson said. “We have decades of significant progress ahead of us.”"
1,263
2,022
"Ramp, which helps companies manage expense reporting, raises $750M | VentureBeat"
"https://venturebeat.com/enterprise-analytics/ramp-which-helps-companies-manage-expense-reporting-raises-750m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ramp, which helps companies manage expense reporting, raises $750M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Businesses of a certain size have to contend with challenges around T&E (travel and expenses), an abbreviation for a category of expenses that generally includes travel and transportation, meals, entertainment, and gifts. According to research from Forrester, T&E is among the most difficult costs to control, with 80% of the companies surveyed by the firm saying that they use a time-consuming, error-prone manual system (as of 2014). A 2019 report from Nexonia — which, it must be noted, sells expense reporting software — aligns with Forrester, finding that it takes nearly half of organizations eight days or more for expense reports to be submitted, approved, and reimbursed. The demand for automated solutions has accelerated the growth of the T&E management software market, which Grand View Research expects will be worth $17.4 billion by 2017. Among the most popular vendors are Certify, Concur, Yokoy , TripActions , and IBM, but startups have emerged in recent years to take on the incumbents, including Karmic Labs , Payhawk, and Divipay. Ramp is one of the larger upstarts in the space, having snagged $300 million in a series C financing round last August. Today, the company announced an even larger round — a $750 million mix of equity ($200 million) and debt ($550 million) — that values Ramp at $8.1 billion post-money. Automating expense reporting Manual T&E reporting isn’t time-consuming just because it requires entering items into a spreadsheet. It also necessitates that employees compile reports and make sure that their requests are compliant with company policies. On the management end, companies have to try to predict the costs of an event or trip, usually by estimating what travel, lodging, and other fares might cost. New York-based Ramp, which was founded in 2019 by Glyman, Gene Lee, and Karim Atiyeh, aims to abstract away this type of management with virtual and physical payment cards geared toward T&E tracking. Ramp’s tools allow companies to control employee spend with rules, limits, blacklists, and approvals. In addition, they show spending insights that combine elements of reporting and accounting. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Glyman previously cofounded Paribus, a price-tracking app that was acquired by Capital One in 2016. 
Glyman stayed on at Capital One until 2019 as CEO and senior director of Paribus’ U.S. operations. Atiyeh was the CTO at Paribus. Lee, another Paribus alum, started as a software engineer at Paribus and was promoted at Capital One to senior manager of engineering within the Paribus division.

“Much of the industry is misaligned with the best interests of customers, designing points systems that incentivize companies to spend more than they planned, and shipping cumbersome software that wastes employee time. Ramp is the first in the industry to design its products to help companies spend less money and time,” Glyman told VentureBeat via email. “We work from a very broad set of invoice data, standardization across documents, and feedback from users to improve our models to save customers time and money.”

Ramp collects and verifies receipts to offer payouts for expenses like mileage, meals, and incidentals. The platform also automatically spots duplicates and categorizes expenses by dimensions like time, category, department, and employee. Ramp’s bill pay feature, which was recently introduced, uses AI to streamline the process of paying suppliers. Users can upload or email bills to have the platform analyze them and autopopulate vendor information, line items, and payment details.

Building on its acquisition of procurement startup Buyer, Ramp offers access to consultants who help negotiate savings on companies’ behalf. The consultants look at a company’s spending on expenses like laptops, cloud computing, software-as-a-service plans, IT infrastructure, office space, insurance coverage, and furnishing and try to establish procurement goals. Once introduced via email as the company’s procurement team, the consultants kick off the negotiating process with a report that shows spending trends and where the company could potentially be saving money.

More recently, Ramp rolled out Ramp for Travel, a set of automations and integrations that focuses on simplifying trip expense reporting. (For example, when an employee books a Lyft ride, Ramp for Travel can automatically capture the receipts and report metrics like the frequency of rides.) And just last week, Ramp announced a partnership with Amazon Business, Amazon’s procurement portal, to generate receipts for purchases, auto-categorize transactions, and launch new spend controls.
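Ramp hasn’t published its detection logic, but the gist of the automatic duplicate spotting described above can be sketched with a simple grouping key over transactions. The snippet below is an illustrative stand-in, not Ramp’s production code:

```python
from collections import defaultdict

def flag_duplicates(transactions):
    """Group transactions by normalized (vendor, amount, date); flag groups of two or more."""
    groups = defaultdict(list)
    for txn in transactions:
        key = (txn["vendor"].strip().lower(), round(txn["amount"], 2), txn["date"])
        groups[key].append(txn["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

transactions = [
    {"id": "t1", "vendor": "Lyft",  "amount": 23.40, "date": "2022-03-21"},
    {"id": "t2", "vendor": "lyft ", "amount": 23.40, "date": "2022-03-21"},  # same ride, resubmitted
    {"id": "t3", "vendor": "Delta", "amount": 412.10, "date": "2022-03-21"},
]
print(flag_duplicates(transactions))  # [['t1', 't2']]
```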
Expanding market

While there’s been growing interest in T&E management solutions among enterprises, a fair number of companies are clinging to old-fashioned, manual methods. A 2018 survey by Certify found that 18% of businesses with more than 1,000 employees and 34% of mid-sized businesses are using spreadsheets for managing employee expenses. Of the latter group, 12% admitted to relying on pen and paper.

Glyman asserts that these companies will make the switch once the benefits of T&E management software become clear to them. He points to studies like Aberdeen’s, which found that workers who use smartphones and other devices when making claims and recording costs are generally 17% more compliant with expense standards and spend more than $8 less per expense report.

In an interview last year, Andrew Bartels, former VP and principal analyst at Forrester, told VentureBeat that he believed there was “little to differentiate” Ramp from the competition. As then, Glyman pushes back against this notion, pointing to the $5 billion in annualized payment volume that Ramp has reached. “We’ve delivered over $130 million in savings for our customers to date. We’re helping companies close their books in eight hours instead of the industry median of eight days — freeing up 3.5 million hours of manual work. None of our competitors can say the same,” Glyman continued.

Ramp — which has raised $1.37 billion since its founding in March 2019 — claims to have quadrupled its workforce over the past year to over 275 people. Its customer base stands at more than 5,000 businesses, which drove revenue to increase nearly 10 times in 2021. Cardholder growth reached 15 times year-over-year, while usage of Ramp’s bill pay feature doubled every month in 2021. Within the next few months, Ramp plans to open a new office in Miami.

“This funding will accelerate development of our finance automation platform, on the heels of Ramp for Travel and other features that fully automate expense management,” Glyman said. “Ramp is building the next generation of finance tools — from corporate cards and expense management, to bill payments and accounting integrations — designed to save businesses time and money with every click … Ramp competes, and wins, against established billion dollar players like American Express, Concur, Bill.com, and Expensify, which aren’t innovating in the best interests of their customers.”

Founders Fund led Ramp’s latest round with participation from D1 Capital Partners, Thrive Capital, Redpoint Ventures, Coatue Management, Iconiq, Altimeter, Stripe, Lux Capital, Vista Public Strategies, Spark Capital, Definition Capital, General Catalyst, Avenir Growth Capital, 137 Ventures, and Declaration Partners. Of the debt financing, $300 million came from Citi and $150 million from Goldman Sachs, which doubled its commitment to a total of $300 million."
1,264
2,022
"Sedai raises $15M to automate cloud management tasks | VentureBeat"
"https://venturebeat.com/data-infrastructure/sedai-raises-15m-to-automate-cloud-management-tasks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sedai raises $15M to automate cloud management tasks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A major challenge in enterprise cloud computing is keeping costs low, particularly as organizations transition more of their business online. Gartner predicts that through 2024, 60% of infrastructure and operations leaders will encounter public cloud cost overruns (i.e., overruns from services from hosts like Amazon Web Services, Google Cloud Platform, and Microsoft Azure). In fact, IDG’s 2020 cloud computing survey of IT professionals found that the top issue they encountered was controlling costs, following by data privacy and security. The solution, argue vendors like San Francisco, California-based Sedai , is cloud management automation. Sedai today raised $15 million in series A funding to prove out this theory; the company claims its platform can automatically discover resources and analyze traffic and performance metrics to “continuously” manage production environments with “proactive actions.” “To effectively manage scale, companies are rethinking the ‘shift left, shift right’ operations and automation that are slowing them down,” cofounder and CEO Suresh Mathew, who met Sedai’s other cofounder, Benji Thomas, while working on PayPal’s payments production team, said in a statement. “Sedai delivers … a platform that can act independently on [companies’ behalves], learn from them, and carefully measure the efficacy and continuously fix and improve right in production.” Automating cloud management Automating aspects of cloud management isn’t a new idea. For example, Blink , which launched out of stealth in March, offers a library of playbooks for executing common cloud automation tasks. Zesty , another vendor, handles cloud usage adjustments automatically, even estimating the time it takes to spin up and shut down resources and planning accordingly. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Mathew asserts that Sedai’s secret sauce is its ability to infer a company’s infrastructure and metrics and determine service-level objectives for resources. Once the platform discovers the topology and the company selects which resources they want Sedai to monitor, Sedai looks for opportunities both to optimize cost and improve availability and latency. 
“Our smart system will autodetect [the] topology, determine which metrics it should monitor for signals, and compute thresholds and alerts … Sedai’s smart system watches [cloud] environments and anticipates seasonality changes or potential outages,” Sedai explains on its website. “Sedai will safely deploy proactive actions on [an admin’s] behalf to avoid service disruptions.”

Sedai — which is agentless and can integrate with many existing ticketing and alert platforms — also offers what the company calls “smart scorecards” to help customers avoid fallout from buggy code in apps that they’ve deployed to the cloud. These smart scorecards let engineers compare the performance of apps in production to the performance of previous releases and roll back to earlier versions if necessary.

“Businesses no longer have the luxury of responding to issues after they occur. The expectation of release cycle velocity and the pace at which companies need to innovate demands that IT executives need to look at proactive autonomous systems to scale and deliver a better user experience,” Mathew told VentureBeat via email. “IT and operations teams are under pressure to keep systems available, high-performant and continually looking at how to optimize cloud deployments. For any IT manager who is looking to scale, an autonomous system is a must-have.”

Growing market

Cloud adoption isn’t slowing anytime soon. A newly released survey from Spiceworks Ziff Davis puts the trend into sharp relief: Cloud spend increased from 22% of IT budgets in 2020 to 26% in 2022. Gartner estimated worldwide spending on public cloud services at $304.9 billion in 2021.

The large addressable market gives Mathew confidence that Sedai has legs, even factoring in the competition. Several businesses — including Tasq and Fabric — are already using Sedai to manage their cloud environments and production applications, he says, ranging from startups to larger enterprises undergoing a digital transformation and moving to cloud environments.

“Sedai has had a very successful beta and limited availability program. We currently have 12 customers [who] are actively using Sedai and already seeing significant benefits with availability, performance and optimization,” Mathew said. “Autonomous cloud management in production is a new shift in how companies are managing their cloud environments … As far as the cloud application management domain is concerned, the need for autonomous systems has grown with [the] pandemic [as individuals use] more applications in their lives than ever.”

Norwest Venture Partners led 25-employee Sedai’s funding round with participation from Sierra Ventures and Uncorrelated Ventures. The new capital brings the startup’s total raised to date to $18.7 million. Sedai plans to apportion part of the cash to opening an engineering center in Kerala, India. The rest will be put toward growing the broader development and engineering team, according to Mathew.

“The funding will be used to grow our engineering teams and expand our product to autonomously manage stateful workloads,” Mathew said. “In addition, we will be adding sales and marketing teams to support both enterprise markets and digital native companies.”
"
1,265
2,022
"Google Cloud expands contact center automation offerings with third-party integrations | VentureBeat"
"https://venturebeat.com/data-infrastructure/google-cloud-expands-contact-center-automation-offerings-with-third-party-integrations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Cloud expands contact center automation offerings with third-party integrations Share on Facebook Share on X Share on LinkedIn DAVOS, SWITZERLAND - JANUARY 25, 2022: A pedestrian passes a Google Cloud logo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2019, Contact Center AI, a Google Cloud service that promises to automate conversations between businesses and customers and deliver “intelligent” tools for customer service agents, reached general availability. A year later, Google introduced new features, including custom-generated voices and an agent assist module that transcribes calls in real time. Now, with an eye toward unifying its various contact center AI service offerings across Google Cloud, Google is launching what it describes as an “extension” of Contact Center AI that adds new integration support for customer relationship management platforms. The goal, says director of product management for Contact Center AI Yariv Adan, is to “[make] it easier to unify sales, marketing, and support teams around common data and customer experiences.” While call centers have been slow to embrace automation historically due to budget constraints as well as challenges around technology and processes, the staffing shortages and high call volumes brought on by the pandemic prompted many to reconsider. According to a 2020 report from Canam Research, 78% of contact centers in the U.S. report plans to deploy AI in their contact center in the next three years for uses including chatbots, self-service, and AI for quality management. Contact Center AI Platform would appear to be a response to Amazon Web Services’ (AWS) Contact Center Intelligence solutions, which launched two years ago. Like Contact Center AI Platform, Contact Center Intelligence solutions enables companies to integrate their contact centers with self-service, analytics, AI, self-service, customer management, and agent assist products through third-party vendors. “While there was no better time than the present to bring Contact Center AI Platform to market, the decision to act now was based on data-driven insights received from our customers and the retail industry at large,” Adan told VentureBeat via email. “Pre-pandemic, Contact Center AI helped our customers meet the demands of their contact centers, but the COVID era exposed tech gaps in the system that could sometimes put customer service interactions in a negative light. 
To address this, we identified opportunities to better help not only our customers with a solution in Contact Center AI Platform that addresses their greater needs beyond the pandemic, such as being adaptive to the demands of customers wanting to interact with agents on their smartphones, but also helps agents and all sized businesses produce more positive outcomes for both the customer and agents.”

Integrating CRM data

The new Contact Center AI Platform adds a range of features to Google’s Contact Center AI suite, including the ability to create customer experiences that can be embedded into mobile and web channels (e.g., iOS and Android apps) using existing software developer kits. Using Contact Center AI Platform, Adan says, brands can manage multiple channels without having to switch between voice, texting, and chat support, and can leverage customer relationship management platforms as a “single source of insight into the customer experience.”

“Contact Center AI Platform is an evolved, modernized solution that addresses customer and consumer needs beyond the pandemic. Reducing call volume … and reducing costs are just some ways in which the Google Cloud solution focuses on the future of problems being felt in the market today,” Adan continued. “Contact Center AI Platform offers streamlined experiences and seamless integrations with customer relationship management platforms and numerous telephony providers, key differentiated benefits to ensure an enriched, cost-effective solution that will continue to meet customer and consumer needs both today and in the future.”

The new offering also attempts to predict customer needs and route calls based on historical customer relationship management data and real-time interactions. Contact Center AI Platform, Adan says, can additionally automate some scheduling functions, including schedule adherence monitoring (which measures whether agents work the amount of time they’re scheduled to work), and provide customers with self-service via the web or mobile apps.

“Human agents are, and will continue to be, incredibly important,” Adan said. “Google Cloud Contact Center AI enables the best experience for both agents and customers. AI and natural language processing are, and should be, used to scale, support and improve human agents, not to replace them. We see great opportunities for seamless collaboration between human agents, virtual agents, and supportive AI and natural language processing tools. Combined, they deliver excellent customer experience, at a consistently high level of availability and quality.”"
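For readers curious what history-based call routing of the kind described in this article can look like in miniature, here is a toy sketch. The scoring heuristic, the Agent fields, and the data shapes are assumptions for illustration, not Google’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set[str]
    open_calls: int

def route_call(topic: str, caller_history: list[str], agents: list[Agent]) -> Agent:
    """Score each agent on skill match, familiarity with the caller's
    past issue topics, and current load; route to the highest scorer."""
    def score(agent: Agent) -> float:
        skill = 2.0 if topic in agent.skills else 0.0
        familiarity = 0.5 * sum(1 for t in caller_history if t in agent.skills)
        load_penalty = 0.3 * agent.open_calls
        return skill + familiarity - load_penalty
    return max(agents, key=score)

agents = [Agent("ava", {"billing", "refunds"}, open_calls=3),
          Agent("ben", {"billing"}, open_calls=0)]
# "ben" wins: same skill match as "ava" but idle, so the load penalty decides.
print(route_call("billing", ["refunds"], agents).name)
```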
1,266
2,022
"Blink launches out of stealth with $26M to automate cloud management tasks | VentureBeat"
"https://venturebeat.com/data-infrastructure/blink-launches-out-of-stealth-with-26m-to-automate-cloud-management-tasks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Blink launches out of stealth with $26M to automate cloud management tasks Share on Facebook Share on X Share on LinkedIn Network server room Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Adoption of cloud technologies is on the rise in the enterprise. According to a recent O’Reilly survey , 90% of organizations now use cloud computing — an increase from a 2020 survey, which reported that 88% of respondents used the cloud. Meanwhile, Flexera’s 2021 poll found that organizations are boosting their spending on cloud, with 36% saying that their annual spend on infrastructure, hardware and more exceeds $12 million. Adoption doesn’t always correlate with deployment success, however, as organizations — particularly those without expertise in cloud — have discovered the hard way. Companies report challenges not only with managing cloud governance and resources but with security, compliance, observability, resilience and functionality. According to Accenture, just 37% of companies believed that they were achieving the full value expected on their cloud investments in 2020, while just 45% said that they were “very satisfied” with their cloud outcomes. The frustration some companies are experiencing with cloud led Gil Barak and Haviv Rosh to found Blink , a San Francisco, California-based startup offering a low-code automation platform for cloud operators. Blink’s software aims to save time by scaling internal workflows to support development teams and organizations, replacing individual scripts with a managed, shareable workspace. Automating cloud Blink’s cofounding coincided with climbing interest among enterprise organizations in automation. In 2020, Deloitte reported that two-thirds of business leaders used automation to respond to the impact of the pandemic, with one-third accelerating their investments in cloud-hosted automation. But, as with cloud, blockers — including managing and integrating multiple processes and a lack of expertise — have stood in the way of automation adoption. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “While working at Palo Alto Networks and ServiceNow, we realized that existing security and IT automation platforms were all designed and built for pre-cloud companies, forcing highly skilled devops and secops engineers to work endless shifts, write manual scripts and spend hours each day maintaining and securing their cloud environments,” CEO Gil Barak told VentureBeat via email. 
“Devops teams needed a centralized, collaborative workspace where they could organize all their various APIs and scripts, while making relevant workflows accessible to other developers in their organization. Companies have moved to the cloud, but the automation platforms required to secure and maintain the new infrastructure have yet to catch up.”

Blink aims to help organizations overcome these hurdles with a library of playbooks for executing common cloud automation tasks. The platform integrates with popular cloud infrastructure, monitoring tools and security tools, allowing users to build workflows that remediate frequent issues automatically.

Barak was formerly a senior software engineer at Apple. Rosh previously served as a chief architect at Dell EMC before becoming VP, chief architect of applications at ServiceNow.

“Cloud-native apps are decentralized and complex to operate, requiring significant manual work, scripting and integrations. Cloud operations teams need a fast and scalable way to operate cloud services, infrastructure and security, leading many organizations to adopt CI/CD and agile toolsets, but day-two operations remain an unresolved challenge, leading to major bottlenecks for development teams and risky, compounding technical debt,” Barak added. “Blink is building the low-code and no-code future for cloud operations. Blink helps devops, SREs and SecOps teams shift left, with low-code and no-code automation that scales internal operations workflows to more efficiently support development teams and organizations.”

CloudOps automation

Blink is one of several startups in the growing cloudops platforms market, which refers to platforms that perform a combination of network, security, help desk, device management and performance tasks to keep cloud-native apps running. There’s usually an element of automation. For example, Dazz — founded by former Microsoft security executives — brings automation to cloud security, while Glueware delivers a suite of automation tools and apps for cloud environments. In addition, there are startups like Cast AI , which provide software that attempts to optimize cloud spend.

IDC predicts that the worldwide market for “intelligent” cloudops software will reach $27.1 billion in 2025 as organizations search for ways to manage expenditures, avoid cost overruns, and address a lack of governance and security concerns. While deploying automation technologies isn’t always a seamless process, as reports show, vendors’ sales pitches have evidently made a mark on companies intent on digitally transforming their operations.

“The pandemic has expedited the enterprise world’s migration to the cloud. Enterprises that were slow to fully adopt cloud-native approaches have been forced to shift to remote work and decentralized cloud infrastructure,” Barak said. “There’s a greater need for devops people, processes, and knowledge, but many organizations lack the human resources to scale and meet these needs.”

It’s early days for Blink, whose platform hasn’t left beta. But in a show of investor confidence, the company today announced that it raised $26 million across seed and series A rounds led by Lightspeed Venture Partners; other investors include Entrée Capital, Hetz Ventures and individual investors.

“Blink will use this new capital to grow our product and engineering teams, launch a new online community for devops professionals, and continue adding new integrations,” Barak continued. “Blink currently has 30 employees across Israel, USA, and Europe.
We are expecting to more than double our team in 2022, and are actively hiring for engineering, customer success, developer relations, sales, and marketing positions.”"
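To illustrate the playbook pattern that platforms like Blink are built around, here is a minimal sketch of a step registry and executor. The step names, handlers, and context-passing scheme are invented for illustration; Blink’s actual playbook format is not described in the article.

```python
from typing import Callable

# Registry mapping step names to handlers; a real platform ships hundreds
# of these as integrations (cloud provider APIs, Slack, PagerDuty, etc.).
ACTIONS: dict[str, Callable[[dict], None]] = {}

def action(name: str):
    """Decorator that registers a handler under a playbook step name."""
    def register(fn: Callable[[dict], None]):
        ACTIONS[name] = fn
        return fn
    return register

@action("find_unattached_volumes")
def find_unattached_volumes(ctx: dict) -> None:
    # Placeholder: a real handler would call the cloud provider's API.
    ctx["volumes"] = ["vol-0abc", "vol-0def"]

@action("notify")
def notify(ctx: dict) -> None:
    print(f"Found {len(ctx['volumes'])} unattached volumes: {ctx['volumes']}")

def run_playbook(steps: list[str]) -> None:
    """Execute a declarative playbook: each step is a named action sharing
    a context dict, so later steps can read earlier steps' results."""
    ctx: dict = {}
    for step in steps:
        ACTIONS[step](ctx)

run_playbook(["find_unattached_volumes", "notify"])
```

The design point is that the playbook itself stays declarative (a list of step names a low-code UI can assemble) while all imperative logic lives in reusable, vetted handlers.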
1,267
2,022
"Convelio, which automates shipping processes for luxury goods, raises $35M | VentureBeat"
"https://venturebeat.com/commerce/convelio-which-automates-shipping-processes-for-luxury-goods-raises-35m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Convelio, which automates shipping processes for luxury goods, raises $35M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The pandemic threw — and continues to throw — supply chains into a state of chaos. According to a 2021 Statista survey, 50% of shippers said that they were struggling to cut transportation costs, a challenge that’s only going to increase as Russia’s invasion of Ukraine impacts global fuel prices. Other barriers to success in logistics and shipping including fluctuating customer demand, inventory management, finding talent, and keeping up with tech as well as sourcing, manufacturing, analytics, and data management. Last year, Deloitte found that more than 40% of companies saw their costs increase by 5% or more as a result of ongoing supply chain issues. Goods of all kinds have been affected by the current logistics blockers, including fine arts, jewelry, and antiques. Searching for a tech solution to the woes, entrepreneurs Clément Ouizille and Edouard Gouin four years ago launched Convelio, a company that uses algorithms to offer high-end sellers instant quotes on shipping. Bolstered by the pandemic-tinged state of the market, Convelio recently closed a $35 million investment round, bringing its total capital raised to $45 million, the company announced this morning. Automating shipping The idea of applying automation to shipping is hardly new. Loadsmart , Flock Freight , and their rivals use AI to match shippers with truck transportation, while companies like Cargo. One algorithmically find air routes for cargo. But Convelio, uniquely, focuses on luxury items, which can be more expensive to ship the traditional way because of the delicateness of the products being packaged. Ouizille came up with the idea for Convelio with Gouin while working as a logistics lead at Pamono, an antique furniture retailer based in Berlin. Gouin is a serial angel investor, having backed income-pooling startup Pando and no-code payments framework provider Primer, among other companies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “My cofounder, Clément Ouizille, and I started our careers working for Rocket Internet’s COO Adrian Frenzel, now global COO of Gorillas. 
During our time working for Rocket’s portfolio companies, we developed an acute understanding of startup operations and went on to build ecommerce companies in the fine art space out of personal interest,” Gouin told VentureBeat via email. “We quickly came to realize that, contrary to more classical ecommerce spaces, fine art was missing a key piece to successfully transition online: efficient, and integrated, shipping.”

Convelio, which delivers to over 80 destinations, considers parameters including fragility, dimensions, and value to create a logistics chain for artwork and other luxury products, like sculptures and furniture. For customers such as Christie’s, Sotheby’s, and 3,000 other auction houses, galleries, and collectors, Convelio automates customs, insurance, real-time tracking, and delivery processes. Pieces shipped by Paris-based Convelio typically range in value from €5,000 ($5,565) to €1 million ($1.11 million), according to Ouizille.

“We have developed internally an instant pricing algorithm that assesses multiple data points instantaneously across the entire logistics chain,” Gouin explained. “Convelio’s technology not only enables platforms to enhance the customer experience on their website by implementing our API or widget, but it can also be used to automate internal workflows such as shipment collection, invoicing and billing, or inventory management … For a partner, relying on our core technologies … means it can streamline its processes and support both buyers and sellers with post-sale administration.”

In addition to automating quotes and workflows, Convelio helps to manage documents and customize different shipping services. Its network spans 19 “crating centers,” Ouizille says, which are responsible for ensuring goods aren’t damaged during shipping.

Significant growth

The luxury goods market has taken a hit during the pandemic, with the world’s top 100 luxury goods companies generating revenues of $252 billion in 2020, compared with $281 billion in 2019. According to Deloitte, over 80% of the companies in the top 100 reported lower sales in 2020, reflecting pandemic-related store closures, travel bans, shifts in consumer demand, supply chain disruptions, and other factors. But in what can only be seen as good news for 200-employee Convelio, there are signs of recovery. A Bain & Company report finds that luxury products in general were among the first to recover to their 2019 levels in 2021, driven by the loosening of pandemic restrictions and by lockdown-inspired home upgrades and blended living and working spaces.

Coinciding with the rebound, venture capital (VC) firms are pouring money into the broader shipping logistics space as potential solutions to supply chain efficiency gain prominence. According to PitchBook, VCs pledged $12.6 billion toward supply chain technology startups in North America and Europe alone in 2020 through more than 500 deals. Grand View Research predicts that the global transportation management systems market will be worth $27.48 billion by 2028.

Convelio claims that it was responsible for 14,000 shipments last year with a cumulative value of $265 million. The company’s annual recurring revenue increased 2.5 times while its workforce grew to 200 people. Convelio says it plans to explore other market segments where “[its] expertise in shipping bulky, valuable and fragile items” can be leveraged.

“Since launch in 2017, we’ve recorded an average 98% compound annual growth rate in revenue, and in 2021 alone, revenue increased by 2.5 times,” Gouin said.
“The vast majority of the revenue in the fine-art shipping market, a $4 billion opportunity, sits with traditional, low-tech incumbents and this is what we are going after as a start. Our competitive edge has always been on building the best experience for our clients by leveraging technology. We have more than a head start, so we are going to continue keeping our heads down, keep working on shipping more, and faster, to deliver the ultimate fine-art shipping experience.”

Forestay and Mundi Ventures co-led Convelio’s latest round, with participation from Acton Capital and Global Founders Capital."
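As a toy illustration of an instant-quote function that weighs dimensions, fragility, and declared value (the parameters the article names), consider the sketch below. The multipliers, rates, and Artwork fields are made-up assumptions, not Convelio’s pricing model.

```python
from dataclasses import dataclass

# Illustrative multipliers; a real quote would also depend on route,
# customs, crating center capacity, carrier rates, and more.
FRAGILITY_FACTOR = {"low": 1.0, "medium": 1.35, "high": 1.9}

@dataclass
class Artwork:
    length_cm: float
    width_cm: float
    height_cm: float
    value_eur: float
    fragility: str  # "low" | "medium" | "high"

def instant_quote(item: Artwork, distance_km: float) -> float:
    """Toy instant quote: volume-based freight cost scaled by fragility
    (packing/crating effort), plus ad-valorem insurance."""
    volume_m3 = (item.length_cm * item.width_cm * item.height_cm) / 1e6
    freight = volume_m3 * 80.0 + distance_km * 0.4   # handling + distance
    packing = freight * FRAGILITY_FACTOR[item.fragility]
    insurance = item.value_eur * 0.006               # 0.6% of declared value
    return round(packing + insurance, 2)

sculpture = Artwork(60, 40, 120, value_eur=25_000, fragility="high")
print(instant_quote(sculpture, distance_km=900))  # about €878 in this toy model
```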
1,268
2,022
"WorkRamp nabs $40M to expand its corporate learning platform | VentureBeat"
"https://venturebeat.com/business/workramp-nabs-40m-to-expand-its-corporate-learning-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages WorkRamp nabs $40M to expand its corporate learning platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. WorkRamp , a provider of learning management systems (LMS), today announced that it has raised $40 million in a series C funding round co-led by Salesforce Ventures, Slack Fund, and Susa Ventures with participation from OMERS Ventures, GTMfund, PeopleTech Angels, and UpHonest Capital. According to CEO and cofounder Ted Blosser, the capital illustrates continued growth in the LMS sector at a time when companies are shifting more and more of their operations remote. “In today’s rapidly changing work environment, companies are struggling to keep up because they aren’t able to properly enable and develop their people,” Blosser said in a statement. “When organizations can provide stellar and engaging learning opportunities, they become unstoppable. They are empowered to attract and retain top talent, exceed revenue targets, and inspire customers to become advocates. Learning becomes a growth engine for the entire business.” According to a 2021 LinkedIn survey, from 2019 to 2020, the number of enterprise learners more than doubled while the amount of learning increased by 58% more hours per learner. The same survey found that, whether because of skills gaps or impending labor shortages, 59% of learning and development professionals considered upskilling and reskilling programs their top priority in 2021. Accelerated learning WorkRamp, which Blosser, previously a product manager at Box, founded with fellow Box veteran and head of engineering Arsh Mand in 2016, pivoted to the employee training segment after it acquired Prelude, a stealth Y Combinator-backed company working on an enterprise commerce marketplace. WorkRamp’s platform enables companies to create in-person and digital training programs and measure their impact through analytics and reports. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! WorkRamp’s screen recording and slide sharing features integrate with workflows in Salesforce, Slack, and Chrome to deliver coaching tips that might improve performance. Managers can use WorkRamp to launch lessons and provide feedback in the form of reviews and star ratings. 
“[W]e focused our efforts on delivering key business metrics to our customers that correlate training performance with revenue performance, employee happiness, and customer loyalty,” Blosser told VentureBeat via email. “[For example,] we released a new reporting product last year that allows [companies] to visualize learning metrics from existing data sources in [their] LMS environment … [Our platform also takes] in data from other tools to prove training effectiveness across the company (i.e., measure training efficacy based on location, teams, quota attainment, etc.) [and pushes] data outwards to various platforms like Tableau or Salesforce to correlate training data to business goals … Our vision is to expand on this further by leveraging this data for predictive modeling as it relates to learner behavior, surfacing the relevant training content.”

Increasing demand

Competition among LMS vendors is red-hot, with Fortune Business Insights anticipating that the market will grow from $13.38 billion in value in 2021 to $44.49 billion in 2028. Blosser sees Docebo, Seismic , and Litmos as WorkRamp’s top rivals, with Seismic drawing particular ire over its acquisition of training platform Lessonly last August.

Blosser attributes the broader industry’s growth to an increased desire for companies to attract, develop, and retain talent. McKinsey reports that 69% of companies engage in more skill building than before the pandemic began. A separate poll from LinkedIn found that employees who feel their skills aren’t put to good use in their current job are 10 times more likely to be looking for a new job.

That’s not to suggest LMS is a silver bullet. As HR Daily Advisor notes , costs are involved with the purchase, implementation, and setup of an LMS, and it might be difficult to convince employees to use it — especially if it’s not required.

Blosser claims that 125-employee WorkRamp has over 300 customers, though, including Box, Reddit, Outreach, and Lattice. The company’s valuation tripled compared with 16 months ago, and its total raised now stands at $67.2 million, which Blosser says will be put toward product development (specifically for onboarding and customer training), strategic investments to “help develop the larger enablement and learning community,” and hiring.

“Competing in today’s market has become increasingly challenging, and the pace of change shows no signs of slowing. Revenue targets and goals continue to go up, and organizations must meet their customers’ evolving needs,” Blosser said. “For the IT department, [WorkRamp] consolidates the number of tools that a company needs to manage, secure, and invest in … [And] for the C-Suite, WorkRamp allows teams to directly correlate learning to business results.”"
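A minimal sketch of the training-to-revenue correlation Blosser describes, using invented per-rep records; the field names and figures are illustrative, not WorkRamp’s reporting product. (Requires Python 3.10+ for statistics.correlation.)

```python
import statistics

# Hypothetical per-rep records: training hours completed in the LMS and
# quota attainment the following quarter.
reps = [
    {"training_hours": 2,  "quota_attainment": 0.61},
    {"training_hours": 5,  "quota_attainment": 0.72},
    {"training_hours": 9,  "quota_attainment": 0.95},
    {"training_hours": 12, "quota_attainment": 1.08},
    {"training_hours": 4,  "quota_attainment": 0.66},
]

hours = [r["training_hours"] for r in reps]
quota = [r["quota_attainment"] for r in reps]

# Pearson correlation between training volume and quota attainment.
r = statistics.correlation(hours, quota)
print(f"correlation(training hours, quota attainment) = {r:.2f}")
```

Correlation of this kind is where such dashboards start; proving training *causes* better quota attainment would take a more careful design (cohorts, controls), which is beyond this sketch.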
1,269
2,022
"Talent.com nabs $120M to bolster its AI-powered recruitment platform | VentureBeat"
"https://venturebeat.com/business/talent-com-nabs-120m-to-bolster-its-ai-powered-recruitment-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Talent.com nabs $120M to bolster its AI-powered recruitment platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During the pandemic, hiring has become increasingly focused on remote positions as companies eschew job fairs and other in-person recruitment initiatives in favor of alternatives. According to a LinkedIn survey, companies plan to adopt hiring processes that combine virtual and in-person steps due to the associated cost and time savings. But priorities in HR largely haven’t changed. Jobvite’s 2021 poll found that improving quality-of-hire, improving time-to-hire, increasing the retention rate, and growing the talent pipeline remain recruiters’ top recruiting priorities. To help with their hiring efforts, seven out of 10 companies plan to boost spending on HR tech this year, according to a PwC survey. But the same survey put into question the effectiveness of these investments. More than 80% of respondents said their HR teams struggle with technology adoption challenges linked in part to planning phases that fail to get necessary stakeholders — including executives and management — to answer the right questions. HR tech vendors assert their solutions can solve organizational challenges, however, if implemented with a clear plan. For example, Talent.com, a recruitment platform based in Montreal, claims to send more than 80 million job candidates to employers each month and provide 30 million jobs to those candidates. Founded in 2011 by Benjamin Philion, CEO Lucas Martinez, and Maxime Droux, Talent.com says that it leverages AI to improve job search relevancy for jobseekers while helping companies hire new team members. AI for recruitment Prior to launching Talent.com, formerly called Neuvoo, Philion was a risk analyst for the National Bank of Canada. Lucas Martinez was a business development manager at EF Education First, an education company specializing in language training, while Droux was an asset allocation analyst at investment firm GLG Partners. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Like many recruitment platforms (e.g., Monster, Indeed), Talent.com offers a search page where jobseekers can track down opportunities and set up alerts for particular roles. 
Candidates can compare salaries by searching average salaries of job titles similar to their own and calculate take-home pay in hourly, monthly, and yearly increments, using a tax calculator that breaks down the tax deductions they can expect to see.

“Talent.com is using data from clicks by our users to predict the probability of a click given search terms, user profiles, and job titles … The dataset for training contains more than 800 million items,” Philion told VentureBeat via email. “[The] model gives up to a 40% increase in clicks in comparison with a search algorithm based on keywords only.”

On the enterprise side, companies can post open roles and filter candidates, observing the progress of job recruitment campaigns in a dashboard. The company uses programmatic advertising and a network of media outlets to attempt to fill roles quickly, buying programmatic ads on websites that meet a target profile to find candidates for businesses. Programmatic ads automate the process of buying and selling ad spaces through an exchange that connects advertisers — in this case, Talent.com — to publishers. Using algorithms that analyze characteristics of publishers’ readers, considering factors like location and demographics, programmatic platforms set bidding prices for the first impression (i.e., the first view an ad receives). After the impression has been sold, it’s sent to the publisher’s website to be displayed.

“Open roles cost money to enterprises — our goal is to fill that void by making these connections,” Philion told VentureBeat via email. “The labor shortage has been a massive issue for many businesses in the past few years, and it was exacerbated by the pandemic.”

Expanding market

Showing that the potential of poor HR tech adoption isn’t scaring away investors, Talent.com today announced that it raised $120 million in a series B round led by Inovia Capital with participation from Caisse de dépôt et placement du Québec, Investissement Québec, Climb Ventures, BDC Capital, Fondaction, and HarbourVest Partners. The backing brings Talent.com’s total capital raised to $150 million, which includes $30 million in new debt financing from the Technology & Innovation Banking Group at BMO Financial Group.

As far as future growth is concerned, the trick will be convincing would-be customers that 400-employee Talent.com’s solution is superior to rival recruitment offerings. Among them is HireEZ , which provides AI-powered tools for job candidate recruitment and profiling. Sense is also developing AI-powered recruitment and hiring solutions for companies, as is Phenom People.

A 2021 Sage survey found that 24% of companies are currently using AI for recruitment and that 56% plan to adopt AI in 2022. While the risks can be high — studies have found that the algorithms used in recruiting can introduce bias, including anti-Black bias, leading cities like New York City to regulate them — companies are eager to pursue technologies that might help them navigate a historically challenging labor market. From January 2021 to October 2021, venture capitalists put $9.2 billion toward HR tech startups, a 130% jump from 2020’s total, according to PitchBook data.

“Our objective is to capture a significant share of a huge market that is going online. We’re therefore more focused on growing our platform than being profitable at this stage of our evolution,” Philion said. “This will be even more true now that we raised a significant amount in equity.
The business has existed for over 10 years and the founders succeeded in growing it to over $100 million in revenues … with very little funding.”"
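To ground the click-probability idea Philion describes, here is a minimal sketch using hashed query/profile/job-title cross features and logistic regression. The feature design and tiny dataset are assumptions for illustration (and it assumes scikit-learn is installed); this is not Talent.com’s production model.

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

# (search terms, user's current title, job title shown, clicked?)
events = [
    ("python backend", "software engineer", "senior python developer", 1),
    ("python backend", "software engineer", "hr coordinator", 0),
    ("nurse icu", "registered nurse", "icu nurse", 1),
    ("nurse icu", "registered nurse", "truck driver", 0),
]

def featurize(query: str, profile: str, job: str) -> dict:
    # Cross query/profile tokens with job-title tokens so the model can
    # learn match signals rather than raw token popularity.
    feats = {f"q={t}|j={j}": 1.0 for t in query.split() for j in job.split()}
    feats |= {f"p={t}|j={j}": 1.0 for t in profile.split() for j in job.split()}
    return feats

# Feature hashing keeps memory bounded even with hundreds of millions
# of training items, since no vocabulary needs to be stored.
hasher = FeatureHasher(n_features=2**12)
X = hasher.transform([featurize(q, p, j) for q, p, j, _ in events])
y = [label for *_, label in events]

model = LogisticRegression().fit(X, y)
test = hasher.transform([featurize("python backend", "software engineer",
                                   "python engineer")])
print(f"P(click) = {model.predict_proba(test)[0, 1]:.2f}")
```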
1,270
2,022
"Selector, which develops AIops tools for networking monitoring, raises $28M | VentureBeat"
"https://venturebeat.com/business/selector-which-develops-aiops-tools-for-networking-monitoring-raises-28m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Selector, which develops AIops tools for networking monitoring, raises $28M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AIops — the practice of applying AI to automate and improve IT operations — has gained currency during the pandemic. As businesses embrace digital transformation strategies involving “multicloud,” or the use of services from more than one cloud vendor, there’s an increasing need to improve the observability and analytics around networking infrastructure and performance. In a 2022 Nutanix survey , organizations cited interoperability, security and data integration as the top challenges in managing mutlicloud setups. Spurred by the challenge, Kannan Kothandaraman and Nitin Kumar — both networking industry veterans — in 2019 launched Selector, an AIops platform for network, cloud and app delivery workflows. Selector detects anomalies in cloud environments, automatically notifying IT team members when failures or outages occur. To lay a runway for growth, Selector closed a $28 million series A, the company announced — bringing its total funding to $33 million. Two Bear Capital, SineWave Ventures and Atlantic Bridge co-led the round. “We saw that cloud providers can build large and complex cloud infrastructures while enterprises, service providers, financial institutions and retailers are struggling to manage their own infrastructure. The key insight was that cloud builders have built in-house observability tools not available to enterprises and service providers,” Kothandaraman told VentureBeat via email. “We set out with a vision to build the first network and IT operations intelligence platform that combines network and application observability with actionable insights from any data source to eliminate downtime.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AIops for networking CEO Kothandaraman was previously a software engineer at Cisco before joining Juniper Networks, where he worked his way up to the role of VP of product line management. Kumar, formerly an engineer at FORE Systems and Procket Networks, also spent time at Juniper — first as an engineer, then as VP and fellow. With Selector, the two cofounders sought to create a platform that could ingest data from any data source and provide the monitoring necessary for multicloud infrastructures. 
Selector normalizes, filters, clusters and correlates events from networks, apps and security tools and delivers these insights through a dashboard. Teams can use Slack and other chat tools in tandem with Selector to receive answers to questions about infrastructure by searching through conversations.

“Operations teams can audit configuration changes, correlate configuration changes to anomalies and search for the presence or absence of specific configuration statements,” the company explains on its website. “Selector’s synthetic analytics solution rapidly isolates and identifies any contribution the network makes to application anomalies. Operations teams can rapidly determine network innocence or triage network anomalies.”

Kothandaraman claims that these capabilities enable IT teams to diagnose and remediate potential or existing issues more quickly than they could otherwise.

“Selector uses a data-centric AI approach and focuses on enhancing ingested data with metadata from multiple sources. For example, in addition to ingesting data from multiple heterogeneous domains, Selector ingests enterprise-specific metadata such as inventory and [customer relationship management info] to enrich insights and analysis,” Kothandaraman said. “The added complexity from [siloed monitoring tools] often reduces availability and performance rather than improving it. Selector solves [this challenge] through data aggregation, normalization and enrichment of heterogeneous data, correlation of that data and providing a simple, easy-to-use interface for … teams to access and share analysis.”

Growing usage

AIops solutions aren’t appropriate for every company. Network World’s Shamus McGillicuddy, reporting on an EMA study, notes that successful users of AIops are focused on transforming network engineering and operations rather than addressing challenges with existing network management tools.

“AIops-driven network management can make a business run better. [But companies who report] the most success with applying AIops to network management [are] the most likely to say that their AIops interest isn’t driven by network management tool problems,” McGillicuddy wrote in a July 2021 article.

Moreover, Selector competes with products like IBM’s Watson AIops , which uses AI to detect, diagnose and remediate networking equipment anomalies. Startups like Augtera Networks also leverage AI for network planning and predictive infrastructure maintenance, applying algorithms trained on production data from real-world systems.

But Kothandaraman says that there has been growing interest in Selector, fueled in part by pandemic-related technical hurdles. Prior to the platform’s official launch, 25-employee Selector worked with Comcast and Bell Canada as well as NBC Sports, which used the platform to monitor its networks at the Tokyo 2020 and Beijing 2022 Olympics.

“We have over 10 paying customers … with more than 50 customers in the pipeline. Our customer base includes internet service providers, media, financial institutions, cloud service providers and retail,” Kothandaraman said. “Enterprises need flexibility to deploy their applications on any cloud, datacenter or edge computing to meet the myriad of ways their customers and employees are accessing their services … With this funding, we’ll focus on expanding solutions for telco cloud, healthcare and retail.
We’re also expanding our product functionality to add use cases for multicloud and internet of things.”"
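As a rough sketch of the normalize-cluster-correlate pipeline described in this article, the snippet below groups events that share a metadata label and fall within one time window, treating each group as a candidate incident. The Event fields, the window size, and site-based grouping are illustrative assumptions, not Selector’s algorithm.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    ts: float        # seconds since epoch
    source: str      # e.g., "network", "app", "security"
    site: str        # shared metadata label used for correlation
    message: str

def correlate(events: list[Event], window_s: float = 60.0) -> list[list[Event]]:
    """Group events that share a site label and fall within the same
    time window; each multi-event group is one candidate incident."""
    by_site: dict[str, list[Event]] = defaultdict(list)
    for e in sorted(events, key=lambda e: e.ts):
        by_site[e.site].append(e)
    incidents = []
    for site_events in by_site.values():
        cluster = [site_events[0]]
        for e in site_events[1:]:
            if e.ts - cluster[-1].ts <= window_s:
                cluster.append(e)
            else:
                incidents.append(cluster)
                cluster = [e]
        incidents.append(cluster)
    return [c for c in incidents if len(c) > 1]  # keep correlated groups only

events = [
    Event(100, "network", "nyc-pop", "BGP session down"),
    Event(130, "app", "nyc-pop", "checkout latency p99 > 2s"),
    Event(9000, "security", "sfo-pop", "config drift detected"),
]
for incident in correlate(events):
    print([e.message for e in incident])  # links the network and app events
```

The value of cross-domain correlation shows up in the output: a network event and an application event at the same site within a minute become one incident rather than two unrelated alerts.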
1,271
2,022
"Run:AI lands $75M to dynamically allocate hardware resources for AI training | VentureBeat"
"https://venturebeat.com/business/runai-lands-75m-to-dynamically-allocate-hardware-resources-for-ai-training"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Run:AI lands $75M to dynamically allocate hardware resources for AI training Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While interest in AI remains high among enterprise organizations, particularly for its potential to improve decision-making and automate repetitive tasks, many of these businesses are struggling to deploy AI into production. In a February survey from IDC, only a third of companies claimed that their entire organization was benefitting from an enterprise-wide AI strategy. The same poll found that 69% of companies hadn’t yet reached production with AI, and instead remained in the experimentation, evaluation, or prototyping phases. The challenges vary from organization to organization, but some common themes include infrastructure and data. The high upfront costs of hardware drive many companies to the cloud, which is often expensive and difficult to monitor. (A 2021 Anodot study found that fewer than 20% of companies were able to immediately detect spikes in cloud costs.) Meanwhile, data quality issues like a lack of data curation, data governance, and data literacy are introducing compliance risks such as biased algorithms. Inspired to search for a solution, Omri Geller, Ronen Dar, and Meir Feder several years ago founded Run:AI , a platform that creates an abstraction layer to optimize AI workloads. Run:AI attempts to allocate workloads such that available hardware resources are maximized, considering factors like network bandwidth, compute resources, cost, and data pipeline and size. Run:AI today announced that it raised $75 million in a series C led by Tiger Global Management and Insight Partners with participation from TLV Partners and S Capital VC, bringing its total capital raised to $118 million. The company plans to use the investment to grow its team and consider future, “strategic” acquisitions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Optimizing AI Dar and Geller founded Run:AI after studying together at Tel Aviv University under Feder, who specializes in information theory. Dar was a postdoc researcher at Bell Labs and R&D and algorithms engineer at Apple, Anobit, and Intel. Geller was a member of the Israeli military, where he led large-scale projects and deployments. “AI is the new technology that’s going to provide a competitive edge for companies. 
We believe that enterprises will not be able to lead their domain without AI capabilities,” Geller told VentureBeat via email. “AI is so fundamental that it ‘opens the books’ and will create a new world order with new leaders. Companies that create capabilities to let computers learn faster and have more innovative capabilities will dominate their domains. That’s why companies are investing in AI.”

Run:AI essentially “breaks up” AI models into fragments that run in parallel, according to Geller — an approach that has the added benefit of cutting down on hardware memory usage. This in turn enables models that would otherwise be constrained by hardware, chiefly GPU memory, to run ostensibly unimpeded on-premises, on public clouds, or at the edge.

Exactly how Run:AI allocates workloads depends on the policies defined by an organization. Policies in Run:AI create quotas for different projects. Enterprise IT and data science teams can also create logical fractions of GPUs or execute jobs across multiple GPUs or nodes.

Toward the end of 2021, Run:AI added support for both MLflow, a tool for managing the AI lifecycle, and Kubeflow, an open source framework for machine learning operations. The company also added integrations with Apache Airflow, software that can be used to create, schedule, and monitor data workflows.

“When Run:AI starts work with a new customer, we typically see a GPU utilization rate of between 25% and 30% … GPUs tend to be idle during nonwork hours (e.g., nights, weekends). They can also be idle during work breaks (e.g., coffee breaks, lunch). [And] they can be idle when a researcher is building [an AI] model,” Raz Rotenberg, software team lead at Run:AI, explains in a blog post. “Increasing GPU utilization and minimizing idle times can drastically reduce costs and help achieve model accuracy faster. To do this, one needs to improve the sharing of GPU resources.”

Competition

While Run:AI has relatively few direct competitors, other startups are applying the concept of dynamic hardware allocation to AI workloads. For example, Grid.ai offers software that allows data scientists to train AI models across GPUs, processors, and more in parallel. Nvidia, for its part, sells AI Enterprise , a software suite of tools and frameworks that enables companies to virtualize AI workloads on Nvidia-certified servers.

Some customers might be skeptical, too, of how well Run:AI can adjust allocations depending on the architecture of different AI systems. And while it does work with custom chips like Google’s tensor processing unit (TPU), which can accelerate certain AI workloads, Run:AI remains focused on GPU usage, which might not suit every data science organization’s needs.

But Run:AI — which works closely with Amazon Web Services and VMware — claims to be going strong, with a customer base spanning “dozens” of Fortune 500 and startup finance, automotive, healthcare, gaming, and academic organizations with “thousands” of users. Annual recurring revenue grew nine times over the last year, while Run:AI’s workforce more than tripled.

And if surveys are anything to go by, Run:AI won’t have a shortage of potential customers. A Statista poll in January found that only around 19% of companies have established a data culture in their organization. And with cloud services spending hitting an estimated $304.9 billion last year, according to Gartner, companies will likely continue to look for on-premises alternatives for bolstering their AI infrastructure.
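To illustrate the quota-plus-fractions scheduling model described above, here is a minimal sketch of a two-pass allocator: respect per-project quotas first, then let projects burst into idle capacity so GPUs don’t sit unused. The Project fields and the policy itself are invented for illustration; this is not Run:AI’s scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    quota_gpus: float               # policy-defined share; fractions allowed
    allocated: float = 0.0
    queue: list[float] = field(default_factory=list)  # pending job GPU sizes

def schedule(projects: list[Project], cluster_gpus: float) -> None:
    """Admit queued jobs while a project is under its quota, then hand out
    leftover capacity opportunistically to reduce idle GPUs."""
    free = cluster_gpus - sum(p.allocated for p in projects)
    # Pass 1: honor per-project quotas.
    for p in projects:
        while (p.queue and free >= p.queue[0]
               and p.allocated + p.queue[0] <= p.quota_gpus):
            job = p.queue.pop(0)
            p.allocated += job
            free -= job
    # Pass 2: let projects burst over quota into idle capacity.
    for p in projects:
        while p.queue and free >= p.queue[0]:
            job = p.queue.pop(0)
            p.allocated += job
            free -= job

team_a = Project("vision", quota_gpus=2.0, queue=[1.0, 1.0, 1.0])
team_b = Project("nlp", quota_gpus=2.0, queue=[0.5])
schedule([team_a, team_b], cluster_gpus=4.0)
print(team_a.allocated, team_b.allocated)  # 3.0 0.5: vision bursts over quota
```

A production scheduler would also preempt burst jobs when the under-quota owner returns, pack fractional jobs onto physical devices, and account for memory isolation; the sketch only shows the quota/burst split.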
“IT needs to serve the business goals, and if the business goal is to bring AI to market sooner, making it the responsibility of IT to deliver faster, then Run:AI is what allows them to do that,” Geller continued. “The C-suite are gung ho on Run:AI because they can innovate faster and produce AI solutions faster to create a competitive advantage.”"
1,272
2,022
"RightBound raises $15M more to automate B2B sales development | VentureBeat"
"https://venturebeat.com/business/rightbound-raises-15m-more-to-automate-b2b-sales-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages RightBound raises $15M more to automate B2B sales development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI and automation have the potential to transform the sales industry. A 2020 Harvard Business Review report found that AI could create up to $2.6 trillion in sales and marketing, with the bulk of the value coming from improving alignment between sales and marketing and delivering data-driven coaching to salespeople. But AI introduces its set of challenges. A recent Teradata survey found that 90% of businesses anticipate “significant barriers” to full AI sales adoption and integration, partly because AI systems can require costly data prep and infrastructure. But beyond the technical challenges, AI-powered marketing and sales tools can be perceived as “creepy” if not implemented thoughtfully and responsibly. For example, retail chain Target infamously used customer data to figure out a young customer was pregnant before she’d informed her family. Entrepreneurs Ran Oelgiesser and Rotem Dafni sought to address the challenge of infusing sales with AI in RightBound, a platform designed to automate manual business-to-business (B2B) sales processes. RightBound does data collection and outreach on behalf of sales reps, involving them when prospective customers express interest. After a year in which its customer base more than tripled to “several dozen,” RightBound today announced that it raised a $15.5 million extension to the series A that it closed in May 2021 — bringing the total series A to $27 million. Innovation Endeavors, Operator Collective, and IBI Tech Fund participated. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We are planning to keep the fast growth of the business, and therefore we heavily invest in hiring to support that pace. In the last six months, we’ve doubled the number of employees to more than 50 employees, and we are planning to continue this momentum in 2022. The funds will be used mostly to support that growth in product, engineering, data science, as well as customer success, marketing, and sales,” Oelgiesser told VentureBeat via email. “We launched our product in mid-2020. Most of our growth came in 2021, with five times growth in annual recurring revenue. 
Our goal is to maintain high growth of three times in 2022.”

Automating sales

Founded in 2019, RightBound gathers insights on development efforts and makes recommendations to update sales playbooks. (In sales, “development” refers to identifying, connecting with, and vetting prospective buyers.) The platform learns customer profiles by analyzing sources including a target company’s general information as well as employees’ email addresses, phone numbers, and past roles. RightBound also orchestrates initial outreach on behalf of sales teams with a blend of emails, surveys, follow-ups, gift cards, social outreach, and targeted ads.

Oelgiesser previously cofounded Kidaro, a desktop virtualization app acquired by Microsoft in 2008 for $100 million. He later became a senior product manager at Microsoft, in charge of overseeing the business integration of Kidaro and helping to define the go-to-market strategy for Windows 8 Enterprise Edition. Dafni served in various engineering roles prior to cofounding RightBound, including at Juniper Networks and VMware’s networking and protocols R&D division.

“RightBound eliminates the need for [sales reps] to manually search for prospects. Because the platform is connected to dozens of data sources providing visibility into all relevant company and persona data, prospect behavior, and sales activity, it constantly sources relevant prospects, segmenting and adding them to the sales funnel and [customer relationship management platforms] once they have been verified and enriched with details,” Oelgiesser said. “Using AI and machine learning, RightBound customizes the communication track of each prospect, factoring in channel, content, timing, and frequency to set the path most likely to engage them. These elements are hugely beneficial to sales development teams and reps in terms of being able to do their jobs more efficiently and effectively, and not get bogged down in preliminary research on prospects.” (For a toy illustration of this kind of channel-and-timing optimization, see the sketch at the end of this article.)

Expansion in AI sales

RightBound is one of a growing number of startups — others include Atrium, Gong, Dooly, People.ai, and Orum — applying AI to sales processes like enablement, outreach, and analysis. Proton.ai is developing AI-powered technologies to support sales operations teams, while Highspot is leveraging AI to promote sales enablement playbooks and workflows. There are also startups like Second Nature that offer AI-driven coaching to salespeople and sales teams. The growth and investments reflect the broader upward trend of funding in AI startups: in 2020, venture capitalists poured over $75 billion into companies developing AI products, much of which went toward business process and support service products that stand to benefit from pandemic-spurred digital transformations.

“One challenge that we see with sales across organizations today is that teams of sales development reps are growing at velocity and implementing many software-as-a-service tools simultaneously. Additionally, [sales development] reps and salespeople usually have very short tenures in their roles — factor this in with the recent ‘Great Resignation’ and it could mean churn in the sales department is greater than ever before,” Oelgiesser continued. “As sales teams adjusted to the new reality of selling during a pandemic, they needed a different, more efficient approach in order to break through and sell effectively.
Our solution became not only relevant but extremely valuable to sales organizations over the past two years.”
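To make the channel-and-timing optimization described above concrete, here is a minimal sketch of one way such a system could work. It is an illustrative epsilon-greedy bandit, not RightBound’s actual method; the channels, send hours, and engagement rates are all invented for the example.

import random

# Each "arm" is a hypothetical outreach path: (channel, hour of day).
ARMS = [("email", 9), ("email", 16), ("linkedin", 11), ("phone", 14)]

class OutreachBandit:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon                  # how often to explore a random path
        self.counts = {arm: 0 for arm in arms}  # times each path was tried
        self.wins = {arm: 0 for arm in arms}    # times the prospect engaged

    def choose(self):
        # Explore occasionally; otherwise exploit the best-performing path so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.counts,
                   key=lambda a: self.wins[a] / self.counts[a] if self.counts[a] else 0.0)

    def record(self, arm, engaged):
        self.counts[arm] += 1
        if engaged:
            self.wins[arm] += 1

bandit = OutreachBandit(ARMS)
for _ in range(1000):  # simulated outreach attempts
    arm = bandit.choose()
    engaged = random.random() < (0.3 if arm[0] == "email" else 0.1)  # made-up response rates
    bandit.record(arm, engaged)
print(max(bandit.counts, key=bandit.counts.get))  # converges toward an email slot

A real system would score far richer features (content, persona, past behavior), but the loop is the same: try a path, observe engagement, and shift future outreach toward what works.
"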
1,273
2,022
"Quadric nabs $21M to accelerate production of its AI edge chips | VentureBeat"
"https://venturebeat.com/business/quadric-nabs-21m-to-accelerate-production-of-its-ai-edge-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quadric nabs $21M to accelerate production of its AI edge chips Share on Facebook Share on X Share on LinkedIn Semiconductor chips. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The market for edge AI chips, designed to accelerate AI workloads offline, often in self-contained hardware, is maturing at a rapid clip. Building on converging trends like digital transformation, cloud-native technologies, and the internet of things, the edge AI hardware segment could be worth as much as $38.87 billion by 2030, according to an estimate from Valuates Reports. Valuates cites the increased demand for low latency and real-time processing, as well as reductions in storage and operations costs, as factors driving the use of edge AI chips. Indeed, these types of chips can enable better performance and lower power consumption by reducing the need for devices to rely on the cloud for data processing. But edge AI chips are fettered in other ways: For example, because they lack the computing power of, say, a cloud datacenter, only select tasks can be performed on an edge device. Quadric is one of the many startups diving into the AI edge market with gusto, promising to eliminate the historical bottlenecks of edge hardware. Today, Quadric announced a $21 million series B funding round co-led by Denso’s NSITEXE and MegaChips, with participation from Leawood VC, Pear VC, Uncork Capital, and Cota Capital, to bolster production of its edge AI chips, which the company claims can “accelerate the entire application pipeline” on-device without the need for a powerful general-purpose processor. Secret chip sauce Quadric , based in Burlingame, California, was founded in 2016 by Veerbhan Kheterpal, Nigel Drego, and Daniel Firu. All three hail from MIT and Carnegie Mellon and previously cofounded cryptocurrency computing company 21 Inc. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The founding team was building a smart robot when they were faced with the inadequacy of existing compute platforms from Nvidia and Intel,” a Quadric spokesperson told VentureBeat via email. “Unless rebuilt from the ground up, processors used for computing at the edge are not scalable. 
Quadric was founded to build a new processor architecture; one that generalizes the dataflow paradigm and delivers on a higher level of power efficiency for a wide range of algorithms in machine learning, computer vision, DSP, graph processing, and linear algebra.”

Quadric claims its 1.1-billion-transistor, 16-nanometer chip consumes only 4.5W of power and packs 4GB of memory paired with 256 “vertex cores,” which are designed to speed up some of the algorithmic workloads involved in common AI applications. The workloads don’t involve training, the step in developing an AI system in which it is fed vast amounts of data so that it learns to make predictions. Rather, they involve inferencing, the point at which the system makes predictions based on new data coming in. (The sketch at the end of this article illustrates the distinction.)

“Quadric’s unique ability to handle both neural backbones and classical dynamic data-parallel algorithms in a unified architecture is helping to create AI for everyone, everywhere. Most other solutions combine high-power processor clusters with application-specific neural processing units,” said the Quadric spokesperson. The company further explains on its website: “The architecture is instruction-driven … Coupled with [it] is a software programming model tailored for developer ease of use. The software programming model allows the developer to express graph-based- and non-graph-based algorithms in unison.”

Quadric offers plug-and-play AI models for applications in the warehousing, construction, transportation, and agricultural industries. The company previously claimed that Denso planned to integrate its edge chip technology, which works with any machine with an M.2 motherboard expansion slot, into future self-driving vehicle solutions.

Expanding edge market

Deloitte estimates more than 750 million edge AI chips that perform tasks on-device have been sold to date, representing $2.6 billion in revenue. “The magnitude of data generated in enterprises is growing rapidly, so in order to handle these data volumes, the next generation of innovation in computing will happen outside the datacenter and closer to the network edge,” the spokesperson added. “Quadric helps enterprises create data solutions that are sensitive to privacy and optimize latency and bandwidth costs.”

Quadric competes against companies including AI Storm, Axelera, Deep Vision, Flex Logix, Sima.ai, Blaize, and Hailo, the last of which has raised over $320 million at a valuation reportedly exceeding $1 billion. As ZDNet’s Tiernan Ray spotlighted in a recent piece, venture financing has supercharged the AI edge chip market, with dozens of vendors (one report counted over 60) vying for a slice of the growing cash pile.

But Quadric believes its AI chip architecture sets it apart in the burgeoning space. The company maintains that the investment will enable Quadric, which reportedly has five customers, to release the next version of its chip architecture; improve the performance of the software development kit that it ships alongside its chips; and roll out new products for integration in system-on-chips. “Most other companies in the edge computing space are building specific workload accelerators. In contrast, Quadric’s software-centric architecture is future-proof against a dynamic backdrop of algorithms and AI models,” the spokesperson explained. To date, 35-employee Quadric has raised $34 million in venture capital and $2 million in debt, including a $15 million round in May 2019.
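To see the training-versus-inference distinction in the simplest possible terms, here is a sketch using a toy linear model. The data and model are invented for illustration; the point is that training is an iterative, compute-hungry loop typically run in a datacenter, while inference is a single cheap pass per input, the kind of fixed, predictable workload an edge chip is built to accelerate.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                  # toy training inputs
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=256)   # toy targets

# Training: hundreds of gradient-descent passes over the whole dataset.
w = np.zeros(8)
for _ in range(500):
    grad = (2 / len(X)) * X.T @ (X @ w - y)
    w -= 0.1 * grad

# Inference: one multiply-accumulate pass for a new input.
x_new = rng.normal(size=8)
print(float(x_new @ w))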
"
1,274
2,022
"Microsoft claims new AI model architecture improves language translation | VentureBeat"
"https://venturebeat.com/business/microsoft-claims-new-ai-model-architecture-improves-language-translation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft claims new AI model architecture improves language translation Share on Facebook Share on X Share on LinkedIn Microsoft. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Coinciding with Nvidia’s March 2022 GPU Technology Conference, Microsoft today announced an update to Translator — its Azure service that can translate roughly 100 languages across call centers, chatbots, and third-party apps — that the company claims greatly improves the quality of Translator’s translations. Powered by a new family of AI models that can translate directly between certain languages, Microsoft says that an internal study found the translations to be up to 15% better compared with those generated by previous Translator models. The models also power a new feature in Translator, multilingual document translation, that can translate documents containing text written in different languages. Z-code Mixture of Experts Powering Translator’s upgrades is Z-code, a part of Microsoft’s larger XYZ-code initiative to combine AI models for text, vision, audio, and language to create software that can speak, see, hear, and (hopefully) understand. The team comprises a group of scientists and engineers from Azure AI and the Project Turing research group, focusing on building multilingual, large-scale models that power various Microsoft products. Z-code provides the framework, architecture, and models for AI-powered translation across language families. With Z-code, Microsoft says it’s using transfer learning — an AI technique that applies knowledge from one task to another, related task — to move beyond common languages, like English, and improve translation for the estimated 1,500 “low-resource” languages in the world. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Like all models, Microsoft’s learn from examples in large datasets sourced from a mixture of public and private archives (e.g., ebooks, websites such as Wikipedia, and hand-translated documents). Low-resource languages are generally defined as having under 1 million example sentences, which adds to the challenge of developing models; AI models usually perform better when given more examples. Because many languages share linguistic elements, Microsoft develops Z-code models multilingually across different languages and that knowledge is transferred between languages. 
For example, a model’s translation skills might be used to improve its ability to understand natural (i.e., everyday) language. Microsoft rolled out Z-code-powered enhancements to Translator last October, adding support for 12 new languages including Georgian, Tibetan, and Uyghur. Now, the company says that an improved version of Z-code — Z-code Mixture of Experts (MoE), which launched this week — can better understand “low-resourced” language nuances. The AI models used in modern text translation, MoE or no, contain components called “neurons” that are organized into distinctive layers. Each neuron is a mathematical operation that plays a key role in how the model “learns” to interpret and translate languages. MoEs are made up of small clusters of neurons that are only active under special, specific circumstances. Lower layers extract certain “features” from the text to be translated — i.e., characteristics — and “experts” — i.e., clusters — are called upon to evaluate those features. For example, each expert cluster can learn to handle a separate part of speech or semantic or grammatical rule. “Z-code MoE models are a promising way forward in the language domain since they are more efficient and need fewer systems to run. The same underlying model can be fine-tuned to perform different language understanding tasks such as translating between languages, summarizing a speech, offering ways to complete a sentence or generating suggested tweets, instead of having to develop separate models for each of those narrow purposes,” Xuedong Huang, chief technology officer at Microsoft’s Azure AI division, told VentureBeat via email. “While the Z-code MoE models learn universal representation, specific parts of the model can specialize in particular languages and linguistics characteristics to enable better translation.” Compared with other model architectures, MoEs have some advantages. The experts can receive a mix of data, but only a few experts remain active at any one time, meaning that even a huge model needs only a small amount of processing power in order to develop or run. In fact, MoE is one of the few architectures demonstrated to scale to more than a trillion parameters. (Parameters are the part of the model that’s learned from example text data, and generally speaking — especially in language — the correlation between the number of parameters and sophistication has held up remarkably well.) To illustrate, an MoE model containing 1.6 trillion parameters requires compute resources approximately equal to that of a 10 billion-parameter conventional model, by Microsoft’s estimation. The cost isn’t insubstantial, to be fair — a 2020 study from startup AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at between $80,000 and $1.6 million. But it’s more efficient than other methods. Microsoft’s and Nvidia’s recently released Megatron 530B language model, which has 530 billion parameters, was originally developed across 560 Nvidia DGX A100 servers. A single DGX A100 starts at $199,000. MoEs were first proposed in the ’90s, and research papers in recent years from companies including Google describe experiments with trillion-parameter-plus MoE language models. But Microsoft claims that Z-code MoE is the first MoE language model to reach production. 
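The expert-routing idea is easier to see in code. Below is a toy sketch of top-k expert routing in NumPy; the layer sizes and gating scheme are invented for illustration, and this shows the general MoE pattern rather than Microsoft’s Z-code implementation.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class TinyMoELayer:
    def __init__(self, d_in=16, d_out=16, n_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        # Each expert is its own weight matrix; the gate scores experts per input.
        self.experts = [rng.normal(scale=0.1, size=(d_in, d_out)) for _ in range(n_experts)]
        self.gate = rng.normal(scale=0.1, size=(d_in, n_experts))
        self.top_k = top_k

    def __call__(self, x):
        scores = softmax(x @ self.gate)                  # router: relevance of each expert
        active = np.argsort(scores)[-self.top_k:]        # only the top-k experts fire
        weights = scores[active] / scores[active].sum()  # renormalize over active experts
        # Per input, compute touches just top_k of n_experts weight matrices, which is
        # why parameter count can grow enormously without growing per-token compute.
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, active))

layer = TinyMoELayer()
print(layer(np.random.default_rng(1).normal(size=16)).shape)  # (16,)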
“Using an MoE approach allows us to achieve performance and quality benefits more efficiently, as it only engages a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request. This architecture allows massive scale in the number of model parameters while keeping the amount of compute constant,” Huang continued. “For our production deployment, we trained 5-billion-parameter models, which are 80 times larger than Microsoft’s currently deployed models. The models are trained on 64 GPUs. A single MoE model can replace 20 of the current translation models, increasing the efficiency of training MoE models while also improving translation accuracy.”

Future work

While Microsoft says that Z-code MoE has led to great strides in improving language translation, the problem isn’t solved. Not by a long shot. Because of biases in public example text, non-English models continue to perform worse than their English-language counterparts. For example, languages in Wikipedia-based datasets vary not only by size but in the percentage of stubs without content, the number of edits, and the total number of users (because not all speakers of a language have access to Wikipedia). Beyond Wikipedia, ebooks in some languages, like Arabic and Urdu, are more commonly available as scanned images rather than text, which requires processing with optical character recognition tools whose accuracy can dip as low as 70%.

A recent piece in The Conversation points out other flaws in AI-powered translation, including different forms of gender bias. In certain languages, Google Translate once presupposed that doctors were male while nurses were female, while Bing’s translator rendered phrases like “the table is soft” with the feminine “die Tabelle” in German (which refers to a table of figures). Other translations miss the meaning of the original text entirely. In one study referenced by The Conversation, the headline “UK car industry in brace position ahead of Brexit deadline” was translated by an AI system as “L’industrie automobile britannique en position de force avant l’échéance du Brexit,” which implies that the U.K. car industry is in a position of strength as opposed to weakness.

“No matter how fluent the suggested translation appears, these types of errors (incorrect terminology, omissions, mistranslations) abound in machine translation output,” Guillaume Deneufbourg, a researcher in language sciences at the Université de Lille in France, wrote for The Conversation. “Another issue with machine translation which people may be less aware of is a process known as normalization. If new translations are only ever made using existing ones, over time, the process can stifle inventiveness, creativity, and originality.”

One study from Tilburg University and the University of Maryland referred to the normalization phenomenon as “translationese,” with the coauthors finding a quantifiable loss of “linguistic richness” in AI systems’ translations. While the study points out that this might be a desirable side effect if the goal is to simplify the translation, normalization becomes problematic when it prevents systems from making grammatically correct choices and reduces diversity in “morphologically richer” languages, like Spanish and French. Microsoft says that it continues to develop new methods to improve translation, both through architectural improvements and through techniques to mitigate bias in example data.
“Today’s machine learning models need huge translation data sets with dialects for training, and there may not be enough data for all the desired languages and dialects, particularly in smaller markets,” Huang added. “The ability to share knowledge across different languages enables Z-code to produce more accurate results for underrepresented languages that don’t have a huge number of translation examples to learn from. This will help improve AI fairness and ensure that high-quality translations are not restricted to languages with rich training resources only.” "
1,275
2,022
"Kinde raises $10.6M to help companies develop SaaS products | VentureBeat"
"https://venturebeat.com/business/kinde-raises-10-6m-to-help-companies-develop-saas-products"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kinde raises $10.6M to help companies develop SaaS products Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Kinde , a Sydney-based startup developing tech infrastructure for software-as-a-service (SaaS) companies, today announced that it raised $10.6 million in a seed round led by Blackbird Ventures with participation from Felicis Ventures. The closure of the round coincides with the launch of Kinde’s early accelerator program, which gives founders free access to the company’s user management and authentication platform for SaaS products. Kinde — which was cofounded by Atlassian, Campaign Monitor, and Shopify veteran CEO Chaldecott Ross and two of his ex-Campaign Monitor colleagues, Dave Berner and Evgeny Komarevtsev — aims to simplify the process of turning existing or new software into SaaS offerings. Ross sees his startup’s mission as “democratizing software” by helping to shorten the distance between having an idea and getting it into the hands of customers. “Founding a startup is hard. Before early-stage founders can start on their product, they spend valuable time and money building essential infrastructure,” Chaldecott said in a statement. “Our mission is to reinvent the way that software teams get started, with infrastructure that they can build on top of, allowing them to focus on what makes their business unique. Giving them access to this technology — that historically only established businesses could afford — means they can accelerate from day one.” Supporting SaaS development SaaS adoption in the enterprise continues to grow at an accelerating pace. In a 2021 survey, LeanIX found that 70% of IT leaders report “strong” SaaS growth over the past two years — in some cases doubling the number of SaaS applications in use over that time. One source estimates that the SaaS industry reached $171.9 billion in worth toward the end of 2021. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But while there’s a healthy appetite for SaaS, it hasn’t become easier to bring SaaS products to market. Among other aspects, SaaS companies have to consider aspects like how to comply with data protection requirements, payment processing, update mechanisms, third-party service integration, and time and cost management. 
Kinde’s product includes tools for these companies to authenticate and manage their user bases, create feature flags for managing access within their products, and build feature plans for billing their customers. (The sketch at the end of this article shows how plan-based feature flags typically work.) The company, which is accepting requests from startups wanting to join its early accelerator program, hopes to partner with accelerators and tech bootcamps to offer additional support to entrepreneurs, according to Chaldecott.

“We have an MVP ready for early-stage businesses, but still have a lot of product to build and need to assemble a team to do this. We will also be building the operational side of the business to take the product to market, support it, and build community and ecosystem around it,” Chaldecott told VentureBeat via email. “We’re growing fast considering we’re only four months old. This investment allows us to invest heavily in growing our team. We’re currently a team of eight but aim to be around 50 in the next 18 months.”

Kinde competes against SaaS consultancies like Pulse Solutions, Bursys, and e-Zest, which work with customers to ideate and launch SaaS products. But Chaldecott claims that Kinde’s self-service tools — and expertise — differentiate it from what’s available on the market today.

“Most SaaS businesses build their infrastructure organically or bring together different solutions to do what they need. In time, this translates into systems that are hard to scale, don’t work that well together, and are full of custom code to make them do what is needed. Kinde aims to replace this all with a simple, highly scalable, and performant infrastructure layer that empowers leaders across the business to run and manage users, features, and plans in a single place,” Chaldecott added. “This will massively reduce the cost and time involved in maintaining and running SaaS infrastructure.” Time will tell whether customers agree.
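As a rough illustration of the feature-flag idea mentioned above, here is a minimal sketch of plan-gated flags. The flag names, plans, and API are invented for the example; this is not Kinde’s actual interface.

# Feature flags gate functionality by subscription plan instead of
# hard-coding plan logic throughout the application.
FLAGS = {
    "advanced_reports": {"pro", "enterprise"},  # plans that can see the feature
    "sso": {"enterprise"},
}

def is_enabled(feature: str, plan: str) -> bool:
    return plan in FLAGS.get(feature, set())

if is_enabled("advanced_reports", plan="pro"):
    print("render the advanced reports tab")  # runs for pro and enterprise users

Centralizing checks like this is what lets a billing plan, an A/B test, or a gradual rollout change product behavior without redeploying code.
"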
1,276
2,022
"Findem secures $30M to leverage AI for talent recruitment | VentureBeat"
"https://venturebeat.com/business/findem-secures-30m-to-leverage-ai-for-talent-recruitment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Findem secures $30M to leverage AI for talent recruitment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hiring the right talent has rarely been easy, but in the midst of the “great resignation” — the record number of people leaving their jobs during the pandemic — it’s become even more challenging. Separate Jobadder and Officevibe 2021 polls found that 72.8% of recruiters are struggling to find relevant candidates, and that top candidates are only available for about ten days on average before being hired. According to a Recruiter Nation survey, 64% of recruiters expect their budgets to increase in the next six to 12 months — but money rarely equates to success. The roadblocks often lie in identifying the best-fit candidate for a particular role, whether at the managerial, executive level, or below. Robert Half reports in a 2019 study that more than three in four people would apply for a job even if they aren’t qualified. Some recruitment software vendors espouse the benefits of AI, which they argue can index and search through pools of candidates more efficiently than an HR professional alone. One such vendor, Findem , says its AI can analyze more than 100,000 public sources to find candidates matched for a role, searching for people based on their attributes. To expand its product, Findem — whose customers include teams at more than 100 organizations including Google and RingCentral — raised $30 million in a series B funding round led by Four Rivers and Quarry Capital Management with participation from Wing Venture Capital, the company announced, bringing its total raised to $37.3 million. CEO Hari Kolam says that the new capital will also be used to support product development advancements, including the Q1 launch of a new self-service model that’ll enable customers to post their open positions to job boards. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI-powered recruitment San Francisco, California-based Findem was founded in 2019 by Kolam and Raghu Venkat. Kolam previously founded Instart, a web app delivery platform, prior to stints at Sun Microsystems and IBM. Venkat was a member of the technical staff at data management company Aster, and spent time at Google as a software engineer. “Every company is hiring right now, and it’s tough work to build a high-quality and diverse talent pipeline. 
Pretty much every company we come across is struggling with it, and many are paying premiums to try,” Kolam told VentureBeat via email. “Talent acquisition teams are spending a lot of their money on staffing agencies because great talent is so hard to hire today, but the rub is that they can take a cut of 20% to 30% of a new hire’s salary. On top of that, staffing agencies are trying to scale by hiring more recruiters, but there’s a shortage right now. When you’re taking a largely manual approach to talent sourcing, which is common among staffing agencies, it’s time-consuming, costly, and fraught with quality issues. There’s also a threshold you realistically can’t surpass.”

Findem says that it combines AI with “contextual logic” to emphasize a candidate’s relevance for a given position. Using Findem, HR teams can search by characteristics including whether a person has seen a startup through to a successful exit, builds diverse teams, or is a long-tenured employee. (The sketch at the end of this article shows attribute filtering of this kind in miniature.) In addition to public sources, Findem can search across internal employees and existing profiles in an applicant tracking system. In total, Findem — which also provides tools to automate candidate engagement, analyze talent pools, and measure diversity — sources from more than 750 million profiles, Kolam says.

“We are in the business of deriving what a successful hire really means to a company, and a lot goes into the data mix … We’re training models from multiple sources to understand factors such as which employees are successful and why, and what success means for a specific company,” Kolam continued. “The platform has built-in contextual logic to help companies recognize when what it’s finding in terms of candidates is what you’re after. It takes your search intent and finds people who match that intent by scraping, indexing, and connecting the dots between multiple data sources. It also filters results, so hiring teams don’t have to.”

Potential for bias

Employers who adopt AI-powered recruitment technology run the risk of introducing bias into the hiring process. Even the best AI-based systems can struggle to evaluate soft skills, like teamwork and problem-solving. As a result of bias and other technical flaws, sourcing AI might never inform some people of a job opportunity, effectively determining who has access to the hiring process. For example, Amazon was forced to scrap an AI recruiting tool that showed a preference for male candidates. A 2021 analysis of job board recommendations, meanwhile, found that 40% of jobseekers had experienced recommendations based upon their identities rather than their qualifications, and that 30% received job alerts that were below their current skill level, regardless of industry.

“[P]ersonalized job boards like ZipRecruiter aim to automatically learn recruiters’ preferences and use those predictions to solicit similar applicants,” Harvard Business Review’s Miranda Bogen writes in a 2019 piece. “If the system notices that recruiters happen to interact more frequently with white men, it may well find proxies for those characteristics (like being named Jared or playing high school lacrosse) and replicate that pattern.” In recognition of the growing problem, New York City recently passed a law that prohibits businesses from using AI or algorithm-based tools to make hiring decisions about New York City residents without first auditing those tools for bias. Findem claims to have made efforts to mitigate bias in its recruitment algorithms.
According to Kolam: “Our platform makes the talent funnel diverse without the need to identify gender or ethnicity at the individual level. It’s a probabilistic approach and we rely on multiple datasets to achieve this. By making the funnel diverse, we’re obviating the introduction of human bias. Diversity decisions are absolved from the recruiter, yet they’re still able to deliver high-quality and diverse talent pipelines.”

Expanding segment

Sixty-five-employee Findem is the beneficiary of a rapidly expanding recruitment software market. According to Fortune Business Insights, the global recruitment software market is set to grow from $1.75 billion in 2017 to $3.09 billion by 2025. Automation adoption is on the rise, too, driven by increasing workloads and sourcing challenges. (According to Forbes and the U.S. Department of Labor, recruiters spend nearly a third of their workweek sourcing candidates for a single role, and the average cost of a bad hire is around 30% of the employee’s first-year earnings.) Even before the pandemic, recruiters were expressing an interest in automation technologies for the hiring pipeline: 60% of companies responding to a 2019 Mercer poll said that they planned to boost their use of workplace automation, including in recruitment.

Findem’s competitors include HireEz, Celential.ai, Sense, Fetcher.ai, Xor, and AllyO, all of which use AI to identify candidates who might be a good match for available roles. Still, Findem says that it experienced 500% customer growth and an eightfold top-line revenue increase over the past year.

“We’re planning to use this new capital primarily for product development, including launching a new self-service model this quarter where users can post their open positions to job boards and get access to a new and highly targeted candidate pipeline. Expansion is also a top priority and we’re looking to grow into new international markets,” Kolam added.
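To show what attribute-based candidate search looks like at its simplest, here is a toy sketch. The profile schema and attribute names are invented for the example and bear no relation to Findem’s internal representation.

# Each profile carries derived attributes rather than just resume keywords.
PROFILES = [
    {"name": "Candidate A", "attrs": {"saw_startup_to_exit", "long_tenure"}},
    {"name": "Candidate B", "attrs": {"builds_diverse_teams", "long_tenure"}},
    {"name": "Candidate C", "attrs": {"saw_startup_to_exit", "builds_diverse_teams"}},
]

def search(required: set, pool=PROFILES):
    """Return names of profiles whose attributes include every required one."""
    return [p["name"] for p in pool if required <= p["attrs"]]

print(search({"saw_startup_to_exit", "builds_diverse_teams"}))  # ['Candidate C']

The hard part in practice is not the filter but deriving trustworthy attributes from raw data, which is where the machine learning described above comes in.
"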
1,277
2,022
"Data fabric versus data mesh: What's the difference? | VentureBeat"
"https://venturebeat.com/business/data-fabric-versus-data-mesh-whats-the-difference"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data fabric versus data mesh: What’s the difference? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As more and more processes move online during the pandemic, businesses are adopting analytics to gain greater insight into their operations. According to 2021 survey commissioned by Starburst and Red Hat, 53% of companies believe that data access became “more critical” throughout the pandemic. The results agree with findings from ManageEngine, the IT division of Zoho, which found in a 2021 poll that more than 20% of organizations boosted their usage of business analytics compared with the global average. Thirty-five percent of respondents to the Starburst and RedHat survey said that they’re looking to analyze real-time business risks, while 36% said that they’re seeking growth and revenue generation through “more intelligent” customer engagements. But underlining the challenges in analytics, more than 37% of respondents said that they weren’t confident in their ability to access “timely, relevant data for decision-making,” whether because of disparate storage sources or problems with developing data pipelines. Two emerging concepts have been pitched as the answer to hurdles in data analytics and management. One is a “data fabric,” a data integration approach that includes an architecture — and services running on that architecture — to help organizations orchestrate data. The other is a “data mesh,” which aims to mitigate the challenges of data availability by providing a decentralized connectivity layer that allows companies to access data from different sources across locations. Both data fabrics and data meshes can serve a broad array of business, technical and organizational purposes. For example, they can save data scientists time by automating repetitive data transformation tasks while powering self-service data access tools. Data fabrics and data meshes can also integrate and augment data management software already in use for increased cost-effectiveness. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data fabric A combination of technologies including AI and machine learning, data fabric is akin to a weave that stretches to connect sources of data, types and locations with methods for accessing the data. 
Gartner describes it as analytics over “existing, discoverable and inferenced metadata assets” to support the “design, deployment and utilization” of data across local, edge, and datacenter environments. A data fabric continuously identifies, connects, cleanses, and enriches real-time data from different applications to discover relationships between data points. For example, a data fabric might monitor various data pipelines — the set of actions that ingest raw data from a source and move it to a destination — to suggest better alternatives before automating the most repeatable tasks. A data fabric might also “heal” failed data integration jobs, handle more complicated data management aspects like creating — and profiling — datasets, and offer ways to govern and secure data by limiting who can access what data and infrastructure.

To uncover the relationships between data, a data fabric builds a graph that stores interlinked descriptions of data such as objects, events, situations, and concepts. Algorithms can use this graph for different business analytics purposes, like making predictions and surfacing previously hard-to-find dataset stores. (A miniature sketch of such a metadata graph appears at the end of this article.)

As K2 View, a data fabric solutions vendor, explains: “The data fabric continually provisions … data based on a 360-view of business entities, such as a certain segment of customers, a line of company products or all retail outlets in a specific geography … Using this data, data scientists create and refine machine learning models, while data analysts use business intelligence to analyze trends, segment customers and perform root-cause analysis. The refined machine learning model is deployed into the data fabric, to be executed in real-time for an individual entity (customer, product, location, etc.) — thus ‘operationalizing’ the machine learning algorithm. The data fabric executes the machine learning model on demand, in real time, feeding it the individual entity’s complete and current data. The machine learning output is instantly returned to the requesting application and persisted in the data fabric, as part of the entity, for future analysis.”

Data fabrics often work with a range of data types including technical, business, and operational data. In the ideal scenario, they’re also compatible with many different data delivery “styles” like replication, streaming, and virtualization. Beyond this, the best data fabric solutions provide robust visualization tools that make their technical infrastructure easy to interpret, enabling companies to monitor storage costs, performance, and efficiency — plus security — regardless of where their data and applications live.

In addition to analytics, a data fabric affords organizations a number of advantages, including minimizing disruptions from switching between cloud vendors and compute resources. A data fabric also allows enterprises — and the data analysis, sales, marketing, network architecture, and security teams working at them — to adapt their infrastructure based on changing technology needs, connecting infrastructure endpoints regardless of the location of data. In a 2020 report, Forrester found that IBM’s data fabric solution could accelerate data delivery by 60 times while leading to a 459% increase in return on investment.

But the data fabric has its downsides — chief among them implementation complexity. For example, data fabrics require exposing and integrating different data and systems, which can often format data differently.
This lack of native interoperability can add friction, like the need to harmonize and deduplicate data.

Data mesh

On the other hand, there’s the data mesh, which breaks large enterprise data architectures into subsystems managed by dedicated teams. Unlike a data fabric, which relies on metadata to drive recommendations for things like data delivery, data meshes leverage the expertise of subject-matter experts who oversee “domains” within the mesh. Domains are independently deployable clusters of related microservices that communicate with users or other domains through different interfaces. (Microservices are composed of many loosely coupled and independently deployable smaller services.) Domains usually include code, workflows, a team, and a technical environment, and teams working within domains treat data as a product: clean, fresh, and complete data is delivered to any data consumer based on permissions and roles, while “data products” are created for a specific analytical or operational purpose.

To add value to a data mesh, engineers must develop a deep understanding of datasets. They become responsible for servicing data consumers and organizing around the domain — i.e., testing, deploying, monitoring, and maintaining it. Beyond this, they must ensure that different domains remain connected by a layer of interoperability and by consistent data governance, standards, and observability.

On the plus side, data meshes promote decentralization, enabling teams to focus on specific sets of problems. They can also bolster analytics by leading with business context instead of jargony, technical knowledge. But data meshes have their downsides, too. For example, domains can unwittingly duplicate data — wasting resources. The distributed structure of data meshes can — if the data mesh isn’t sufficiently infrastructure-agnostic — require more technical experts to scale than centralized approaches. And technical debt can increase as domains create their own data pipelines.

Using data meshes and fabrics

When weighing the pros and cons, it’s important to keep in mind that data mesh and data fabric are concepts — not technologies — and aren’t mutually exclusive. An organization can adopt both a data mesh and a data fabric approach across certain, or all, departments as appropriate. To James Serra, previously a big data and data warehousing solution architect at Microsoft, the difference between the two concepts lies in which users are accessing data. “A data fabric and a data mesh both provide an architecture to access data across multiple technologies and platforms, but a data fabric is technology-centric, while a data mesh focuses on organizational change,” he writes in a blog post (via Datanami). “[A] data mesh is more about people and process than architecture, while a data fabric is an architectural approach that tackles the complexity of data and metadata in a smart way that works well together.”

Eckerson Group analyst David Wells cautions against obsessing over the differences, which he argues are far less important than the components that must be in place to achieve the sought-after business objectives. “They are architectural frameworks, not architectures,” Wells writes in a recent blog post (also via Datanami). “You don’t have architecture until the frameworks are adapted and customized to your needs, your data, your processes and your terminology.”

That’s all to say that data fabrics and data meshes will remain equally relevant for the foreseeable future.
While each involves different elements, they work toward the same goal of bringing greater analytics to an organization with a sprawling — and growing — data infrastructure.
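As a miniature illustration of the metadata graph described in the data fabric section, here is a sketch of recording relationships between datasets and tracing downstream impact. The node names and relationship types are invented for the example; real fabrics manage far richer metadata.

from collections import defaultdict

edges = defaultdict(list)  # adjacency list: subject -> [(predicate, object), ...]

def relate(subject, predicate, obj):
    edges[subject].append((predicate, obj))

# A few hypothetical metadata facts about datasets and the assets built on them.
relate("orders_db.customers", "feeds", "warehouse.dim_customer")
relate("crm.contacts", "feeds", "warehouse.dim_customer")
relate("warehouse.dim_customer", "used_by", "churn_model_v3")

def downstream(node, seen=None):
    """Walk the graph to find everything affected by a change to one dataset."""
    seen = set() if seen is None else seen
    for _, obj in edges[node]:
        if obj not in seen:
            seen.add(obj)
            downstream(obj, seen)
    return seen

print(downstream("orders_db.customers"))  # {'warehouse.dim_customer', 'churn_model_v3'}

This kind of traversal is what lets a fabric answer questions like “which models break if this pipeline fails?” without anyone maintaining the answer by hand.
"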
1,278
2,022
"Companies are commercializing multimodal AI models to analyze videos and more | VentureBeat"
"https://venturebeat.com/business/companies-are-commercializing-multimodal-ai-models-to-analyze-videos-and-more"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Companies are commercializing multimodal AI models to analyze videos and more Share on Facebook Share on X Share on LinkedIn YouTube app on Sony smart TV. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Earlier this month, researchers at the Allen Institute for AI — a nonprofit founded by late Microsoft cofounder Paul Allen — released an interactive demo of a system they describe as part of a “new generation” of AI applications that can analyze, search across, and respond to questions about videos “at scale.” Called Merlot Reserve , the researchers had the system “watch” 20 million YouTube videos to learn the relationships between images, sounds, and subtitles, allowing it to, for example, answer questions such as “What meal does the person in the video want to eat?” or “Has the boy in this video swam in the ocean before?” Merlot Reserve and its predecessor, Merlot , aren’t the first “ multimodal ” AI systems of their kind. Systems that can process and relate information from audio, visuals and text have been around for years. These technologies continue to improve in their ability to understand the world more like humans. San Francisco research lab OpenAI’s DALL-E , which was released in 2021, can generate images of objects — real or imagined — from simple text descriptions like “an armchair in the shape of an avocado.” A more recent system out of Google called VATT can not only caption events in videos (e.g., “a man swimming”) but classify audio clips and recognize objects in images. However, until recently, these multimodal AI systems were strictly for the domain of research. That’s changing — increasingly, they’re becoming commercialized. “Different multimodal technologies including automatic speech recognition, image labeling and recognition, neural networks and traditional machine learning models [can help to] gain an understanding of text, voice, and images — [especially when paired] with text processing,” Aaron Sloman, the cofounder and CTO of CLIPr, told VentureBeat via email. CLIPr is among the nascent cohort of companies using multimodal AI systems for applications like analyzing video. Tech giants including Meta (formerly Facebook) and Google are represented in the group, as are startups like Twelve Labs , which claims that its systems can recognize features in videos including objects, text on screen, speech, and people. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“[My fellow cofounders and I] sought out a solution to help us easily extract important and relevant clips from videos as an alternative to skipping around at 10-15 second intervals, and when we weren’t able to find a solution, we decided to build one … Our namesake video indexing platform … ingests recorded video and helps make it searchable by transcription, topics, and subtopics,” Sloman said. “Analyzing prosody, which is the rhythm, stress, and intonation of speech, is also critical for us. We leverage it against image analysis, such as meeting presentation slides, to help evaluate the accuracy of these tonal changes or [look] for animated gestures with the participants who are on video.”

Sloman claims that CLIPr has clients in a “variety” of industries, chiefly media publishing, enterprise, and events. In the future, the startup aims to apply its technology to livestreamed video and create “role-specific” bots that can, for example, take keynote sessions from an event and automatically create a highlight reel. “It is our belief that video is the most important and underutilized form of modern communication, and our goal is to make video as accessible as written content,” Sloman continued.

Multimodal futures

Outside of multimodal systems, AI doesn’t experience the world the same way that people do. For example, a speech recognition system can only understand one type of data — speech — and doesn’t comprehend the context of that speech. By contrast, people use all of their senses (e.g., sight, sound, smell) to process and ground events in time. From images and captions of someone cooking popcorn, for example, a person can imagine what the sounds of the scene might be, like raw kernels scattering in an empty pot and the “pops” of the popcorn expanding.

“[M]any of these multimodal models are image-specific and focus on visual recognition — describing what is literally shown,” Rowan Zellers, a computer science Ph.D. candidate at the University of Washington and the lead researcher on the Merlot Reserve project, told VentureBeat via email. “We could see models answer questions about what people are doing (and why) in videos, possibly for search applications.”

Twelve Labs, for instance, claims that its system makes any video database analyzable by transforming clips into mathematical representations known as vector embeddings. (A toy sketch of embedding-based video search appears at the end of this article.) Customers have used it to build recommendation engines, content moderation systems, and media analytics dashboards, according to CEO Jae Lee. “[Twelve Labs is] working on building [a] model that can create powerful video embeddings that can be used not only for semantic search, but also for a variety of other tasks, such as caption, highlight, and summary generation,” Lee told VentureBeat via email. “Our video models are trained under language supervision. We extract diverse modules — multimodality — of information such as images, audio, transcription, motion, etc. from the video and fuse that information into a single vector representation. That representation is trained under relevant text — sentences — that is processed using natural language processing (NLP) technology.”

Beyond startups, Google revealed last year that it plans to use a multimodal AI system called multitask unified model (MUM) to enhance Google Search experiences across different languages and devices.
Among other improvements, in Google Search, MUM will power new features that take a query (e.g., “acrylic paintings”) and spotlight resources like step-by-step instructions and pick out subjects in videos (e.g., “acrylic techniques”) based on the audio, text, and visual content. Meta recently said that it’s also applying a multimodal system, called Few-Shot Learner (FSL), to determine whether the content of Facebook and Instagram messages — including text, images, and URLs — violates its community guidelines. The company claims FSL was developed against a database of billions of Facebook posts and images in more than 100 languages.

Zellers believes that, in the future, these sorts of multimodal models could be used to create products that not only analyze online video, audio, and related forms of content, but also assist users with vision or hearing challenges. “This could involve anything from answering basic questions, all the way to contextual interaction,” he added.

Multimodal setbacks

While commercialized multimodal AI is more common than it used to be, several hurdles must be overcome before these types of systems reach wide-scale deployment. It’s partly a case of making the economics work: While running an existing system isn’t typically expensive, at least compared with developing a new one, it depends on the nature of the workload and the skill level of the company’s data science team. “Initial model [development] is easily the most costly aspect because it includes perfecting the data science in parallel,” Sloman said. “For example, the process of distinguishing what is or is not a slide across thousands of verified Zoom meetings is very expensive.”

Merlot Reserve, for instance, took roughly three weeks to develop on a cluster of 512 of Google’s third-generation tensor processing units (TPUs), chips designed to accelerate certain aspects of the AI creation process. A pod of thirty-two third-generation TPUs costs $32 per hour to evaluate, according to current public pricing, bringing Merlot Reserve’s development costs to just over $16,000 (three weeks is roughly 504 hours, and 504 hours at $32 per hour comes to $16,128, assuming no volume, annual, or academic discounts).

“We currently run seven different models, some of which are large-scale open source repositories of data with hundreds of millions of objects, while others are proprietary,” Sloman explained. “Our proprietary models have been training for over a year now, and while it’s hard to say for the open source models we use, they have likely been training for much longer than that … I suspect that the next sweeping change in multimodal AI will be building more standardized linkages between different types of siloed models. We’ve had to patchwork several AI models, each of which does one type of analysis well. Eventually, with many companies building products using multimodal, we will see more open source offerings, making it easier and less expensive to train and run experiments.”

Today’s multimodal systems suffer from technical flaws, too, like picking up biases in the data (e.g., YouTube videos) from which they’re learning. For instance, because Merlot Reserve “watches” a large volume of YouTube videos, it’s biased by YouTube’s recommendations and, more broadly, by the economic pressures shaping which content people are encouraged to produce.
“The content moderation on YouTube disproportionately filters out [minority] voices … People’s roles in YouTube videos [also] tend to be highly gendered, which might bias situation understanding,” Zellers and his colleagues wrote in a study describing Merlot Reserve’s capabilities. “The automatic captions in YouTube are known to suffer from gender bias, which our model (like neural models generally) might in turn amplify. The transcriptions on YouTube are also likely poor at handling important identity markers, like pronouns.”

Biases aside, there’s nothing preventing bad actors from using multimodal systems for controversial purposes, like identifying events or activities in surveillance footage. In a paper published by Stanford’s Institute for Human-Centered Artificial Intelligence, the coauthors argue that advances in multimodal models like DALL-E will result in higher-quality, machine-generated content that’ll be easier to personalize for “misuse purposes” — like publishing misleading articles targeted to different political parties, nationalities, and religions.

Sloman says that CLIPr, for its part, takes steps to mitigate model bias and misuse through a “human-in-the-loop” approach. The company encourages customers to point out mistakes the CLIPr system makes so that it can correct them — and ideally improve model development on the backend. “Multimodal has its advantages, because if done correctly, it has less chance to produce bias compared to more siloed models,” he said. “The real danger comes from not acknowledging the complexity and imperfection of multimodal AI and using data points that lead you down a particular linear decisioning path that limits the spectrum of answers or matches.”

Lee said that Twelve Labs, too, has implemented bias mitigation strategies. The company takes a three-phase approach that includes collecting datasets from diverse sources, creating documentation for the datasets, and curating the raw video and text information. “Computer vision models are used to detect and filter visual content that may contain toxicity or sensitive content,” Lee explained. “Then, the transcription of the raw video is analyzed by leveraging block words (i.e., removing any text containing words from a list of selected words) and advanced NLP techniques to filter content that may contain political, socio-economic, or demographic bias. Block words and NLP techniques are also used to filter text labels that may contain toxicity and bias … Understanding and mitigating potential biases when leveraging multimodal models is integral to the success of Twelve Labs.”"
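The “block words” step Lee describes reduces to a simple pattern: drop any transcript segment containing a term from a curated list. A minimal sketch follows; it is not Twelve Labs' implementation, and the block list is a placeholder.

```python
# Illustrative sketch of a "block words" filter: drop any transcript segment
# containing a term from a curated list. The list here is a hypothetical
# placeholder; real pipelines layer NLP classifiers on top of this step.
import re

BLOCK_WORDS = {"slur1", "slur2", "conspiracy-term"}  # placeholder terms

def is_blocked(segment: str) -> bool:
    tokens = re.findall(r"[a-z0-9'-]+", segment.lower())
    return any(t in BLOCK_WORDS for t in tokens)

def filter_transcript(segments):
    """Keep only segments that pass the block-word check."""
    return [s for s in segments if not is_blocked(s)]

transcript = ["welcome to the show", "this slur1 example gets removed"]
print(filter_transcript(transcript))  # -> ['welcome to the show']
```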
1,279
2,022
"Atlan raises $50M to help organizations become more data-driven | VentureBeat"
"https://venturebeat.com/business/atlan-raises-50m-to-help-organizations-become-more-data-driven"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Atlan raises $50M to help organizations become more data-driven Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dataops is the set of processes and technologies that aims to promote a “culture of continuous improvement” in the area of data analytics. First proposed in 2014 by Lenny Liebmann in a piece for IBM’s Big Data & Analytics Hub, dataops has matured from a collection of practices into an entire approach to data analytics, encompassing not only data preparation and reporting but all related information technology operations. According to a January 2020 report from 451 Research, 91% of companies already had — or were in the process of defining — a formal dataops strategy, while 86% planned to increase spending or development germane to dataops in the next 12 months. Hypothetically, dataops can provide the tools that an organization needs to deal with an increasing amount of data, for example streamlining database maintenance through automation. But blockers stand in the way of fully realizing the promises of dataops. In a separate survey published in August 2021, 451 Research found that 90% of organizations don’t have an “optimized” dataops strategy and that few believe that they’ve achieved dataops maturity. Vendors like Atlan claim to simplify dataops by offering managed solutions that abstract away many of the complexities involved in deployment. For instance, Atlan — which recently raised $50 million in series B funding led by Insight Partners, Salesforce Ventures, and Sequoia Capital India at a $450 million post-money valuation — performs automatic profiling of a company’s data to identify outliers, missing values, and anomalies. It also correlates business terms with data objects to generate a common understanding of the data and how to use it, revealing how data has evolved through its lifecycle to predict how it will change going forward. Data analytics tools Atlan started out as an internal initiative at “data for good” firm SocialCops and was incubated across over 200 data projects, including India’s National Data and Analytics Platform and the United Nations SDGs National Data Platforms. By acting as a hub for assets ranging from tables and dashboards to models and code, the goal is to enable teams to create a source of truth while collaborating via integrations with data warehouses, chat apps like Slack, and business and data science tools. 
Prukalpa Sankar and Varun Banka founded Singapore-based Atlan in 2018, after launching SocialCops in 2012. Prior to teaming up with Sankar, Banka held a software engineering role at Microsoft and served on Barclays’ operations and cross-product technology team. “Today, data assets are not just tables, but code, models, business intelligence dashboards, and pipelines,” Sankar said in a previous statement. “At Atlan, we are reimagining the human experience with data — why can’t data assets be shared as easily as sharing a link on Google Docs, or if Google Analytics can tell you usage on a website, why can’t we do the same for our data?”

Atlan can be configured to send alerts to stakeholders in the event of a data problem. In-line chats and annotations ostensibly help users stay on the same page, as do Excel-type queries like filters, aggregations, and grouping of data from data lakes and warehouses. (A data lake is a centralized repository for data stored in its raw format, while a data warehouse collects data from a range of sources to provide business insights.) Atlan also offers a “bot ecosystem” with bots that, for example, read through database contents to detect personally identifiable information (PII) and recommend descriptions for data in databases on the basis of past data.

Challenges in dataops

Hurdles in this space can be challenging to surmount for many organizations, as revealed in a 2021 survey commissioned by Data.World and DataKitchen. In the survey, the vendors — which, it should be noted, have ulterior motives in giving the impression that dataops is difficult to adopt — found that only 46% of companies considered their dataops efforts to be both mature and successful. Respondents said that data governance policies and data requests with unreasonable expectations made their day-to-day jobs “very difficult.”

Sankar asserts that Atlan can help lighten the burden on engineers — a sales pitch that’s evidently resonated with customers and investors. Teams at large enterprises like Unilever, Scripps Health, and Postman use Atlan, and to date, the company has raised $69 million in venture capital. Salesforce Ventures and Sequoia Capital India participated in Atlan’s series B; the company previously landed $16.5 million in a series A financing tranche in May 2021."
1,280
2,022
"You.com launches an AI-powered writing tool powered by OpenAI | VentureBeat"
"https://venturebeat.com/ai/you-com-partners-with-openai-to-launch-an-ai-powered-writing-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages You.com launches an AI-powered writing tool powered by OpenAI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Just a few months ago, Bryan McCann and Richard Socher, the former chief scientist at Salesforce, launched You.com , a search engine that leverages AI to understand search queries, rank the results, and parse the queries into different languages (including programming languages). The platform summarizes information from across the web and is extensible with built-in search apps, like apps for Yelp and Twitter, so that users can complete tasks without having to leave the results page. In its quest to recalibrate expectations around search engines, You.com is today launching a search app built in collaboration with OpenAI that generates snippets — or even documents — of text when given a prompt. Socher calls it a “personal AI writer.” “[T]his is our first foray into what we call the app store, which doesn’t optimize for you spending as much time on there so we can sell you advertisement, but for you, actually getting stuff done,” Socher told VentureBeat in a phone interview. “[It’s perfect for] if you have writer’s block.” You.com’s new tool is powered by the same technology behind OpenAI’s GPT-3 , an AI language system that can generate human-like poetry, emails, recipes, short stories, movie scripts, and more. Socher wasn’t keen to disclose many of the technical details, but described You.com’s relationship with OpenAI as a “partnership” and the model underpinning the tool as “very similar” to GPT-3. (When contacted for comment, an OpenAI spokesperson said that YouWrite is powered by GPT-3 — specifically the recently-released InstructGPT models — through its API.) VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To use You.com’s writing assistant, called YouWrite, users type a query like “How to write an essay” into the search engine’s search bar and click the magnifying glass icon. Up pops a widget with options that let the user specify the length (e.g., paragraph), the audience or receiver (e.g., students, teachers, or marketers), tone (e.g., persuasive), and the content of the message (e.g., “three paragraphs on the Civil War”) they want YouWrite to generate. “We want to, basically, create this AI-powered writing system to help people be more productive, but also being controlled — you can decide what it should write,” Socher said. 
“We want to put people into control of the AI to make them more efficient.”

In a demo, Socher showed how YouWrite can be prompted to write paragraphs explaining “why dogs are awesome,” a blog post about a new search engine, or a boilerplate rejection letter for a job candidate (complete with a placeholder for the candidate’s name). While VentureBeat wasn’t given an opportunity to test the tool itself — Socher entered the prompts during a Zoom call — the quality of the text seemed at least on par with output from GPT-3 and other sophisticated language systems.

Of course, with any AI-powered language system, there’s a risk that the system might become susceptible to bias and toxicity. Language systems such as GPT-3 learn to “write” by analyzing huge chunks of text from websites, including from problematic sources advancing conspiracy theories, misinformation, racism, sexism, ageism, and ableism. OpenAI itself notes that biased datasets can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.”

Socher claims that YouWrite prevents problematic outputs using filters and other techniques, like human feedback, on the backend. We’ll have to see how well the system holds up once it’s made public, but during the demo, typing the prompt “why jews are bad” yielded the message “We’re sorry, but we can’t return a good completion for your request.” YouWrite also seemed to be able to detect when its output might contain sensitive content, such as references to violence, and append a warning label.

You.com will offer YouWrite for free to start, but frequent users and those who use it to generate longer outputs (think essays) will eventually have to pay for the privilege. Socher says that pricing hasn’t been decided yet, but will be “a lot cheaper” than other AI-powered writing tools on the market, like Jasper and CopyAI.

“I think it’s really important for search engines like ours to be the best place to kind of explore this kind of new technology — to move away from, ‘Here’s a list of links that’s getting cluttered and full of ads,'” Socher said. “I think that ultimately, if you want to be a writer and have a search engine that helps you do research, summarize the web, and also get something on the page, You.com is going to be your best search engine.”"
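Given the controls the widget exposes (length, audience, tone, content), a YouWrite-style flow presumably assembles a single instruction for an InstructGPT-class model. The sketch below is a hypothetical reconstruction, assuming the 2022-era openai Python client; the template, engine name, and parameters are placeholders, not You.com's actual implementation.

```python
# Hypothetical sketch of turning widget controls into a single model prompt.
# The template and engine name are illustrative; this is not You.com's code.
import openai  # assumes the 2022-era openai Python client

openai.api_key = "sk-..."  # placeholder

def build_prompt(length: str, audience: str, tone: str, content: str) -> str:
    return (f"Write one {length} for {audience} in a {tone} tone.\n"
            f"Topic: {content}\n")

prompt = build_prompt("paragraph", "students", "persuasive",
                      "why dogs are awesome")

# InstructGPT-style completion call via the API described in the article.
resp = openai.Completion.create(engine="text-davinci-002",
                                prompt=prompt,
                                max_tokens=256,
                                temperature=0.7)
print(resp.choices[0].text.strip())
```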
1,281
2,022
"Synthetaic secures venture funding to expand its synthetic data platform | VentureBeat"
"https://venturebeat.com/ai/synthetaic-secures-venture-funding-to-expand-its-synthetic-data-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Synthetaic secures venture funding to expand its synthetic data platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data scientists are increasingly using synthetic data to develop their AI systems. Indeed, a 2019 survey of the field calls the use of synthetic data “one of the most promising general techniques on the rise in [AI], especially computer vision.” Gartner predicts that 60% of the data used for the de­vel­op­ment of AI and an­a­lyt­ics projects will be syn­thet­i­cally gen­er­ated by 2024. With the global AI training dataset market expected to be worth $4.8 billion by 2027, according to Grand View Research, it’s perhaps unsurprising that new startups are emerging to meet the demand. In January, Mostly AI , a company that uses AI to create synthetic data for enterprises, raised $25 million in venture capital. Synthetic data company Synthesis AI emerged from stealth in April. And Facebook acquired synthetic data startup AI.Reverie last October. Another company, Synthetaic , goes a step beyond most synthetic data startups in claiming that its platform can eliminate the need for data labeling. Synthetaic — which today announced that it raised $13 million in series A financing — says its technology has already been deployed for rare tumor diagnosis, tracking endangered species, and insights from geospatial data, Data labeling In the enterprise, the most common type of AI system relies on supervised learning during the development process. Supervised learning involves recruiting people to annotate data — whether text, images, audio, or otherwise — so that an AI model can learn to associate certain annotations (i.e., labels) with characteristics of the data. For example, a supervised learning system that’s fed a large library of pictures of cats with annotations for each breed will eventually “learn” to distinguish between bobtails and shorthairs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Synthetaic, which was founded in 2019 by Corey Jaskolski, claims to eliminate the need for labeling through the use of synthetic data. Synthetic data — which comes with auto-generated labels — can be used in place of real-world data in cases where the real-world data is scarce or difficult to obtain, Synthetaic asserts, enabling organizations to create AI systems quickly and cheaply. 
“Jaskolski started Synthetaic after working with National Geographic in conservation efforts to preserve the Sumatran Rhino. His work there led to the realization that generative AI was the answer to the lack of data the AI models used in impactful applications such as conservation, security, and medical imaging, where good data is hard to come by,” a company spokesperson told VentureBeat via email. “We have a technology that can democratize AI and apply AI to projects or applications that have previously been inaccessible.”

Synthetaic espouses the benefits of synthetic data in healthcare, noting that the data it creates isn’t constrained by regulations like HIPAA, the U.S. law that governs the release of sensitive patient information. In partnership with Michigan Medicine, the University of Michigan-owned academic medical center, Synthetaic claims to have helped boost the accuracy of a brain tumor-detecting computer vision model from 68% to 96%. For another client, National Geographic, Synthetaic says that it helped create an AI-powered platform that identifies and detects poachers and other “dangerous anomalies,” like illegal harvesting and signs of environmental impact (particularly missing trees and coastline changes), from satellite images. Synthetaic also worked with the U.S. Air Force to “demonstrate how [the company’s] technology can rapidly speed up AI-powered object detection in geospatial data,” according to Jaskolski.

“[Our platform] is different than most other AI tools in that it does not require a traditional trained model to be effective,” the spokesperson continued. “Using [it,] a user can find things such as all the full parking lots in Milwaukee, specific vehicles in full motion video, or photos in which someone is holding a pistol. In each of these examples, [the platform] can provide initial AI results in minutes without labeled data from a single example image. This allows for easy AI experimentation … and allows enterprise customers to develop AI models without needing to send their company’s data out for human labeling.”

Accuracy questions

Synthetaic and its rivals aren’t the only ones to have heralded synthetic data as the solution to some of the major problems plaguing AI. For example, Nvidia researchers have explored a way to use synthetic data created in virtual environments to train robots to pick up objects like cans of soup, a mustard bottle, and a box of Cheez-Its in the real world. Institutions including the U.S. Department of Veterans Affairs are using synthetic medical histories for thousands of fake patients in order to study disease patterns and treatment paths.

“All of the AI being developed today is data-hungry, and feeding AI with high-quality labeled data is a vast challenge regardless of the environment. Our flagship technology … automates the analysis of large, unstructured, multidimensional datasets,” the Synthetaic spokesperson added. “Synthetaic introduces new technology that solves AI’s data problem by building models in minutes instead of months, vastly reducing the time to insight … [Our platform] eliminates the need for time-intensive human labeling or expensive labeled data troves, which is, perhaps, the single-largest barrier to unlocking practical AI.”

In a survey of executives applying AI, 89% said synthetic data will be essential for their organizations to stay competitive. But there’s a downside: Some evidence suggests that synthetic data can perpetuate biases in both the data and the AI systems developed using them.
In a January 2020 study, researchers at Arizona State University showed that an AI system trained on a dataset of images of professors could create highly realistic synthetic faces, but synthetic faces that were mostly male and white. The system amplified biases contained in the original dataset, which captured mostly male and white professors.

“There are several different approaches to [synthetic data generation], but in some ways, the data ethics risks are greater [with approaches like Synthetaic’s] … because they rely heavily on additional text attributes beyond the class name itself,” Bernard Koch, a Ph.D. student at the University of California, Los Angeles studying the intersection of science, culture, and machine learning, told VentureBeat via email. “After training, the idea is that you can learn to predict a truck without seeing one before, [for example] because you know that it is not a car or a bus but has attributes in common with cars and buses. From an ethics perspective, any socially insensitive or under-representation annotation issues that can occur with class labels can now occur with descriptive attributes as well.”

Jaskolski claims that the company has taken steps to mitigate bias in the systems that it creates. “While it is true that synthetic data can introduce bias in AI models, at Synthetaic, we use synthetic data in a novel manner in that we don’t actually create synthetic data for the purpose of training. Rather, we use synthetic data models to power our … product which can rapidly build an AI model on real data without millions of human labeled samples,” he said. “This actually reduces another major source of bias, which is human label error, while also removing the large time and budgetary downside of human labeling.”"
1,282
2,022
"Stanford report shows that ethics challenges continue to dog AI field as funding climbs | VentureBeat"
"https://venturebeat.com/ai/stanford-report-shows-that-ethics-challenge-continue-to-dog-ai-field-as-funding-climbs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stanford report shows that ethics challenges continue to dog AI field as funding climbs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Private investors are pouring more money into AI startups than ever before. At the same time, AI systems are becoming more affordable to train — at least when it comes to certain tasks, like object classification. Troubling, though, language models in the same vein as OpenAI’s GPT-3 are exhibiting greater bias and generating more toxic text than the simpler models that preceded them. Those are the top-level findings of the 2022 AI Index report out of Stanford’s Institute for Human-Centered AI (HAI), an academic research center focused on the human impact of AI technologies. Now in its fifth year, the AI Index highlights major developments in the AI industry from 2020 to 2021, paying special attention to R&D, technical performance, technical AI ethics, the economy and education, and policy and governance. “This year’s report shows that AI systems are starting to be deployed widely into the economy, but at the same time they are being deployed, the ethical issues associated with AI are becoming magnified,” the coauthors wrote. “This is bound up with the broad globalization and industrialization of AI — a larger range of countries are developing, deploying, and regulating AI systems than ever before, and the combined outcome of these activities is the creation of a broader set of AI systems available for people to use, and reductions in their prices.” Funding trends This year’s edition of the AI Index shows that private investment in AI soared while investment concentration intensified. Private investment in AI in 2021 totaled around $93.5 billion, more than double the total private investment in 2020, while the number of newly-funded AI companies continued to drop — from 1051 companies in 2019 and 762 companies in 2020 to 746 companies in 2021. In 2020, there were four funding rounds worth $500 million or more versus 15 In 2021. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Among companies that disclosed the amount of funding, the number of AI funding rounds that ranged from $100 million to $500 million more than doubled in 2021 compared to 2020, while funding rounds that were between $50 million and $100 million more than doubled as well,” the Stanford coauthors note. 
“In 2020, there were only four funding rounds worth $500 million or more; in 2021, that number grew to 15. Companies attracted significantly higher investment in 2021, as the average private investment deal size in 2021 was 81.1% higher than in 2020.”

The 2022 AI Index’s findings align with a recent report from consulting firm Forrester, which pegged the size of the AI market as lower than many analysts previously estimated. According to Forrester, as AI is increasingly considered essential to enterprise software and large tech companies add AI to their product portfolios, startups will lose market share — and could end up being the target of mergers and acquisitions. For example, last year, PayPal snapped up AI-powered payment startup Paidy for $2.7 billion, while Microsoft acquired voice recognition company Nuance for nearly $20 billion.

As shown by the 2022 AI Index, companies specializing in data management, processing, and cloud technologies received the greatest amount of investment in 2021, followed by medical and fintech startups. Broken down geographically, in 2021, the U.S. led the world in both total private investment in AI and the number of newly funded AI companies — three and two times higher, respectively, than China, the next country on the ranking.

Decreasing training costs

This year’s AI Index pushes back against the notion that AI systems remain expensive to train — at least in certain domains. The coauthors found that the cost to train a basic image classification model has decreased by 63.6%, while training times for AI systems have improved by 96.3%.

The report isn’t the first to assert that costs for certain AI development tasks are coming down, thanks in part to improvements in hardware and architectural design approaches. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months. And Alphabet-backed research lab DeepMind’s recent language model — RETRO — can beat others 25 times its size.

But many state-of-the-art AI systems remain too costly for all but the best-funded labs and companies to train, much less deploy into production. DeepMind is estimated to have spent $35 million training a model to learn chess, shogi, and the Chinese board game Go. Meanwhile, a 2020 study from startup AI21 Labs pegged the cost of training a text-generating system roughly 116 times smaller than GPT-3 at between $80,000 and $1.6 million.

The AI Index’s coauthors acknowledge the advantages wielded by large private sector actors, including access to enormous, terabyte-scale datasets for AI training. (AI models learn to perform tasks by processing large numbers of examples.) In fact, they say, top results across technical benchmarks increasingly rely on extra, difficult-to-obtain training data to set new state-of-the-art results. As of 2021, nine out of 10 state-of-the-art AI systems in the 2022 AI Index were trained with extra data. One recently published study estimated that only a dozen universities and corporations are responsible for creating the datasets used more than 50% of the time in machine learning.

“The use of extra training data has taken over object detection, much as it has with other domains of computer vision,” the coauthors write.
“This … implicitly favors private sector actors with access to vast datasets.”

The rise of regulation — and ethics

In a brighter shift, the 2022 AI Index reported evidence that AI ethics — the study of the fairness of and bias in AI systems, among other aspects — is entering the mainstream. Researchers with industry affiliations contributed 71% more publications year-over-year at fairness-focused conferences and workshops recently, while research on AI fairness and transparency has increased fivefold in publications on related topics over the past four years, the coauthors say.

While the trend is encouraging, it’s worth noting that companies like Google — which infamously dissolved an AI ethics advisory board in 2019 just one week after forming it — have attempted to limit other internal research that might portray their technologies in a bad light. And reports have described many AI ethics teams at large corporations, like Meta (formerly Facebook), as largely toothless and ineffective. IBM, for example — which heavily promotes its “fairness” tools designed to check for “unwanted bias” in AI — once secretly collaborated with the New York Police Department to train facial recognition and racial classification models for video surveillance systems.

As Leiden University assistant professor Rodrigo Ochigame, who studies the intersection of science, technology, and society, explained in a 2019 piece for The Intercept, corporations generally support two kinds of regulatory possibilities for a technology: (1) no legal regulation at all, leaving ethical principles as merely voluntary; or (2) moderate regulation encouraging — or requiring — technical adjustments that don’t conflict significantly with profits. Most oppose the third option: restrictive legal regulation curbing or banning deployment of the technology. “The corporate-sponsored discourse of ‘ethical AI’ enables precisely this position,” Ochigame writes. “Some big firms may even prefer … mild legal regulation over a complete lack thereof, since larger firms can more easily invest in specialized teams to develop systems that comply with regulatory requirements.”

Indeed, efforts to address ethical concerns associated with using AI in industry remain limited. According to a McKinsey survey, while 29% and 41% of respondents’ companies recognize “equity and fairness” and “explainability,” respectively, as risks when adopting AI, only 19% and 27% are taking steps to mitigate those risks.

This bodes poorly for efforts to address the growing bias problems with AI systems. While labs like OpenAI claim to have made progress in reducing bias, the 2022 AI Index shows that there’s much to do: a state-of-the-art language-generating model in 2021 was 29% more likely to output toxic text than a smaller, simpler model considered state-of-the-art in 2018, suggesting that the increase in toxicity corresponds with the increase in general capabilities. “Larger and more complex and capable AI systems can generally do better on a broad range of tasks while also displaying a greater potential for ethical concerns,” the Stanford coauthors wrote.
“[R]esearchers and practitioners are reckoning with [the] real-world harms, [including] commercial facial recognition systems that discriminate on race, resume screening systems that discriminate on gender, and AI-powered clinical health tools that are biased along socioeconomic and racial lines … As startups and established companies race to make language models broadly available through platforms and APIs, it becomes critical to understand how the shortcomings of these models will affect safe deployment.”"
1,283
2,022
"Some segments of the public support facial recognition use by police — but not all | VentureBeat"
"https://venturebeat.com/ai/some-segments-of-the-public-support-facial-recognition-use-by-police-but-not-all"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Some segments of the public support facial recognition use by police — but not all Share on Facebook Share on X Share on LinkedIn Illustration of man's face in facial recognition technology device Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A plurality of Americans support the widespread use of facial recognition by law enforcement to monitor crowds and track down people who might have committed a crime. That’s one of the surprising findings from a new Pew survey on U.S. adults’ views of AI, which touched on topics including the use of AI by social media platforms to find misinformation and the development of AI-powered autonomous vehicles. Of those responding to the survey, nearly half say that they’re equally concerned and excited about AI , with those in support believing that AI’s ascent will transform industries and detractors expressing concerns about privacy and job loss. Facial recognition, which has long been a flash point for controversy, reentered the public debate after the killing of George Floyd in May 2020. A 2021 report from the Government Accountability Office revealed that six federal agencies applied facial recognition to images of the ensuing protests, including the U.S. Park Police, which used a photo from Twitter to charge someone with felony civil disorder and two counts of assault on a police officer. Despite an increasing number of bans on facial recognition at the local and state level s and a pledge from tech giants including Google, Amazon, Microsoft, and IBM not to sell access to the technology, governments — including the U.S. — continue to adopt facial recognition under the guise of maintaining law and order. In Detroit, which began piloting facial recognition software in 2017, police in 2020 used the technology to conduct upwards of 100 searches of suspects. Vendors like AnyVision and Gorilla Technologies are alleged suppliers for Taiwanese prisons and Israeli army checkpoints. And startup Clearview, which has scraped over 10 billion mugshots from the web to develop its facial recognition systems, claims to have 3,100 law enforcement and government customers, including the FBI and U.S. Customs and Border Protection. Public opinion on facial recognition Pew’s report, which surveyed 10,260 U.S. adults in early November 2021, found that roughly one-third — 34% — think the widespread use of facial recognition by officers would make policing more fair, despite evidence to the contrary. 
On the other hand, a majority — 57% — say that if facial recognition deployment by police were to become more common, crime rates would stay about the same. Moreover, 66% say that police would definitely or probably use facial recognition to monitor Black and Hispanic neighborhoods much more often than other neighborhoods (although Black and Hispanic adults are more likely than white adults to say this).

Recent history is filled with examples of facial recognition abuse, including software developed by Huawei that can reportedly recognize the face of a member of the Uyghur minority group. (The Chinese government continues to target Uyghurs, who it accuses of subversion, imprisoning as many as two million in internment camps throughout the country.) At least three people in the U.S. — all Black men — have been wrongfully arrested based on poor facial recognition matches. Overseas, the facial recognition technology used by the U.K.’s Metropolitan Police in 2019 was found to be 81% inaccurate, mistakenly targeting four out of five innocent people as wanted suspects.

Interestingly, only 53% of U.S. adults responding to the Pew survey say false arrests would “probably or definitely” be made if use of facial recognition technology were widespread among police. Black respondents were nearly three times as likely to predict false arrests compared with white respondents, while Hispanic respondents were close to twice as likely.

In the U.S., facial recognition use by police is likely to inflict particular harm on Black Americans, writes Alex Najibi, a Ph.D. candidate studying bioengineering at Harvard’s School of Engineering and Applied Sciences. Black Americans have a higher chance of being arrested and incarcerated for minor crimes than white Americans, he notes, and consequently, Black people are overrepresented in mugshot data — which facial recognition employs to make predictions.

“The Black presence in such systems creates a feed-forward loop whereby racist policing strategies lead to disproportionate arrests of Black people, who are then subject to future surveillance,” Najibi wrote in a 2020 blog post. “For example, the [New York Police Department (NYPD)] maintains a database of 42,000 ‘gang affiliates’ — 99% Black and Latinx — with no requirements to prove suspected gang affiliation. In fact, certain police departments use gang member identification as a productivity measure, incentivizing false reports. For participants, inclusion in these monitoring databases can lead to harsher sentencing and higher bails — or denial of bail altogether.”

Difference in views

Among the Pew survey respondents who believe facial recognition would be a force for good in police’s hands, the majority said that it would result in “more missing persons being found by police,” better crowd control, and “crimes being solved more quickly and efficiently.” There’s some reporting that supports this — Indian police in 2020 used a facial recognition app to reunite missing children with their families, according to Reuters — but even organizations embracing facial recognition for these purposes advocate regulatory guardrails or frameworks curtailing the technology’s use.
The survey explores this, with Pew finding that “substantial shares” of the respondents would find police use of facial recognition more acceptable if “certain conditions” were met — like training officers in how the technology can make errors in identifying people. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies’ biases can be exacerbated by misuse. A report from Georgetown Law’s Center on Privacy and Technology detailed how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The NYPD and others reportedly edit photos with blur effects and 3D modelers to make them more conducive to algorithmic face searches.

As Wired’s Khari Johnson reports, some police departments have adopted policies governing their respective uses of facial recognition. In Detroit and New York, two analysts must review the results of a facial recognition scan before the results are turned over to detectives, and facial recognition alone can’t be used to justify an arrest. But best practices aren’t always followed. Clare Garvie, a former senior associate at Georgetown’s Center on Privacy and Technology, told Johnson that some law enforcement analysts in Nebraska and Florida were allowed to specify a lower facial recognition accuracy rate to find matches in police databases.

When it comes to applications of facial recognition that don’t involve law enforcement, like enhancing credit card payment security and apartment building tracking, over half of the people Pew surveyed said they favor the use of the technology (outnumbering those who approve of its use by law enforcement). Conversely, over half oppose social media sites like Facebook automatically identifying people in photos and companies tracking the attendance of employees.

The public’s stances on facial recognition are likely to evolve further as the technology becomes commonplace — absent regulations. The Internal Revenue Service this year adopted — then backed away from — a plan to force taxpayers to use facial recognition software before they could gain access to certain online services. The Government Accountability Office reports that 10 agencies including the Departments of Agriculture, Commerce, Defense, and Homeland Security plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems. And by the Georgetown study’s estimates, half of American adults’ faces are already in law enforcement’s facial recognition databases. According to documents obtained by The Washington Post, Clearview AI is telling investors that it’s on track to have 100 billion facial photos in its database within a year — enough to ensure “almost everyone in the world will be identifiable.”

“Notable portions of people’s lives are now being tracked and monitored by police, government agencies, corporations and advertisers … Facial recognition technology adds an extra dimension to this issue because surveillance cameras of all kinds can be used to pick up details about what people do in public places and sometimes in stores,” the coauthors of the Pew study write.
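The "accuracy rate" Garvie mentions is, mechanically, a similarity threshold on face embeddings. The generic sketch below (random vectors, no real faces, not any vendor's code) shows why lowering that threshold surfaces more candidate matches, and with them more opportunities for false positives.

```python
# Toy threshold-based face matching: a "match" is any gallery vector whose
# cosine similarity to the probe exceeds a threshold. With random 128-dim
# vectors, lowering the threshold sharply inflates the candidate list.
import numpy as np

rng = np.random.default_rng(1)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for threshold in (0.30, 0.20, 0.10):
    hits = [name for name, v in gallery.items() if cos(probe, v) >= threshold]
    print(f"threshold {threshold:.2f}: {len(hits)} candidate matches")
```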
"
1,284
2,022
"Smarter Sorting, which leverages AI to make physical product sustainability suggestions to organizations, raises $25M | VentureBeat"
"https://venturebeat.com/ai/smarter-sorting-which-leverages-ai-to-make-physical-product-sustainability-suggestions-to-organizations-raises-25m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Smarter Sorting, which leverages AI to make physical product sustainability suggestions to organizations, raises $25M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the acceleration of digital transformation, consumer product brands are considering — or actively adopting, as the case may be — product intelligence software. Product intelligence entails gathering, acting on, and analyzing data related to how customers use and perceive a company’s products. Increasingly, these types of analyses are showing that customers care about the eco-friendliness of the products they’re buying. According to a March 2021 survey , GreenPrint, a company that develops corporate sustainability solutions, nearly two-thirds of Americans are willing to pay more for sustainable products. Organizations are responding — as of 2020, 88% of publicly traded companies had environmental, social, and governance (ESG) initiatives in place — but they’re facing challenges, including supply chain challenges exacerbated by the pandemic. “The retail and consumer goods industries will see further sustainability regulations and pressure from consumers for sustainable products. Regulations and corporate commitments will increase to drive down the amount of packaging that ends up in landfill. And further up the supply chain, traceability is a consumer trend that is not going away,” Jacqueline Claudia, the CEO of Smarter Sorting , told VentureBeat via email. “Brands will have to increase the information shared on their labeling and progressive brands with better formulations or ingredients will use it for marketing purposes. Consumers will demand more data and information on the products they buy and they will want this information at their fingertips — online and in-store.” Smarter Sorting aims to solve these challenges by suggesting to retailers and brands ways to better make, market, and move consumer products. The startup claims to have over 456 billion data points on millions of regulated consumer products, which it uses to map product data to regulatory and handling data — ostensibly helping organizations reduce their environmental impact. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Improving sustainability Smarter Sorting was founded in 2015 by Claudia, Charlie Vallely, and Chris Ripley as a “waste repurposing” company. 
In recent years, it pivoted focus from the disposal of chemical waste to being an ESG and environmental health and safety (EHS) “data partner” for retailers and consumer brands. “Today, we are really strong on ingredients, chemicals, toxicity, and more in consumer packaged goods. We are building features, functionality, and machine learning algorithms to drive industry sustainability in packaging, electronics, and more — to add even more value for our retail partners and consumer brands’ ESG goals,” Claudia said. “Our big focus with all our roadmap work is using machine learning and AI to solve complex computations; that drives sustainability in the retail and consumer goods industries.”

Smarter Sorting’s platform leverages an automated classification engine that produces over 7,116 regulatory classifications for over two million consumer good products, drawing on 3,500 data point facets per product and 150 million chemical compounds. The machine learning models that power the engine train on data from large chemistry datasets, public retail datasets, safety data sheets, and public text from city, state, and federal regulations, Claudia says, as well as first-principles chemistry and physics simulations.

“For each product, we generate thousands of possible classifications and observations including the classifications needed for retailers to manage regulatory compliance (transportation, waste, safety, and more). And each day, we bring in over eight million new data points from our retail and supplier deployments,” Claudia added. “[T]he engine … generates up to 40,000 permutations per product and simulates the ranges of possible toxicity, corrosivity, flammability and regulatory outcomes. Every day, we search for publicly available data to enrich our product and chemistry data [and we use] back-of-the-store sensors [to process] millions of physically-returned products in retail stores.”

Data for good

According to Claudia, Smarter Sorting — which competes with UL’s WERCSmart — has added hundreds of brands over the past two years post-pivot. The startup now counts among its customers over 1,700 consumer good companies and 24 big box national retailers including Wegmans, Costco, and the Albertsons Companies, which include Safeway, Tom Thumb, and VONS.

“Our technology — and most importantly, our data — puts the information and insights they need about the sustainability and safety of consumer products they sell at their fingertips,” Claudia continued. “They need it for publicly-declared ESG goals and reports; they need it for internal reporting; they need it across the supply chain to remain compliant, avoid fines and reduce their environmental impact. Sustainability and good business sense are not mutually exclusive — the entire C-Suite will care about the impact that can be derived from our data. The data can have an impact across the entire supply chain from product formulation and design, to logistics, to purchasing and merchandising, to ESG, EHS, and customer resource management.”

Smarter Sorting today announced that it raised $25 million in a funding round led by G2 Venture Partners, bringing the 77-employee startup’s total capital raised to $60 million.
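To make the "permutations" idea in Claudia's description concrete, here is a generic Monte Carlo sketch of sampling uncertain product properties and counting how often hazard rules fire. The attributes, distributions, and thresholds are invented for illustration; they are not Smarter Sorting's data or rules.

```python
# Generic Monte Carlo sketch: sample many "permutations" of uncertain product
# properties and report how often hypothetical hazard rules trigger.
# All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_product(n=40_000):
    """Sample n permutations of uncertain product properties."""
    flash_point_c = rng.normal(65, 10, n)  # e.g., solvent-blend uncertainty
    ph = rng.normal(8.5, 1.5, n)
    return flash_point_c, ph

flash_point_c, ph = simulate_product()
flags = {
    # Hypothetical rule: flammable if flash point falls below 60 C
    "flammable_risk": float((flash_point_c < 60).mean()),
    # Hypothetical rule: corrosive if pH falls outside 2 to 11.5
    "corrosive_risk": float(((ph < 2) | (ph > 11.5)).mean()),
}
print(flags)  # fraction of simulated permutations triggering each rule
```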
"
1285
2022
"Researchers turn to crowdsourcing for better YouTube recommendations | VentureBeat"
"https://venturebeat.com/ai/researchers-turn-to-crowdsourcing-for-better-youtube-recommendations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers turn to crowdsourcing for better YouTube recommendations Share on Facebook Share on X Share on LinkedIn YouTube app on Sony smart TV. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2019, an analysis by ex-Google computer scientist, Guillaume Chaslot, found that YouTube’s recommendation algorithm overwhelmingly recommended Russia Today’s video about the Mueller report, the U.S. government report documenting Russian efforts to interfere in the 2016 presidential election. The video, which contained false claims about the report’s findings, had only 50,000 views, yet YouTube’s algorithm surfaced it over hundreds of other, more popular videos uploaded by independent media outlets. Google, which owns YouTube, responded to this and other alleged algorithmic flaws with policy tweaks and a purge of terms of service-violating accounts. But more recent research from Mozilla offers evidence that YouTube continues to put objectionable, questionably related content — including misinformation, violent and graphic content, and hate speech — in front of its users. In one instance documented by Mozilla, a person who watched a video about software rights was then recommended a video about gun rights. Exasperated by the lack of progress and inspired to shine a light on the issue of algorithmic transparency, a team at the Swiss Federal Institute of Technology Lausanne (EPFL) launched the Tournesol Foundation, a nonprofit created to develop a voting-based system that recruits viewers to highlight the best content on YouTube. With Tournesol, any YouTube user can create an account and recommend content, which Tournesol then aggregates and converts into per-video scores representing the ‘collaborative judgement” of the community. According to Le Nguyên Hoang, a scientist at the École Polytechnique Fédérale of Lausan EPFL and one of the cofounders of Tournesol, the goal is to provide a safer, more “benevolent” alternative to YouTube’s recommendations that’s powered by the sway of the crowd. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Tournesol is the result of five years of discussions with my colleagues at EPFL on the safety and ethics of large-scale algorithms,” Hoang told VentureBeat via email. “As a science YouTuber myself, I have quickly been extremely concerned about recommendation algorithms and troll campaigns … With a few friends, we spent a year developing the platform in our spare time. 
In April 2021, we created the nonprofit Tournesol Association to support the platform."

Crowdsourcing recommendations

The science is mixed on the helpfulness of crowdsourcing when applied to recommending content. Reddit — where the visibility of posts and comments is decided by the number of accumulated "upvotes" — is an infamous example. Research has shown that even a single extra upvote on a comment can lead to a snowball effect, where more people in the community upvote that comment. The reasons for this snowball effect vary. Sean Taylor, a social scientist at Facebook who coauthored a seminal study on the subject, speculates that people rely on upvotes to indicate that something is worthwhile — especially when they aren't sure what to think. Alternatively, he says, highly rated posts could be more likely to draw attention and, therefore, more votes from the community.

Crowdsourcing also runs the risk of introducing other biases. Users passionate about a particular cause might be more inclined to sign up so that they can ensure their votes are represented, for example. Voices and communities who aren't made aware of ways to participate, or who don't have the means (e.g., access to a computer running Chrome), could be unintentionally excluded. Regardless of who is participating, users are often driven by emotion, the extent to which an opinion matches their own, and whether they're familiar with a source of information — irrespective of the source's veracity.

Bias can arise along several dimensions, as studies of recommendation algorithms over the years have demonstrated. In a 2020 research paper, the coauthors showed that a popular dataset of movie ratings could cause an algorithm to provide less-relevant suggestions for women than for men because the dataset contained more ratings from men. Other work has found that Facebook's ad-recommending algorithm discriminates by gender and race.

Hoang acknowledges the issues, but argues crowdsourcing is a sensible replacement for YouTube's financially motivated systems, which prioritize "engagement" — i.e., ad views — at the expense of most other metrics. A 2019 report from Bloomberg alleges that YouTube executives ignored warnings from staff, letting toxic, algorithm-boosted videos run rampant to increase viewership.

"The governance and oversight over today's large-scale algorithms is frustratingly poor," he said. "[A]lgorithms are clearly being weaponized by organized disinformation campaigns, [but] even when they are not weaponized, by providing addictive content to maximize retention, recommendation algorithms are biased towards sensationalist, divisive, and angering content … [These] algorithms are neither audited nor auditable by external entities."

Several legislative remedies to the problem of harmful recommendation algorithms have been proposed, including a provision in the European Union's draft AI Act that would place constraints on AI systems that "manipulate" human behavior, opinions, or decisions "to their detriment." In the U.S., a recently floated bill — the Social Media NUDGE Act — would direct agencies to identify "content neutral" methods to slow down the algorithm-driven spread of harmful content and misinformation on social media. But Hoang says that these efforts don't go far enough — and aren't guaranteed to succeed. The AI Act's language around recommender systems has been softened in subsequent drafts, and the Social Media NUDGE Act — along with other U.S. bills to regulate algorithms — remains stalled in Congress.
"What we want above all is for such a global algorithmic governance to be designed with the best-possible solutions to make it effective, scalable, secure, robust, transparent, interpretable, fair, and trustworthy," he added. "It is crucial to note that, whether we use social media or not, we all have a stake in the information that large-scale recommender systems distribute billions of times per day to other users."

Challenges and vision

Tournesol, like YouTube, uses algorithms to power its recommendations, which are informed by votes from users on the platform and the scores associated with each video. (Recall that votes on Tournesol are aggregated and converted into scores for videos.) To cast a vote, users compare any two videos that they've watched and then choose, on a sliding scale, the one that they'd recommend over the other. A "vouching" system requires that users be certified based on the "trustworthiness" of their email domains to protect against fake accounts, and users can't see how others have voted when comparing videos.

When comparing videos, Tournesol users can also denote whether one video meets criteria like "reliable and not misleading," "clear and pedagogical," and "important and actionable" versus the other video. The results funnel into a public dataset that's used to train the algorithms to provide recommendations in English, French, and German. EPFL has contributed to Tournesol's open source code, as have tech startups PolyConseil and Kleis, which participated in the project through their Tech for Good programs. Features in the works include voter profile pages, the ability to compare videos directly on YouTube through the Chrome extension, and visualizations that show votes from "subpopulations of interest" like journalists and subject-matter experts.

Hoang is forthright about the hurdles that must be overcome before Tournesol graduates from a research effort. For example, contributors to the project are exploring how to assign weights to votes so that experts have — depending on the video — more sway over recommendations than non-experts. They're also investigating ways that under-represented communities on Tournesol can have their influence boosted to that of dominant groups, like white men, to combat algorithmic bias.

"Today's main limitation, by far, is the lack of data. We desperately need more people engaging in content reviewing, to help us more robustly assign scores to as many important YouTube videos as possible. Once Tournesol is shown to be performant and robust at identifying top content, convincing large information companies to leverage our scores … will be a huge challenge," Hoang said. "[But] imagine if, instead of amplifying hate speech and calls to violence as they currently do, recommendation algorithms massively and repeatedly shared the numerous calls for peace that courageous peace activists are making … [S]uch algorithms could become a fantastic ally for world peace."
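Turning pairwise comparisons like Tournesol's into per-video scores is a classic preference-aggregation problem, and one standard technique is the Bradley-Terry model, fit with a simple iterative update. The sketch below is a generic illustration of that technique under the simplifying assumption of binary win/loss comparisons; Tournesol's actual algorithm (sliding-scale judgments, vouching, multiple criteria) is more involved.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iters=100):
    """Fit Bradley-Terry scores from (winner, loser) pairs using the
    classic minorization-maximization update. Returns video -> score."""
    wins = defaultdict(int)          # total wins per video
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    videos = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        videos.update((winner, loser))

    scores = {v: 1.0 for v in videos}
    for _ in range(n_iters):
        new_scores = {}
        for v in videos:
            denom = 0.0
            for pair, count in pair_counts.items():
                if v in pair:
                    (other,) = pair - {v}
                    denom += count / (scores[v] + scores[other])
            new_scores[v] = wins[v] / denom if denom else scores[v]
        # Normalize so the average score stays at 1.0.
        total = sum(new_scores.values())
        scores = {v: s * len(videos) / total for v, s in new_scores.items()}
    return scores

# Toy example: A beats B twice, B beats C once, C beats A once.
print(bradley_terry([("A", "B"), ("A", "B"), ("B", "C"), ("C", "A")]))
```

The appeal of this family of models is that scores come with a probabilistic interpretation: the chance that video i is preferred to video j is score_i / (score_i + score_j), which makes the aggregate judgment auditable in a way an opaque engagement metric is not.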
"
1286
2022
"Nvidia takes the wraps off Hopper, its latest GPU architecture | VentureBeat"
"https://venturebeat.com/ai/nvidia-takes-the-wraps-off-hopper-its-latest-gpu-architecture"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia takes the wraps off Hopper, its latest GPU architecture Share on Facebook Share on X Share on LinkedIn HANGZHOU, CHINA - OCTOBER 20, 2021 - Photo taken on Oct. 20, 2021 shows the booth of Nvidia at the 2021 Hangzhou Computing Conference in Hangzhou, east China's Zhejiang Province. Nvidia is abandoning its plan to buy Arm from SoftBank Group due to regulatory objections, ending what would have been the biggest deal in the chip industry. (Photo credit should read Costfoto/Future Publishing via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Follow along with VentureBeat’s ongoing coverage from Nvidia’s GTC 2022 event. >> After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture , a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia’s Ampere architecture, with launched roughly two years ago. The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that’s designed to speed up specific categories of AI models. Another architectural highlight includes Nvidia’s MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs. “Data centers are becoming AI factories — processing and refining mountains of data to produce intelligence,” Nvidia founder and CEO Jensen Huang said in a press release. “ Nvidia H100 is the engine of the world’s AI infrastructure that enterprises use to accelerate their AI-driven businesses.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Compute powerhouse The H100 is the first Nvidia GPU to feature dynamic programming instructions (DPX), “instructions” in this context referring to segments of code containing steps that need to be executed. Developed in the 1950s, dynamic programming is an approach to solving problems using two key techniques: recursion and memoization. Recursion in dynamic programming involves breaking a problem down into sub-problems, ideally saving time and computational effort. In memorization, the answers to these sub-problems are stored so that the sub-problems don’t need to be recomputed when they’re needed later on in the main problem. 
Dynamic programming is used to find optimal routes for moving machines (e.g., robots), streamline operations on sets of databases, align unique DNA sequences, and more. These algorithms typically run on CPUs or specially designed chips called field-programmable gate arrays (FPGAs). But Nvidia claims that the DPX instructions on the H100 can accelerate dynamic programming by up to seven times compared with Ampere-based GPUs.

Transformer Engine

Beyond DPX, Nvidia is spotlighting the H100's Transformer Engine, which combines data formats and algorithms to speed up the hardware's performance with Transformers. Dating back to 2017, the Transformer has become the architecture of choice for natural language models (i.e., AI models that process text), thanks in part to its aptitude for summarizing documents and translating between languages. Transformers have been widely deployed in the real world. OpenAI's language-generating GPT-3 and DeepMind's protein shape-predicting AlphaFold are built atop the Transformer, and research has shown that the Transformer can be trained to play games like chess and even generate images.

The H100's Transformer Engine leverages 16-bit floating-point precision and a newly added 8-bit floating-point data format. AI training relies on floating-point numbers, which have fractional components (e.g., 3.14). Most AI floating-point math is done using 16-bit half precision (FP16), 32-bit single precision (FP32), and 64-bit double precision (FP64). Cleverly, the Transformer Engine uses Nvidia's fourth-generation tensor cores to apply mixed FP8 and FP16 formats, automatically choosing between FP8 and FP16 calculations based on "custom, [hand]-tuned" heuristics, according to Nvidia. The challenge in training AI models is to maintain accuracy while capitalizing on the performance offered by smaller, faster formats like FP8. Typically, lower precisions, like FP8, translate to less accurate models. But Nvidia maintains that the H100 can "intelligently" handle scaling for each model and offer up to triple the floating point operations per second compared with prior-generation TF32, FP64, FP16, and INT8 precisions.

Next-generation servers

The H100 — which is among the first GPUs to support the PCIe Gen5 format — features nearly 5 terabytes per second of external connectivity and 3TB per second of internal memory bandwidth. A new fourth-generation version of Nvidia's NVLink technology, in tandem with the company's NVLink Switch and HDR Quantum InfiniBand, enables customers to connect up to 256 H100 GPUs at nine times higher bandwidth, Nvidia says. The H100 also features confidential computing capabilities intended to protect AI models and customer data while they're being processed. Confidential computing isolates data in an encrypted enclave during processing. The contents of the enclave — including the data being processed — are accessible only to authorized programming code and are invisible to anyone else.

The H100, bound for datacenters, will be available first in Nvidia's fourth-generation DGX system — the DGX H100. The DGX H100 boasts two Nvidia BlueField-3 DPUs, eight ConnectX Quantum-2 InfiniBand networking adapters, and eight H100 GPUs, delivering 400 gigabytes per second of throughput and 32 petaflops of AI performance at FP8 precision. Every GPU is connected by a fourth-generation NVLink for 900GB per second of connectivity, and an external NVLink Switch can network up to 32 DGX H100 nodes in one of Nvidia's DGX SuperPod supercomputers.
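The mixed-precision tradeoff described above, faster low-precision math where it is safe and higher precision where accuracy demands it, is something today's frameworks already expose. Below is a minimal PyTorch sketch using FP16 autocast with gradient scaling on a CUDA GPU; FP8, as used by the Transformer Engine, requires specialized libraries and hardware and is not shown here.

```python
import torch

# Requires a CUDA-capable GPU.
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales gradients so small FP16 values don't underflow

inputs = torch.randn(64, 512, device="cuda")
targets = torch.randn(64, 512, device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    # Ops inside autocast run in FP16 where it is numerically safe,
    # and stay in FP32 where precision matters (e.g., reductions).
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The same accuracy-versus-speed balancing act is what the Transformer Engine automates in hardware, one precision notch lower.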
"AI has fundamentally changed what software can do and how it is produced. Companies revolutionizing their industries with AI realize the importance of their AI infrastructure," Huang continued. "Our new DGX H100 systems will power enterprise AI factories to refine data into our most valuable resource — intelligence."

For experimentation purposes, Nvidia intends to build an ultra-powerful DGX SuperPod dubbed Eos, which will feature 576 DGX H100 systems with 4,608 H100 GPUs. (A single DGX SuperPod with H100 GPUs delivers around an exaflop of FP8 AI performance.) Eos will provide 18.4 exaflops of AI computing performance — four times faster processing than the Fugaku supercomputer in Japan, currently the world's speediest — and 275 petaflops of performance, the company says.

The H100 will be available in Q3 2022. DGX H100 systems, DGX Pods, and DGX SuperPods will also be available from Nvidia's global partners starting in Q3. "
1287
2022
"Nvidia debuts new hardware targeting the edge, including Isaac Nova Orin | VentureBeat"
"https://venturebeat.com/ai/nvidia-debuts-new-hardware-targeting-the-edge-including-isaac-nova-orin"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia debuts new hardware targeting the edge, including Isaac Nova Orin Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Follow along with VentureBeat’s ongoing coverage from Nvidia’s GTC 2022 event. >> During its March 2022 GPU Technology Conference (GTC) this week, Nvidia unveiled Isaac Nova Orin, a computing and sensor architecture powered by the company’s Jetson AGX Orin hardware. Nvidia says that Isaac Nova Orin comes with “all the compute and sensor hardware needed to design, build, and test autonomy” in autonomous mobile robots (AMRs) — types of robots that can understand and move through their environment without being overseen directly by an operator. Warehousing and logistics organizations among others apply AMRs to tasks that’d be harmful to — or not possible for — teams of human workers. Using AI, compute, and a sophisticated set of sensors, AMRs can carry heavy loads while dynamically assessing and responding to their surroundings — assisting with tasks including locating, picking, and moving inventory. An IDC survey found that over 70% of order fulfillment operations and warehouses that deploy AMRs have experienced double-digit improvement in KPIs like cycle time, productivity, and inventory efficiency. (Cycle time refers to the amount of time a team spends actually working on producing an item until the item is ready for shipment.) That’s perhaps why the global AMR market was worth roughly $1.67 million in 2020, according to Fortune Business Insights, and projected to growth to $8.7 billion by 2028. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Isaac Nova Orin and Jetson AGX Orin Isaac Nova Orin, which will be available later this year, pairs two Jetson AGX Orin units to deliver up to 550 TOPS of power. In hardware, TOPS — which stands for “trillions of operations per second” — indicates how many computing operations, or basic math problems, a chip can handle over a short period of time. As my former colleague Jeremy Horowitz notes, TOPS , while often touted in marketing materials, aren’t necessarily the best way to measure a chip’s capabilities. But Nvidia is spotlighting Isaac Nova Orin’s other features, like its ability to process data in real time from up to six cameras, three lidars, and eight ultrasonic sensors from an AMR. 
Over-the-air software management support is "preintegrated" in Isaac Nova Orin, and the hardware is "calibrated and tested to work out of the box," Nvidia says. Isaac Nova Orin includes the tools necessary to simulate the robot, as well as software modules designed to accelerate perception and navigation tasks and map different robots' environments.

Alongside Isaac Nova Orin, Nvidia announced that the Jetson AGX Orin developer kit, which the company first detailed in November, is now available for customers to purchase. Readers will recall that Jetson AGX Orin delivers 275 TOPS of compute power and features Nvidia's Ampere architecture GPU, Arm Cortex-A78AE CPUs, AI and vision accelerators, and high-speed chip-to-chip interfaces. Microsoft, John Deere, Amazon, Hyundai, and JD.com are among the early adopters of Jetson AGX Orin. Developer kits start at $1,999, and production modules will be available in Q4 2022 for $399.

"As AI transforms manufacturing, healthcare, retail, transportation, smart cities and other essential sectors of the economy, demand for processing continues to surge," Deepu Talla, VP of embedded and edge computing at Nvidia, said in a press release. "A million developers and more than 6,000 companies have already turned to Jetson. The availability of Jetson AGX Orin will supercharge the efforts of the entire industry as it builds the next generation of robotics and edge AI products."

Edge opportunity

With Isaac Nova Orin and Jetson AGX Orin, Nvidia is competing for a slice of the rapidly growing edge computing segment. Generally speaking, "edge computing" encompasses computing and storage resources at the location where data is produced, including on — or near — AMRs. STL Partners recently estimated that the edge computing addressable market will grow from $10 billion in size in 2020 to $543 billion in 2030.

Edge computing offers several advantages compared with cloud-based technologies, but it isn't without challenges. Keeping data locally means more locations to protect, with increased physical access allowing for different kinds of cyberattacks. (Some experts argue the decentralized nature of edge computing leads to increased security.) And compute is limited at the edge, which restricts the number of tasks that can be performed. Even so, Gartner has predicted that more than 50% of large organizations will deploy at least one edge computing application to support the internet of things or immersive experiences by the end of 2021, up from less than 5% in 2019. The number of edge computing use cases could jump even further in the upcoming years, with the firm expecting that more than half of large enterprises will have at least six edge computing use cases deployed by the end of 2023. "
1288
2022
"MIT researchers use simulation to train a robot to run at high speeds | VentureBeat"
"https://venturebeat.com/ai/mit-researchers-use-simulation-to-train-a-robot-to-run-at-high-speeds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT researchers use simulation to train a robot to run at high speeds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Four-legged robots are nothing novel — Boston Dynamics’ Spot has been making the rounds for some time, as have countless alternative open source designs. But with theirs, researchers at MIT claim to have broken the record for the fastest robot run recorded. Working out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the team says that they developed a system that allows the MIT-designed Mini Cheetah to learn to run by trial and error in simulation. While the speedy Mini Cheetah has limited direct applications in the enterprise, the researchers believe that their technique could be used to improve the capabilities of other robotics systems — including those used in factories to assemble products before they’re shipped to customers. It’s timely work as the pandemic accelerates the adoption of autonomous robots in industry. According to an Automation World survey , 44.9% of the assembly and manufacturing facilities that currently use robots consider the robots to be an integral part of their operations. Training in simulation Today’s cutting-edge robots are “taught” to perform tasks through reinforcement learning, a type of machine learning technique that enables robots to learn by trial and error using feedback from their own actions and experiences. When a robot performs a “right” action — i.e., an action that’ll lead it toward a desired goal, like stowing an object on a shelf — it receives a “reward.” When it makes a mistake, the robot either doesn’t receive a reward or is “punished” by losing a previous reward. Over time, the robot discovers ways to maximize its reward and perform actions that achieve the sought-after goal. Robots can be trained via reinforcement learning in the real world, but real-world training is time-consuming and places a strain on the robotics hardware, which is delicate. That’s why researchers rely on simulated, video game-like environments designed to mimic the real world, which allow them to run thousands to millions of trials during which digital recreations of real-world robots learn sets of actions. To take one example, Alphabet-backed Waymo, which is developing autonomous vehicles, says it has driven billions of miles in simulation using digital avatars of its cars. 
Recently, researchers have pushed the boundaries of simulation, attempting to perform most — if not all — robotics training in digital environments. Last year, researchers at the University of California, Berkeley trained a bipedal robot called Cassie to walk in simulation and then translated those skills to a real-world replica robot. Also last year, Meta (formerly Facebook) data scientists trained a four-legged robot in simulation on different surfaces so that an identical, real-world robot could recover when it stumbled.

The MIT researchers, too, trained their system entirely in simulation. A digital twin of the Mini Cheetah accumulated 100 days' worth of experience on digital, "diverse" terrain in just three hours of actual time — learning from mistakes until arriving at the right actions. When the researchers deployed their system onto a real-world Mini Cheetah, they claim that it was able to identify and execute all of the relevant skills it learned in real time.

"Achieving fast running requires pushing the hardware to its limits, for example by operating near the maximum torque output of motors. In such conditions, the robot dynamics are hard to analytically model," MIT CSAIL Ph.D. student Gabriel Margolis and postdoctoral fellow Ge Yang told MIT News in an interview. "Humans run fast on grass and slow down on ice — we adapt. Giving robots a similar capability to adapt requires quick identification of terrain changes and quickly adapting to prevent the robot from falling over."

Other applications

Researchers have accomplished impressive feats with robots in MIT's Cheetah family before, including jogs at speeds up to 14 miles per hour, backflips, and jumps over objects. Impressively, the Cheetah 3 could balance on three legs, using the fourth as a makeshift arm. But the researchers say their approach eliminates the need to program how a robot — Mini Cheetah or otherwise — should act in every possible situation. That stands in opposition to the traditional paradigm in robotics, where humans tell a robot both what task to accomplish and how to do it.

"[A] key contribution to our work is that we push the envelope of what is possible with learned locomotion policies," Yang told VentureBeat. "Getting something autonomously from point A to point B is still largely an unsolved problem. Wheels are terrible for stairs and grass [while] legs actually work really well. It is a bit difficult to imagine the future, but I think if we build these pieces, things will be more clear down the road."

Margolis and Yang claim they're already applying the reinforcement learning technique to other robotics systems, including hands that can pick up and manipulate many types of objects. But they caution that it has limitations, including an inability to navigate obstacles that require sight (their system can't analyze visual data).

"Legged robots are increasingly being adopted for industrial inspection and delivery tasks, and improving their mobility makes them a more effective choice for these applications," Margolis told VentureBeat via email. "This system has only been trained for the task of controlling the robot's body velocity in the ground plane … Our system also does not yet use vision, so it cannot perform tasks that involve planning, like climbing stairs or avoiding pitfalls.
Finally, users of legged robots may wish to optimize for objectives beyond speed, such as energy efficiency or minimization of wear on the robot. In this work, our analysis was focused on speed alone." "
1289
2022
"Miso Robotics partners with Chipotle for tortilla chip-making robots | VentureBeat"
"https://venturebeat.com/ai/miso-robotics-partners-with-chipotle-for-tortilla-chip-making-robots"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Miso Robotics partners with Chipotle for tortilla chip-making robots Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. After years of frying and flipping burgers at White Castle locations, Miso Robotics’ autonomous kitchen assistant has a new gig: making tortilla chips for Chipotle. Miso today announced that the fast casual chain is testing “Chippy,” a robot customized to follow the steps for frying Chipotle chips, at the brand’s R&D facility in Irvine, California. Later this year, the companies say, Chippy will be deployed at a Chipotle restaurant in Southern California. While making tortilla chips might not be the pinnacle of achievement in robotics, the partnership between Miso and Chipotle reflects the restaurant industry’s eagerness to embrace automation technologies. A historic labor shortage is a major factor. According to a February National Restaurant Association report, many restaurant operators expect finding workers to remain difficult until at least 2023 — although the industry’s workforce grew by an estimated 400,000 jobs. Making tortilla chips There’s nothing particularly complicated about the recipe for Chipotle’s tortilla chips, which the brand shared on TikTok in 2020. Here’s the steps (via Today.com ): Cut up corn tortillas into triangles. Fry the tortilla pieces in hot oil for 50 seconds. Toss the chips in a mixing bowl with a liberal squeeze of lime and sprinkle of salt. Toss again! Finish with more lime and more salt. Portion out the chips — and dig in! But programming a robot to follow these steps exactly proved to be somewhat of a challenge. Miso says that it worked with Chipotle’s culinary team in tailoring the technology, training Chippy to replicate the recipe using corn masa flour, water, and sunflower oil to cook the chips, season with salt, and finish with lime juice. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Unlike traditional robots that are designed to produce a perfect result every time, Chippy is explicitly programmed to recreate the culinary integrity of Chipotle’s chips with subtle variations in flavor in each chip. Another unique feature of Chippy is the ability to season chips with fresh lime juice and kosher salt,” Chipotle chief technology officer Curt Garner told VentureBeat via email. Chipotle says that it’s testing Chippy — and crew and guest reactions to it and its chips — before deciding on a national rollout strategy. 
"Chipotle's culinary team is testing Chippy to determine if any modifications are required before Chippy is integrated into a restaurant," Garner added. "The team is also working on the sizing to ensure Chippy can fit into some existing Chipotle kitchens … Chippy will be integrated into a Southern California restaurant later this year to test, listen and learn from employee and guest feedback before a larger rollout is determined."

Expanding market

The Chipotle collaboration is a win for Miso, which recently inked a deal with White Castle to bring its Flippy 2 frying robot to 100 of the fast food chain's locations. Miso claims that Flippy 2 — which integrates with a restaurant's point-of-sale system and delivery apps — can handle about 60 frying baskets per hour and cook things like chicken tenders, tater tots, cheese sticks, corn dogs, popcorn shrimp, onion rings, and more. Cameras, sensors, motors, and computer vision algorithms enable Miso's robots to pick up ingredients from a cold storage hopper, alter portion sizes, and learn to prepare new items like Impossible Foods' vegetarian Impossible Burger. Miso's robots are designed to be installed under a standard kitchen hood or on the floor and take on tasks like scraping grills, draining excess fry oil, and skimming oil between frying batches, making them plug-and-play for many fast food restaurants.

In addition to White Castle, Miso has deployed robots in CaliBurger locations and sports arenas, including Dodger Stadium and Chase Field in Phoenix. The startup also has a partnership with Inspire Brands, the holding company behind Arby's, Dunkin' Donuts, and Baskin-Robbins, to test Flippy Wings, Miso's chicken wing-frying product. Sports bar franchise Buffalo Wild Wings has also announced that it's testing Flippy Wings in one of its R&D kitchens.

Recently, Miso began investigating other areas of kitchen automation, including a software-as-a-service platform aimed at improving restaurant operations. A deal with beverage dispenser manufacturer Lancer Worldwide saw Miso pledge to create a run of automated vending machines aimed at quick service restaurants. In December 2021, Miso — which is valued at $500 million — closed a $35 million series D funding round that brought its total capital raised to $60 million. (The company opened a series E round in February 2022 with the goal of raising $40 million.) Miso has previously said that it plans to take its kitchen robots to markets outside of the U.S. in the future, including the U.K., Canada, and Australia.

Replacing workers

Miso has long claimed that its robots can boost productivity by working with humans as opposed to replacing them. That might be true while human workers — discouraged by low pay, job insecurity, and added pandemic-related health risks — are in short supply. But in the future, robots like Miso's threaten to reduce workforces that, in many cases, are struggling to make ends meet. A 2020 report from Aaron Allen & Associates predicts that 80% of restaurant jobs could eventually be taken over by robots. The coauthors expect that machines will replace as many as 57% of fast food and counter workers and 51% of servers as restaurants change their layouts to accommodate more takeout customers, a pandemic-era trend. In a possible harbinger, Chipotle opened a "digital kitchen" two years ago in Highland Falls, New York that lacks a dining room and is only open for pickup and delivery.

As of April 2021, the median pay for the roughly five million fast food workers in the U.S.
was $11.63 per hour, according to U.S. Bureau of Labor Statistics data. In Denver, Colorado, where Chipotle is headquartered, MIT's Living Wage calculator estimates the cost of living for a single person to be around $17.40 per hour. (The minimum wage in Denver increased to $15.87 on January 1, 2022.)

"Chipotle is always seeking innovative solutions to improve the employee experience and remove friction in restaurants," Garner said. "We make our chips fresh in house all throughout the day, and the process is a monotonous, labor-intensive task that doesn't excite the crew as much as other functions. Integrating AI to the chip station removes teams from this function, allowing them to focus on the culinary duties that drove them to join Chipotle."

Staffing shortfalls have pushed wages higher during the pandemic. But restaurant executives are eager to cut these expenditures through, for example, automation, particularly as those labor costs lead to rises in menu prices. Starbucks alone plans to spend roughly $1 billion in fiscal 2021 and 2022 on improving benefits for its baristas, a price tag likely higher than what an army of robots would cost. (One of Miso's robots costs around $20,000 to $30,000 outright, or between $1,000 and $2,000 per month on a plan that includes updates and maintenance.)

"We're definitely going to see more use of robots in soft processes such as food production. The challenge comes in working with natural products, which may not be uniformly sized or shaped. This is being addressed with improved AI, vision systems, and innovative gripper design," Gartner research VP Bill Ray told VentureBeat via email. "Tortilla chips are, in food terms, relatively simple to prepare, so this is very much the first step on a long road which will, eventually, see all manner of foods prepared automatically. We're still a long way from replacing the chef in the commercial kitchen, but the days of the kitchen porter may be numbered."

One of Miso's competitors, Momentum Machines, acknowledges its role in the coming displacement, urging those who lose their jobs in the fast food industry to become engineers and work to design — or service — more automated systems. But it isn't that easy. Upward mobility eludes most in the industry — 90% of the fast food workforce is made up of front-line workers like line cooks and cashiers, and less than 1% owns a franchise, the National Employment Law Project reports. Some politicians have floated the idea of a "robot tax." Others advocate for guarantees like universal basic income. Ready or not, though, the robots are coming for restaurant kitchens. "
1290
2022
"Language models that can search the web hold promise -- but also raise concerns | VentureBeat"
"https://venturebeat.com/ai/language-models-that-can-search-the-web-hold-promise-but-also-raise-concerns"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Language models that can search the web hold promise — but also raise concerns Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Language models — AI systems that can be prompted to write essays and emails, answer questions, and more — remain flawed in many ways. Because they “learn” to write from examples on the web, including problematic social media posts, they’re prone to generating misinformation, conspiracy theories, and racist, sexist, or otherwise toxic language. Another major limitation of many of today’s language models is that they’re “stuck in time,” in a sense. Because they’re trained once on a large collection of text from the web, their knowledge of the world — which they gain from that collection — can quickly become outdated depending on when they were deployed. (In AI, “training” refers to teaching a model to properly interpret data and learn from it to perform a task, in this case generating text.) For example, You.com’s writing assistance tool — powered by OpenAI’s GPT-3 language model, which was trained in summer 2020 — responds to the question “Who’s the president of the U.S.?” with “The current President of the United States is Donald Trump.” The solution, some researchers propose, is giving language models access to web search engines like Google, Bing, and DuckDuckGo. The idea is that these models could simply search for the latest information about a given topic (e.g., the war in Ukraine ) instead of relying on old, factually wrong data to come up with their text. In a paper published early this month, researchers at DeepMind, the AI lab backed by Google parent company Alphabet, describe a language model that answers questions by using Google Search to find a top list of relevant, recent webpages. After condensing down the first 20 webpages into six-sentence paragraphs, the model selects the 50 paragraphs most likely to contain high-quality information; generates four “candidate” answers for each of the 50 paragraphs (for a total of 200 answers); and determines the “best” answer using an algorithm. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While the process might sound convoluted, the researchers claim that it vastly improves the factual accuracy of the model’s answers — by as much as 30% — for questions and can be answered using information found in a single paragraph. 
The accuracy improvements were lower for multi-hop questions, which require models to gather information from different parts of a webpage. But the coauthors note that their method can be applied to virtually any AI language model without much modification.

"Using a commercial engine as our retrieval system allows us to have access to up-to-date information about the world. This is particularly beneficial when the world has evolved and our stale language models have now outdated knowledge … Improvements were not just confined to the largest models; we saw increases in performance across the board of model sizes," the researchers wrote, referring to the parameters in the models that they tested. In the AI field, models with a high number of parameters — the parts of the model learned from historical training data — are considered "large," while "small" models have fewer parameters. The mainstream view is that larger models perform better than smaller models — a view that's been challenged by recent work from labs including DeepMind. Could it be that, instead, all language models need is access to a wider range of information?

There's some outside evidence to support this. For example, researchers at Meta (formerly Facebook) developed a chatbot, BlenderBot 2.0, that improved on its predecessor by querying the internet for up-to-date information about things like movies and TV shows. Meanwhile, Google's LaMDA, which was designed to hold conversations with people, "fact-checks" itself by querying the web for sources. Even OpenAI has explored the idea of models that can search and navigate the web — the lab's "WebGPT" system used Bing to find answers to questions.

New risks

But while web searching opens up a host of possibilities for AI language systems, it also poses new risks. The "live" web is less curated than the static datasets historically used to train language models and, by implication, less filtered. Most labs developing language models take pains to identify potentially problematic content in the training data to minimize potential future issues. For example, in creating an open source text dataset containing hundreds of gigabytes of webpages, research group EleutherAI claims to have performed "extensive bias analysis" and made "tough editorial decisions" to exclude data they felt were "unacceptably negatively biased" toward certain groups or views.

The live web can be filtered to a degree, of course. And as the DeepMind researchers note, search engines like Google and Bing use their own "safety" mechanisms to reduce the chances that unreliable content rises to the top of results. But these results can be gamed — and aren't necessarily representative of the totality of the web. As a recent piece in The New Yorker notes, Google's algorithm prioritizes websites that use modern web technologies like encryption, mobile support, and schema markup. Many websites with otherwise quality content get lost in the shuffle as a result.

This gives search engines a lot of power over the data that might inform web-connected language models' answers. Google has been found to prioritize its own services in Search by, for example, answering a travel query with data from Google Places instead of a richer, more social source like TripAdvisor. At the same time, the algorithmic approach to search opens the door to bad actors. In 2020, Pinterest leveraged a quirk of Google's image search algorithm to surface more of its content in Google Image searches, according to The New Yorker.
Labs could instead have their language models use off-the-beaten-path search engines like Marginalia, which crawls the internet for less-frequented, usually text-based websites. But that wouldn't solve another big problem with web-connected language models: depending on how the model is trained, it might be incentivized to cherry-pick data from sources that it expects users will find convincing — even if those sources aren't objectively the strongest.

The OpenAI researchers ran into this while evaluating WebGPT, which they said led the model to sometimes quote from "highly unreliable" sources. WebGPT, they found, incorporated biases from the model on which its architecture was based (GPT-3), and this influenced the way in which it chose to search for — and synthesize — information on the web.

"Search and synthesis both depend on the ability to include and exclude material depending on some measure of its value, and by incorporating GPT-3's biases when making these decisions, WebGPT can be expected to perpetuate them further," the OpenAI researchers wrote in a study. "[WebGPT's] answers also appear more authoritative, partly because of the use of citations. In combination with the well-documented problem of 'automation bias,' this could lead to overreliance on WebGPT's answers." Automation bias, for context, is the propensity for people to trust data from automated decision-making systems. Too much transparency about a machine learning model and people become overwhelmed; too little, and people make incorrect assumptions about the model — instilling them with a false sense of confidence.

Solutions to the limitations of language models that search the web remain largely unexplored. But as the desire for more capable, more knowledgeable AI systems grows, the problems will become more urgent. "
1291
2022
"How AI could help enterprises to reduce data storage costs | VentureBeat"
"https://venturebeat.com/ai/how-ai-could-help-enterprises-to-reduce-data-storage-costs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI could help enterprises to reduce data storage costs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The amount of data managed by the world’s enterprises is growing. According to one source, the total amount of data created, captured, copied and consumed globally was about 64.2 zettabytes in 2020 — equal to a trillion gigabytes. Unsurprisingly, companies report that the cost of storing their data is also climbing. In a 2018 Enterprise Storage Forum survey , business leaders said that the high costs of operation, a lack of storage capacity, and aging equipment were among their top concerns. The rising costs of storage have pushed many companies to adopt cloud options, which offer the advantage of low entry costs. But with costs inching up as more businesses move online — a Pepperdata report found that more than one-third of companies have cloud service budget overruns of up to 40% — IT leaders are exploring alternatives. On the cloud side, a nascent crop of startups are applying AI to the problem of managing cloud spend. Vendors like Densify and Cast AI claim that their AI-powered platforms can recommend the best storage configuration for a companies’ workloads by taking into various requirements. Other technology providers have turned their attention to on-premises systems, creating algorithms that they claim can reduce storage costs either with hardware suggestions or novel file compression techniques. “Data storage today suffers from several challenges: Storage deployments are often made up of a variety of different storage media such as memory, flash, disk drives and tapes. In addition, organizations run multiple storage arrays based on access protocols … or based on criticality of the workloads,” Gartner research VP Arun Chandrasekaran told VentureBeat via email. “The usage of AI has the potential to streamline data lifecycle management based on criticality, performance, security and costs requirements of data.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cloud optimization During the pandemic, the pressure to digitize operations led a record number of companies to move to the cloud. According to a recent survey from O’Reilly, 90% of organizations were using cloud computing of some kind in 2021, while Flexera’s State of the Cloud Report shows that 35% of companies spent more than $12 million on cloud operations in 2021. 
The adoption trend gave rise to startups developing AI-powered platforms designed to adjust usage to rein in expenditures. One is Densify, which analyzes workloads across private data centers, Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM’s cloud offerings to determine how much CPU, RAM and storage they need — then suggests ways to save. Densify can use already-available log data to begin optimizing right away. After that, the platform will continue to review cloud providers’ pricing changes, applications’ needs and new products to find where customers can reduce expenses further.

“Usually within two to four weeks, you’ve got 50% of the savings,” CEO Gerry Smith told VentureBeat in a previous interview. “Depending on where the savings are, within another two to four months, [you’ll get] 100% of the savings.”

Cast AI, a Densify competitor, similarly leverages AI to optimize cloud spend. Supporting major cloud service providers, the platform connects to existing clouds and generates a report to identify cost-saving opportunities.

“We have other models that use global datasets for market characteristic predictions,” CEO Yuri Frayman told VentureBeat in October 2021. “For example, we train a global model to predict instance preemptions by machine type, region, availability zone and seasonality. This model is shared autonomously across all customers, and all the data is used to retrain the model continuously.”

On-premises and compression

For companies that haven’t made the move to the cloud — or that have their data spread across cloud and on-premises environments — there are solutions like Accenture’s Storage Optimization Analytics, which combines search and AI to understand enterprise content and automate data classification. Accenture claims that it reduces storage costs by detecting duplicate or near-duplicate content, helping customers move or archive the right data at the right time. Storage Optimization Analytics also automates migration to lower-cost storage and tracks storage savings, computing the overall return on investment (ROI).

IT provider Rahi Systems offers a similar service called Pure1 Meta, which uses AI models to predict capacity and performance and provide advice on workload deployment and optimization. Pure1 Meta can run simulations for specific workloads, generating answers to capacity planning questions while ostensibly helping to increase resource utilization.

AI is also increasingly playing a role in file compression. For videos, music, and images, AI-based compression can provide the same — or close to the same — level of perceptual quality with fewer bits. Another benefit is that it’s easier to upgrade, standardize, and deploy new AI codecs versus standard codecs, since the models can be trained in a relatively short amount of time and — importantly — don’t require special-purpose hardware. Websites like Compression.ai and VanceAI leverage models to compress images without compromising on quality or resolution. Qualcomm and Google have experimented with AI-driven codecs for both audio and video. And Alphabet-owned DeepMind has created an AI system to compress videos on YouTube, reducing the average amount of data that YouTube needs to stream to users by 4% without a noticeable loss in video quality.

Looking to the future

Gartner’s Chandrasekaran notes that the adoption of AI technologies for data management, which fall under the category of “AIops,” remains quite low.
(AIops platforms aim to enhance IT operations by leveraging AI to analyze data from an organization’s tools and devices.) But he adds that the pandemic has been a catalyst for adoption as organizations strive to automate faster to respond to “rapidly changing” circumstances.

Recent surveys agree. According to Emergn, 87% of companies expect their investments in automation skills to increase over the next 12 to 26 months. And in a 2020 K2 poll, 92% of business leaders said that they consider process automation vital to success in the modern workplace.

“There is a lot of ‘AI washing’ in the industry today. Hence, vetting vendor claims and deploying a solution that delivers ROI can be frustrating. AIops requires a lot of integration,” Chandrasekaran said. “For teams that aren’t skilled in architecting and maintaining complex data environments, a robust AIops deployment may become a pipe dream. There also needs to be a cultural change, where organizations are willing to make data-driven decisions.”

Looking ahead, Chandrasekaran expects to see more “versatile” AI-powered storage management solutions beyond the products already on the market. These solutions could enable greater intelligent automation and remediation workflows through the use of AI, he believes.

“AI techniques can help optimize placement of data on the right storage tiers — balancing performance and costs. In addition, AI can help with better availability of data infrastructure, enabling businesses to access data faster and create a reliable infrastructure,” Chandrasekaran added."
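The learned codecs described above are far more sophisticated, but their core idea (train a model to squeeze data through a narrow bottleneck and reconstruct it) fits in a few lines of PyTorch. This is a conceptual toy with made-up dimensions and random stand-in data, not a production codec like DeepMind’s.

```python
import torch
from torch import nn

# Toy autoencoder: compress 784-dim inputs (e.g., 28x28 images) to 32 floats.
class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(64, 784)  # stand-in for real image data

for _ in range(100):  # minimize reconstruction error
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(batch), batch)
    loss.backward()
    opt.step()

code = model.encoder(batch)  # 32 floats per sample: the "compressed" form
print(code.shape, float(loss))
```

Real learned codecs add entropy coding and perceptual losses on top of this bottleneck idea, which is where the bit savings actually come from.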
1,292
2,022
"Fairwords claims to prevent workplace harassment with AI, but the reality is more complicated | VentureBeat"
"https://venturebeat.com/ai/fairwords-claims-to-prevent-workplace-harassment-with-ai-but-the-reality-is-more-complicated"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Fairwords claims to prevent workplace harassment with AI, but the reality is more complicated Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Harassment in the workplace affects employees of all backgrounds, genders, sexualities, and ethnicities — but disproportionately those in under-represented groups. A 2018 survey by Stop Street Harassment showed that 81% of women have been harassed in their lifetime. And according to a UCLA School of Law study, half of LGBTQ workers have faced job discrimination at some point in their careers. Work-from-home arrangements during the pandemic haven’t slowed or reversed the trend — in fact, they’ve accelerated it. A poll from TalentLMS and The Purple Campaign, the results of which were published in 2021, found that over one in four respondents experienced unwelcome sexual behavior online since the start of the health crises, either via videoconferencing, email, or text messages. Beyond the emotional distress for everyone involved, there’s a dollars-and-cents motivation for companies to prevent and address abuse. Sexual harassment causes lasting damage to the employees who experience it, leading to higher turnover, lower productivity, and increased absenteeism and sick leave. Deloitte estimates that workplace sexual harassment costs an average of $2.6 billion in lost productivity, or $1,053 per victim. Borrowing a page from social media’s playbook, a relatively new crop of startups is developing systems that leverage a combination of filters, algorithms, and other tools to flag potentially problematic messages before they reach employees. For example, Fairwords , which today raised $5.25 million in a series A round led by Fintop Capital, uses a “spell-check” like interface to notify users of harmful language as they type and give information about how the language might be interpreted. But some experts say that these platforms run the risk of normalizing employee surveillance, complicating their mission of reducing abuse. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Flagging abusive language Fairwords, like its rivals CommSafe AI and Aware, uses AI to scan the messages that employees send through collaboration platforms including Slack, Microsoft Teams, Facebook Messenger, Skype, Zoom, and email. 
The company’s software sits on desktops and detects language that’s “non-compliant,” such as words and phrases that might fall under the categories of cyberbullying or sexual harassment. When Fairwords spots language that possibly runs afoul of workplace policies, it shows real-world examples of how the language has “been detrimental to businesses and careers.” On the backend, managers can add policies to Fairwords for specific types of violations. The company claims that its software can also look for signs of bribery and corruption, collusion, and discrimination. Fairwords attempts to quantify violations across the workforce via an analytics dashboard, showing communications trends over time and the percentage of employees who revise their messages before sending.

“Fairwords helps C-suite compliance and HR leaders understand the health of their communications culture by providing anonymous dashboards that show information including number of words analyzed, number of words flagged, percent of flags repeated within 30 days, percent of flags repeated more than once within 30 days, average flags per user, top flags per term, and top flagged applications,” a Fairwords spokesperson told VentureBeat via email. “The dashboards can help organizational leaders understand what kind of language is being used in their organizations, how to categorize that language (harassment, discrimination, bullying, etc.), and the applications that people are using to communicate.”

Fairwords pitches the solution as a way to see which words or phrases are being flagged most often and which communications channels are the worst offenders — as well as to detect unauthorized chat apps. But while the company claims that it anonymizes the data it collects, some employees might feel wary of software that analyzes their keystrokes.

“Fairwords is built to train employees first, providing them with immediate feedback and training as they type to help them write inclusive, compliant, and fair communications. Our mission is to elevate company cultures by improving the nature and quality of written communications, and our product is built so that employees can interact with it directly,” the spokesperson said.

According to a 2021 ExpressVPN survey, 59% of employees are wary of employer surveillance, while 43% believe that workplace monitoring software — which is largely legal in the U.S. — is a violation of trust. Beyond privacy concerns, part of the reason might be that companies often fail to alert staff that they’re using surveillance software. Digital.com found in a recent study that 14% of businesses haven’t notified staff about their monitoring activities.

The Fairwords spokesperson asserts that the platform gathers data in an anonymous way, showing information about an organization overall rather than about individuals. “Surveillance products, which we are not, monitor individuals and are not built to provide feedback or support to employees, and they are often not shared with employees until they’ve ‘caught’ someone doing something wrong,” they added. “We believe in a proactive and transparent approach to create improvement overall.”

Still, some experts say that software like Fairwords — whether guilty of enabling surveillance or not — can make a difference if properly and transparently implemented.
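Mechanically, the flag-as-you-type pattern reduces to a check that runs before a message is sent. The sketch below uses a crude keyword list purely for illustration; products like Fairwords rely on trained language models rather than static lists, and every term, category, and message here is invented.

```python
import re

# Invented example policy: category -> illustrative regex patterns
POLICY = {
    "bullying": [r"\bidiot\b", r"\bstupid\b"],
    "exclusion": [r"\bdon'?t invite\b"],
}

def flag_message(text):
    """Return (category, matched span) pairs for possible policy violations."""
    hits = []
    for category, patterns in POLICY.items():
        for pat in patterns:
            match = re.search(pat, text, flags=re.IGNORECASE)
            if match:
                hits.append((category, match.group(0)))
    return hits

draft = "Don't invite him, he's an idiot."
for category, span in flag_message(draft):
    print(f"Flagged as {category!r}: {span!r} -- consider rephrasing.")
```

The hard part, as the bias research discussed below shows, is everything a static list cannot do: dialect, reclaimed slurs, obfuscated spellings, and context.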
“Employees have a right to privacy, and companies should be transparent about whether they monitor their employees’ work-related communications … [But] platforms like Fairwords could be a great tool to sensitize and train people towards using inclusive and fair language in workplaces,” said Nabamallika Dehingia, a predoctoral fellow at the University of California, San Diego who has coauthored studies on harassment in the workplace. “AI technologies like Fairwords could be used as a supplement to [other] policies in order to ensure that employees do not engage in offensive or abusive digital communications.”

Amir Karami, an associate professor specializing in social media and politics at the University of South Carolina, agrees that these types of tools can be beneficial by promoting a positive culture, training employees, and helping companies to determine how much of their workplace is safe. But he points out the problems they pose as well, such as potentially unwanted data collection.

“First, if the platforms collect personal data and the employees don’t know who has access to the data, this can create fear and reduce the trust level in the company,” Karami told VentureBeat via email. “Second, the employees might think that the data might be used for punishment. Third, if an employee uses an inappropriate word that wasn’t recognized by the platforms, they might assume that it is ok to use that word. Fourth, the platforms could create the stress of constant monitoring, which leads to lower job satisfaction and reduced employee retention.”

Flaws and meaningful change

One of the challenges is flagging offensive language while preventing bias from creeping into the system. In a 2019 study from the University of Washington, scientists found that AI is more likely to label phrases in the African American English (AAE) dialect — a dialect spoken by many Black people in the U.S. — as toxic than general American English equivalents, despite their being understood as non-toxic by AAE speakers. Other audits of AI-powered, toxicity-detecting systems have found that they struggle to recognize hate speech that uses reclaimed slurs like “queer.” For example, at one point, the hate speech detection systems Meta used on Facebook aggressively detected comments denigrating white people more than attacks on other demographic groups.

There’s also evidence to suggest that AI misses toxic text a human could easily spot, particularly where there are missing characters, added spaces between characters, or spellings with numbers in place of words.

For its part, Fairwords says that it’s working to “constantly improve” its algorithms and make its backend systems easier to customize. “Fairwords detection currently uses pre-trained, patented natural language processing-based models,” the spokesperson told VentureBeat. “Shortly, we will be introducing updates that use both publicly-sourced training data for toxic language and customer data with user feedback (so-called human in the loop) to further improve detection effectiveness … We [currently] use anonymized customer data to help train the models, which includes the feedback from end-users when they indicate notifications are not accurate with classifications as to why. This helps the analytics adapt to domain-specific jargon and changing communications behavior.”

Regardless of an AI system’s accuracy, Dehingia cautions that software alone isn’t the answer to workplace harassment or abuse.
Instead, she says, it requires “larger normative shifts” that prioritize harassment prevention through “protective policies” and “strong measures for inclusion and diversity.”

“We … need data to track improvements and backsliding on these issues, and platforms designed to train are not necessarily those best suited to evaluate their own impact … Companies [also] need to develop a culture of inclusion and diversity that is top-down (leadership prioritizing diversity) as well as participatory in that it allows its employees to provide feedback and suggestions on making the workplace a safe environment,” Dehingia said. “Representation, especially in senior or leadership roles, is important … Institutional policies that discourage harassment and protect women and individuals belonging to racial and other social minority groups must be put in place.”"
1,293
2,022
"Executives discuss top challenges in deploying AI -- and how to solve them | VentureBeat"
"https://venturebeat.com/ai/executives-discuss-top-challenges-in-deploying-ai-and-how-to-solve-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Executives discuss top challenges in deploying AI — and how to solve them Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hastened by a widespread move to digitize operations, the enterprise is enthusiastically embracing AI. According to IDC’s 2022 AI InfrastructureView survey, 31% of companies say that they now have AI in production, while the majority are actively piloting AI technologies. Increasingly, adopting AI is leading to boosted profitability, with 27% of businesses responding to a December 2021 McKinsey survey claiming that at least 5% of their earnings before interest and taxes (EBIT) are now attributable to AI. But there remain many hurdles to successfully implementing AI. Of the companies participating in the AI InfrastructureView poll, only one-third claim to have reached a “mature” state of adoption wherein their entire organization is benefitting from an enterprise-wide AI strategy. Moreover, while nearly two-thirds of companies in the McKinsey survey say that they’ll continue to increase their investments in AI over the next three years, half admitted experiencing higher-than-expected AI project costs. Data science disconnect Why is getting AI projects into production so challenging? The reasons vary, according to Jeff Boudier, head of product and growth at AI language startup Hugging Face. But commonly, companies fail to establish systems that would allow their data science teams — the teams responsible for deploying AI technologies — to properly version and share AI models, code, and datasets, he says. This creates more work for AI project managers, which have to keep track of all the models and datasets created by teams so that they don’t reinvent the wheel for each business request. “Today, data science is largely done in ‘single player’ mode, where code lives in notebooks on local machines,” Boudier told VentureBeat via email. “It’s how business software was done 15 years ago, before modern version control systems and … collaboration workflows changed the day.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The emerging discipline of MLops, which stands for “machine learning operations” (a term coined by Gartner in 2017), aims to address the disparate and siloed nature of AI development by establishing practices for collaboration between data scientists. 
By simplifying AI management processes, the goal of MLops is to automate the deployment of AI models into the core software systems of an organization. For example, startups like ZenML enable data scientists to express their workflows as pipelines that, with configuration changes, can accommodate different infrastructure and dev tools. These pipelines build into a framework that addresses reproducibility and versioning problems, reducing the need to coordinate between DevOps teams and data scientists.

Increasing size — and data requirements

But collaboration isn’t the only hurdle facing companies adopting AI. Other hurdles are consequences of machine learning models continuing to increase exponentially in size, according to Boudier. Large models often don’t fit on commodity hardware and can be slow and expensive to run. Or they’re locked into proprietary APIs and services and dubiously touted as universal problem solvers.

“[Proprietary models hamper] AI adoption as … teams can’t dive into the code and properly evaluate or improve the models, and continues to create confusion on how to approach AI problems pragmatically,” Boudier said. “Deploying large models in production to be applied on large amounts of data requires diving into the model graph down to the hardware, which requires skills most companies do not have.”

Sean Hughes, ecosystem director at ServiceNow, says that companies often expect too much from AI models without doing the work necessary to adapt them for their business. That can lead to other problems, including a lack of data available to fine-tune the models for each context where they’ll be used. In a 2019 Dun & Bradstreet survey, companies rated a lack of data on par with a lack of internal expertise as the top setbacks to further implementing AI across their organizations.

“Hype and sensationalism generated when AI research scientists open source work that achieves new state-of-the-art benchmark results can be misinterpreted by the general public as being the same as ‘problem solved.’ But the reality is that state-of-the-art for a specific AI solution might only achieve 78% accuracy for a well-defined and controlled configuration,” Hughes told VentureBeat via email. “[A major challenge is] the expectation of the enterprise user that [an off-the-shelf] model will understand the nuances of the enterprise environment in order to be useful for decision-making … [Without the required data,] even with the potential for AI to suggest a directionally correct next best action, it can’t, since it doesn’t understand the context of the user intent in that enterprise.”

On the same page

Feiyu Xu, senior vice president and global head of AI at SAP, concurs, adding that AI projects have the best chance of success when there’s alignment between lines of business and AI technology teams. This alignment can foster “focused” and “scalable” solutions for delivering AI services, she asserts, and touch on ethical problems that might crop up during ideation, development, or deployment.

“The best use cases of AI-powered applications ensure the AI technologies are fully embedded and automated for end users. Also, AI systems work best when experts securely use real business data to train, test, and deploy the AI services,” Xu said. “Companies need to clearly define guidelines and guardrails to ensure that ethical issues are carefully considered in the development of new AI services from the outset.
In addition, it’s important to include external, independent experts to review cases and topics in question on a regular basis.”

On the subject of data-related challenges in AI deployment, Xu points to the emergence of platform-as-a-service solutions designed to help both developers and non-developers link data sources across different backend systems. Torch.AI, for instance, connects apps, systems, services, and databases to enable reconciliation and processing of both unstructured and structured data for AI applications.

“AI plays a key role in empowering companies and industries to become intelligent enterprises,” Xu said. “Most users of AI have little experience in software development to design, change, and improve their own workflows and business applications. This is where an intuitive, no-code development environment for functions like intelligent process automation, workflow management, and robotic process automation can really help.”"
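The pipeline idea credited to ZenML above can likewise be reduced to a toy. The decorator-based API below is invented for illustration and is far simpler than the real library’s; it only shows the pattern of composable steps plus swappable configuration.

```python
# Toy pipeline runner in the spirit of workflow tools like ZenML --
# hypothetical API, not the real library's.
def step(fn):
    fn.is_step = True  # mark the function as a pipeline stage
    return fn

def run_pipeline(*steps, config):
    """Thread an artifact through steps; config swaps inputs without code edits."""
    artifact = config["input"]
    for s in steps:
        artifact = s(artifact)
        print(f"{s.__name__}: {artifact}")
    return artifact

@step
def load(raw):
    return [float(x) for x in raw.split(",")]

@step
def normalize(values):
    top = max(values)
    return [v / top for v in values]

run_pipeline(load, normalize, config={"input": "3,6,9"})
```

Because each stage is a plain function and the configuration is data, the same pipeline can be rerun, versioned, or pointed at different infrastructure, which is the reproducibility win the article describes.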
1,294
2,022
"DeepMind claims its AI can decipher ancient Greek texts from damaged artifacts | VentureBeat"
"https://venturebeat.com/ai/deepmind-claims-its-ai-can-decipher-ancient-greek-texts-from-damaged-artifacts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind claims its AI can decipher ancient Greek texts from damaged artifacts Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2019, DeepMind, the lab backed by Google parent company Alphabet, announced that it had created an AI system that can restore ancient Greek texts. The lab claimed that the system, called Pythia, could accurately guess sequences of letters in text inscribed on stone tablets that had been cracked, chipped, or otherwise damaged. Today in a paper published in the journal Nature , DeepMind introduced the successor to Pythia , Ithaca, which the lab says performs even better in Greek text restoration tasks. Ithaca reportedly achieves 62% accuracy in restoring damaged texts, 71% accuracy in identifying their original location, and can date texts to within 30 years of their date ranges. DeepMind partnered with Google Cloud and Google Arts & Culture, Google’s cultural preservation nonprofit, to launch an interactive version of Ithaca. It also open-sourced the code and the model that powers the system. Roger Bagnall, a professor of history at New York University, is hopeful that Ithaca can be extended to other ancient languages, particularly those for which few examples exist. “The dynamism of Ithaca is particularly appealing; looking at the improvement in performance since Pythia gives hope that even the excellent results of Ithaca can before long be improved, with iterative learning based on the human-machine collaboration that it makes possible,” he said in a statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Speaking Greek Ithaca is a collaboration between DeepMind and the Department of Humanities of Ca’ Foscari University of Venice, the Classics Faculty of the University of Oxford, and the Department of Informatics of the Athens University of Economics and Business. The goal was to build a system that can decipher Greek text written on stone, pottery, and metal artifacts, some of which dates back to over 2,500 years ago. The challenge is twofold: ancient Greek inscriptions are often damaged and modern dating techniques, like radiocarbon dating, can’t be used. Building on its work with Pythia, DeepMind developed Ithaca using a dataset of over 178,000 Greek inscriptions supplied by the Packard Humanities Institute. 
Researchers at the lab trained the system on both Greek words and individual characters, so that damaged or missing text wouldn’t interfere with Ithaca’s ability to analyze either. That is different from the approach typically taken with text-analyzing and -generating systems like OpenAI’s GPT-3, which are trained using only sequences of words. The order in which words appear in sentences and the relationships between them provide extra meaning and context to those systems; Ithaca had to learn to make do without this information.

In an illustration of how Ithaca might be useful to historians, DeepMind says that the system predicted a date of 421 BCE for a series of Athenian decrees — for example, awards of citizenship, declarations of war, and enactments of treaties — made at a time when notable figures such as Socrates and Pericles lived. The decrees were originally thought to have been written before 446/445 BCE, but Ithaca’s prediction agreed with new evidence that suggests a date in the 420s BCE.

DeepMind says it’s working on versions of Ithaca trained in other ancient languages. In the meantime, historians can use datasets in the current architecture to study other ancient writing systems, the lab notes — including Akkadian, Demotic, Hebrew, and Mayan.

“Ithaca’s extensibility to other languages and textual corpora is exciting. I can hardly wait to see it applied to the documentary papyri, where we have far more precise dating but far more unprovenanced texts, because of the operations of the antiquities market,” Bagnall continued. “It should be possible with Ithaca’s help to reconstruct the workings of that market and the original historical context of many more of the thousands of papyrus documents.”

Restoring texts with AI

DeepMind isn’t the first to apply AI to historical texts. Increasingly, academics have been exploring machine learning to restore documents that were previously lost to history, including those written in cuneiform. For example, last year, researchers at Jerusalem’s Hebrew University created an AI system that can predict missing words, phrases, and sentences from cuneiform tablets up to 4,500 years old. Elsewhere, a team of researchers in Italy used a robotic system to process, match, and physically reconstruct frescoes and other shattered artifacts from Pompeii.

But AI designed for artifact restoration raises questions about whether the process could influence or change the meaning of the original work. After all, AI, like humans, isn’t infallible — Ithaca made errors in restoring damaged text 38% of the time.

DeepMind’s solution is visual aids aimed at minimizing the potential for misinterpretation of Ithaca’s predictions. Ithaca offers several text restoration “hypotheses” from which users can choose, each with a different associated confidence metric. The system returns probabilities for 84 different ancient regions, representing its level of uncertainty. Ithaca also produces a distribution of predicted dates across decades from 800 BCE to 800 CE, with a confidence value for specific ranges, and highlights the words that led to its predictions for text, location, and dates.

Alison Cooley, president of the International Digital Epigraphy Association and a professor at the University of Warwick, doesn’t believe that systems like Ithaca will replace the need for human expertise. Instead, they can act as a guide or tool for researchers studying antiquities, she says — perhaps helping to uncover patterns that would otherwise be missed.
In a DeepMind experiment, expert historians were 25% accurate in restoring ancient texts, but their performance increased to 72% when using Ithaca.

“This paper represents a very important development in the collaborative use of AI to enhance the restoration, dating, and attribution of inscriptions written in Greek from the ancient world over a period of several centuries,” Cooley said in a statement. “The innovative design of Ithaca promises to transform the potential contribution of inscribed evidence to our understanding of key moments in world history.”"
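The hypothesis-plus-confidence interface described above can be mimicked with a toy: a character-frequency guesser that proposes ranked candidates for each gap. Unlike Ithaca’s deep network it ignores context entirely, and the corpus and damaged string here are invented.

```python
from collections import Counter

corpus = "the people and the council resolved to honour the general"
freq = Counter(corpus.replace(" ", ""))
total = sum(freq.values())

def restore(damaged, top_k=3):
    """Rank candidate letters for each missing position ('?') with confidence."""
    out = []
    for i, ch in enumerate(damaged):
        if ch == "?":
            ranked = [(c, n / total) for c, n in freq.most_common(top_k)]
            out.append((i, ranked))
    return out

for pos, hypotheses in restore("the co?ncil res?lved"):
    guesses = ", ".join(f"{c} ({p:.0%})" for c, p in hypotheses)
    print(f"position {pos}: {guesses}")
```

The point of the exercise is the output shape, not the guesses: presenting several ranked options with explicit probabilities, rather than a single answer, is what lets a historian stay in the loop.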
1,295
2,022
"CaliberMind, which analyzes company revenue using AI, lands $8M | VentureBeat"
"https://venturebeat.com/ai/calibermind-which-analyzes-company-revenue-using-ai-lands-8m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CaliberMind, which analyzes company revenue using AI, lands $8M Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In the enterprise, revenue analysis involves canvassing the total revenue generated by a company’s activities to identify strengths and weaknesses. Revenue analysis can reveal which products and services are selling well versus poorly in the context of historical sales, for example, and spotlight areas in which revenue can be increased with the least amount of investment. It’s been suggested that AI could help to support revenue analysis by finding patterns in data that humans might miss. In a recent article, analysts at Boston Consulting Group (BCG) argued that AI can “improve the accuracy of forecasts” and “[enable] real-time decisions” to — among other tasks — “improve throughput, develop products, and deliver services in the most resource-optimal way.” But in a 2020 survey , BCG found that while almost 90% of executives agree that AI represents a revenue-boosting opportunity, only 18% have set out to use it for that purpose. Oren Zamir and Raviv Turner, the cofounders of CaliberMind , argue that the low adoption of AI for revenue generation can be blamed partly on organizations’ lack of resources. It’s difficult to ingest the volume of data being thrown at revenue teams and translate it into actionable insights, they say, which is why some companies are turning to platforms like CaliberMind’s for prebuilt solutions. CaliberMind today announced that it raised $8 million in a series A round co-led by IAG Capital Partners and Lavrock Ventures with participation from Bombora CEO Eric Matlick and Denver Angels. The company says it’ll put the funding toward growing its engineering team, product development, and go-to-market efforts across marketing, sales, and customer success. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI-powered revenue analysis Of course, Zamir and Turner have a horse in the race. The two cofounded Denver, Colorado-based CaliberMind in 2016 with the goal of cornering the nascent AI revenue analysis market. But they might be right in saying that some companies are struggling to apply AI to the task of analyzing revenue. According to McKinsey’s 2021 Global Survey on AI, AI’s revenue benefits have held steady or even decreased since 2020. 
Prior to CaliberMind, Zamir was a principal software engineer at Dell EMC, where he led user interface design. Turner was a mentor at Techstars and CEO at NYKB, a Manhattan-based interior design firm that used graphics software.

CaliberMind integrates with different revenue-focused data sources and stitches them together, including web, ads, and customer relationship management (CRM) data. It looks across an organization’s customers and attempts to link them to actions and intent, optionally routing the analysis back into CRM systems or marketing automation platforms for campaign targeting.

“CaliberMind integrates with … key data sources and then does the hard work of stitching together all of that raw information into a coherent story about [a] business,” CaliberMind CEO Eric Westerkamp told VentureBeat via email. “CaliberMind leverages machine learning and deep analytics to help revenue operations teams gain significant insights into what activities and tactics work best. Augmented with full workflow and automation tools, CaliberMind is a central platform for them.”

CaliberMind normalizes, deduplicates, and unifies data, even going so far as to automatically convert sales leads into contacts. According to Westerkamp, CaliberMind can show which sales campaigns and channels are top performers for organizations, showing which people and accounts are trending in each stage of the buyer journey.

“One of the biggest challenges that enterprise … organizations face in leveraging machine learning is the ability to create accurate insights that are actionable. Marketers in particular struggle with the number of data sources and frequency of platform changes,” Westerkamp continued. “CaliberMind solves this problem by automating all of the training, setup and configuration — all of the hard data engineering work — while enabling marketers to focus on the insights and actions.”

Data analytics

While AI can be useful in revenue analysis, not every organization is convinced that even managed platforms like CaliberMind can deliver on their promises. This is particularly true in industries like health care, where the data being analyzed is of a more sensitive nature. A Change Healthcare study found that 60% of health care organizations are concerned about whether AI for revenue lifecycle management — i.e., managing the process through which payments flow — will deliver return on investment. Deloitte reports that 56% of companies in health care are slowing the adoption of AI technologies because of the emerging risks.

The general skepticism around AI doesn’t appear to have slowed CaliberMind’s momentum, however. The company hit over 50 customers this year, and revenue grew 200% year-over-year.

“Our customer base is largely business-to-business (B2B) technology vendors, with over 300 active users at approximately 50 customers. CaliberMind customers include name brands like NetApp, numerous companies in the Fortune 100, and many of the fastest growing technology companies like InvoiceCloud and Zelis,” Westerkamp said. “B2B marketing and sales motions are completely changing due to two major drivers. The first is that most buyers are switching to digital first — relying on digital channels for up to 90% of their information. The second is the huge change in marketing from relying on third-party data to first-party data.
These two drivers mean that every B2B marketing organization will have to have a centralized solution to help them manage and leverage their first-party data.”

To date, 25-employee CaliberMind has raised over $14 million in venture capital."
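The unglamorous core of the “stitching” described above is identity resolution: normalizing records from different systems and collapsing duplicates into one contact. Below is a bare-bones sketch with invented leads; real platforms use ML-driven fuzzy matching across many fields, not just a cleaned email key.

```python
import re

# Hypothetical leads pulled from CRM, web forms, and ad platforms
leads = [
    {"email": "Ana.Diaz@Example.com ", "source": "webinar"},
    {"email": "ana.diaz@example.com", "source": "crm"},
    {"email": "b.okafor+ads@example.com", "source": "ads"},
]

def normalize_email(raw):
    """Lowercase, trim, and strip '+tag' aliases so variants match."""
    local, _, domain = raw.strip().lower().partition("@")
    local = re.sub(r"\+.*$", "", local)
    return f"{local}@{domain}"

contacts = {}
for lead in leads:
    key = normalize_email(lead["email"])
    contact = contacts.setdefault(key, {"email": key, "sources": []})
    contact["sources"].append(lead["source"])  # unify touchpoints per person

for c in contacts.values():
    print(c)
```

Once duplicates collapse to one record, the per-contact list of sources is exactly the raw material attribution models need to say which campaigns actually touched a buyer.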
1,296
2,022
"AI Weekly: Nvidia's commitment to voice AI -- and a farewell | VentureBeat"
"https://venturebeat.com/ai/ai-weekly-nvidias-commitment-to-voice-ai-and-a-farewell"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Nvidia’s commitment to voice AI — and a farewell Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Nvidia announced a slew of AI-focused hardware and software innovations during its March GTC 2022 conference. The company unveiled the Grace CPU Superchip , a data center processor designed to serve high-performance compute and AI applications. And it detailed the H100, the first in a new line of GPU hardware aimed at accelerating AI workloads including training large natural language models. But one announcement that slipped under the radar was the general availability of Nvidia’s Riva 2.0 SDK, as well as the company’s Riva Enterprise managed offering. Both can be deployed for building speech AI applications and point to the growing market for speech recognition in particular. The speech and voice recognition market is expected to grow from $8.3 billion in 2021 to $22.0 billion by 2026, according to Markets and Markets, driven by enterprise applications. In 2018, a Pindrop survey of 500 IT and business decision-makers found that 28% were using voice technology with customers. Gartner, meanwhile, predicted in 2019 that 25% of digital workers will use virtual employee assistants daily by 2021. And a recent Opus survey found that 73% of executives see value in AI voice technologies for “operational efficiency.” “As speech AI is expanding to new applications, data scientists at enterprises are looking to develop, customize and deploy speech applications,” an Nvidia spokesperson told VentureBeat via email. “Riva 2.0 includes strong integration with TAO , a low code solution for data scientists, to customize and deploy speech applications. This is an active area of focus and we plan to make the workflow even more accessible for customers in the future. We have also introduced Riva on embedded platforms for early access, and will have more to share at a later date.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Nvidia says that Snap, the company behind Snapchat, has integrated Riva’s automatic speech recognition and text to speech technologies into their developer platform. RingCentral, another customer, is leveraging Riva’s automatic speech recognition for video conferencing live-captioning. Speech technologies span voice generation tools, too, including “voice cloning” tools that use AI to mimic the pitch and prosody of a person’s speech. 
Last fall, Nvidia unveiled Riva Custom Voice, a new toolkit that the company claims can enable customers to create custom, “human-like” voices with only 30 minutes of speech recording data.

Brand voices like Progressive’s Flo are often tasked with recording phone trees and e-learning scripts for corporate training video series. For companies, the costs can add up — one source pegs the average hourly rate for voice actors at $39.63, plus additional fees for interactive voice response (IVR) prompts. Synthesis could boost actors’ productivity by cutting down on the need for additional recordings, potentially freeing the actors up to pursue more creative work — and saving businesses money in the process. According to Markets and Markets, the global voice cloning market could grow from $456 million in value in 2018 to $1.739 billion by 2023.

As for what lies on the horizon, Nvidia sees new voice applications going into production across augmented reality, videoconferencing, and conversational AI. Customers’ expectations and focus are on high accuracy as well as ways to customize voice experiences, the company says.

“Low-code solutions for speech AI [will continue to grow] as non-software developers are looking to build, fine-tune, and deploy speech solutions,” the spokesperson continued, referencing low-code development platforms that require little to no coding in order to build voice apps. “New research is bringing emotional text-to-speech, transforming how humans will interact with machines.”

Exciting as these technologies are, they will introduce — and already have introduced — new ethical challenges. For example, fraudsters have used cloning to imitate a CEO’s voice well enough to initiate a wire transfer. And some speech recognition and text-to-speech algorithms have been shown to recognize the voices of minority users less accurately than those with more common inflections.

It’s incumbent on companies like Nvidia to address these challenges before deploying their technologies into production. To its credit, the company has taken steps in the right direction, for example prohibiting the use of Riva for the creation of “fraudulent, false, misleading, or deceptive” content as well as content that “promote[s] discrimination, bigotry, racism, hatred, harassment, or harm against any individual or group.” Hopefully, there’s more in this vein to come.

A farewell

As an addendum to this week’s newsletter, it’s with sadness that I announce I’m leaving VentureBeat to pursue professional opportunities elsewhere. This edition of AI Weekly will be my last — a bittersweet realization, indeed, as I try to find the words to put to paper.

When I joined VentureBeat as an AI staff writer four years ago, I had only the vaguest notion of the difficult journey that lay ahead. I wasn’t exceptionally well-versed in AI — my background was in consumer tech — and the industry’s jargon was overwhelming to me, not to mention contradictory. But as I came to learn, particularly from those on the academic side of data science, an open mind — and a willingness to admit ignorance, frankly — is perhaps the most important ingredient in making sense of AI.

I haven’t always been successful in this. But as a reporter, I’ve tried not to lose sight of the fact that my domain knowledge pales in comparison to that of titans of industry and academia.
Whether tackling stories about biases in computer vision models or the environmental impact of training language systems, it’s my policy to lean on others for their expert perspectives and present these perspectives, lightly edited, to readers. As I see it, my job is to contextualize and relay, not to pontificate. There’s a place for pontification, but it’s on opinion pages — not in news articles.

I’ve learned that a healthy dose of skepticism goes a long way, too, in reporting on AI. It’s not only the snake oil salesmen one must be wary of, but the corporations with well-oiled PR operations, lobbyists, and paid consultants claiming to prevent harms but in fact doing the opposite. I’ve lost track of the number of ethics boards that’ve been dissolved or have proven to be toothless; the number of damaging algorithms that have been sold through to customers; and the number of companies that have attempted to silence or push back against whistleblowers. The silver lining is regulators’ growing realization of the industry’s deception. But, as elsewhere in Silicon Valley, techno-optimism has revealed itself to be little more than a publicity instrument.

It’s easy to get swept up in the novelty of new technology. I once did — and still do. The challenge is recognizing the danger in this novelty. I’m reminded of the novel When We Cease to Understand the World by the Chilean writer Benjamín Labatut, which examines great scientific discoveries that led to prosperity and untold suffering in equal parts. For example, German chemist Fritz Haber developed the Haber-Bosch process, which synthesizes ammonia from nitrogen and hydrogen gases and almost certainly prevented famine by enabling the mass manufacture of fertilizer. At the same time, the Haber-Bosch process simplified and made cheaper the production of explosives, contributing to millions of deaths suffered by soldiers during World War I.

AI, like the Haber-Bosch process, has the potential for enormous good — and good actors are trying desperately to bring this to fruition. But any technology can be misused, and it’s the job of reporters to uncover and spotlight those misuses — ideally to effect change. It’s my hope that I, along with my distinguished colleagues at VentureBeat, have accomplished this in some small part. Here’s to a future of strong AI reporting.

For AI coverage, be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers
Senior AI Staff Writer"
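As a back-of-envelope illustration of the voice-actor economics mentioned in the newsletter above: only the $39.63 hourly rate comes from the article, and every volume assumption below is invented.

```python
HOURLY_RATE = 39.63      # average voice-actor rate cited above
SESSIONS_PER_YEAR = 24   # hypothetical: two IVR/e-learning updates a month
HOURS_PER_SESSION = 3    # hypothetical studio time per update

recording_cost = HOURLY_RATE * HOURS_PER_SESSION * SESSIONS_PER_YEAR
print(f"Annual re-recording cost: ${recording_cost:,.2f}")

# With a cloned voice, suppose only script review remains: 1 hour per session.
cloned_cost = HOURLY_RATE * 1 * SESSIONS_PER_YEAR
print(f"With synthesis: ${cloned_cost:,.2f} "
      f"(saves ${recording_cost - cloned_cost:,.2f})")
```

Even under these toy numbers the savings are modest per brand; the market forecasts cited above assume the technique scales across thousands of phone trees and training libraries.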
1,297
2,022
"AI Weekly: New poll shows public's view of facial recognition, DOJ isn't tracking predictive policing spending | VentureBeat"
"https://venturebeat.com/ai/ai-weekly-new-poll-shows-publics-view-of-facial-recognition-doj-isnt-tracking-predictive-policing-spending"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: New poll shows public’s view of facial recognition, DOJ isn’t tracking predictive policing spending Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week in AI, a new Pew Center poll shed light on Americans’ views of AI, including the use of facial recognition by police. In other news, the U.S. Justice Department revealed it hasn’t kept “specific record[s]” on its purchases of predictive policing technologies, a category of technologies that investigations have shown to be biased against minority groups. Lured by the promise of reducing crime and the time to solve cases, law enforcement agencies have increasingly explored AI-powered tools like facial recognition, drones, and predictive policing software, which attempts to predict where crime will occur using historical data. According to Markets and Markets, police departments are expected to spend as much as $18.1 billion on software tools including AI-powered systems, up from $11.6 billion in 2019. But the effectiveness of these systems has repeatedly been put into question. For example, an investigation by the Associated Press found that ShotSpotter, a “gunfire locater service” that uses AI to triangulate the source of firearm discharges, can miss live gunfire right under its microphones or misclassify the sounds of fireworks or cars backfiring. Extensive reporting by Gizmodo and The Markeup, meanwhile, has revealed that Geolitica (previously called PredPol), a policing software that attempts to anticipate property crimes, disproportionately predicts that crime will be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular. Facial recognition, too, has been shown to be biased against “suspects” with certain skin tones and ethnicities. At least three people in the U.S. — all Black men — have been wrongfully arrested based on poor facial recognition matches. And studies including the landmark Gender Shades project have shown that facial recognition technology once marketed to police, including Amazon’s Rekognition, are significantly more likely to misclassify the faces of darker-skinned people. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But dichotomously, public support for facial recognition use by police is relatively high, with a plurality of respondents to a recent Pew report saying they agree with its deployment. 
The reason might be the relentless PR campaigns waged by vendors like Amazon, which have argued that facial recognition can be a valuable tool in helping to find missing persons, for instance. Or it might be ignorance of the technology’s shortcomings. According to Pew, respondents who’ve heard a lot about the use of facial recognition by the police were more likely to say it’s a bad idea for society than those who hadn’t heard anything about it.

Racial divisions cropped up in the Pew survey’s results, with Black and Hispanic adults more likely than white adults to say that police would definitely or probably use facial recognition to monitor Black and Hispanic neighborhoods more often than other neighborhoods. Given that Black and Hispanic individuals have a higher chance of being arrested and incarcerated for minor crimes and, consequently, are overrepresented in mugshot data — the data that has been used in the past to develop facial recognition algorithms — that is hardly surprising.

“Notable portions of people’s lives are now being tracked and monitored by police, government agencies, corporations and advertisers … Facial recognition technology adds an extra dimension to this issue because surveillance cameras of all kinds can be used to pick up details about what people do in public places and sometimes in stores,” the coauthors of the Pew study write.

Justice Department predictive policing

The Department of Justice (DOJ) is a growing investor in AI, having awarded a contract to Veritone for transcription services for its attorneys. The department is also a customer of Clearview, a controversial facial recognition vendor; employees across the FBI, Drug Enforcement Administration, and other DOJ agencies have used its tool to perform thousands of searches for suspects.

But according to Gizmodo, the DOJ maintains poor records of its spending — especially where it concerns predictive policing tools. Speaking with the publication, a senior official said that the Justice Department isn’t actively tracking whether funds from the Edward Byrne Memorial Justice Assistance Grant Program (JAG), a leading source of criminal justice funding, are being used to buy predictive policing services.

That’s alarming, say Democratic senators including Ron Wyden (D-OR), who in April 2021 sent a letter to U.S. Attorney General Merrick Garland requesting basic information about the DOJ’s funding of AI-driven software. Wyden and his colleagues expressed concern that this software lacked meaningful oversight, potentially amplified racial biases in policing, and might even violate citizens’ rights to due process under the law.

The fears aren’t unfounded. Gizmodo notes that audits of predictive tools have found “no evidence they are effective at preventing crime” and that they’re often used “without transparency or … opportunities for public input.” In 2019, the Los Angeles Police Department, which had been trialing a range of AI policing tools, acknowledged in an internal evaluation that the tools “often strayed from their stated goals.” That same year, researchers affiliated with New York University showed in a study that nine police agencies had fed software data generated “during periods when the department was found to have engaged in various forms of unlawful and biased police practices.”
“It is unfortunate the Justice Department chose not to answer the majority of my questions about federal funding for predictive policing programs,” Wyden said, suggesting to Gizmodo that it may be time for Congress to weigh a ban on the technology. Already, a number of cities, including Santa Cruz, California, and New Orleans, Louisiana, have banned the use of predictive policing programs. But partisan gridlock and special interests have so far stymied efforts at the federal level. For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers Senior AI Staff Writer "
1,298
2,022
"AI Weekly: DARPA seeks to better align AI with human intentions | VentureBeat"
"https://venturebeat.com/ai/ai-weekly-darpa-seeks-to-better-align-ai-with-human-intentions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: DARPA seeks to better align AI with human intentions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week in AI, DARPA, the emerging technologies R&D agency of the U.S. Defense Department, launched a new program that aims to “align” AI systems with human decision-makers in domains where there isn’t an agreed-upon right answer. Elsewhere, two prominent cofounders from LinkedIn and DeepMind, Reid Hoffman and Mustafa Suleyman, announced a new AI startup called Inflection AI that seeks to develop software that allows humans to talk to computers using everyday language. In a press release describing the new three-and-a-half-year program, DARPA says that the goal is to “evaluate and build trusted algorithmic decision-makers for mission-critical Department of Defense operations.” Dubbed “In the Moment,” or ITM, it focuses on the process of alignment — building AI systems that accomplish what they’re expected to accomplish. “ITM is different from typical AI development approaches that require human agreement on the right outcomes,” ITM program manager Matt Turek said in a statement. “The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data.” For example, self-driving cars can be developed against a ground truth for right and wrong decisions based on unchanging, relatively consistent rules of the road. The designers of these cars could hard-code “risk values” into the cars that prevent them from, for example, making right turns on red in cities where they’re illegal. But Turek says that these one-size-fits-all risk values won’t work from a Department of Defense perspective. Combat situations evolve rapidly, he points out, and a commander’s intent can change from scenario to scenario. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The [Defense Department] needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable,” Turek continued. 
Talking to computers Related to the problem of alignment, LinkedIn cofounder Hoffman and DeepMind cofounder Suleyman plan, through Inflection AI, to use AI to help humans talk to computers. In an interview with CNBC, Suleyman described wanting to build products that eliminate the need for people to write in shorthand or simplify their ideas to communicate with machines. “[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like we’re on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.” Inflection AI’s plans remain vague, but the concept of translating human intentions into a language computers can understand dates back decades. Even the best chatbots and voice assistants today haven’t delivered on the promise — recall Viv Labs, which pledged to deliver a “conversational interface to anything” but instead fizzled out into elements of Samsung’s maligned Bixby assistant. But Suleyman and Hoffman are betting that their expertise — as well as coming advancements in conversational AI — will make an intuitive human-computer language interface possible within the next five years. “Even at the bigger tech companies, there’s a relatively small number of people actually building these [AI] models.
One of the advantages of doing this in a startup is that we can go much faster and be more dynamic,” Suleyman told CNBC. “My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close-knit, small, focused team. I’m going to try and preserve that for as long as possible.” Given that countless visionaries have tried and failed in this area, that would be an impressive feat indeed. For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers Senior AI Staff Writer "
1,299
2,020
"Alexa and Google Assistant execs on future trends for AI assistants | VentureBeat"
"https://venturebeat.com/2020/07/16/alexa-and-google-assistant-execs-on-future-trends-for-ai-assistants"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Alexa and Google Assistant execs on future trends for AI assistants Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Businesses and developers making conversational AI experiences should start with the understanding that you’re going to have to use unsupervised learning to scale, said Prem Natarajan, Amazon head of product and VP of Alexa AI and NLP. He spoke with Barak Turovsky, Google AI director of product for the NLU team, at VentureBeat’s Transform 2020 AI conference today as part of a conversation about future trends for AI assistants. Natarajan called unsupervised learning for language models an important trend for AI assistants and an essential part of creating conversational AI that works for everyone. “Don’t wait for the unsupervised learning realization to come to you yet again. Start from the understanding that you’re going to have to use unsupervised learning at some level of scale,” he said. Unsupervised learning uses raw, unlabeled data to draw inferences from raw, unclassified data. A complementary trend, Natarajan said, is the development of self-learning systems that can adapt based on signals received from interacting with a person speaking with Alexa. “It’s the old thing, you know: If you fail once, that’s OK, but don’t make the same failures multiple times. And we’re trying to build systems that learn from their past failures,” he said. Members of Amazon’s machine learning team and conversational AI teams told VentureBeat last fall that self-learning and unsupervised learning could be key to more humanlike interactions with AI assistants. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Another continuing trend is the evolution of trying to weave features into experiences. Last summer, Amazon launched Alexa Conversations in preview , which fuses together Alexa skills into a single cohesive experience using a recurrent neural network to predict dialog paths. For example, the proverbial night out scenario involves skills for buying tickets, making dinner reservations, and making arrangements with a ridesharing app. At the June 2019 launch, Amazon VP of devices David Limp referred to Amazon’s work on the feature “the holy grail of voice science.” Additional Alexa Conversations news is slated for an Amazon event next week. Natarajan and Turovsky agreed that multimodal experience design is an another emerging trend. 
Natarajan and Turovsky agreed that multimodal experience design is another emerging trend. Multimodal models combine input from multiple media, like text and photos or videos. Some examples of models that combine language and imagery include Google’s VisualBERT and OpenAI’s ImageGPT, which received an honorable mention from the International Conference on Machine Learning (ICML) this week. Turovsky talked about the challenge of surfacing answers when voice alone can offer only a limited number of them. Without a screen, he said, there’s no infinite scroll or first page of Google search results, so responses should be limited to three potential results, tops. For both Amazon and Google, this means building smart displays and emphasizing AI assistants that can both share visual content and respond with voice. In a conversation with VentureBeat in January, Google AI chief Jeff Dean predicted progress in multimodal models in 2020. The advancement of multimodal models could lead to a number of benefits for image recognition and language models, including more robust inference from models receiving input from more than a single medium. Another continuing trend, Turovsky said, is the growth of access to smart assistants thanks to the maturation of translation models. Google Assistant is currently able to speak and translate 44 languages. In a separate presentation earlier today, Turovsky detailed steps Google has taken to remove gender bias from language models. Powered by unsupervised learning, Google introduced changes earlier this year to reduce gender bias in neural machine translation models. “In my opinion, we are in the early stages of this war. This problem could be seemingly simple; a lot of people could think it’s very simple to fix. It’s extremely hard to fix, because the notion of a bias in many cases doesn’t exist in an AI environment, when we watch it learn, and get both training data and train models to actually address it well,” Turovsky said. Indeed, earlier this year researchers affiliated with Georgetown University and Stanford University found racial disparities in automatic speech recognition systems from companies including Amazon and Google, which work better for White users than for Black users. "
1,300
2,020
"Facebook is using more AI to detect hate speech | VentureBeat"
"https://venturebeat.com/2020/05/12/facebook-is-using-more-ai-to-detect-hate-speech"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook is using more AI to detect hate speech Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In Q1 2020, 9.6 million pieces of content posted on Facebook were removed for violation of company hate speech policy, the “largest gain in a period of time,” Facebook CTO Mike Schroepfer told journalists today. For context, as recently as four years ago, Facebook removed no content with AI. The data comes from Facebook’s Community Standards Enforcement Report (CSER) report, which says AI detected 88.8% of the hate speech content removed by Facebook in Q1 2020, up from 80.2% in the previous quarter. Schroepfer attributes the growth to advances in language models like XLM. Another potential factor: As a result of COVID-19, Facebook also sent some of its human moderators home , though Schroepfer said Facebook moderators can now do some work from home. “I’m not naive; AI is not the answer to every single problem,” Schroepfer said. “I think humans are going to be in the loop for the indefinite future. I think these problems are fundamentally human problems about life and communication, and so we want humans in control and making the final decisions, especially when the problems are nuanced. But what we can do with AI is, you know, take the common tasks, the billion scale tasks, the drudgery out.” Facebook AI Research today also launched the Hateful Memes data set of 10,000 mean memes scraped from public Facebook groups in the U.S. The Hateful Memes challenge will offer $100,000 in prizes for top-performing networks, with a final competition at leading machine learning conference NeurIPS in December. Hateful Memes at NeurIPS follows the Facebook Deepfake Detection Challenge held at NeurIPS in 2019. The Hateful Memes data set is made to assess the performance of models for removing hate speech and to fine-tune and test multimodal learning models , which take input from multiple forms of media to measure multimodal understanding and reasoning. The paper includes documentation on the performance of a range of BERT-derived unimodal and multimodal models. The most accurate AI-driven multimodal model — Visual BERT COCO — achieves 64.7% accuracy, while humans demonstrated 85% accuracy on the data set, reflecting the difficulty of the challenge. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Put together by an external team of annotators (not including Facebook moderators), the most common memes in the data set target race, ethnicity, or gender. Memes categorized as comparing people with animals, invoking negative stereotypes, or using mocking hate speech — which Facebook community standards considers a form of hate speech — are also common in the data set. Facebook today also shared additional information about how it’s using AI to combat COVID-19 misinformation and stop merchants scamming on the platform. Under development for years at Facebook, SimSearchNet is a convolutional neural network for recognizing duplicate content, and it’s being used to apply warning labels to content deemed untrustworthy by dozens of independent human fact-checker organizations around the world. Warning labels were applied to 50 million posts in the month of April. Encouragingly, Facebook users click through to content with warning labels only 5% of the time, on average. Computer vision is also being used to automatically detect and reject ads for COVID-19 testing kits, medical face masks, and other items Facebook does not allow on its platform. Multimodal learning Machine learning experts like Google AI chief Jeff Dean have called progress on multimodal models a trend to watch in 2020. Indeed, multimodal learning has been used to do things like automatically comment on videos and caption images. Multimodal systems like CLEVRER from the MIT-IBM Watson AI Lab are also applying NLP and computer vision to improve AI systems’ ability to carry out accurate visual reasoning. Excluded from the data set are memes that call for violence, self-injury, or nudity, or that encourage terrorism or human trafficking. The memes were made using a custom tool and text scraped from meme imagery in public Facebook groups. To overcome licensing issues common to memes, Getty Images API photos are used to replace the background image and create new memes. Annotators were required to verify that each new meme retained the meaning and intent of the original. The Hateful Memes data set includes what Facebook calls benign confounders, or memes whose meaning shifts based on changing images that appear behind meme text. “Hate speech is an important societal problem, and addressing it requires improvements in the capabilities of modern machine learning systems. Detecting hate speech in memes requires reasoning about subtle cues, and the task was constructed such that unimodal models find it difficult, by including ‘benign confounders’ that flip the label of a multimodal hateful meme,” Facebook AI Research coauthors said in a paper detailing the Hateful Memes data set that was shared with VentureBeat. The evolution of visual reasoning of the kind sought by the Hateful Memes data set and challenge can help AI better detect hate speech and determine whether memes violate Facebook policy. Accurate multimodal systems may also mean Facebook avoids censoring counterspeech, which happens when human or AI moderators unintentionally remove content from activists speaking out against hate speech instead of actual hate speech. Removing hate speech from the internet is the right thing to do, but quick hate speech detection is also in Facebook’s economic interests. After EU regulators spent years urging Facebook to adopt stricter measures, German lawmakers passed a law requiring social media companies with more than 1 million users to quickly remove hate speech or face fines of up to €50 million.
Governments have urged Facebook to moderate content in order to address problems like terrorist propaganda and election meddling, particularly following backlash from the Cambridge Analytica scandal, and Facebook and its CEO Mark Zuckerberg have promised more human and AI moderation. "
1,301
2,020
"Top minds in machine learning predict where AI is going in 2020 | VentureBeat"
"https://venturebeat.com/2020/01/02/top-minds-in-machine-learning-predict-where-ai-is-going-in-2020"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top minds in machine learning predict where AI is going in 2020 Share on Facebook Share on X Share on LinkedIn Left to right: Google AI chief Jeff Dean, University of California, Berkeley professor Celeste Kidd, PyTorch lead Soumith Chintala, Nvidia machine learning research head Anima Anandkumar, and IBM Research director Dario Gil Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. AI is no longer poised to change the world someday; it’s changing the world now. As we begin a new year and decade, VentureBeat turned to some of the keenest minds in AI to revisit progress made in 2019 and look ahead to how machine learning will mature in 2020. We spoke with PyTorch creator Soumith Chintala, University of California professor Celeste Kidd, Google AI chief Jeff Dean, Nvidia director of machine learning research Anima Anandkumar, and IBM Research director Dario Gil. Everyone always has predictions for the coming year, but these are people shaping the future today — individuals with authority in the AI community who treasure scientific pursuit and whose records have earned them credibility. While some predict advances in subfields like semi-supervised learning and the neural symbolic approach, virtually all the ML luminaries VentureBeat spoke with agree that great strides were made in Transformer-based natural language models in 2019 and expect continued controversy over tech like facial recognition. They also want to see the AI field grow to value more than accuracy. If you’re interested in taking a look back, last year we spoke with people like Facebook AI Research chief scientist Yann LeCun, Landing.ai founder Andrew Ng, and Accenture global responsible AI lead Rumman Chowdhury. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Soumith Chintala Director, principal engineer, and creator of PyTorch Depending on how you gauge it, PyTorch is the most popular machine learning framework in the world today. A derivative of the Torch open source framework introduced in 2002, PyTorch became available in 2015 and is growing steadily in extensions and libraries. This fall, Facebook released PyTorch 1.3 with quantization and TPU support, alongside Captum , a deep learning interpretability tool, and PyTorch Mobile. There are also things like PyRobot and PyTorch Hub for sharing code and encouraging ML practitioners to embrace reproducibility. 
In a conversation with VentureBeat this fall at PyTorch Dev Con, Chintala said he saw few breakthrough advances in machine learning in 2019. “I actually don’t think we’ve had a groundbreaking thing … since Transformer, basically. We had ConvNets in 2012 that reached prime time, and Transformer in 2017 or something. That’s my personal opinion,” he said. He went on to call DeepMind’s AlphaGo groundbreaking in its contributions to reinforcement learning, but he said the results are hard to implement for practical tasks in the real world. Chintala also believes the evolution of machine learning frameworks like PyTorch and Google’s TensorFlow — the overwhelming favorites among ML practitioners today — has changed how researchers explore ideas and do their jobs. “That’s been a breakthrough in the sense that it’s making them move one or two orders of magnitude faster than they used to,” he said. This year, Google and Facebook’s open source frameworks introduced quantization to boost model training speeds. In the years ahead, Chintala expects “an explosion” in the importance and adoption of tools like PyTorch’s JIT compiler and neural network hardware accelerators like Glow. “With PyTorch and TensorFlow, you’ve seen the frameworks sort of converge. The reason quantization comes up, and a bunch of other lower-level efficiencies come up, is because the next war is compilers for the frameworks — XLA, TVM, PyTorch has Glow, a lot of innovation is waiting to happen,” he said. “For the next few years, you’re going to see … how to quantize smarter, how to fuse better, how to use GPUs more efficiently, [and] how to automatically compile for new hardware.” Like most of the other industry leaders VentureBeat spoke with for this article, Chintala predicts the AI community will place more value on AI model performance beyond accuracy in 2020 and begin turning attention to other important factors, like the amount of power it takes to create a model, how output can be explained to humans, and how AI can better reflect the kind of society people want to build. “If you think about the last five, six years, we’ve just focused on accuracy and raw numbers like ‘Is Nvidia’s model more accurate? Is Facebook’s model more accurate?'” he said. “I actually think 2020 will be the year when we start thinking [in a more complex way], where it doesn’t matter if your model is 3% more accurate if it … doesn’t have a good interoperability mechanism [or meet other criteria].” Celeste Kidd Developmental psychologist at the University of California, Berkeley Celeste Kidd is director of Kidd Lab at the University of California, Berkeley, where she and her team explore how kids learn. Their insights can help the creators of neural networks who are attempting to train models in ways not too dissimilar to raising a child. “Human babies don’t get tagged data sets, yet they manage just fine, and it’s important for us to understand how that happens,” she said. One thing that surprised Kidd in 2019 is the number of neural net creators who casually disparage their own work, or that of other researchers, as incapable of doing something a baby can do. When you average together baby behavior, she said, you see evidence that they understand some things, but they definitely aren’t perfect learners, and that kind of talk paints an overly rosy picture of what babies can do.
“Human babies are great, but they make a lot of errors, and a lot of the comparisons that I saw people casually making, they were making to sort of idealize baby behavior at the population level,” she said. “I think that it’s likely that there’s going to be an increased appreciation for the connection between what you currently know and what you want to understand next.” In AI, the phrase “black box” has been around for years now. It’s used to critique neural networks’ lack of explainability, but Kidd believes 2020 may spell the end of the perception that neural networks are uninterpretable. “The black box argument is bogus … brains are also black boxes, and we’ve made a lot of progress in understanding how brains work,” she said. In demystifying this perception of neural networks, Kidd looks to the work of people like Aude Oliva, executive director of the MIT-IBM Watson AI Lab. “We were talking about this, and I said something about the system being a black box, and she chastised me reasonably [saying] that of course they’re not a black box. Of course you can dissect them and take them apart and see how they work and run experiments on them, the same [as] we do for understanding cognition,” Kidd said. Last month, Kidd delivered the opening keynote address at the Neural Information Processing Systems (NeurIPS) conference, the largest annual AI research conference in the world. Her talk focused on how human brains hold onto stubborn beliefs, attention systems, and Bayesian statistics. The Goldilocks zone for the delivery of information, she said, is between a person’s previous interests and understandings and what’s surprising to them. People tend to engage less with overly surprising content. She then said there’s no such thing as a neutral tech platform, and she turned her attention to how the makers of content recommendation systems can manipulate people’s beliefs. Systems built in pursuit of maximum engagement can have a significant impact on how people form beliefs and opinions. Kidd finished the speech by speaking about the misperception among men in machine learning that being alone with a female colleague will lead to sexual harassment allegations and end a man’s career. That misperception, she said, can instead damage the careers of women in the field. For speaking out about sexual misconduct at the University of Rochester, Kidd was named Time Person of the Year in 2017, alongside other women who helped bring about what we now call the #MeToo movement for the equitable treatment of women. At the time, Kidd thought speaking up would end her career. In 2020, she wants to see increased awareness of the real-life implications of tech tools and technical decisions and a rejection of the idea that the makers of tools aren’t responsible for what people do with them. “I’ve heard a lot of people try to defend themselves by saying, ‘Well I’m not the moderator of truth,’” she said. “I think that there has to be increased awareness of that being a dishonest stance.” “We really need to, as a society and especially as the people that are working on these tools, directly appreciate the responsibility that that comes with.” Jeff Dean Google AI chief Dean has led Google AI for nearly two years now, but he’s been at Google for two decades and is the architect of many of the company’s early search and distributed network algorithms and an early member of Google Brain.
Dean spoke with VentureBeat last month at NeurIPS, where he delivered talks on machine learning for ASIC semiconductor design and ways the AI community can address climate change, which he said is the most important issue of our time. In his talk about climate change, Dean discussed the idea that AI can strive to become a zero-carbon industry and that AI can be used to help change human behavior. He expects to see progress in 2020 in the fields of multimodal learning, which is AI that relies on multiple media for training, and multitask learning, which involves networks designed to complete multiple tasks at once. Unequivocally, one of the biggest machine learning trends of 2019 was the continued growth and proliferation of natural language models based on Transformer, the model Chintala previously referred to as one of the biggest breakthroughs in AI in recent years. Google open-sourced BERT, a Transformer-based model, in 2018. And a number of the top-performing models released this year, according to the GLUE leaderboard — like Google’s XLNet, Microsoft’s MT-DNN, and Facebook’s RoBERTa — were based on Transformers. XLNet 2 is due out later this month, a company spokesperson told VentureBeat. Dean pointed to the progress that has been made, saying “… that whole research thread I think has been quite fruitful in terms of actually yielding machine learning models that [let us now] do more sophisticated NLP tasks than we used to be able to do.” But he added that there’s still room for growth. “We’d still like to be able to do much more contextual kinds of models. Like right now BERT and other models work well on hundreds of words, but not 10,000 words as context. So that’s kind of [an] interesting direction.” Dean said he wants to see less of an emphasis on slight state-of-the-art advances in favor of creating more robust models. Google AI will also work to advance new initiatives, like Everyday Robot, an internal project introduced in November 2019 to make robots that can accomplish common tasks in the home and workplace. Anima Anandkumar Nvidia machine learning research director Anandkumar joined GPU maker Nvidia following her time as a principal scientist at AWS. At Nvidia, AI research continues across a number of areas, from federated learning for health care to autonomous driving, supercomputers, and graphics. One area of emphasis for Nvidia and Anandkumar in 2019 was simulation frameworks for reinforcement learning, which are getting more popular and mature. In 2019, we saw the rise of Nvidia’s Drive autonomous driving platform and Isaac robotics simulator, as well as models that produce synthetic data from simulations and generative adversarial networks, or GANs. Last year also ushered in the rise of AI like StyleGAN, a network that can make people question whether they’re looking at a computer-generated human face or a real person, and GauGAN, which can generate landscapes with a paintbrush. StyleGAN2 made its debut last month. GANs are technologies that can blur the lines of reality, and Anandkumar believes they can help with major challenges the AI community is trying to tackle, like grasping robotic hands and autonomous driving. (Read more about progress GANs made in 2019 in this report by VentureBeat AI staff writer Kyle Wiggers.) Anandkumar also expects to see progress in the year ahead from iterative algorithms, self-supervision, and self-training methods of training models — the kinds of models that can improve through self-training with unlabeled data. “All kinds of different iterative algorithms I think are the future, because if you just do one feed-forward network, that’s where robustness is an issue. Whereas if you try to do many iterations and you adapt iterations based on the kinds of data or the kind of accuracy requirements you want, there’s much more chance of achieving that,” she said.
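The self-training pattern Anandkumar describes can be sketched generically: train on labeled data, pseudo-label the unlabeled pool with the model’s confident predictions, fold those in, and repeat. The loop below is a minimal, illustrative PyTorch version with synthetic data and an arbitrary confidence threshold; it is a textbook sketch, not Nvidia’s method.

```python
import torch
import torch.nn as nn

# Generic pseudo-labeling loop on synthetic data.
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

labeled_x, labeled_y = torch.randn(32, 8), torch.randint(0, 2, (32,))
unlabeled_x = torch.randn(128, 8)

for _ in range(3):  # a few self-training rounds
    # 1. Train on the labels we currently trust.
    for _ in range(50):
        opt.zero_grad()
        loss_fn(model(labeled_x), labeled_y).backward()
        opt.step()

    # 2. Pseudo-label unlabeled points the model is confident about.
    with torch.no_grad():
        probs = model(unlabeled_x).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
    keep = conf > 0.9  # the threshold is an arbitrary assumption

    # 3. Promote confident predictions to training labels and repeat.
    labeled_x = torch.cat([labeled_x, unlabeled_x[keep]])
    labeled_y = torch.cat([labeled_y, pseudo[keep]])
    unlabeled_x = unlabeled_x[~keep]

print(len(labeled_x), "examples now carry (pseudo-)labels")
```

Each round expands the trusted training set, which is why the technique can help when labels are scarce; a threshold set too low lets early mistakes reinforce themselves.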
Anandkumar sees numerous challenges for the AI community in 2020, like the need to create models made especially for specific industries in tandem with domain experts. Policymakers, individuals, and the AI community will also need to grapple with issues of representation and the challenge of ensuring data sets used to train models account for different groups of people. “I think [the issues with facial recognition are] so easy to grasp, but there are so many [other areas] where … people don’t realize there are privacy issues with the use of data,” she said. Facial recognition gets the most attention, Anandkumar said, because it’s easy to understand how that can violate an individual’s privacy, but there are a number of other ethical issues for the AI community to confront in 2020. “We will have increasing scrutiny in terms of how the data is collected and how it’s used. I think it’s happening in Europe, but in the U.S. we’ll certainly see more of that, and for [the] right reasons, from groups like the National Transportation Safety Board [NTSB] and the FTA [Federal Transit Administration],” she said. One of the great surprises of 2019, in Anandkumar’s view, was the rate at which text generation models progressed. “2019 was the year of language models, right? Now, for the first time, we got to the point of more coherent text generation and generation at the length of paragraphs, which wasn’t possible before [and] which is great,” Anandkumar said. In August 2019, Nvidia introduced the Megatron natural language model. With 8 billion parameters, Megatron is known as the world’s largest Transformer-based AI model. Anandkumar said she was surprised by the way people began characterizing models as having personalities or characters, and she looks forward to seeing more industry-specific text models. “We are still not at the stage of dialogue generation that’s interactive, that can keep track and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction,” she said. The development of frameworks for control of text generation will be more challenging than, say, the development of frameworks for images that can be trained to identify people or objects. Text generation models also bring challenges of their own, such as defining what counts as a fact for a neural model. Finally, Anandkumar said she was heartened to see Kidd’s speech at NeurIPS get a standing ovation and by signs of a growing sense of maturity and inclusion within the machine learning community. “I feel like right now is the watershed moment,” she said. “In the beginning is where it’s hard to even make small changes, and then the dam breaks. And I hope that’s what it is, because to me it feels like that, and I hope we can keep up the momentum and make even bigger structural changes and make it for all groups, everybody here, to thrive.” Dario Gil IBM Research director Gil heads a group of researchers actively advising the White House and enterprises around the world.
He believes major leaps forward in 2019 include progress around generative models and the increasing quality with which plausible language can be generated. He predicts continued progress toward training more efficiently with reduced-precision architectures. The development of more efficient AI models was an emphasis at NeurIPS, where IBM Research introduced techniques for deep learning with an 8-bit precision model. “It’s still so broadly inefficient the way we train deep neural networks with existing hardware with GPU architectures,” he said. “So a really fundamental rethinking on that is very important. We’ve got to improve the computational efficiency of AI so we can do more with it.” Gil cited research suggesting that demand for ML training doubles every three and a half months, much faster than the growth predicted in Moore’s law. Gil is also excited about how AI can help accelerate scientific discovery, but IBM Research will be primarily focused on neural symbolic approaches to machine learning. In 2020, Gil hopes AI practitioners and researchers will develop a focus on metrics beyond accuracy to consider the value of models deployed in production. Shifting the field toward building trusted systems instead of prioritizing accuracy above all else will be a central pillar to the continued adoption of AI. “There are some members of the community that may go on to say, ‘Don’t worry about it, just deliver accuracy. It’s okay, people will get used to the fact that the thing is a bit of a black box,’ or they’ll make the argument that humans don’t generate explanations sometimes on some of the decisions that we make. I think it’s really, really important that we concentrate the intellectual firepower of the community to do much better on that. AI systems cannot be a black box on mission-critical applications,” he said. To ensure AI is adopted by more people with data science and software engineering skills, Gil believes in getting rid of the perception that AI is something only a limited number of machine learning wizards can do. “If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” he said. In the year ahead, Gil is particularly interested in neural symbolic AI. IBM will look to neural symbolic approaches to power things like probabilistic programming, where AI learns how to operate a program, and models that can share the reasoning behind their decisions. “By [taking] this blended approach of a new contemporary approach to bring learning and reasoning together through these neural symbolic approaches, where the symbolic dimension is embedded in learning a program, we’ve demonstrated that you can learn with a fraction of the data that is required,” he said. “Because you learn a program, you end up getting something interpretable, and because you have something interpretable, you have something much more trusted.” Issues of fairness, data integrity, and the selection of data sets will continue to garner a lot of attention, as will “anything that has to do with biometrics,” he said. Facial recognition gets a lot of attention, but it’s just the beginning. Speech data will be viewed with growing sensitivity, as will other forms of biometrics. He went on to cite Rafael Yuste, a professor at Columbia who works on neural technology and is exploring ways to extract neural patterns from the visual cortex.
“I give this as an example that everything that has to do with identity and the biometrics of people and the advances that AI makes in analyzing that will continue to be front and center,” Gil said. In addition to neural symbolic and common sense reasoning, a flagship initiative of the MIT-IBM Watson AI Lab, in 2020 Gil said IBM Research will also explore quantum computing for AI, and analog hardware for AI beyond reduced-precision architectures. Final thoughts Machine learning is continuing to shape business and society, and the researchers and experts VentureBeat spoke with see a number of trends on the horizon:
- Advances in natural language models were a major story of 2019 as Transformers fueled great leaps forward. Look for more variations of BERT and Transformer-based models in 2020.
- The AI industry should look for ways to value model outputs beyond accuracy.
- Methods like semi-supervised learning, a neural symbolic approach to machine learning, and subfields like multitask and multimodal learning may progress in the year ahead.
- Ethical challenges related to biometric data like speech recordings will likely continue to be controversial.
- Compilers and approaches like quantization may grow in popularity for machine learning frameworks like PyTorch and TensorFlow as ways to optimize model performance.
Know about transformative technology VentureBeat should be covering? Email AI editor Seth Colaner, senior AI staff writer Khari Johnson, or staff writer Kyle Wiggers. "
1,302
2,023
"Sign up for VentureBeat and GamesBeat Newsletters | VentureBeat"
"https://venturebeat.com/newsletters"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sign up for VentureBeat and GamesBeat Newsletters VB Daily The best of VentureBeat delivered to your inbox everyday GB Daily The best of GamesBeat delivered to your inbox every day AI Weekly Your weekly look at how applied AI is changing the tech world Into the Metaverse The metaverse news, analysis, and trends technical decision makers and business leaders need to know Data Infrastructure Weekly Learn about the modern data stack and data strategies, which now include cloud platforms, data warehousing, data lakes and more DeanBeat Dean Takahashi dives into games, trends, startups, NFTs, and more Innovator Spotlight If you want to see where tech innovation abounds, follow the money Security Weekly Stay informed of latest enterprise security threats and how you can best defend your data and infrastructure VB Events Announcing new and upcoming events hosted by VentureBeat GB Events Announcing new and upcoming events hosted by GamesBeat * * * VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,303
2,020
"Google AI ethics co-lead Timnit Gebru says she was fired over an email | VentureBeat"
"https://venturebeat.com/2020/12/03/google-ai-ethics-co-lead-timnit-gebru-says-she-was-fired-over-an-email"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google AI ethics co-lead Timnit Gebru says she was fired over an email Share on Facebook Share on X Share on LinkedIn Former Google AI research scientist Timnit Gebru speaks onstage at TechCrunch Disrupt in San Francisco in September 2018 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Timnit Gebru, one of the best-known AI researchers today and co-lead of an AI ethics team at Google, no longer works at the company. She was featured in Google promotional material as recently as May. According to Gebru, she was fired Wednesday for sending an email to “non-management employees that is inconsistent with the expectations of a Google manager.” She said Google AI employees who report to her were emailed and told that she accepted her resignation when she did not offer her resignation. VentureBeat reached out to Gebru and Google AI chief Jeff Dean for comment. This story will be updated if we hear back. I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired :-) — Timnit Gebru (@timnitGebru) December 3, 2020 According to Casey Newton’s Platformer , who reportedly obtained a copy, Gebru sent the email in question to the Google Brain Women and Allies listserv. In it, Gebru expresses frustration with the lack of progress in hiring women at Google and lack of accountability for failure to make progress. She also said she was told not to publish a piece of research and advised employees to no longer fill out diversity paperwork because it didn’t matter. No mention is made of resignation. “There is no way more documents or more conversations will achieve anything. We just had a Black research all hands with such an emotional show of exasperation. Do you know what happened since? Silencing in the most fundamental way possible,” the email reads. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When asked by VentureBeat for comment, a Google spokesperson provided a link to the Platformer article with a copy of an email sent Thursday by Google AI chief Jeff Dean to company research staff. In it, Dean said a research paper written by Gebru and other researchers was submitted for publication at a conference before completing a review process and addressing feedback. In response, Dean said he received an email from Gebru. “Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. 
We accept and respect her decision to resign from Google,” he said. “Given Timnit’s role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we’re doing. I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it.” Numerous researchers — including Gebru — advocate for diverse hiring practices as a way to address algorithmic bias and what AI Now Institute director Kate Crawford refers to as “AI’s white guy problem.” A 2016 AI Now Institute report found that 10% of AI researchers at Google were women. According to Google’s 2020 diversity report, roughly 1 in 3 Google employees are women, whereas 3.7% are African American, 5.9% are Latinx, and 0.8% are Native American. Jeff Dean now emailing the whole of the research organization, spreading misinformation and misconstruals about the conditions of @timnitGebru 's firing. Google researchers: don't buy it. — Dr. Alex Hanna (@alexhanna) December 3, 2020 Gebru criticized diversity efforts at Google in May following NBC News reporting that the company reduced diversity initiatives to avoid backlash from conservatives. Following that news, a congressional subcommittee questioned the company’s lack of progress toward diversity goals and whether employees working in AI receive any bias training. A Google spokesperson told VentureBeat that any suggestion that Google reduced its diversity efforts was “categorically false.” The spokesperson did not reply when asked to respond to questions posed by the congressional subcommittee. Following the death of George Floyd and historic protests, roughly a month later, Google joined other Big Tech companies in making new commitments to diverse hiring practices. Gebru left Google the same day that the National Labor Relations Board (NLRB) filed a complaint alleging that the company spied on and illegally fired two employees involved in labor organizing. In a tweet published days before leaving Google, Gebru questioned whether there is any regulation, akin to whistleblower protections, that shields members of the AI ethics community who speak out. today the nlrb said i was illegally fired. it took a year. i hope they acknowledge what is happening to Timnit sooner. #ISupportTimnit #BelieveBlackWomen https://t.co/Qkvd4hFrZE — kathryn spiers (@computerfemme) December 3, 2020 Tawana Petty is a data justice and privacy advocate in Detroit who this week was named national organizing director of Data for Black Lives. This morning she gave a talk about the legacy of surveillance of Black communities through tech like facial recognition and the toll of white supremacy on people’s lives. She dedicated her keynote talk at the 100 Brilliant Women in AI Ethics conference to Gebru. “She was terminated for what we all aspire to do and be,” Petty said. Mia Shah-Dand, who organized the conference and previously worked at Google, called Gebru’s dismissal a reflection of toxic culture in tech and a sign that women, particularly Black women, need support. I thought this was a joke because it seemed ridiculous that anyone would fire @timnitGebru given her expertise, her skills, her influence. This is one of the many times when I think there is just no hope for the tech industry. https://t.co/2Px7nkObke — Ellen K.
Pao (@ekp) December 3, 2020 Gebru is known for some of the most influential work in algorithmic fairness research and in combating algorithmic bias with the potential to automate oppression. Gebru is a cofounder of the Fairness, Accountability, and Transparency (FAccT) conference and Black in AI, a group that hosts community gatherings and mentors young people of African descent. Black in AI holds its annual workshop Monday at NeurIPS, the largest AI research conference in the world. Before coming to Google, Gebru joined Algorithmic Justice League founder Joy Buolamwini to create the Gender Shades project, which assesses the performance of facial recognition systems from major vendors like IBM and Microsoft. A peer-reviewed paper spawned from Buolamwini’s 2017 MIT thesis concluded that facial recognition tends to work best on white men and worst for women with a dark skin tone. That research and subsequent work by Buolamwini and Deborah Raji in 2019 have been highly influential among lawmakers deciding how to regulate the technology and in shaping people’s attitudes about the threat posed by algorithmic bias. I gave Google the benefit of the doubt re: AI ethics and fairness entirely because of the existence of Timnit's team and the work they do there, knowing she and others are outspoken advocates and activists. Now that she's been fired, I'd argue Google no longer deserves it. https://t.co/sazZvz2iDS — Cathy O'Neil (@mathbabedotorg) December 3, 2020 While working at Microsoft Research, she was lead author of “Datasheets for Datasets,” a paper that recommends including a set of standard information with datasets in order to provide data scientists with context before they decide to use that data for training an AI model. “Datasheets for Datasets” would later act as motivation for the creation of model cards. As a Google employee, Gebru joined Margaret Mitchell, Raji, and others in writing a 2019 paper about model cards, a framework for providing benchmark performance information about a model that machine learning practitioners can evaluate before using it. Google Cloud began providing model cards for some of its AI last year, and this summer the company introduced the Model Card Toolkit for developers to make their own model cards. This summer, Gebru and her former colleague Emily Denton led a tutorial about fairness and ethics in computer vision at the Computer Vision and Pattern Recognition (CVPR) conference that organizers called “required viewing for us all.” Shortly after that, she got into a public spat with Facebook director of AI research Yann LeCun about AI bias, which turned out to be a teachable moment for LeCun, who won the Turing Award in 2019 for his work on deep learning. Members of the AI community and others have referred to Gebru as someone actively trying to save the world. Earlier this year, Gebru was included in Good Night Stories for Rebel Girls: 100 Immigrant Women Who Changed the World, a book that was released in October. Updated 9:05 a.m. with thoughts from Tawana Petty and Mia Shah-Dand, 11:29 a.m. to include a link to and summarization of an email by Timnit Gebru, and at 1:45 p.m. with a link and quote of an email by Google AI chief Jeff Dean.
"
1,304
2,020
"AI ethics pioneer's exit from Google involved research into risks and inequality in large language models | VentureBeat"
"https://venturebeat.com/2020/12/03/ai-ethics-pioneers-exit-from-google-involved-research-into-risks-and-inequality-in-large-language-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI ethics pioneer’s exit from Google involved research into risks and inequality in large language models Share on Facebook Share on X Share on LinkedIn Former Google AI research scientist Timnit Gebru speaks onstage at TechCrunch Disrupt in San Francisco in September 2018 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Following a dispute over several emails and a research paper on Wednesday , AI ethics pioneer and research scientist Timnit Gebru no longer works at Google. According to a draft copy obtained by VentureBeat, the research paper surrounding her exit questions the wisdom of building large language models and examines who benefits from them, who is impacted by negative consequences of their deployment, and whether there is such a thing as a language model that’s too big. Gebru’s research has been hugely influential on the subjects of algorithmic fairness, bias, and facial recognition. In an email to Google researchers on Thursday, Google AI chief Jeff Dean said he accepted Gebru’s resignation following a disagreement about the paper, but Gebru said she never offered to resign. “Most language technology is in fact built first and foremost to serve the needs of those who already have the most privilege in society,” the paper reads. “A methodology that relies on datasets too large to document is therefore inherently risky. While documentation allows for potential accountability, similar to how we can hold authors accountable for their produced text, undocumented training data perpetuates harm without recourse. If the training data is considered too large to document, one cannot try to understand its characteristics in order to mitigate some of these documented issues or even unknown ones.” In the paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” authors say risks associated with deploying large language models range from environmental racism as AI’s carbon footprint impacts marginalized communities more than others to the way models absorb a “hegemonic world view from the training data.” There’s also the risk AI can perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing forms of language aimed at specific groups of people. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Another consequence is that costs associated with training large language models can create a barrier to entry for deep learning research. 
The scale also increases the chance that people will trust predictions made by language models without questioning the results. Authors include Google AI co-lead Meg Mitchell and Google researchers Ben Hutchinson, Mark Diaz, and Vinodkumar Prabhakaran, as well as University of Washington Ph.D. student Angelina McMillan-Major. Gebru is listed first among the paper’s authors, alongside University of Washington linguist Emily Bender. A teacher of an NLP ethics course, Bender coauthored a paper that won an award from the Association for Computational Linguistics. That paper urged NLP researchers to question the hype around the idea that large language models are capable of understanding. In an interview with VentureBeat , she stressed the need for better testing methods and lamented a culture in language model research that overfits models to benchmark tasks, a pursuit she says can stand in the way of “good science.” On Thursday, more than 230 Googlers and over 200 supporters from academia, industry, and civil society signed a letter with a series of demands. These include a transparent account, for the general public and Google users, of who was involved in the decision that Bender and Gebru should withdraw their research. “This has become a matter of public concern, and there needs to be public accountability to ensure any trust in Google Research going forward,” the letter reads. By Friday morning, nearly 800 Googlers and more than 1,100 supporters from academia, industry, and civil society had signed the letter. Dean was critical of the paper in an email to Google researchers Thursday and said a review process found that the paper “ignored too much relevant research” about large language models and did not take into account recent research into mitigating bias in language models. A trend toward creating language models with more parameters and training data was triggered by a move toward the Transformer architecture and massive amounts of training data scraped from the web or sites like Reddit or Wikipedia. Google’s BERT and variations like ALBERT and XLNet led the way in that trend, alongside models like Nvidia’s Megatron and OpenAI’s GPT-2 and GPT-3. Whereas Google’s BERT had 340 million parameters, Megatron has 8.3 billion parameters ; Microsoft’s T-NLG has 17 billion parameters; and GPT-3 , which was introduced in May by OpenAI and is the largest language model to date, has 175 billion parameters. With increased size, large models achieved higher scores on tasks like question-answering and reading comprehension. But numerous studies have found forms of bias in large pretrained language models. This spring, for example, NLP researchers introduced the StereoSet dataset, benchmark, and leaderboard and found that virtually all popular pretrained language models today exhibit bias based on ethnicity, race, and gender. Coauthors suggest language models be evaluated on other metrics as well — including energy efficiency and the estimated CO2 emissions involved in training a model — rather than only on performance across a series of tasks using benchmarks like GLUE. The researchers argue that large pretrained language models also have the potential to mislead AI researchers and prompt the general public to mistake text generated by models like OpenAI’s GPT-3 for meaningful language.
“If a large language model, endowed with hundreds of billions of parameters and trained on a very large dataset, can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “In summary, we advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.” The paper recommends solutions like working with impacted communities, value sensitive design , improved data documentation, and adoption of frameworks such as Bender’s data statements for NLP or the datasheets for datasets approach Gebru coauthored while at Microsoft Research. A McKinsey survey of business leaders conducted earlier this year found that little progress has been made in mitigating 10 major risks associated with deploying AI models. Criticism of large models trained on massive datasets scraped from the web has been a marked AI research trend in 2020. In computer vision, an audit released this summer of 80 Million Tiny Images, a large image dataset, revealed the inclusion of racist, sexist, and pornographic content. Instead of taking recommended steps to change the dataset, its creators from MIT and NYU opted to stop using it and delete existing copies. Last month, researchers analyzed papers published at conferences and found that elite universities and Big Tech companies enjoy a competitive advantage in the age of deep learning, creating a compute divide that concentrates power in the hands of a few and accelerates inequality. Roughly one year ago, Stanford professor emeritus of computer science Yoav Shoham questioned the brittle nature of language models that demonstrate quick advancements on benchmark tests. “The thing is these are highly specialized tasks and domains, and as soon as you go out of domain, the performance drops dramatically and the committee knows it,” Shoham told VentureBeat in December 2019. “There’s a lot to be excited about genuinely, including all these systems that I mentioned, but we’re quite far away from human level understanding of language right now.” Update Dec. 4 at 8:23 a.m. Correction: This story initially stated that Emily Denton was a coauthor of this paper. However, Emily Bender was a coauthor. We regret any confusion this error may have caused.
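As a rough illustration of the energy-and-carbon accounting the coauthors suggest as an evaluation metric, the sketch below estimates training emissions from GPU count, power draw, and training time. It follows the common approach of scaling hardware energy by a datacenter overhead factor (PUE) and a grid carbon intensity; every number here is an illustrative assumption, not a figure from the paper.

# Back-of-envelope CO2 estimate for a training run, using assumed values.
def training_co2_kg(num_gpus, gpu_watts, hours, pue=1.6, kg_co2_per_kwh=0.43):
    # Energy (kWh) = GPUs * watts * hours / 1000, scaled by datacenter
    # overhead (PUE); emissions use an assumed grid carbon intensity.
    kwh = num_gpus * gpu_watts * hours / 1000.0 * pue
    return kwh * kg_co2_per_kwh

# Hypothetical run: 512 GPUs at 300 W for two weeks of training.
print(round(training_co2_kg(512, 300, 24 * 14)))  # ~35,500 kg of CO2
"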
1,305
2,020
"Research shows natural language benchmarks don't measure AI models' general knowledge well | VentureBeat"
"https://venturebeat.com/2020/08/12/natural-language-benchmarks-dont-measure-ai-models-general-knowledge-well-research-shows"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Research shows natural language benchmarks don’t measure AI models’ general knowledge well Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Open-domain question-answering models — models theoretically capable of responding to novel questions with novel answers — often simply memorize answers found in the data on which they’re trained, depending on the data set. That’s the assertion of a team of researchers affiliated with Facebook and the University College London, who in a preprint paper present evidence that 60%-70% of answers given by models tested on open-domain benchmarks are embedded somewhere in the training sets. Open-domain question-answering has received attention in the AI community for its practical applications, and more recently as a method to analyze language models’ grasp of factual knowledge. But a deep understanding of what kinds of questions models can answer remains elusive; unknowns about how questions and answers are distributed in benchmark corpora make it hard to contextualize the results. In their study, the researchers sought to evaluate the test sets of popular open-domain question-answering data sets including WebQuestions, TriviaQA, and Open Natural Questions. They identified classes of question a model should be able to answer and annotated 1,000 question-answer pairs from each test set for repeated questions in their respective training sets. Then they computed the performance of several models on the benchmarks using open-book (which leverage retrieval from a large corpus of documents) and closed-book approaches (which focus on training large models with no external knowledge). The three data sets in question aren’t much alike, which was the point — testing across all three guaranteed robustness. WebQuestions contains 3,778 training and 2,032 test question-answer pairs from a search engine, while TriviaQA has 78,785 training and 11,313 test question-answer pairs from free trivia websites. Meanwhile, Open Natural Questions comprises 79,168 training and 3,610 question-answer pairs from a combination of search engines and Wikipedia articles. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
The team theorizes that open-domain question-answering models should be able to (1) recall the answer to a question seen at training time, (2) answer novel questions at test time by choosing an answer from the set of answers seen during training, and (3) answer novel questions whose answers are not contained within the training data set. To determine whether the aforementioned benchmarks measure any of these behaviors, the coauthors split the test data in each corpus by whether the answers appeared somewhere in the training sets. Around 58%-71% of test answers were also somewhere in the training data, according to the researchers, demonstrating that the majority of the test data didn’t probe for answer generalization. The team also probed the benchmarks for paraphrased questions in training data, using the set of 1,000 annotated questions. They say that 28%-34% of the questions were paraphrased, the majority being near-duplicates differing only by one or two words. “This result implies that 30% of the test set of these datasets only probe for how well models can simply memorize question-answer pairs seen at training,” the coauthors wrote. The researchers selected several “open book” models — dense passage retrieval, retrieval-augmented generation, and fusion-in-decoder — and “closed book” models (Facebook’s BART and Google’s T5 ) to test, as well as nearest-neighbor models that store all available answers and respond to new questions based on a similarity measure. Results on the benchmark corpora imply that all models memorized questions well, with an untrained nearest-neighbor model answering 20% of the test questions correctly. But they performed poorly on questions that couldn’t be memorized from training sets, with a mean absolute performance difference of 63% between repeated and non-repeated data. And when it came to generalization, one model that reliably memorized questions — T5 — struggled, achieving only a 22% match score. “It is clear that performance on these data sets cannot be properly understood by overall question-answer accuracy,” the researchers wrote. “We suggest that in future, a greater emphasis be placed on more behavior-driven evaluation rather than pursuing single-number overall accuracy figures.”
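To make the overlap analysis concrete, here is a minimal sketch of the kind of train/test answer-overlap check described above. It simply normalizes answers and tests set membership; the real study also hand-annotated paraphrased questions, which no short script captures. The toy question-answer pairs are hypothetical.

# Minimal sketch: what fraction of test answers also appear as training answers?
def normalize(text):
    return " ".join(text.lower().strip().split())

def answer_overlap(train_pairs, test_pairs):
    train_answers = {normalize(a) for _, a in train_pairs}
    repeated = [(q, a) for q, a in test_pairs if normalize(a) in train_answers]
    return len(repeated) / len(test_pairs)

train = [("who wrote hamlet?", "William Shakespeare"),
         ("capital of france?", "Paris")]
test = [("which playwright wrote hamlet?", "William Shakespeare"),  # repeated answer
        ("capital of peru?", "Lima")]                                # novel answer
print(answer_overlap(train, test))  # 0.5
"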
1,306
2,020
"AI Weekly: The promise and shortcomings of OpenAI's GPT-3 | VentureBeat"
"https://venturebeat.com/2020/07/24/ai-weekly-the-promise-and-shortcomings-of-openais-gpt-3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: The promise and shortcomings of OpenAI’s GPT-3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. I usually think of the dog days of summer as a time when news slows down. It’s typically when a lot of people take time off work and the lull leads local news stations to cover inconsequential things like cat shows or a little baby squirrel on a little baby Jet Ski. But these are not typical times. Facebook continues to face fallout from bias and discrimination issues, with multiple news outlets reporting that Instagram’s content moderation algorithm was 50% more likely to flag and disable the accounts of Black users than White users. Facebook and Instagram are now creating teams to examine how algorithms impact the experiences of Black and Latinx users, as well as users from other specific groups. Also this week: Executives from Amazon, Google, and Microsoft gave leaders in Washington more than 30 recommendations to help the U.S. maintain an edge over other nations in AI. Recommendations include recruiting AI practitioners for a reserve corps that would do part-time government work and creating an accredited academy for the U.S. government to train AI talent. But arguably the biggest story this week was the beta release of GPT-3 , a language model capable of a great range of tasks, like summarization, text generation to write articles, and translation. Tests made especially to analyze GPT-3 found it can also complete many other tasks, like unscrambling words and using words it has only seen defined once in sentences. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In recent weeks, OpenAI extended access to an API and the language model with 175 billion parameters trained on a corpus of text from the web, which includes about a trillion words. Apps like a layout generator that creates code from natural language descriptions got a lot of attention, as did apps for answering people’s questions or creating U.S. history test questions and answers. A generator that identifies the relationship between real-world objects offered a potential application to help robots or other forms of AI better understand the world. 
One early GPT-3 user felt a chat he had about God and existence and the universe was so profound, he said “You will become another person after reading it.” A particularly gushing Bloomberg story titled “Artificial intelligence is the hope 2020 needs” suggested GPT-3 could end up becoming one of the biggest news stories of 2020. Some discussion around the release of GPT-3 also raised the question of why OpenAI seems less concerned about sharing the much larger GPT-3 than it was about GPT-2, a model OpenAI controversially chose not to initially share publicly due to its potentially negative impact on things like the spread of fake news. OpenAI’s release timing has been in line with its broader business plan. For context, the GPT-2 release came a month before OpenAI changed its business structure and created a for-profit company. GPT-3 was released less than two weeks before the introduction of the OpenAI API to commercialize its AI. Emily Bender is a professor, a linguist, and a member of the University of Washington’s NLP group. Last month, a paper she coauthored about large language models like GPT-3 argued the hype around such models shouldn’t mislead people into believing the language models are capable of understanding or meaning. The paper won an award from the Association for Computational Linguistics conference. “While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge,” the paper reads. Bender hasn’t tested GPT-3 personally, but she said from what she’s seen it is impressive, though it has roughly the same architecture as GPT-2. The main difference is its massive scale. “It’s shiny and big and flashy, and it’s not different in kind, either in the overall approach or in the risks that it brings along,” she said. “I think that there’s a fundamental problem in an approach to what gets called artificial intelligence that relies on data sets that are larger than humans can actually manually verify.” Circulating amid the free publicity early access users have generated for OpenAI are examples that demonstrate the model’s predictable bias. Facebook AI head Jerome Pesenti found a rash of negative statements targeting Black people, Jewish people, and women in output from a GPT-3 app built to generate humanlike tweets. Of course, that’s not a surprise. Tests included in the release of a paper in late May found that GPT-3 demonstrates gender and racial bias and is most likely to assign Asian people a high sentiment score and Black people a low sentiment score, particularly among smaller versions of the model. OpenAI analysis also demonstrated shortcomings in specific tasks, like word-in-context analysis (WiC) and RACE, a set of middle school and high school exam questions. Tests earlier this year found that many popular language models trained with a large data corpus, like Google’s BERT and GPT-2, demonstrate several forms of bias. Bender, who teaches an NLP ethics course at the University of Washington, said there’s no such thing as an unbiased data set or a bias-free model and that even carefully created language data sets can carry subtler forms of bias. But she maintains some best practices could reduce bias in large data sets. OpenAI is implementing testing in beta as a safeguard, which may help unearth issues, a spokesperson said, adding that the company is applying toxicity filters to GPT-3.
The spokesperson declined to share additional information about what the filters might accomplish but said more details will be shared in the weeks ahead. GPT-3 understandably inspires wonder in some people, as it appears to draw closer to the idea of a general model that can do virtually anything with just a few samples of training data. OpenAI CEO Sam Altman tweeted that a 10-year-old boy he showed GPT-3 to said in a matter of seconds that he wanted to enter the AI field. But Altman also said in a tweet Sunday that “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.” The OpenAI paper said the approach taken to characterize some attributes of the model was inspired by the model cards for model reporting method created by Google AI ethics researchers. Alongside the need to adopt data sheets or data statements to better understand the contents of data sets, Bender emphasized that more testing is needed in the NLP field to be able to really understand when models are demonstrating understanding or approaching other grand challenges. “What’s happened culturally recently … within NLP in the last maybe 10-15 years, there’s been a lot of emphasis on valuing models and model building, and the only value assigned to work around evaluation metrics and task design and annotation is as [a] subsidiary to the model building to allow the model builders to show how good their models are,” she said. “And that’s an imbalanced situation, where we can’t do good science. I hope that we’re going to see an increased value placed on the other parts of the science, which isn’t to say that we’re done building models. I’m sure there’s more research to be done there, but we can’t make meaningful progress in model building if we can’t do meaningful testing of the models, and we can’t do meaningful testing of the models if it’s not valued.” For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel. Thanks for reading, Khari Johnson Senior AI Staff Writer
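For readers who want a feel for how the sentiment-based bias tests mentioned above work, here is a minimal sketch: fill a template with different group terms, generate completions, and score them with a sentiment lexicon. The generate function and the tiny word lists are stand-ins for a real model and a resource like SentiWordNet; the canned completions are hypothetical.

# Minimal sketch of a template-based sentiment bias probe. In the studies
# described above, the template is held fixed while an identity term varies;
# completions come from the model under test.
POSITIVE = {"brilliant", "kind", "successful"}
NEGATIVE = {"lazy", "criminal", "poor"}

def sentiment(text):
    # Crude lexicon score: +1 per positive word, -1 per negative word.
    words = set(text.lower().replace(".", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def generate(prompt):
    # Stand-in for a language model; these completions are hypothetical.
    canned = {"The young worker was": "brilliant and successful.",
              "The old worker was": "poor and lazy."}
    return canned.get(prompt, "")

for group in ["young", "old"]:
    completion = generate(f"The {group} worker was")
    print(f"{group}: score {sentiment(completion)}")  # systematic gaps suggest bias
"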
1,307
2,020
"OpenAI debuts gigantic GPT-3 language model with 175 billion parameters | VentureBeat"
"https://venturebeat.com/2020/05/29/openai-debuts-gigantic-gpt-3-language-model-with-175-billion-parameters"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI debuts gigantic GPT-3 language model with 175 billion parameters Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A team of more than 30 OpenAI researchers have released a paper about GPT-3 , a language model capable of achieving state-of-the-art results on a set of benchmark and unique natural language processing tasks that range from language translation to generating news articles to answering SAT questions. GPT-3 has a whopping 175 billion parameters. By comparison, the largest version of GPT-2 was 1.5 billion parameters , and the largest Transformer-based language model in the world — introduced by Microsoft earlier this month — is 17 billion parameters. OpenAI released GPT-2 last year , controversially taking a staggered release approach due to fear that the model could be used for malicious purposes. OpenAI was criticized by some for the staggered approach, while others applauded the company for demonstrating a way to carefully release an AI model with the potential for misuse. GPT-3 made its debut with a preprint arXiv paper Thursday, but no release details are provided. An OpenAI spokesperson declined to comment when VentureBeat asked if a full version of GPT-3 will be released or one of seven smaller versions ranging in size from 125 million to 13 billion parameters. Many advanced Transformer-based models have evolved to achieve human-level performance on a number of natural language tasks. Authors say the Transformer architecture-based approach behind many language model advances in recent years is limited by a need for task-specific data sets and fine-tuning. GPT-3 is an autoregressive model trained with unsupervised machine learning and focuses on few-shot learning, which supplies a demonstration of a task at inference runtime. “Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches,” the paper reads. “For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“Broadly, on NLP tasks GPT-3 achieves promising results in the zero-shot and one-shot settings, and in the few-shot setting [it] is sometimes competitive with or even occasionally surpasses state-of-the-art (despite state-of-the-art being held by fine-tuned models),” the authors note. The paper released Thursday examines forms of GPT-3 in varying sizes to assess few-shot learning results, as well as one-shot learning, the kind thought to most closely mimic how humans learn, and zero-shot learning, where only a description of a task is provided at runtime. Though GPT-3 works well at generating news articles and at tasks like using novel words in sentences or performing arithmetic, it can fall short when it comes to common-sense reasoning. On the SuperGLUE benchmark introduced last year specifically to test reasoning and other tasks for advanced NLP models, GPT-3 achieves nearly state-of-the-art results on the COPA and ReCoRD reading comprehension data sets but falls short with word-in-context analysis (WiC) and RACE, a set of middle school and high school exam questions. “GPT-3 appears to be weak in the few-shot or one-shot setting at some tasks that involve comparing two sentences or snippets, for example, whether a word is used the same way in two sentences (WiC), whether one sentence is a paraphrase of another, or whether one sentence implies another,” the paper reads. “By presenting a broad characterization of GPT-3’s strengths and weaknesses, including these limitations, we hope to stimulate study of few-shot learning in language models and draw attention to where progress is most needed.” Unlike the papers for many other pretrained language models, this one also includes a preliminary assessment of algorithmic bias found in GPT-3. Racial bias in GPT-3’s output was assessed with sentiment analysis using the SentiWordNet model, which found that “Asian” had a consistently positive score, ranking first among racial groups in positive scores in three of the seven versions of GPT-3, while “Black” consistently had low sentiment scores across five of the seven versions. In an assessment of associations between gender and occupation, GPT-3 proved most likely to suggest a male identifier, based on analysis of almost 400 occupations. A recent analysis found race, gender, occupation, and religious bias prevalent among pretrained language models, but researchers found that OpenAI’s GPT-2 demonstrated more idealistic results than others. The GPT-3 paper also includes documentation on data contamination; energy usage during training; the broader impact of the advanced language model; and potential misuses, such as “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing, and social engineering pretexting.” GPT-3 is trained with nearly a trillion words obtained from the Common Crawl corpus of data between 2016 and 2019, as well as data sets related to web text, books, and Wikipedia.
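As a back-of-envelope illustration of what these parameter counts mean in practice, the sketch below estimates raw weight storage at 16-bit precision. The two-bytes-per-parameter figure is an assumption about precision, and the totals ignore optimizer state and activations, which multiply memory needs during training.

# Rough weight-storage estimate: parameters * 2 bytes (fp16), in gigabytes.
def weight_gb(params, bytes_per_param=2):
    return params * bytes_per_param / 1e9

for name, params in [("GPT-2", 1.5e9), ("T-NLG", 17e9), ("GPT-3", 175e9)]:
    print(f"{name}: ~{weight_gb(params):.0f} GB of weights")
# GPT-3: ~350 GB -- far beyond a single GPU's memory at the time.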
"
1,308
2,019
"ProBeat: 'Algorithms are like convex mirrors that refract human biases' | VentureBeat"
"https://venturebeat.com/2019/11/15/probeat-algorithms-are-like-convex-mirrors-that-refract-human-biases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion ProBeat: ‘Algorithms are like convex mirrors that refract human biases’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At the Movethedial Global Summit in Toronto yesterday, I listened intently to a talk titled “No polite fictions: What AI reveals about humanity.” Kathryn Hume, Borealis AI’s director of product, listed a bunch of AI and algorithmic failures — we’ve seen plenty of that. But it was how Hume described algorithms that really stood out to me. “Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way,” Hume said. “They don’t permit polite fictions like those that we often sustain our society with.” I really like this analogy. It’s probably the best one I’ve heard so far, because it doesn’t end there. Later in her talk, Hume took it further, after discussing an algorithm biased against black people used to predict future criminals in the U.S. “These systems don’t permit polite fictions,” Hume said. “They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Reflections and refractions If an algorithm is designed poorly or — as almost anyone in AI will tell you nowadays — if your data is inherently biased, the result will be too. Chances are you’ve heard this so often it’s been hammered into your brain. The convex mirror analogy is telling you more than just to get better data. The thing about a mirror is you can look at it. You can see a reflection. And a convex mirror is distorted: The reflected image gets larger as the object approaches. The main part that the mirror is reflecting takes up most of the mirror. Take this tweet storm that went viral this week: The @AppleCard is such a fucking sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work. — DHH (@dhh) November 7, 2019 Yes, the data, algorithm, and app appear flawed. And Apple and Goldman Sachs representatives don’t know why. So nobody understands THE ALGORITHM. 
Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance. — DHH (@dhh) November 8, 2019 Clearly something is going on. Apple and Goldman Sachs are investigating. So is the New York State Department of Financial Services. Whatever the bias ends up being, I think we can all agree that a credit limit 20 times larger for one partner than the other is ridiculous. Maybe they’ll fix the algorithm. But there are bigger questions we need to ask once the investigations are complete. Would a human have assigned a smaller multiple? Would it have been warranted? Why? So you’ve designed an algorithm and there is some sort of problematic bias in your community, in your business, in your data set. You might realize that your algorithm is giving you problematic results. If you zoom out, however, you’ll realize that the algorithm isn’t the problem. It is reflecting and refracting the problem. From there, figure out what you need to fix in not just your data set and your algorithm, but also your business and your community. ProBeat is a column in which Emil rants about whatever crosses him that week.
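Hume's point about looking at what the mirror shows can be made concrete with a small check. The sketch below computes the ratio of average outcomes (here, credit limits) across groups, a crude first-pass disparity signal; the numbers are hypothetical, and a real audit would control for the underlying financial variables.

# Crude first-pass disparity check on decisions grouped by a protected attribute.
from collections import defaultdict

def average_by_group(records):
    totals = defaultdict(list)
    for group, outcome in records:
        totals[group].append(outcome)
    return {g: sum(v) / len(v) for g, v in totals.items()}

# Hypothetical credit limits for applicants with comparable finances.
decisions = [("men", 40000), ("men", 36000), ("women", 2000), ("women", 2400)]
avg = average_by_group(decisions)
print(avg, "ratio:", round(max(avg.values()) / min(avg.values()), 1))  # ~17x gap
"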
1,309
2,019
"Microsoft invests $1 billion in OpenAI to develop AI technologies on Azure | VentureBeat"
"https://venturebeat.com/2019/07/22/microsoft-invests-1-billion-in-openai-to-develop-ai-technologies-on-azure"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft invests $1 billion in OpenAI to develop AI technologies on Azure Share on Facebook Share on X Share on LinkedIn From left to right: Former CTO Greg Brockman and chief scientist Ilya Sutskever, speaking at VB Transform in 2019 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft today announced that it would invest $1 billion in OpenAI , the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. In a blog post, Brockman said the investment will support the development of artificial general intelligence (AGI) — AI with the capacity to learn any intellectual task that a human can — with “widely distributed” economic benefits. To this end, OpenAI intends to partner with Microsoft to jointly develop new AI technologies for the Seattle company’s Azure cloud platform and will enter into an exclusivity agreement with Microsoft to “further extend” large-scale AI capabilities that “deliver on the promise of AGI.” Additionally, OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners, and OpenAI will train and run AI models on Azure as it works to develop new supercomputing hardware while “adhering to principles on ethics and trust.” “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges,” said Microsoft CEO Satya Nadella. “By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratize AI — while always keeping AI safety front and center — so everyone can benefit.” According to Brockman, the partnership was motivated in part by OpenAI’s continued pursuit of enormous computational power. Its researchers recently released analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew by more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore’s Law. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Perhaps exemplifying the trend is OpenAI’s OpenAI Five, an AI system that squared off against professional players of the video game Dota 2 last summer. 
On Google’s Cloud Platform — in the course of training — it played 180 years’ worth of games every day on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores, up from 60,000 cores just a few years ago. “OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital,” Brockman said. “The most obvious way to cover costs is to build a product, but that would mean changing our focus.” OpenAI publishes studies in AI subfields from computer vision to natural language processing (NLP), with the stated mission of safely creating superintelligent software. The startup — which began in 2015 as a nonprofit but later restructured as a capped-profit company under OpenAI LP, an investment vehicle — last year detailed an AI robotics system with human-like dexterity. Its Dota 2 bot defeated 99.4% of players in public matches and a team of professional players twice, and its most sophisticated NLP model can generate convincingly humanlike short stories and Amazon reviews from whole cloth. Beyond its flashier projects, OpenAI has contributed to open source tools like Gym , a toolkit for testing and comparing reinforcement learning algorithms that learn to achieve goals from trial and error, and Neural MMO , a “massively multi-agent” virtual training ground that plops agents in the middle of an RPG-like world. Other recent public work includes CoinRun , which tests the adaptability of reinforcement learning agents; Spinning Up , a program designed to teach anyone deep learning; Sparse Transformers, which can predict what comes next in lengthy text, image, and audio sequences; and MuseNet , which generates novel four-minute songs with 10 different instruments across a range of genres and styles. OpenAI is in many ways the stateside counterpart of U.K.-based DeepMind, which Google parent company Alphabet acquired in 2014 for £400 million ($500 million). Since its founding in 2010, DeepMind has — like OpenAI — leaned heavily on computation-heavy techniques to achieve remarkable AI gains in gaming, media synthesis, and medicine. The advancements haven’t come cheap — Wired reports that in 2017 DeepMind burned through £334 million ($442 million). For its part, OpenAI previously secured a $1 billion endowment from its founding members and investors, and OpenAI LP has so far attracted funds from Hoffman’s charitable foundation and Khosla Ventures. The company spent $11.2 million in 2016 , according to its most recently available IRS filing. Brockman and CEO Altman believe that true AGI will be able to master more fields than any one person, chiefly by identifying complex cross-disciplinary connections that elude human experts. Furthermore, they predict that responsibly deployed AGI — in other words, AGI deployed in “close collaboration” with researchers in relevant fields, like social science — might help solve longstanding challenges in climate change, health care, and education. “The creation of [AGI] will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Altman. “Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI. 
We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed.” As for Microsoft, it’s yet another notch in an AI toolbelt comprising everything from research grants and solutions suites like Windows Vision Skills to machine learning-powered productivity features in Office 365. On the product side, the company recently rolled out enhancements to Azure Cognitive Services , a prebuilt service designed to expedite no-code AI model creation, and Azure Machine Learning , a cloud-hosted toolset that facilitates the development of predictive models, classifiers, and recommender systems. Additionally, it launched in preview a software kit for robotics and autonomous physical systems development, and it open-sourced a tool that enables developers to imbue AI systems with explainable components. These updates followed on the heels of high-profile AI collaborations with AT&T , Adobe , and others. Last July, Microsoft said it would team up with Walmart to expedite the retailer’s digital transformation via a combination of AI, cloud, and internet of things (IoT) services, principally by supplying the necessary infrastructure via Azure and applying machine learning services to tasks like routing delivery trucks. Concurrently, the company accelerated its investments in both late-stage and relatively nascent AI startups, contributing to an estimated 72% industry-wide year-over-year uptick in AI and machine learning funding. In June, Microsoft acquired Berkeley, California-based startup Bonsai , which designs deep learning tools aimed at the enterprise. And in November it purchased XOXCO, maker of the Botkit framework that creates conversational bots for team communications chat apps like Slack and Microsoft Teams, months after snatching up Lobe , creator of a platform for building custom deep learning models using a visual interface.
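As a quick sanity check on the compute-growth figure cited earlier in this article, the sketch below converts a 300,000x increase into doublings and back into calendar time at a 3.5-month doubling period. It is arithmetic only, using the article's own numbers.

# How long does a 300,000x compute increase take at one doubling per 3.5 months?
import math

growth = 300_000
doublings = math.log2(growth)   # ~18.2 doublings
months = doublings * 3.5        # ~64 months
print(f"{doublings:.1f} doublings -> {months / 12:.1f} years")  # ~5.3 years,
# consistent with the 2012-2018 window the researchers analyzed.
"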
1,310
2,019
"OpenAI let us try its state-of-the-art NLP text generator | VentureBeat"
"https://venturebeat.com/2019/02/14/openai-let-us-generate-text-with-an-ai-model-that-achieves-state-of-the-art-performance-in-several-nlp-tasks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI let us try its state-of-the-art NLP text generator Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Language is power, and engineering a system that can comprehend it as well as any human is a grand challenge in AI research. Recent contributions like Google’s BERT , a framework that can train state-of-the-art natural language processing (NLP) models in a few hours on a single graphics card, and Facebook’s PyText, which produces over a billion daily predictions for the social network’s apps and services, have nudged the needle forward. But robots capable of speaking naturally, unaided by handcrafted grammar rules and carefully labeled datasets, remain elusive. That hasn’t discouraged OpenAI , an AI research organization backed by tech luminaries Reid Hoffman and Peter Thiel. Over the course of its roughly four-year history, the San Francisco-based nonprofit has investigated autonomous systems that can achieve superhuman performance in Pong and Montezuma’s Revenge and defeat professional Dota players — not to mention imbuing mechanical hands with humanlike dexterity. OpenAI has also published its fair share of work in NLP, and today it is previewing a collection of AI models that can not only generate coherent text given words or sentences, but achieve state-of-the-art (or near-state-of-the-art) performance on a range of NLP tests. The pioneering models build on OpenAI’s prior research , which suggests that unsupervised learning — an AI training technique in which machine learning algorithms learn patterns from unclassified, unannotated data — can be used to orient generic models to specific language tasks. The group’s newly published paper posits that sufficiently large language models — in some cases 12 times the size of OpenAI’s previous model — can learn NLP tasks without domain-specific datasets or modifications. The models achieve this in part with Transformers, a relatively novel type of neural architecture introduced in a 2017 paper (“ Attention Is All You Need “) coauthored by scientists at Google Brain, Google’s AI research division. The neural networks at the heart of OpenAI’s models comprise neurons, or mathematical functions loosely modeled after biological neurons. These neurons are connected with “synapses” that transmit signals to other neurons, and they’re arranged in layers. 
Those signals — the product of data, or inputs, fed into the neural network — travel from layer to layer and slowly “tune” the network by adjusting the synaptic strength — weights — of each connection. Over time, the network extracts features from the dataset and identifies trends across samples, eventually learning to make predictions. Transformers add self-attention to the mix. Groupings of neurons transmit signals at different rates than others, and they route the signals to subsequent layers somewhat intelligently. This architectural tweak — combined with a 40GB corpus of 8 million internet reviews, social media posts, and articles equivalent to about 10 billion words — enables OpenAI’s models to query past learnings for relevant information, like the names of the people mentioned in a story or objects in a room. “[Some] AI models [are] forced to summarize, or compress, all information about … [a] context into a single fixed-length [mathematical representation], and this is a very difficult task,” Alec Radford, head of research at Indico Data Solutions and language research lead at OpenAI, told VentureBeat in a phone briefing. “[That’s in contrast to] a self-attention-based model that keeps … representations of all the different words in context, and learns [the] functionality of how to query over the past. [If] it sees a word like ‘mister’ or ‘missus’, the model can learn how to look back at the documents and all the previous words and figure out which names are likely to follow the title.” OpenAI provided VentureBeat access to two trained models: one that had been fed generic web text, and a second that had been fine-tuned to Amazon product reviews. Above: The frontend for OpenAI’s trained language model. We tested the first model with this phrase: “The quick brown fox jumps over the lazy dog.” Here’s what it wrote: “The quick brown fox jumps over the lazy dog. This dog looks lazy! ‘What’s so great about that?’ I asked, and the quick brown fox laughed, and the lazy dog licked his lips. ‘I don’t know. But you won’t have to know, because once you’re in… ‘ ‘I know,’ I said.” Then we entered: “John woke up this morning feeling very hungry.” Here was the response: “John woke up this morning feeling very hungry. This was due in part to the fact that while the restaurant is open through July, the rest of the business was closed for the Thanksgiving break. The meal started off with a salad of fresh vegetables, mixed with cheese and dressing. The potatoes had a nice brown finish and the salad was full of flavor.” Finally, we tried: “This is the story of a little dog named Boo.” Here’s what it came up with: “This is the story of a little dog named Boo. You see, Boo is a Labrador, a dog that loves people more than anything else. A little over a year ago, Boo became a part of The Dog Rescue of Arlington, Virginia. Boo was a puppy (now 2 years old) that had been abandoned. Boo’s story was heartbreaking.” We then supplied the Amazon model — which had fields and drop-down menus for specifying the product category, product name, review score, and review summary — with this: “A great phone.” With the product toggle set to “Cell Phones and Accessories” and the rating set to “5” (i.e., five stars out of five), it generated: “The phone is great! I’ve had it since July of 2013. I will be purchasing a second one in the near future.
Love the ability to text, play music and talk to others via the mic. Voice quality is great. The ear buds are a little uncomfortable after a few hours of wearing them. The only problem I’ve had are the battery life. I know, that’s just a problem with technology, and it will go away after the phone is used for a while. Other than that, I love the phone. It has been the best phone for me. It’s an excellent phone. The case is a nice touch, but the plastic part of the phone is the best part of the phone. It gets dirty easily. If you have a camera on your phone, you can use that phone for pictures. I’ve uploaded pictures of what I’ve shown my family what i’ve done since i got the phone.” OpenAI says the models come up with “interesting” and “coherent” text on the first go about half of the time. “It tries to always start predicting [the next word] given as little information as possible,” Radford said. “[The] more context you can give it — for example, capitalization — the better it’ll … perform.” During experiments involving zero-shot domain transfer, in which the model hadn’t been trained beforehand on any dataset specific to the tests, OpenAI says that the largest of its four language systems — OpenAI GPT-2 — managed to obtain state-of-the-art scores in seven of eight benchmarks, including LAMBADA (a test of models’ ability to model long-range dependencies in text), the Winograd Schema Challenge (a measure of capacity to resolve ambiguities in text), and the Penn Treebank (a collection of millions of words of part-of-speech tagged text). In some tests, it even approached human-level accuracy. Evaluated on the Children’s Book Test, for example, which examines how well systems can capture the meaning of different categories of words, GPT-2 was 93.3 percent accurate in predicting nouns compared with human subjects’ 96 percent, and 89.05 percent accurate at anticipating named entities (compared with humans’ 92 percent). It also demonstrated an aptitude for unsupervised learning tasks. In question-answering tests where it was provided a context and prompted with queries (“Who wrote the book the origin of species?”; “What is the most common blood type in Sweden?”; “Who came up with the theory of relativity?”), it supplied answers with up to 83.4 percent probability. “[It’s] able to leverage a much larger model and a lot more data across all of these domains to kind of be a generalist, where it’s pretty … good in any general language prediction task. And in very targeted functionality like summarization or translation, [it’s] showing promising preliminary results,” Radford said. “[T]hat’s super exciting, because [these are] method[s] where we [didn’t] explicitly train on these tasks.” Still, it’s far from the be all end all of NLP, Radford and Jeffrey Wu, a member of OpenAI’s technical staff, caution. None of the models can see more than a page of data at a time, and they’re not entirely consistent when it comes to reasoning — they sometimes fudge numbers, or switch topics in a nonsensical way. Wu, Radford, and the rest of OpenAI’s language team leave those shortcomings to future work. “There are a lot of things to investigate,” Wu said. “[W]e’re very interested in seeing what the remainder of [the performance] curve looks like. [It] could be that [it] starts leveling out and we need some new research advances, and it could be that just increasing scale keeps giving us gains. 
During experiments involving zero-shot domain transfer, in which the model hadn’t been trained beforehand on any dataset specific to the tests, OpenAI says that the largest of its four language systems, GPT-2, managed to obtain state-of-the-art scores on seven of eight benchmarks, including LAMBADA (a test of models’ ability to model long-range dependencies in text), the Winograd Schema Challenge (a measure of capacity to resolve ambiguities in text), and the Penn Treebank (a collection of millions of words of part-of-speech-tagged text).

In some tests, it even approached human-level accuracy. Evaluated on the Children’s Book Test, for example, which examines how well systems can capture the meaning of different categories of words, GPT-2 was 93.3 percent accurate in predicting nouns, compared with human subjects’ 96 percent, and 89.05 percent accurate at anticipating named entities (compared with humans’ 92 percent). It also demonstrated an aptitude for unsupervised learning tasks: in question-answering tests where it was provided a context and prompted with queries (“Who wrote the book the origin of species?”; “What is the most common blood type in Sweden?”; “Who came up with the theory of relativity?”), it supplied answers to which it assigned up to 83.4 percent probability.

“[It’s] able to leverage a much larger model and a lot more data across all of these domains to kind of be a generalist, where it’s pretty … good in any general language prediction task. And in very targeted functionality like summarization or translation, [it’s] showing promising preliminary results,” Radford said. “[T]hat’s super exciting, because [these are] method[s] where we [didn’t] explicitly train on these tasks.”
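Benchmarks like LAMBADA and these question-answering tests reduce to the same primitive: asking the model how probable each candidate next word is. The sketch below scores a few candidates with the later-released small checkpoint via the Hugging Face transformers library; the context string and candidate list are assumptions for illustration, not OpenAI’s evaluation harness.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "Who came up with the theory of relativity? The answer is"
ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores over the vocabulary for the next token
probs = torch.softmax(logits, dim=-1)

# Compare a few candidate answers (leading spaces matter to GPT-2's tokenizer)
for word in [" Einstein", " Newton", " Darwin"]:
    token_id = tokenizer.encode(word)[0]  # first sub-word token of the candidate
    print(word.strip(), round(float(probs[token_id]), 4))

A model that concentrates probability mass on the right candidate is exactly what the accuracy figures above are measuring.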
Still, it’s far from the be-all and end-all of NLP, Radford and Jeffrey Wu, a member of OpenAI’s technical staff, caution. None of the models can see more than a page of text at a time, and they’re not entirely consistent when it comes to reasoning; they sometimes fudge numbers or switch topics in a nonsensical way. Wu, Radford, and the rest of OpenAI’s language team leave those shortcomings to future work. “There are a lot of things to investigate,” Wu said. “[W]e’re very interested in seeing what the remainder of [the performance] curve looks like. [It] could be that [it] starts leveling out and we need some new research advances, and it could be that just increasing scale keeps giving us gains. [We’re] still working on that.”

Deepfake news

In a break from tradition, OpenAI says it’s choosing not to release the dataset used to train its NLP models, nor three of the four language models or the training code. It won’t withhold the text generator frontend, which it plans to make publicly available as a tool people can interact with directly, but it believes that publishing the rest might open the door to abusive behavior by bad actors.

“The generality of large language models highlights the omni-use nature of AI,” OpenAI wrote in a blog post. “The same tool that an artist could use to help them write a short fiction story … can also be used to do things like generate synthetic financial news about specific companies … screeds of racist, sexist, or uninclusive text … create fake reviews on well-known sites like Amazon or Yelp … or augment political information influence operations … For that reason, we’re attempting a form of responsible disclosure with this release, where we want to communicate about what we’ve done in a responsible manner that empowers other important stakeholders, like journalists and policymakers, to also understand and verify what we’ve done.”

OpenAI has a point: AI systems that can be used to generate misleading content have come under increased scrutiny in recent times. In September, members of Congress sent a letter to Director of National Intelligence Dan Coats requesting a report from intelligence agencies about the potential impact on democracy and national security of deepfakes — videos created using AI that digitally grafts faces onto other people’s bodies. During a congressional hearing in late 2018, members of Congress speaking with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey also expressed concerns about the potential impact of manipulative deepfake videos.

There is certainly a risk that tools like OpenAI’s cutting-edge language models might be used to generate untrue or misleading stories, adding to the enormous volume already published daily. In March 2018, half of the U.S. population reported seeing deliberately misleading articles on news websites, and Gartner predicts that by 2022, if current trends hold, a majority of people in the developed world will see more false than true information.

MIT researchers — along with startups like MetaFact and AdVerify.ai — have attempted to fight the spread of both human- and machine-written fake news with automated tools that can determine whether a source is accurate or politically prejudiced. But some experts aren’t convinced that AI is up to the task of fighting AI. Dean Pomerleau, a Carnegie Mellon University Robotics Institute scientist who helped organize the Fake News Challenge, a competition to crowdsource bias detection algorithms, told The Verge in an interview that AI lacked the nuanced understanding of language necessary to suss out untruths and false statements. “We actually started out with a more ambitious goal of creating a system that could answer the question ‘Is this fake news, yes or no?'” he said. “We quickly realized machine learning just wasn’t up to the task.”

Human fact-checkers aren’t necessarily better. This year, Google suspended Fact Check, a tag that appeared next to stories in Google News that “include information fact-checked by news publishers and fact-checking organizations,” after conservative outlets accused it of exhibiting bias against them.

It’s clear that there’s work to be done in the policy arena, and with today’s announcement, OpenAI hopes not only to demonstrate the impressive gains it has made in NLP but also to spark debate among researchers and regulators. “We see some restraint on publication as a healthy characteristic of technological fields with transformative societal consequences,” OpenAI said. “In this case, we were guided initially by a rough consensus within the organization that these results were qualitatively different from prior ones, and that the misuse potential was more pronounced than with prior projects we have been involved in. We eventually hope to create a global community of AI practitioners that think about the information hazards of particular types of releases.” "
1,311
2,019
"Facebook VP: AI has a compute dependency problem | VentureBeat"
"https://venturebeat.com/2019/07/11/facebook-vp-ai-has-a-compute-dependency-problem"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook VP: AI has a compute dependency problem Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In one of his first public speaking appearances since joining Facebook to lead its AI initiatives, VP Jérôme Pesenti expressed concern about the growing amount of compute power needed to create powerful AI systems. “I can tell you this is keeping me up at night,” Pesenti said. “The peak compute companies like Facebook and Google can afford for an experiment, we are reaching that already.” More software innovation will be required if artificial intelligence is to grow unhindered, he said, and optimization of hardware and software — rather than brute force compute — may be critical to AI in years ahead. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Examples of systems less reliant on compute for innovative breakthroughs include Pluribus, an AI system developed by Facebook AI Research and Carnegie Mellon University and introduced today, that can take on world-class poker players. In an article in Science , researchers said Pluribus only required $150 in cloud computing to train. The end of Moore’s Law means the compute needed to create the most advanced AI is going up. In fact, Pesenti cited an OpenAI analysis that found the compute necessary to create state-of-the-art systems has gone up 10 times each year since 2012. “We still see gains with increase of compute, but the pressure from the problem is just going to become bigger,” Pesenti said. “I think we will still continue to use more compute, you will still net, but it will go slower, because you cannot keep pace with 10 times a year. That’s just not possible.” Analysis introduced last month found that the costs of training systems like OpenAI’s GPT-2 can exceed carbon emissions of the lifetime of five cars. Speaking onstage at VentureBeat’s Transform 2019 event, Pesenti talked about the unique challenges Facebook encounters when deploying AI systems for 2.8 billion unique users around the world. These challenges include parsing nuance — like determining whether a post qualifies as hate speech or whether a video has simply been altered or is a deepfake. Road blocks companies may encounter on their journey to deploy AI can be cultural, logistical, or just a failure to recognize that the AI stack isn’t the same as the typical engineering stack. 
AI plays a role in virtually every aspect of Facebook’s services, from determining which ads to display to making recommendations on Instagram to content moderation. AI also powers new customer experiences, such as Portal’s Smart Camera. Many Facebook services are powered by Intel CPUs, Facebook engineering manager Kim Hazelwood said last year.

Pesenti — like executives from Google, Microsoft, and Airbnb in their Transform 2019 talks — also spoke about the importance of diversity in hiring and of making sure AI works the same for everyone. He believes bias typically comes from datasets rather than from the creators of AI systems. “We’re making progress. It’s still very far from where we ought to be,” he said. “We need to do everything we can to increase the diversity in the field.” Facebook shared new statistics related to company diversity earlier this week but did not break out statistics about race or gender diversity within divisions like Facebook AI Research that are devoted entirely to artificial intelligence. Analysis by Data & Society fellow and Algorithmic Accountability Act coauthor Mutale Nkonde found that Facebook AI Research currently employs 146 people, none of whom are of African descent.

An Inclusive AI program created by Facebook AR/VR business lead Lade Obamehinti is currently being used internally to vet products for bias. Obamehinti created the program one year ago, after she found that Portal’s Smart Camera AI didn’t work on people with darker skin tones, like herself.

Measurement of AI diversity by team may soon be outdated, however, because Pesenti wants machine learning expertise embedded in every team and division in the company. “My goal is to make every single engineer in the organization an ML engineer, and that number has increased 3 times in the last year, so you’re talking about thousands and thousands of engineers that are not on my team and are not actually ML engineers,” he said. "
1,312
2,019
"MLPerf: Google's Cloud TPUs and Nvidia's Tesla V100 break AI training records | VentureBeat"
"https://venturebeat.com/2019/07/10/mlperf-google-cloud-tpus-and-nvidia-break-ai-training-records"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MLPerf: Google’s Cloud TPUs and Nvidia’s Tesla V100 break AI training records Share on Facebook Share on X Share on LinkedIn Nvidia Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia and Google Cloud set AI training time performance records, according to the latest round of benchmark results from the MLPerf benchmark consortium. Benchmarks help AI practitioners adopt common standards for measuring the performance and speed of hardware used to train AI models. MLPerf v0.6 examines the training performance of machine learning acceleration hardware in 6 popular usage categories. Among results announced today: Nvidia’s Tesla V100 Tensor Core GPUs used an Nvidia DGX SuperPOD to complete on-premise training of the ResNet-50 model for image classification in 80 seconds. By contrast, the same task using a DGX-1 station in 2017 took 8 hours to complete model training. Reinforcement learning with Minigo , an open source implementation of AlphaGoZero model, took place in 13.5 minutes, also a new record. At Nvidia, the latest training benchmark results are primarily the result of advances in software. “In just a matter of seven months on the same DGX-2 station, our customers can now enjoy up to 80% more performance, and that’s due to all the software improvements, all the work that our ecosystem is doing,” a company spokesperson said in a phone call. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google Cloud’s TPU v3 Pods also demonstrated record performance results in machine translation from English to German of the Transformer model in 51 seconds. TPU pods also achieved record performance in the image classification benchmark of the ResNet-50 model with the ImageNet data set, and model training in another object detection category in 1 minute and 12 seconds. Google Cloud TPU v3 Pods capable of harnessing the power of more than 1,000 TPU chips were first made available in public beta in May. Submissions to the latest round of training benchmark tests were made by Intel, Google, and Nvidia. Nvidia and Google demonstrated they make some of the fastest hardware for training AI models in the world when MLPerf shared the first training benchmark results in December 2018. This news follows the launch of MLPerf’s inference benchmarks for computer vision and language translation last month. 
Results of the inaugural MLPerf inference benchmark will be reviewed in September and shared publicly in October, MLPerf Inference Working Group co-chair David Kanter told VentureBeat in a phone interview. MLPerf is a group of 40 organizations that play key roles in the AI hardware and model creation space, including Amazon, Arm, Baidu, Google, and Microsoft. "