Dataset columns: id (int64, values 0 to 17.2k); year (int64, 2k to 2.02k); title (string, 7 to 208 chars); url (string, 20 to 263 chars); text (string, 852 to 324k chars).
id: 15,967 | year: 2021
"Microsoft details Speller100, an AI system that checks spelling in over 100 languages | VentureBeat"
"https://venturebeat.com/2021/02/08/microsoft-details-speller100-an-ai-system-that-checks-spelling-in-over-100-languages"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft details Speller100, an AI system that checks spelling in over 100 languages Share on Facebook Share on X Share on LinkedIn Bing. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In a post on its AI research blog , Microsoft today detailed a new language system, Speller100, that the company claims is one of the most comprehensive ever made in terms of linguistic coverage and accuracy. Comprising a number of AI models that understand speech in over 100 languages collectively, Speller100 now powers all spelling correction on Bing, which previously only supported spell check for about two dozen languages. For languages with little web presence, it’s challenging to collect an amount of data sufficient to train a spell-correcting model. Moreover, systems can’t rely solely on training data to learn the spelling of a language. At its core, spelling correction is about building an error model and a language model, and not all errors are the same. For example, n on-word errors occur when a word isn’t in the vocabulary for a given language, while real-word errors occur when the word exists but doesn’t fit in a larger context. Speller100 is built around the concept of language families , or larger groups of languages based on similarities that multiple languages share. It also employs zero-shot learning , a technique that allows a model to learn and correct spelling without additional language-specific labeled training data. Above: Spelling correction on Bing, powered by AI. To scale Speller100 to over 100 languages, Microsoft says it developed a spelling correction pretraining approach that relies on functions to take text extracted from webpages and generate errors like deletion, addition, rotation, and replacement. This eliminated the need for a massive dataset of misspelled searches, enabling Speller100 to reach 50% of correction recall for top candidates in languages for which zero training data existed. Deployed as-is on Bing, where about 15% of searches are misspelled, it would have reduced the number of misspellings by 7.5%. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To improve performance even further, Microsoft leveraged the orthographic, morphological, and semantic similarities between languages to build a dozen or so language family-based models. 
This maximized the zero-shot benefit and kept Speller100 compact enough for runtime, making the system well-suited to spelling correction for languages with relatively little training data, like Afrikaans and Luxembourgish. Microsoft says that to date on Bing, Speller100 has reduced the number of pages with no results by up to 30% and the number of times users have to manually reformulate their searches by 5%. It’s also increased the number of times users click on Bing’s spelling suggestion from 8% to 67%. Microsoft says it plans to implement Speller100 in more of its products going forward. “Spelling correction is the very first component in the Bing search stack because searching for the correct spelling of what users mean improves all downstream search components,” principal applied science manager Jingwen Lu, principal applied software engineering manager Jidong Long, and vice president Rangan Majumder wrote in the blog post. “Our spelling correction technology powers several product experiences across Microsoft. Since it is important to us to provide all customers with access to accurate, state-of-the-art spelling correction, we are improving search so that it is inclusive of more languages from around the world with the help of large-scale AI.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
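The article above describes Speller100's pretraining trick: take clean web text and synthetically inject spelling errors (deletion, addition, rotation, replacement) to build training pairs, so no large corpus of real misspelled queries is needed. Below is a minimal, hedged sketch of that kind of noising function; the function names, error rate, and alphabet are illustrative assumptions, not Microsoft's implementation.

```python
import random

# Illustrative noising step for spelling-correction pretraining: corrupt clean web
# text with the four error types the article names (deletion, addition,
# rotation/transposition, replacement) to produce (noisy, clean) training pairs.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def corrupt_word(word: str, rng: random.Random) -> str:
    if len(word) < 2:
        return word
    i = rng.randrange(len(word))
    op = rng.choice(["delete", "add", "rotate", "replace"])
    if op == "delete":                          # drop a character
        return word[:i] + word[i + 1:]
    if op == "add":                             # insert a random character
        return word[:i] + rng.choice(ALPHABET) + word[i:]
    if op == "rotate" and i < len(word) - 1:    # swap adjacent characters
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + rng.choice(ALPHABET) + word[i + 1:]   # replace a character

def make_training_pair(sentence: str, error_rate: float = 0.15, seed: int = 0):
    """Return a (noisy, clean) pair for seq2seq spelling-correction pretraining."""
    rng = random.Random(seed)
    noisy = [corrupt_word(w, rng) if rng.random() < error_rate else w
             for w in sentence.split()]
    return " ".join(noisy), sentence

print(make_training_pair("spelling correction for low resource languages"))
```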
id: 15,968 | year: 2021
"Plus raises $200 million to develop its autonomous truck platform | VentureBeat"
"https://venturebeat.com/2021/02/10/plus-raises-200-million-to-develop-its-autonomous-truck-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Plus raises $200 million to develop its autonomous truck platform Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Plus , a startup developing autonomous truck technology, today announced it has completed a $200 million series B round led by new investors Guotai Junan, Hedosophia, and Wanxiang. Plus plans to use the funds to accelerate the commercialization and deployment of its automated trucking system. As the company begins mass production in 2021, Plus says it will develop a sales, engineering, and support network to help fleets integrate its platform into their daily operations. The company will also scale deployments in the U.S. and China and expand to Europe and other parts of Asia. Some experts predict the pandemic will hasten adoption of autonomous vehicles for delivery. Self-driving cars, vans, and trucks promise to minimize the risk of spreading disease by limiting driver contact. This is particularly true with regard to short-haul freight, which is experiencing a spike in volume during the COVID-19 outbreak. The producer price index for local truckload carriage jumped 20.4% from July to August, according to the U.S. Bureau of Labor Statistics, most likely propelled by demand for short-haul distribution from warehouses and distribution centers to ecommerce fulfillment centers and stores. Cupertino, California-based Plus, which was cofounded in 2016 by Stanford Ph.D. students Hao Zheng, David Liu, Shawn Kerrigan, and Tim Daly, aims to develop semiautonomous trucks in partnership with automakers, chipmakers, and shipping companies. Its technology relies on a combination of radars, lidar sensors, and cameras to “see” 360 degrees around, with a fusion-based perception system that allows Plus-equipped driverless trucks to track vehicles hundreds of meters away. Plus says it trains and deploys a number of AI models to perform tasks like detecting and analyzing ground objects and road structures, and to predict the behavior of its trucks and surrounding vehicles. Complementary models and mechanisms like odometry provide functional redundancy and ostensibly ensure smooth transitions between operating modes. “[Our] models were trained using a diverse dataset consisting of data collected from 14 states in the U.S. and seven provinces in China under varying environmental conditions, including rain, snow, dust storms, and fog. 
Data augmentation was done using a combination of heuristics, simulation, and generative adversarial networks (GANs) to ensure that the models generalize to novel situations to mitigate potential bias,” Liu told VentureBeat via email. “For example, we can create realistic images of new kinds of road geometries and textures using GANs, even when that is not part of the data collected. Event miners are used to automatically identify interesting data segments containing new information by temporal correlation and cross-correlation with other sensors. An example of interesting data is an object that should have been detected but was not. When new model versions are trained, we make sure that this new class of information gets incorporated appropriately and the model’s capacity continues to develop.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Back in March 2017, Plus, which has around 200 employees in its U.S. and China offices, became one of the first driverless trucking companies to land a California Autonomous Vehicle Testing License, which allows manufacturers to test autonomous vehicles with a human in the driver seat. The company claims to have partnered with some of the world’s largest logistics providers, some of whom have preordered its driving platform ahead of mass production in 2021 with heavy truck manufacturer FAW. (Plus says it has over 10,000 units of preorders so far.) In December 2019, Plus completed a cross-country commercial freight run in the U.S. with Land O’Lakes. It has also put its automated driving system through independent testing via the Transportation Research Center, a vehicle test facility and proving grounds. “Trucking accidents and a growing truck driver shortage affect our economy and daily lives. All of us at Plus are inspired every day to develop automated trucks that are going to make our world safer and greener and help fleets drive more fuel efficiently and reduce operating costs,” Liu said in a statement, adding that Plus expects to have its automated driving system deployed on “tens of thousands” of trucks across the U.S., China, and Europe in the next few years. “The additional funding and continued support of our investors will help us further scale our commercialization efforts, enabling us to serve fleets in more countries.” The value of goods transported as freight cargo in the U.S. was estimated to be about $50 billion each day in 2013. And the driverless truck market — which is anticipated to reach 6,700 units globally after totaling $54.23 billion in 2019 — stands to save the logistics and shipping industry $70 billion annually while boosting productivity by 30%. Besides promised cost savings, the growth of trucking automation has been driven by a shortage of drivers. In 2018, the American Trucking Associations estimated that 50,000 more truckers were needed to close the gap in the U.S., despite the sidelining of proposed U.S. Transportation Department screenings for sleep apnea. Plus has competition in Daimler, which in 2018 obtained a permit from the Chinese government that allows it to test self-driving cars powered by Baidu’s Apollo platform on public roads in Beijing. And startup Optimus Ride built out a small driverless shuttle fleet in Brooklyn. Waymo, which has racked up more than 20 million real-world miles in over 25 cities across the U.S. 
and billions of simulated miles, in November 2018 became the first company to obtain a driverless car testing permit from the California Department of Motor Vehicles (DMV). Other competitors include Tesla, Aptiv, May Mobility, Cruise, Aurora, Argo AI, Pronto.ai, Pony.ai (which this week raised $100 million ), and Nuro. Sequoia, Lightspeed, GSR Ventures, CGC, Mayfield, Chinese carmaker SAIC, and trucking platform Full Truck Alliance — a Plus customer — also participated in Plus’ latest funding round. It brings the company’s total raised to close to $400 million. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
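Liu's quote above mentions "event miners" that flag interesting data segments by cross-correlating sensor streams, for example an object that should have been detected but was not. The sketch below illustrates that general idea under stated assumptions: the signal names, window size, and threshold are invented for illustration, and this is not Plus's pipeline.

```python
import numpy as np

# Hedged sketch of sensor-disagreement event mining: cross-check one detection
# stream against another over time and flag windows where they disagree, e.g.
# radar reports a nearby object while the camera detector reports nothing
# (a likely missed detection worth pulling into the training set).

def mine_disagreement_segments(radar_hits: np.ndarray,
                               camera_hits: np.ndarray,
                               window: int = 10,
                               threshold: float = 0.5) -> list[tuple[int, int]]:
    """Return (start, end) frame ranges where radar sees objects the camera misses."""
    assert radar_hits.shape == camera_hits.shape
    disagreement = (radar_hits == 1) & (camera_hits == 0)       # per-frame mismatch
    segments = []
    for start in range(0, len(disagreement) - window + 1, window):
        rate = disagreement[start:start + window].mean()         # fraction of mismatched frames
        if rate >= threshold:
            segments.append((start, start + window))
    return segments

# Toy example: frames 20-39 have radar detections the camera mostly never confirms.
radar = np.zeros(60, dtype=int); radar[20:40] = 1
camera = np.zeros(60, dtype=int); camera[20:25] = 1
print(mine_disagreement_segments(radar, camera))   # -> [(20, 30), (30, 40)]
```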
id: 15,969 | year: 2021
"Locus Robotics raises $150 million to scale its warehouse robotics platform | VentureBeat"
"https://venturebeat.com/2021/02/17/locus-robotics-raises-150-million-to-scale-its-warehouse-robotics-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Locus Robotics raises $150 million to scale its warehouse robotics platform Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Locus Robotics , a Wilmington, Massachusetts-based warehouse robotics startup, today announced it has raised $150 million in series E funding at a $1 billion post-money valuation. The company says the funding will allow it to accelerate product innovation and global expansion. Locus expects that in the next four years, over a million warehouse robots will be installed and that the number of warehouses using them will grow tenfold. Worker shortages attributable to the pandemic have accelerated the adoption of automation. According to ABI Research , more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. In China, Oxford Economics anticipates 12.5 million manufacturing jobs will become automated, while in the U.S., McKinsey projects machines will take upwards of 30% of such jobs. Locus’ autonomous robots — called LocusBots — can be reconfigured with totes, boxes, bins, containers, or peripherals like barcode scanners, label printers, and sensors. They work collaboratively with humans, minimizing walking with an app that recognizes workers’ Bluetooth badges and switches to their preferred language. On the backend, Locus’ LocusServer directs robots so they learn efficient travel routes, sharing the information with other robots and clustering orders to where workers are. As orders come into warehouse management systems, Locus organizes them before transmitting back confirmations, providing managers real-time performance data, including productivity, robot status, and more. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When new LocusBots are added to the fleet, they share warehouse inventory status and item locations. Through LocusServer, they detect blockages and other traffic issues to improve item pick rates and order throughput. Locus’ directed picking technology points workers to their next picks, optionally providing challenges through a gamification feature that supports individual, team, and shift goals, plus events and a mechanism managers can use to provide feedback. In addition, Locus’ backend collates various long-tail metrics, including hourly pick data, daily and monthly pick volume, current robot locations, and robot charging levels. 
Locus offers a “robot-as-a-service” program through which customers can scale up by adding robots on a limited-time basis. For a monthly subscription fee, the company sends or receives robots to warehouses upon request, and it provides those robots software and hardware updates, in addition to maintenance. Locus claims that its system, which takes about four weeks to deploy, has delivered a 2 to 3 times increase in productivity and throughput and 15% less overtime spend for brands like Boots UK, Verst Logistics, Ceva, DHL, Material Bank, Radial, Port Logistics Group, Marleylilly, and Geodis. The company’s robots passed 100 million units picked in February 2020, and last April UPS announced that it would be piloting Locus machines in its own facilities. Locus recently opened a new headquarters in the EU and surpassed 50 customer deployments, with companies including DHL, Boots UK, and Geodis. As of late 2020, the company’s robots had picked and sorted over 300 million units, equating to ten thousand units every 15 minutes or roughly a million units a day. Bond and Tiger Global Management led the round announced today. It brings the Quiet Logistics spinout’s total raised to over $105 million, following a $40 million series D investment last June. Locus competes in the $3.1 billion intelligent machine market with Los Angeles-based robotics startup InVia , which leases automated robotics technologies to fulfillment centers. Gideon Brothers, a Croatia-based industrial startup backed by TransferWise cofounder Taavet Hinrikus, is another contender. And then there’s robotics systems company GreyOrange; Otto Motors ; and Berkshire Grey , which combines AI and robotics to automate multichannel fulfillment for retailers, ecommerce, and logistics enterprises. Fulfillment alone is a $9 billion industry — roughly 60,000 employees handle orders in the U.S., and companies like Apple manufacturing partner Foxconn have deployed tens of thousands of assistive robots in assembly plants overseas. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
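The article above says LocusServer clusters incoming orders to where workers already are, so pickers walk less. The toy sketch below shows the simplest version of that assignment idea (nearest active worker zone); the worker names, coordinates, and greedy rule are invented for illustration, and Locus's actual routing logic is proprietary.

```python
from math import hypot

# Minimal illustration of clustering picks to nearby workers: assign each incoming
# pick location to the closest active worker zone. Names and coordinates are made up.

workers = {"worker_a": (2.0, 3.0), "worker_b": (18.0, 7.0)}   # zone centers (x, y)

def assign_picks(pick_locations: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Map each pick (bin id -> aisle coordinates) to the nearest worker."""
    assignments = {}
    for bin_id, (x, y) in pick_locations.items():
        nearest = min(workers, key=lambda w: hypot(x - workers[w][0], y - workers[w][1]))
        assignments[bin_id] = nearest
    return assignments

picks = {"bin_101": (1.0, 4.0), "bin_740": (17.0, 6.0), "bin_233": (9.0, 5.0)}
print(assign_picks(picks))   # bin_101 -> worker_a, bin_740 -> worker_b, bin_233 -> worker_a
```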
id: 15,970 | year: 2021
"Amazon launches computer vision service to detect defects in manufactured products | VentureBeat"
"https://venturebeat.com/2021/02/24/amazon-launches-computer-vision-service-to-detect-defects-in-manufactured-products"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches computer vision service to detect defects in manufactured products Share on Facebook Share on X Share on LinkedIn The logo of Amazon is seen at the company logistics center in Boves, France, September 18, 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today announced the general availability of Amazon Lookout for Vision, a cloud service that analyzes images using computer vision to spot product or process defects and anomalies in manufactured goods. Amazon says Lookout for Vision, which is available in select Amazon Web Services (AWS) regions via the AWS console and supporting partners, is able to train an AI model using as few as 30 baseline images. Tasks in manufacturing can be error-prone when humans are in the loop. A study from Vanson Bourne found that 23% of all unplanned downtime in manufacturing is the result of human error, compared with rates as low as 9% in other segments. The $327.6 million Mars Climate Orbiter spacecraft was destroyed because of a failure to properly convert between units of measurement. And one pharma company reported a misunderstanding that resulted in an alert ticket being overridden, which cost four days on the production line at £200,000 ($253,946) per day. Lookout for Vision aims to combat this by injecting a bit of AI into the mix, detecting manufacturing and production defects in products — including cracks, dents, incorrect colors, and irregular shapes — from their appearance. The service can process thousands of images an hour and requires no upfront commitment or minimum fee, Amazon says. Customers pay by the hour for usage to train the model and detect anomalies or defects using the service. After analyzing the data, Lookout for Vision reports images that differ from the baseline via the service dashboard or a real-time API. Amazon claims Lookout for Vision is sophisticated enough to maintain accuracy with variances in camera angle, pose, and lighting arising from changes in work environments. But customers have the ability to provide feedback on the results and whether a prediction correctly identified an anomaly. Lookout for Vision will automatically retrain the underlying model so the service continuously improves. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Customers using Lookout for Vision include GE Healthcare, Basler, and Sweden-based Dafgards. 
(Lookout for Vision launched in preview with customers beginning in December 2020 after being unveiled during Amazon’s virtual re:Invent conference.) Dafgards is using the service to automate the inspection of its production lines and detect whether pizzas, hamburgers, and quiches have the correct toppings. And Amazon’s own Print-On-Demand facility, which prints books to fulfill customer orders, is tapping Lookout for Vision to automate and scale visual inspection at each step of book manufacturing. “Whether a customer is placing toppings on a frozen pizza or manufacturing finely calibrated parts for an airplane, what we’ve heard unequivocally is that guaranteeing only high-quality products reach end users is fundamental to their business. While this may seem obvious, ensuring such quality control in industrial pipelines can in fact be very challenging,” AWS Amazon machine learning VP Swami Sivasubramanian said in a press release. “We’re excited to deliver Amazon Lookout for Vision to customers of all sizes and across all industries to help them quickly and cost-effectively detect defects at scale to save time and money while maintaining the quality their consumers rely on — with no machine learning experience required.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
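The article notes that Lookout for Vision exposes a real-time inference API alongside the console dashboard. Below is a hedged sketch of calling that API through boto3; the operation and response field names are written from memory of the AWS SDK and should be verified against current documentation, and the project name, model version, and image path are placeholders.

```python
import boto3

# Hedged sketch of a real-time Lookout for Vision inference call via boto3.
# "lookoutvision" / detect_anomalies and the response fields reflect my recollection
# of the SDK; check the current AWS docs before relying on them.

client = boto3.client("lookoutvision", region_name="us-east-1")

def check_image(image_path: str, project: str = "circuit-board-qa", model_version: str = "1"):
    """Send one product photo to a trained Lookout for Vision model and report the verdict."""
    with open(image_path, "rb") as f:
        response = client.detect_anomalies(
            ProjectName=project,
            ModelVersion=model_version,
            Body=f.read(),
            ContentType="image/jpeg",
        )
    result = response["DetectAnomalyResult"]
    return result["IsAnomalous"], result["Confidence"]

# Example: flag a unit for manual inspection when the model suspects a defect.
is_defect, confidence = check_image("line_camera/unit_0042.jpg")
if is_defect:
    print(f"Defect suspected (confidence {confidence:.2f}) - route to inspection")
```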
id: 15,971 | year: 2021
"IBM launches AI platform to discover new materials | VentureBeat"
"https://venturebeat.com/2021/03/05/ibm-launches-ai-platform-to-discover-new-materials"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM launches AI platform to discover new materials Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. IBM today announced the launch of the Molecule Generation Experience (MolGX), a cloud-based, AI-driven molecular design platform that automatically invents new molecular structures. MolGX, a part of IBM’s overarching strategy that aims to accelerate the discovery of new materials by 10 to 100 times, uncovers materials from the property targets of a given product. The chemical sciences have made strides in the discovery of novel and useful materials over the past decades. For example, in the area of polymers, the recent development of thermoplastics has had an influence on applications ranging from new paints to clothing fibers. But while the discovery of new materials is the driving force in the expansion and improvement of industrial products, the vastness of chemical space likely exceeds the ability of human experts to explore even a fraction of it. By observing and selecting a dataset, MolGX leverages generative models to produce molecules from chemical properties like “solubility in water” and “heatability.” The platform trains an AI model to predict chemical characteristics within given parameters and synthesizes molecular structures based on the model built. “The development of new materials follows a number of different pathways, depending on both the nature of the problem being pursued and the means of investigation. Breakthroughs in the discovery of new materials span from pure chance, to trial-and-error approaches, to design by analogy to existing systems,” Seiji Takeda, technical lead of material discovery at IBM, wrote in a blog post. “While these methodologies have taken us far, the challenges and requirements for new materials are more complex — so too are the demands and issues for which new materials are needed. As we face global problems such as pandemics and climate change, the necessity and urgency to design and develop new medicines and materials at a faster pace and on a molecular scale through to the macroscopic level of a final product is becoming increasingly important.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! IBM has released a free trial version of MolGX trained using a built-in dataset, which the company applied internally to the development of a new photoacid generator — a key material in electronics manufacturing. 
A professional version of MoIGX with additional functionality including data upload, results exportation, customized modeling, and more is available with a license. According to IBM, this paid release carried out the inverse design of sugar and dye molecules over 10 times faster than human chemists at Nagase & Co Ltd, a chemical manufacturing company. Beyond IBM, startups like Kebotix are developing AI tools that automate lab experiments to uncover materials faster than with manual techniques. Meanwhile, Facebook and Carnegie Mellon have partnered on a project to discover better ways to store renewable energy, in part by tapping AI to accelerate the search for electrocatalysts, or catalysts that participate in electrochemical reactions. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
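The article describes the inverse-design pattern behind MolGX: fit a forward model that predicts a chemical property from a molecular representation, then search over candidate structures whose predicted property hits the target. The sketch below shows only that loop in schematic form; the toy "molecules" are plain strings, the solubility predictor is a stand-in, and none of it reflects MolGX's actual featurization or generative models.

```python
import random

# Schematic inverse-design loop: forward property model + candidate search.
# Everything here is a deliberately simplified stand-in.

def predict_solubility(molecule: str) -> float:
    """Stand-in forward model: pretend solubility scales with the fraction of 'O' atoms."""
    return molecule.count("O") / max(len(molecule), 1)

def mutate(molecule: str, rng: random.Random) -> str:
    """Propose a new candidate by swapping one position for another atom symbol."""
    atoms = "CNOS"
    i = rng.randrange(len(molecule))
    return molecule[:i] + rng.choice(atoms) + molecule[i + 1:]

def inverse_design(seed: str, target: float, tol: float = 0.05, iters: int = 5000):
    """Random-walk search for candidates whose predicted property is near the target."""
    rng = random.Random(0)
    hits = set()
    candidate = seed
    for _ in range(iters):
        candidate = mutate(candidate, rng)
        if abs(predict_solubility(candidate) - target) <= tol:
            hits.add(candidate)
    return sorted(hits)

print(inverse_design(seed="CCCCCC", target=0.5)[:5])   # candidates near the target property
```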
id: 15,972 | year: 2021
"Nvidia and Harvard develop AI tool that speeds up genome analysis | VentureBeat"
"https://venturebeat.com/2021/03/08/nvidia-and-harvard-develop-ai-tool-that-speeds-up-genome-analysis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia and Harvard develop AI tool that speeds up genome analysis Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Researchers affiliated with Nvidia and Harvard today detailed AtacWorks, a machine learning toolkit designed to bring down the cost and time needed for rare and single-cell experiments. In a study published in the journal Nature Communications , the coauthors showed that AtacWorks can run analyses on a whole genome in just half an hour compared with the multiple hours traditional methods take. Most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus. But an individual cell pulls out only the subsection of genetic components that it needs to function, with cell types like liver, blood, or skin cells using different genes. The regions of DNA that determine a cell’s function are easily accessible, more or less, while the rest are shielded around proteins. AtacWorks, which is available from Nvidia’s NGC hub of GPU-optimized software, works with ATAC-seq, a method for finding open areas in the genome in cells pioneered by Harvard professor Jason Buenrostro, one of the paper’s coauthors. ATAC-seq measures the intensity of a signal at every spot on the genome. Peaks in the signal correspond to regions with DNA such that the fewer cells available, the noisier the data appears, making it difficult to identify which areas of the DNA are accessible. ATAC-seq typically requires tens of thousands of cells to get a clean signal. Applying AtacWorks produces the same quality of results with just tens of cells, according to the coauthors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AtacWorks was trained on labeled pairs of matching ATAC-seq datasets, one high-quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal. Using AtacWorks, the researchers found that they could spot accessible chromatin, a complex of DNA and protein whose primary function is packaging long molecules into more compact structures, in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads. AtacWorks could allow scientists to conduct research with a smaller number of cells, reducing the cost of sample collection and sequencing. Analysis, too, could become faster and cheaper. 
Running on Nvidia Tensor Core GPUs, AtacWorks took under 30 minutes for inference on a genome, a process that would take 15 hours on a system with 32 CPU cores. In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods. With a sample set of only 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells. “With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” Nvidia researcher Avantika Lal, first author on the paper, said. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
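The article explains the training setup: pairs of matching ATAC-seq tracks, one noisy/downsampled and one high quality, with the model learning to recover the clean signal. The toy sketch below reproduces only that denoising objective on synthetic data; AtacWorks itself (distributed via Nvidia NGC) uses a much deeper architecture and also outputs peak calls, and the layer sizes here are arbitrary assumptions.

```python
import torch
from torch import nn

# Toy denoising objective on synthetic coverage tracks: fit a small 1D conv net
# to map a noisy, downsampled signal back to its clean counterpart.

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=25, padding=12), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=25, padding=12), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),
        )

    def forward(self, x):              # x: (batch, 1, genome_window_length)
        return self.net(x)

# Synthetic stand-in data: clean tracks with one "peak", plus Poisson-like noise.
clean = torch.zeros(32, 1, 1000)
clean[:, :, 300:320] = 4.0            # the peak region in every sample
noisy = torch.poisson(clean * 0.2)    # heavily downsampled observation

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):               # fit noisy -> clean
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
print(f"final reconstruction loss: {loss.item():.4f}")
```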
id: 15,973 | year: 2021
"WorkFusion raises $220M to automate repetitive enterprise backend processes | VentureBeat"
"https://venturebeat.com/2021/03/09/workfusion-raises-220m-to-automate-repetitive-enterprise-backend-processes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages WorkFusion raises $220M to automate repetitive enterprise backend processes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Intelligent process automation provider WorkFusion today announced it has raised $220 million, bringing the company’s total raised to date to over $340 million. According to a spokesperson, WorkFusion plans to put the funds toward customer acquisition as it expands the size of its workforce. Intelligent process automation, or technology that automates monotonous, repetitive chores traditionally performed by human workers, is big business. Forrester estimates that automation and other AI subfields created jobs for 40% of companies in 2019 and that a tenth of startups now employ more digital workers than human ones. According to a McKinsey survey , at least a third of activities could eventually be automated in about 60% of occupations. And the pandemic is bolstering the uptake of automation, with nearly half of executives telling McKinsey in a report that their automation adoption has accelerated moderately. WorkFusion, which has roughly 300 employees, was founded 11 years ago by Max Yankelevich and Andrew Volkov. Their mission was to use AI to perform knowledge work currently performed manually. The company evolved into developing the concept of intelligent automation, which combines AI, robotic process automation, optical character recognition, and operational analytics to augment key business processes, including news screening, name screening , transaction screening, custody and treasury operations, and email processing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! WorkFusion’s industry-specific automation solutions are powered by cloud-federated bots, which automate complex, document-heavy operations, particularly in regulated sectors like banking, insurance, and health care. Machine learning models for certain use cases are pretrained with existing datasets, and customers use their own datasets to refine the models. Meanwhile, the bots learn in real time from data and end users. WorkFusion bots also aggregate and share those learnings across the bot ecosystem to create intelligence network effects from which all customers benefit. WorkFusion has a number of competitors in a global intelligent process automation market that’s estimated to be worth $15.8 billion by 2025, according to KBV Research. 
Automation Anywhere last secured a $290 million investment from SoftBank at a $6.8 billion valuation. Within a span of months, Blue Prism raised over $120 million, Kryon $40 million, and FortressIQ $30 million. Tech giants have also made forays into the field, including Microsoft, which acquired RPA startup Softomotive, and IBM, which purchased WDG Automation. But by focusing on the needs of enterprise users in banking and insurance industries, WorkFusion says it has managed to set itself apart. Five of the world’s top 10 banks outside of China use WorkFusion, including Deutsche Bank, Standard Bank, and Axis Bank. For one customer, Carter Bank & Trust, WorkFusion automated over 210 processes, including account opening reviews, identity verification, adverse media monitoring, and fraud due diligence. Within the first few months, WorkFusion says it achieved a net positive impact, reducing operating expenses by more than $2.5 million. “The first wave of robotic process automation brought the power of technology to users’ desktops in all industries and companies of all sizes. Today, we see a second wave emerging,” CEO Alex Lyashok told VentureBeat via email. “Cloud-based, AI-enabled robots bringing Intelligent Automation to enterprises. WorkFusion was first to identify Intelligent Automation as a category in 2015, and we continue to bring pioneering technology to banking and insurance customers today.” Georgian led WorkFusion’s series F announced today. It’s the New York City-based company’s first fundraising round since April 2018, when it closed a $50 million series E led by Declaration Partners and Hawk Equity. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 15,974 | year: 2021
"AI patent intelligence platform PatSnap secures $300M | VentureBeat"
"https://venturebeat.com/2021/03/16/ai-patent-intelligence-platform-patsnap-secures-300m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI patent intelligence platform PatSnap secures $300M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. PatSnap , which offers a patent and R&D platform and services, today announced that it raised $300 million in series E funding from SoftBank Vision Fund 2 and Tencent Investment. The Toronto-, Singapore-, and London-based company plans to use the funds to further develop its intelligence platform, support software product development, and expand the size of its global workforce. PatSnap also says the tranche will enable it to grow its sales division and invest in its employees’ professional growth and professional development. Companies are constantly under pressure to increase the pace of their innovation. And while more money is spent on R&D every year — $2.4 trillion in 2021, according to R&D World — the returns are dwindling. An article published in Harvard Business Review noted a 65% drop in R&D productivity. That’s despite the fact that the federal governments of the U.S. and Canada provide more than $15 billion in innovation incentives to private companies and nearly a third of U.S. patents rely directly on U.S. government-funded research. PatSnap, which was founded in 2007 by Jeffrey Tiong, began as a directory for intellectual property (IP), helping enterprises pull in data for R&D and ideation purposes. Since then, it’s developed technologies in the AI subfields of machine learning, natural language processing, and computer vision that analyze and identify the relationships between millions of unstructured data points across disparate IP sources. PatSnap claims this can help deliver insights that guide R&D decisions and help to shorten the time it takes to bring new creations to market. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “PatSnap’s machine learning engine analyzes millions of data points coming from patents data around the world, social, litigation, mergers and acquisitions, and other datasets,” cofounder Ray Chohan told VentureBeat via email. “Our AI has deep tech capabilities within life sciences. We enable biotech and pharma companies to connect complex unstructured data to enable insights not possible before. For example, we enable R&D and patent professionals at life sciences companies to discover brand new applications for a molecule or to repurpose existing drugs for net new disease indications. 
This is possible via our computer vision, natural language processing, deep learning, and machine learning technology stack.” PatSnap says that its natural language processing algorithms analyze more than 500 million documents and 200 data sources, providing a link between different documents, entities, and taxonomy systems. Based on patent and document information, the company also uses AI and machine learning to create a series of models that generate a unique index analysis for every patent and paper. There are 80 different indicators to support this index, which along with valuation ranking and indexing is designed to spotlight high-value patents. Indeed, most enterprises have to wrangle countless data buckets — some of which inevitably become underused or forgotten. A Forrester survey found that between 60% and 73% of all data within corporations is never analyzed for insights or larger trends. The opportunity cost of this unused data is substantial, with a Veritas report pegging it at $3.3 trillion by 2020. PatSnap says it connects 140 million patents, licensing, litigation, and company information with nonpatent literature to bring together over 250 million data points across 116 jurisdictions into a single platform. Using this data, PatSnap’s IP teams prepare research including patent landscape reports that feature diagrams and analysis informing strategic planning for product development and commercialization. PatSnap also compiles competitive intelligence reports that provide clients with business intelligence about competitors’ strengths and strategies. PatSnap claims to have more than 10,000 customers and over 100,000 users around the world including global brands, universities, and research institutions. Over the past year, the company says it’s enabled those clients — among them Disney, PayPal, Tesla, and Spotify — to accelerate time to insight when dealing with unstructured data by an estimated 12 times, leading to a roughly 3 times increase in successful product launches. “We initially experienced minor slowdowns from the market but since Q2 2020, we have seen an uplift in growth across all key markets. This is a great validation that innovation intelligence is a movement necessary for companies to grow and survive regardless of the economic environment. With the investment, we plan to continue leading the way and help our customers innovate,” Chohan continued. “Some features that we plan to add include innovation knowledge graphs, deep post search AI driven analytics, and vertical- and job-specific deep collaboration features. In addition, the funding will enable us to explore potential deep tech acquisitions.” Existing investors CITIC Industrial Fund, Sequoia China, Shun Wei Capital, and Vertex Ventures also participated in PatSnap’s latest funding round. It brings the company’s total raised to date to over $450 million as its workforce exceeds 800 people across locations in China, the U.S., and other offices. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 15,975 | year: 2021
"Facebook claims AI can predict drug combinations to treat complex diseases | VentureBeat"
"https://venturebeat.com/2021/04/16/facebook-claims-ai-can-predict-drug-combinations-to-treat-complex-diseases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook claims AI can predict drug combinations to treat complex diseases Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Facebook today detailed what it claims is the first single AI model capable of predicting the effects of drug combinations, dosages, timing, and other types of interventions like gene deletion. Developed in collaboration with Helmholtz Zentrum München, Facebook says the model could accelerate the process of identifying combinations of medications and other treatments that might lead to better outcomes for diseases. Discovering ways to repurpose existing drugs has proven to be a powerful tool to treat diseases including cancer. In recent years, doctors have seen success with “drug cocktails” to combat malignant conditions and continue to explore personalized treatments for patients. But finding an effective combination of existing drugs at the right dose is extremely challenging, in part because there are nearly infinite possibilities. Researchers would have to try from 5,000 to 19 billion solutions to find the optimal regimen given a pool of 100 drugs. Facebook’s open source model — Compositional Perturbation Autoencoder (CPA) — ostensibly addresses this with a self-supervision technique that observes cells treated with drug combinations and predicts the effect of new combinations. Unlike supervised models that learn from labeled datasets, Facebook’s generates labels from data by exposing the relationships between the data’s parts, a step believed to be critical to achieving human-level intelligence. CPA’s predictions take hours as opposed to the years that might elapse with conventional methods, allowing researchers to select the most promising results for validation and follow-up, according to Facebook. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In biology, RNA sequencing is used as a way to measure the gene expressions of cells at the molecular level and study the effects of perturbations including drug combinations. Academia and industry have released RNA sequencing datasets containing up to millions of cells and 20,000 readouts per cell to facilitate biomedical research. Facebook leveraged these datasets to train CPA using an approach called auto-encoding, in which data is compressed and decompressed until summarized into patterns useful for prediction. 
CPA first separates and learns the key attributes about a cell, such as the effects of a certain drug, combination, dosage, time, gene deletion, or cell type. It then independently recombines the attributes to project their effects on the cell’s gene expressions. For example, if one of the datasets had information on how drugs affect different types of cells A, B, C, and A+B, CPA would learn the impact of each drug in a cell-type specific fashion and then recombine each in order to extrapolate interactions between A+C, B+C, and A+B. To test CPA, Facebook says it applied the model to five publicly available RNA sequence datasets with measurements and outcomes of drugs, doses, and other confounders on cancer cells. Benchmarked in terms of the R2 metric, which represents the accuracy of the gene expression predictions, Facebook claims that CPA “stayed consistent” between training and testing — an indication of robustness. Moreover, CPA’s predictions of the effects of drug combinations and doses on cancer cells matched those found in the testing dataset “reliably.” Facebook believes that CPA can “dramatically” accelerate the process of identifying optimal combinations of treatments, as well as pave the way for new opportunities in the development of medications. Toward this end, the company is making available APIs and a software package designed to let researchers plug in datasets and run through predictions. “Our hope is that pharmaceutical and academic researchers as well as biologists will utilize [CPA] to accelerate the process of identifying optimal combinations of drugs for various diseases,” Facebook program manager Anna Klimovskaia and research scientist David Lopez-Paz wrote in a blog post. “In the future, [CPA] could not only speed up drug repurposing research, but also — one day — make treatments much more personalized and tailored to individual cell responses, one of the most active challenges in the future of medicine to date.” While Facebook claims that CPA is novel in its architecture, it isn’t the first algorithm engineered to predict drug interactions. In July 2018, Stanford researchers detailed an AI system that can anticipate the effects of drug combinations by modeling the more than 19,000 proteins in the body that interact with each other and with medications. Researchers at the MIT-IBM Watson AI Lab, Harvard School of Public Health, Georgia Institute of Technology, and IQVIA more recently created an AI tool called CASTER that estimates potentially harmful and unsafe drug-to-drug interactions. A separate Harvard group has proposed applying AI to identify candidates for drug repurposing in Alzheimer’s disease. And researchers at Aalto University, University of Helsinki, and the University of Turku in Finland created a machine learning model that projects how combinations of drugs might kill various cancer cells. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
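The article describes CPA's core idea: encode a cell's expression profile into a basal state, learn separate embeddings for each attribute (drug, dose, cell type), and recombine them to predict expression under combinations never observed together. The sketch below is a deliberately simplified illustration of that composition; the layer sizes and additive dose scaling are assumptions, and Facebook's released model adds adversarial disentanglement and nonlinear dose-response components omitted here.

```python
import torch
from torch import nn

# Simplified compositional autoencoder: basal latent + dose-scaled drug embeddings
# + cell-type embedding, decoded back to gene expression.

N_GENES, LATENT, N_DRUGS, N_CELL_TYPES = 2000, 64, 100, 10

class TinyCPA(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_GENES, 256), nn.ReLU(), nn.Linear(256, LATENT))
        self.drug_emb = nn.Embedding(N_DRUGS, LATENT)
        self.cell_emb = nn.Embedding(N_CELL_TYPES, LATENT)
        self.decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, N_GENES))

    def forward(self, expression, drug_ids, doses, cell_type):
        basal = self.encoder(expression)                                           # perturbation-free state
        drug_effect = (self.drug_emb(drug_ids) * doses.unsqueeze(-1)).sum(dim=1)   # additive combination
        latent = basal + drug_effect + self.cell_emb(cell_type)
        return self.decoder(latent)

model = TinyCPA()
expression = torch.randn(8, N_GENES)                 # 8 cells
drug_ids = torch.randint(0, N_DRUGS, (8, 2))         # two drugs per cell
doses = torch.rand(8, 2)                             # their dosages
cell_type = torch.randint(0, N_CELL_TYPES, (8,))
predicted = model(expression, drug_ids, doses, cell_type)
print(predicted.shape)                               # torch.Size([8, 2000])
```

At inference time, swapping in a drug-and-cell-type pairing that never co-occurred in training is what the article means by recombining attributes to extrapolate to unseen combinations.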
id: 15,976 | year: 2021
"EU proposes strict AI rules, with fines up to 6% for violations | VentureBeat"
"https://venturebeat.com/2021/04/21/eu-proposes-strict-ai-rules-with-fines-up-to-6-for-violations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EU proposes strict AI rules, with fines up to 6% for violations Share on Facebook Share on X Share on LinkedIn 365 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — The European Commission on Wednesday announced tough draft rules on the use of artificial intelligence, including a ban on most surveillance , as part of an attempt to set global standards for a technology seen as crucial to future economic growth. The rules, which envisage hefty fines for violations and set strict safeguards for high-risk applications, could help the EU take the lead in regulating AI, which critics say has harmful social effects and can be exploited by repressive governments. The move comes as China moves ahead in the AI race, while the COVID-19 pandemic has underlined the importance of algorithms and internet-connected gadgets in daily life. “On artificial intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” European tech chief Margrethe Vestager said in a statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Commission said AI applications that allow governments to do social scoring or exploit children will be banned. High risk AI applications used in recruitment, critical infrastructure, credit scoring, migration and law enforcement will be subject to strict safeguards. Companies breaching the rules face fines up to 6% of their global turnover or 30 million euros ($36 million), whichever is the higher figure. European industrial chief Thierry Breton said the rules would help the 27-nation European Union reap the benefits of the technology across the board. “This offers immense potential in areas as diverse as health , transport, energy, agriculture, tourism or cyber security ,” he said. However, civil and digital rights activists want a blanket ban on biometric mass surveillance tools such as facial recognition systems, due to concerns about risks to privacy and fundamental rights and the possible abuse of AI by authoritarian regimes. The Commission will have to thrash out the details with EU national governments and the European Parliament before the rules can come into force, in a process that can take more than year. ($1 = 0.8333 euros) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
"
15,977
2,021
"AI-powered cybersecurity platform Vectra AI raises $130M | VentureBeat"
"https://venturebeat.com/2021/04/29/ai-powered-cybersecurity-platform-vectra-ai-raises-130m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered cybersecurity platform Vectra AI raises $130M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. San Jose, California-based cybersecurity startup Vectra AI today announced it has raised $130 million in a funding round that values the company at $1.2 billion. Vectra says the investment will fuel the company’s growth through expansion into new markets and countries. According to Markets and Markets , the security orchestration, automation, and response (SOAR) segment is expected to reach $1.68 billion in value this year, driven by a rise in security breaches and incidents and the rapid deployment and development of cloud-based solutions. Data breaches exposed 4.1 billion records in the first half of 2019, Risk Based Security found. This may be why 68% of business leaders in a recent Accenture survey said they feel their cybersecurity risks are increasing. Vectra was founded in 2012 by Hitesh Sheth, and the company provides AI-powered network detection and response services. Vectra’s platform sends security-enriched metadata to data lakes and security information and event management (SIEM) systems while storing and investigating threats in this enriched data. The aforementioned metadata is wide-ranging but includes patterns, precursors, account scores, saved searches, host scores, and campaigns. It’s scraped from sensors and processing engines deployed across cloud environments, where the sensors record metrics from traffic and ingest logs and other external signals. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: The Vectra AI dashboard. AI is a core component of Vectra’s product suite. Algorithms suss out and alert IT teams to anomalous behavior from compromised devices in network traffic metadata and other sources, automating cyberattack mitigation. Specifically, Vectra uses supervised machine learning techniques to train its threat detection models and unsupervised techniques to identify attacks that haven’t been seen previously. Vectra’s data scientists build and tune self-learning AI systems that complement the metadata with key security information. “For us, it all starts with collecting the right data because attack behaviors always vary. We’re continuously creating machine learning models for any type of new or current threat scenario,” Sheth told VentureBeat via email. 
“AI-based security defenses are the right tool for modern network defenders, not because current threats will become some dominant force, but because they are transformative in their own right.” Global ambitions Signaling its global ambitions, in August 2019 Vectra opened a regional headquarters in Sydney, Australia. Last July, the company launched a range of new advisory and operational cybersecurity services, weeks after revamping its international channel partner program. A growing number of cloud users are suffering malicious account takeovers, according to a survey conducted by Vectra. Nearly 80% of respondents claimed to have “good” or “very good” visibility into attacks that bypass perimeter defenses like firewalls. Yet there was a contrast between the opinions of management-level respondents and practitioners, with managers exhibiting greater confidence in their organizations’ defensive abilities. “The pandemic has caused a further shift toward the cloud, even for organizations that were previously cloud averse. Companies needed to prioritize the health and safety of their employees, which in many cases meant a shift to remote work,” Sheth said. “However, the only way to keep teams connected and productive was to further adopt cloud applications that enable collaboration from anywhere. For security teams, this means finding new solutions to protect assets and users because traditional network security doesn’t translate to securing a dispersed workforce that has adopted cloud technologies … Vectra can automate threat detection and investigation, provide visibility, and audit remote endpoint security posture to make sure users and company assets are secure.” Vectra, which has 375 employees and claims a 100% compound annual growth rate in 2020, counts Texas A&M University and Tribune Media Group among its customer base. In February, the company closed the strongest quarter in its history. Attack and defend “To me, this funding round confirms today’s cybersecurity capital markets are rewarding the most effective and innovative technology — not just the best pitch,” Sheth said. “Contrary to flashy Hollywood headlines about some Skynet-like AI hacker coming to get you, actual human attackers are far more clever than any contemporary offensive AI systems. This is in part because AI systems conform to a series of ‘rules,’ and as every human hacker knows, rules are made to be broken.” Sheth added: “The most likely scenario is that some AI techniques merely make it into the toolkit of human adversaries, such as incorporating natural language AI into large-scale phishing attacks. We shouldn’t downplay the impact of a good phishing campaign, but if this is the sum total of what your C-suite is preparing for, you have your work cut out [for you]. Decisions about which AI cybersecurity solution to commit to should be driven by outcome-based evaluations. This means selecting functional, rather than purely ornamental, solutions.” Blackstone Growth led Vectra’s latest round of funding. Existing investors also participated, bringing the company’s total raised to over $350 million. Vectra previously nabbed $100 million in a growth equity round led by TCV.
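To make the detection approach described earlier in the piece concrete, here is a minimal, hypothetical sketch of unsupervised anomaly scoring over network-connection metadata using scikit-learn. The feature names, values, and thresholds are illustrative assumptions and are not drawn from Vectra's actual pipeline.

# A minimal, hypothetical sketch of unsupervised anomaly detection over
# network-connection metadata. Feature names, values, and thresholds are
# illustrative only; this is not Vectra's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row is one host-hour: [bytes_sent, bytes_received, distinct_ports, failed_logins]
normal_traffic = rng.normal(loc=[5e4, 8e4, 3, 0.2], scale=[1e4, 2e4, 1, 0.2], size=(500, 4))
port_scan_like = np.array([[9e5, 2e3, 120, 40]])  # an obviously unusual host-hour
X = np.vstack([normal_traffic, port_scan_like])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = detector.decision_function(X)            # lower scores = more anomalous
most_anomalous = np.argsort(scores)[:3]           # surface top candidates for review
print("host-hours flagged for analyst review:", most_anomalous)

In a real deployment, unsupervised scoring of this kind would sit alongside supervised detectors trained on known attack behaviors, mirroring the split the article describes.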
"
15,978
2,021
"Huawei trained the Chinese-language equivalent of GPT-3 | VentureBeat"
"https://venturebeat.com/2021/04/29/huawei-trained-the-chinese-language-equivalent-of-gpt-3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Huawei trained the Chinese-language equivalent of GPT-3 Share on Facebook Share on X Share on LinkedIn The logo of Huawei Technologies is pictured in front of the German headquarters of the Chinese telecommunications giant in Duesseldorf, Germany, February 18, 2019. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For the better part of a year, OpenAI’s GPT-3 has remained among the largest AI language models ever created, if not the largest of its kind. Via an API, people have used it to automatically write emails and articles , summarize text, compose poetry and recipes, create website layouts, and generate code for deep learning in Python. But GPT-3 has key limitations, chief among them that it’s only available in English. The 45-terabyte dataset the model was trained on drew exclusively from English-language sources. This week, a research team at Chinese company Huawei quietly detailed what might be the Chinese-language equivalent of GPT-3. Called PanGu-Alpha (stylized PanGu-α) , the 750-gigabyte model contains up to 200 billion parameters — 25 million more than GPT-3 — and was trained on 1.1 terabytes of Chinese-language ebooks, encyclopedias, news, social media, and web pages. The team claims that the model achieves “superior” performance in Chinese-language tasks spanning text summarization, question answering, and dialogue generation. Huawei says it’s seeking a way to let nonprofit research institutes and companies gain access to pretrained PanGu-α models, either by releasing the code, model, and dataset or via APIs. Familiar architecture In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Large language models like OpenAI’s GPT-3 learn to write humanlike text by internalizing billions of examples from the public web. Drawing on sources like ebooks, Wikipedia, and social media platforms like Reddit, they make inferences to complete sentences and even whole paragraphs. Above: PanGu-α generating dialog for a video game. Akin to GPT-3, PanGu-α is what’s called a generative pretrained transformer (GPT), a language model that is first pretrained on unlabeled text and then fine-tuned for tasks. 
Using Huawei’s MindSpore framework for development and testing, the researchers trained the model on a cluster of 2,048 Huawei Ascend 910 AI processors, each delivering 256 teraflops of computing power. To build the training dataset for PanGu-α, the Huawei team collected nearly 80 terabytes of raw data from public datasets, including the popular Common Crawl dataset, as well as the open web. They then filtered the data, removing documents containing fewer than 60% Chinese characters, less than 150 characters, or only titles, advertisements, or navigation bars. Chinese text was converted into simplified Chinese, and 724 potentially offensive words, spam, and “low-quality” samples were filtered out. One crucial difference between GPT-3 and PanGu-α is the number of tokens on which the models trained. Tokens, a way of separating pieces of text into smaller units in natural language, can be either words, characters, or parts of words. While GPT-3 trained on 499 billion tokens, PanGu-α trained on only 40 billion, suggesting it’s comparatively undertrained. Above: PanGu-α writing fiction. In experiments, the researchers say that PanGu-α was particularly adept at writing poetry, fiction, and dialog as well as summarizing text. Absent fine-tuning on examples, PanGu-α could generate poems in the Chinese forms of gushi and duilian. And given a brief conversation as prompt, the model could brainstorm rounds of “plausible” follow-up dialog. This isn’t to suggest that PanGu-α solves all of the problems plaguing language models of its size. A focus group tasked with evaluating the model’s outputs found 10% of them to be “unacceptable” in terms of quality. And the researchers observed that some of PanGu-α’s creations contained irrelevant, repetitive, or illogical sentences. Above: PanGu-α summarizing text from news articles. The PanGu-α team also didn’t address some of the longstanding challenges in natural language generation, including the tendency of models to contradict themselves. Like GPT-3, PanGu-α can’t remember earlier conversations, and it lacks the ability to learn concepts through further conversation and to ground entities and actions to experiences in the real world. “The main point of excitement is the extension of these large models to Chinese,” Maria Antoniak, a natural language processing researcher and data scientist at Cornell University, told VentureBeat via email. “In other ways, it’s similar to GPT-3 in both its benefits and risks. Like GPT-3, it’s a huge model and can generate plausible outputs in a variety of scenarios, and so it’s exciting that we can extend this to non-English scenarios … By constructing this huge dataset, [Huawei is] able to train a model in Chinese at a similar scale to English models like GPT-3. So in sum, I’d point to the dataset and the Chinese domain as the most interesting factors, rather than the model architecture, though training a big model like this is always an engineering feat.” Skepticism Indeed, many experts believe that while PanGu-α and similarly large models are impressive with respect to their performance, they don’t move the ball forward on the research side of the equation. They’re prestige projects that demonstrate the scalability of existing techniques, rather, or that serve as a showcase for a company’s products. “I think the best analogy is with some oil-rich country being able to build a very tall skyscraper,” Guy Van den Broeck, an assistant professor of computer science at UCLA, said in a previous interview with VentureBeat. 
“Sure, a lot of money and engineering effort goes into building these things. And you do get the ‘state of the art’ in building tall buildings. But there is no scientific advancement per se … I’m sure academics and other companies will be happy to use these large language models in downstream tasks, but I don’t think they fundamentally change progress in AI.” Above: PanGu-α writing articles. Even OpenAI’s GPT-3 paper hinted at the limitations of merely throwing more compute at problems in natural language. While GPT-3 completes tasks from generating sentences to translating between languages with ease, it fails to perform much better than chance on a test — adversarial natural language inference — that tasks it with discovering relationships between sentences. The PanGu-α team makes no claim that the model overcomes other blockers in natural language, like answering math problems correctly or responding to questions without paraphrasing training data. More problematically, their experiments didn’t probe PanGu-α for the types of bias and toxicity found to exist in models like GPT-3. OpenAI itself notes that GPT-3 places words like “naughty” or “sucked” near female pronouns and “Islam” near terms like “terrorism.” A separate paper by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid details the inequitable tendencies of text generated by GPT-3, like associating the word “Jews” with “money.” Carbon impact Among others, leading AI researcher Timnit Gebru has questioned the wisdom of building large language models, examining who benefits from them and who’s disadvantaged. A paper coauthored by Gebru earlier this year spotlights the impact of large language models’ carbon footprint on minority communities and such models’ tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. In particular, the effects of AI and machine learning model training on the environment have been brought into relief. In June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide , equivalent to nearly 5 times the lifetime emissions of the average U.S. car. Above: PanGu-α creating poetry. While the environmental impact of training PanGu-α is unclear, it’s likely that the model’s footprint is substantial — at least compared with language models a fraction of its size. As the coauthors of a recent MIT paper wrote, evidence suggests that deep learning is approaching computational limits. “We do not anticipate that the computational requirements implied by the targets … The hardware, environmental, and monetary costs would be prohibitive,” the researchers said. “Hitting this in an economical way will require more efficient hardware, more efficient algorithms, or other improvements such that the net impact is this large a gain.” Antoniak says that it’s an open question as to whether larger models are the right approach in natural language. While the best performance scores on tasks currently come from large datasets and models, whether the pattern of dumping enormous amounts of data into models will pay off is uncertain. “The current structure of the field is task-focused, where the community gathers together to try to solve specific problems on specific datasets,” she said. 
“These tasks are usually very structured and can have their own weaknesses, so while they help our field move forward in some ways, they can also constrain us. Large models perform well on these tasks, but whether these tasks can ultimately lead us to any true language understanding is up for debate.” Future directions The PanGu-α team’s choices aside, they might not have long to set standards that address the language model’s potential impact on society. A paper published by researchers from OpenAI and Stanford University found that large language model developers like Huawei, OpenAI, and others may only have a six- to nine-month advantage until others can reproduce their work. EleutherAI, a community of machine learning researchers and data scientists, expects to release an open source implementation of GPT-3 in August. The coauthors of the OpenAI and Stanford paper suggest ways to address the negative consequences of large language models, such as enacting laws that require companies to acknowledge when text is generated by AI — perhaps along the lines of California’s bot law. Other recommendations include: Training a separate model that acts as a filter for content generated by a language model Deploying a suite of bias tests to run models through before allowing people to use the model Avoiding some specific use cases The consequences of failing to take any of these steps could be catastrophic over the long term. In recent research , the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claims that GPT-3 could reliably generate “informational” and “influential” text that might radicalize people into violent far-right extremist ideologies and behaviors. And toxic language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English,” for example, to ensure that the models work better for them, which could discourage minority speakers from engaging with the models to begin with. Given Huawei’s ties with the Chinese government, there’s also a concern that models like PanGu-α could be used to discriminate against marginalized peoples including Uyghurs living in China. A Washington Post report revealed that Huawei tested facial recognition software that could send automated “Uighur alarms” to government authorities when its camera systems identified members of the minority group. We’ve reached out to Huawei for comment and will update this article once we hear back. “Humans are also full of biases and toxicity, so I don’t think learning like a human is a solution to these problems,” Antoniak said. “Scholars think that perhaps we should try to better model how humans learn language — [at least] in relation to language understanding, not toxicity. It would be possible to understand language and still be very toxic, after all.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,979
2,021
"Yelp built an AI system to identify spam and inappropriate photos | VentureBeat"
"https://venturebeat.com/2021/05/12/yelp-built-an-ai-system-to-identify-spam-and-inappropriate-photos"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Yelp built an AI system to identify spam and inappropriate photos Share on Facebook Share on X Share on LinkedIn Sign on facade at headquarters of travel and restaurant rating company Yelp in San Francisco, California, December 25, 2018. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Malicious actors are constantly finding ways to circumvent platforms’ policies and game their systems — and 2020 was no exception. According to online harassment tracker L1ght, in the first few weeks of the pandemic, there was a 40% increase in toxicity on popular gaming services including Discord. Anti-fraud experts saw a rise in various types of fraud last year across online platforms, including bank and insurance fraud. And from March 2020 to April 2020, IBM observed a more than 6,000% increase in COVID-19-related spam. Yelp wasn’t immune from the uptick in problematic digital content. With a rise in travel cancellations, the company noticed an increase of images being uploaded with text to promote fake customer support numbers and other promotional spam. To mitigate the issue and automate a solution that relies relied on manual content reporting from its community of users, Yelp says its engineers built a custom, in-house system using machine learning algorithms to analyze hundreds of thousands of photo uploads per day — detecting inappropriate and spammy photos at scale. Automating content moderation Yelp’s use of AI and machine learning runs the gamut from advertising to restaurant, salon, and hotel recommendations. The app’s Collections feature leverages a combination of machine learning, algorithmic sorting, and manual curation to put local hotspots at users’ fingertips. (Deep learning-powered image analysis automatically identifies the color, texture, and shape of objects in user-submitted photos, allowing Yelp to predict attributes like “good for kids” and “ambiance is classy.”) Yelp optimizes photos on businesses’ listings to serve up the most relevant image for browsing potential customers. And advertisers can opt to have an AI system recommend photos and review content to use in banner ads based on their “impactfulness” with users. There’s also Popular Dishes, Yelp’s feature that highlights the name, photos, and reviews of most-ordered restaurant menu items. 
More recently, the platform added tools to help reopening businesses indicate whether they're taking steps like enforcing distancing and sanitization, employing a combination of human moderation and machine learning to update sections with information businesses have posted elsewhere. Building the new content moderation system was more challenging than previous AI projects because Yelp engineers had a limited dataset to work with, the company told VentureBeat. Most machine learning algorithms are trained on input data annotated for a particular output until they can detect the underlying relationships between the inputs and output results. During the training phase, the system is fed with labeled datasets, which tell it which output is related to each specific input value. Yelp's annotated corpus of spam was limited prior to the pandemic and had to be augmented over time. "Ultimately, our engineers developed a multi-stage, multimodel approach for promotional spam and inappropriate content," a spokesperson said. In this context, "inappropriate" refers to spam that runs afoul of Yelp's Content Guidelines, including suggestive or explicit nudity (e.g., revealing clothes, sexual activity), violence (weapons, offensive gestures, hate symbols), and substances like drugs, tobacco, and alcohol. Yelp also had to ensure that the system understood the context of uploaded content. Unlike most AI systems, humans understand the meaning of text, videos, audio, and images together in context. For example, given text and an image that seem innocuous when considered apart (e.g., "Look how many people love you" and a picture of a barren desert), people recognize that these elements can take on potentially hurtful connotations when they're paired or juxtaposed. Two-part framework Yelp's anti-spam solution is a two-part framework that first identifies photos most likely to contain spam. During the second stage, flagged content is run through machine learning models tuned for precision, which send only a small number of photos to be reviewed by human moderators. A set of heuristics plays alongside the models to speed up the pipeline and react quickly to new potential spam and inappropriate content. "We used a custom dataset of tens of thousands of Yelp photos and applied transfer learning to tune pre-trained large-scale models," Vivek Raman, Yelp's VP of engineering for trust and safety, told VentureBeat via email. "The models were trained on GPU-accelerated instances, which made the transfer-learning training process very efficient — compared to training a deep neural network from scratch. The performance of the models in production is monitored to catch any drift and allow us to react quickly to any evolving threats." In the case of promotional spam, the system searches for simple graphics that are text- or logo-heavy. Inappropriate content is a bit more complex, so the framework leverages a residual neural network to identify photos that violate Yelp's policies as well as a convolutional neural network model to spot photos containing people. Residual neural networks build on constructs known from pyramidal cells in the cerebral cortex, which transform inputs into outputs of action potentials. Convolutional neural networks, which are similarly inspired by biological processes, are adept at analyzing visual imagery.
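As a rough illustration of the transfer-learning step Raman describes, the sketch below reuses a pretrained residual network and retrains only a small classification head. The data, labels, and hyperparameters are placeholders rather than Yelp's actual setup.

# Hypothetical sketch of transfer learning for a spam / not-spam photo classifier.
# The batch below is random stand-in tensors; labels and settings are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)          # pretrained residual network backbone
for param in model.parameters():
    param.requires_grad = False                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: class 0 = ok, class 1 = spam

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)              # stand-in batch of photo tensors
labels = torch.randint(0, 2, (8,))                # stand-in labels
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("batch loss:", float(loss))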
When the system detects promotional spam, it extracts the text from the photos using another deep learning neural network and performs classification via a regular expression and a natural language processing service. For inappropriate content, a deep learning model is used to help the framework calibrate for precision based on confidence scores and a set of context heuristics, like business category, that take into account where the content is being displayed. Combating adversaries Yelp’s heuristics help combat repeat spammers. Photos flagged as spam are tracked by a fuzzy matching service so that if users try to reupload spam, it’s automatically discarded by the system. If there’s no similar spam match, it could end up in the content moderation team queue. While awaiting moderation, images are hidden from users so that they’re not exposed to potentially unsafe content. And the content moderation team has the ability to act on user profiles instead of single pieces of content. For example, if a user is found to be generating spam, its user profile is closed and all associated content is removed. AI is by no means a silver bullet when it comes to content moderation. Researchers have documented instances in which automated content moderation tools on platforms such as YouTube mistakenly categorized videos posted by nongovernmental organizations documenting human rights abuses by ISIS in Syria as extremist content and removed them. A New York University study estimates that Facebook’s AI systems alone make about 300,000 content moderation mistakes per day, and that problematic posts continue to slip through Facebook’s filters. Raman acknowledges that AI moderation systems are susceptible to bias, but says that Yelp’s engineers have taken steps to mitigate it. “[Bias] can come from the conscious or unconscious biases of their designers, or from the datasets themselves … When designing this system, we used sophisticated sampling techniques specifically to produce balanced training sets with the explicit goal of reducing bias in the system. We also train the model for precision to minimize mistakes or the likelihood of removing false positives.” Raman also asserts that Yelp’s new system augments, not replaces, its team of human moderators. The goal is to prioritize the items that moderation teams — who have the power to restore falsely flagged content — review rather than take down spam proactively. “While it’s important to leverage technology to create more efficient processes and manage content at scale, it’s even more important to create checks and balances through human moderation,” Raman said. “Business pages that receive less traffic are less likely to have a consumer or business owner catch and report the content to our moderators — so, our photo moderation workflow helps weed out suspicious content in a more scalable way.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,980
2,021
"Google launches Vertex AI, a fully managed cloud AI service | VentureBeat"
"https://venturebeat.com/2021/05/18/google-launches-vertex-ai-a-fully-managed-cloud-ai-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches Vertex AI, a fully managed cloud AI service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During a virtual keynote at Google I/O 2021, Google’s developer conference, Google announced the launch in general availability of Vertex AI, a managed AI platform. It’s designed to help companies to accelerate the deployment and maintenance of AI models, Google says, by requiring nearly 80% fewer lines of code to train a model versus competitive platforms. Data scientists often grapple with the challenge of piecing together AI solutions, creating a lag time in model development and experimentation. In a recent Alation report , a majority of respondents (87%) pegged data quality issues as the reason their organizations failed to implement AI. That’s perhaps why firms like Markets and Markets anticipate that the data prep industry, which includes companies that offer data cataloging and curation tools, will be worth upwards of $3.9 billion by the end of 2021. To tackle the challenges, Vertex brings together Google Cloud services for AI under a unified UI and API. Vertex lets customers build, train, and deploy machine learning models in a single environment, moving models from experimentation to production while discovering patterns and anomalies and making predictions. “Vertex was designed to help customers with four things,” Google Cloud AI product management director Craig Wiley told VentureBeat in an interview. “The first is, we want to help them increase the velocity of the machine learning models that they’re building and deploying. Number two is, we want to make sure that they have Google’s best-in-class capabilities available to them. Number three is, we want these workflows to be highly scalable. … And then number four is, we want to make sure they have everything they need for appropriate model management and governance. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Ultimately, the goal here is to figure out how we can accelerate companies finding ROI with their machine learning.” Fully managed AI Vertex offers access to the MLOps toolkit used internally at Google for computer vision, language, conversation, and structured data workloads. 
MLOps, a compound of "machine learning" and "information technology operations," is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. Vertex's other headlining features include Vertex Vizier, which aims to increase the rate of experimentation; Vertex Feature Store, which lets practitioners serve, share, and reuse machine learning features; and Vertex Experiments, which helps with model selection. There's also Vertex Continuous Monitoring and Vertex Pipelines, which support self-service model maintenance and repeatability. Customers including L'Oréal-owned ModiFace and Essence are using Vertex for production models, Google says. According to Jeff Houghton, ModiFace's COO, Vertex allowed the company to create augmented reality technology "incredibly close to actually trying the product in real life." As for Essence, SVP Mark Bulling says that Vertex is enabling its data scientists to quickly create new models based on changes in environments while also maintaining existing models. "Once your model's in production, the world is constantly changing, and so the accuracy of these models is constantly degrading over time. You have to keep track of your model and understand how it's performing, and be ready to respond if it starts performing in a way that doesn't meet expectations," Wiley said. "We're really excited about Vertex because this set of capabilities with MLOps really feels like it's starting to deliver on some of the promises that we made back when we said, 'Click a button, and you'll have your model in production.' Because now it's, 'Click a button, you'll have your model in production, and using these tools, you'll be able to gain the full value of that model when it is in production.'" Gartner projects the emergence of managed services like Vertex will cause the cloud market to grow 18.4% in 2021, with cloud predicted to make up 14.2% of total global IT spending. "As enterprises increase investments in mobility, collaboration, and other remote working technologies and infrastructure, growth in public cloud [will] be sustained through 2024," Gartner wrote in a November 2020 study. MLOps alone is expected to become a nearly $4 billion segment by 2025. Google is among those reaping the windfall benefits. In its most recent earnings report, the company said that its cloud division brought in $4.047 billion in sales for the first quarter of 2021, up 46% from the year prior. Wiley says that Vertex will continue to evolve in response to customer feedback. "Vertex offers a series of tools dedicated specifically to data scientists, machine learning professionals, and developers who want to efficiently deploy their machine learning. I would expect further development and innovation for that kind of data scientist customer would exist under the Vertex brand," he said. "
15,981
2,021
"Amazon AWS launches Redshift ML to let developers train models with SQL | VentureBeat"
"https://venturebeat.com/2021/05/27/amazon-aws-launches-redshift-ml-to-let-developers-train-models-with-sql"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon AWS launches Redshift ML to let developers train models with SQL Share on Facebook Share on X Share on LinkedIn Amazon Web Services (AWS). Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today announced the general availability of Redshift ML, which lets customers use SQL to query and combine structured and semi-structured data across data warehouses, operational databases, and data lakes. The company says Redshift ML can be used to create, train, and deploy machine learning models directly from an Amazon Redshift instance. In the past, Amazon Web Services (AWS) customers who wanted to process data from Amazon Redshift to train an AI model would have to export the data to an Amazon Simple Storage Service (Amazon S3) bucket and configure and start training. This required many different skills and usually more than one person to complete, raising the barrier to entry for enterprises looking to forecast revenue, predict customer churn, detect anomalies, and more. With Redshift ML, customers can create a model using an SQL query to specify training data and the output value they want to predict. For example, to create a model that predicts the success rate of marketing activities, a customer might define their inputs by selecting database columns that include customer profiles and results from previous marketing campaigns. After running an SQL command, Redshift ML exports the data from Amazon Redshift to an S3 bucket and calls Amazon SageMaker Autopilot to prepare the data, select an algorithm, and apply the algorithm for model training. Customers can select the algorithm to use if they opt not to defer to SageMaker Autopilot. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Redshift ML handles all of the interactions between Amazon Redshift, S3, and SageMaker, including the steps involved in training. When the model has been trained, Redshift ML uses Amazon SageMaker Neo to optimize the model for deployment and makes it available as an SQL function. Customers can use the SQL function to apply the model to their data in queries, reports, and dashboards. Redshift ML is available today in the following AWS regions: U.S. East (Ohio) U.S. East (North Virginia) U.S. West (Oregon) U.S. 
Redshift ML is available today in the following AWS regions: U.S. East (Ohio), U.S. East (Northern Virginia), U.S. West (Oregon), U.S. West (Northern California), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (São Paulo). With Redshift ML, customers only pay for what they use. When training a new model, they pay for the Amazon SageMaker Autopilot and S3 resources used by Redshift ML. And when making predictions, there's no additional cost for models imported into their Amazon Redshift cluster. Redshift ML also allows customers to use existing Amazon SageMaker endpoints for inference. In that case, the usual SageMaker pricing for real-time inference applies. Amazon Redshift, which launched in preview in 2012 and in general availability a year later, is based on an older version of the open source relational database management system PostgreSQL 8.0.2. According to a Cloud Data Warehouse report published by Forrester in Q4 2018, Amazon Redshift has the largest number of cloud data warehouse deployments, with more than 6,500 to date. "
15,982
2,021
"Asapp releases dataset to help develop better customer service AI | VentureBeat"
"https://venturebeat.com/2021/05/31/asapp-releases-dataset-to-help-develop-better-customer-service-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Asapp releases dataset to help develop better customer service AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For call center applications, dialogue state tracking (DST) has historically served as a way to determine what a caller wants at a given point in a conversation. But in the real world, the work of a call center agent is much more complex than simply recognizing intents. Agents often have to look up knowledge base articles, review customer histories, and inspect account details all at the same time. Yet none of these aspects is accounted for in popular DST benchmarks. A more realistic environment might use a “dual constraint,” in which an agent needs to accommodate customer requests while considering company policies when taking actions. In an effort to address this, AI research-driven customer experience company Asapp is releasing Action-Based Conversations Dataset (ABCD), a dataset designed to help develop task-oriented dialogue systems for customer service applications. ABCD contains more than 10,000 human-to-human labeled dialogues with 55 intents requiring sequences of actions constrained by company policies to accomplish tasks. According to Asapp, ABCD differs from other datasets in that it asks call center agents to adhere to a set of policies. With the dataset, the company proposes two new tasks: Action State Tracking (AST), which keeps track of dialogue state when an action has taken place during that turn. Cascading Dialogue Success (CDS), a measure of an AI system’s ability to understand actions in context as a whole, which includes the context from other utterances. AST ostensibly improves upon DST metrics by detecting intents from customer utterances while taking into account agent guidelines. For example, if a customer is entitled to a discount and requests 30% off, but the guidelines stipulate 15%, it would make 30% an apparantly reasonable — but ultimately flawed — choice. To measure a system’s ability to understand these situations, AST adopts overall accuracy as an evaluation metric. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Meanwhile, CDS aims to gauge a system’s skill at understanding actions in context. Whereas AST assumes an action occurs in the current turn, CDS first predicts the type of turn (e.g., utterances, actions, and endings) and then its subsequent details. 
Meanwhile, CDS aims to gauge a system's skill at understanding actions in context. Whereas AST assumes an action occurs in the current turn, CDS first predicts the type of turn (e.g., utterance, action, or ending) and then its subsequent details. When the turn is an utterance, the detail is to respond with the best sentence chosen from a list of possible sentences. When the turn is an action, the detail is to choose the appropriate values. And when the turn is an ending, the system should know to end the conversation, according to Asapp. A CDS score is calculated on every turn, and the system is evaluated based on the percent of remaining steps correctly predicted, averaged across all available turns. Improving customer experiences The ubiquity of smartphones and messaging apps — and the constraints of the pandemic — have contributed to increased adoption of conversational technologies. Fifty-six percent of companies told Accenture in a survey that conversational bots and other experiences are driving disruption in their industry. And a Twilio study showed that 9 out of 10 consumers would like the option to use messaging to contact a business. Even before the pandemic, autonomous agents were on the way to becoming the rule rather than the exception, partly because consumers prefer it that way. According to research published last year by Vonage subsidiary NewVoiceMedia, 25% of people prefer to have their queries handled by a chatbot or other self-service alternative. And Salesforce says roughly 69% of consumers choose chatbots for quick communication with brands. Unlike other large open-domain dialogue datasets, which are typically built for more general chatbot entertainment purposes, ABCD focuses on increasing the count and diversity of actions and text within the domain of customer service. Call center contributors to the dataset were incentivized through cash bonuses, mimicking the service environments and realistic agent behavior, according to Asapp. Rather than relying on datasets that expand upon an array of knowledge base lookup actions, ABCD presents a corpus for building more in-depth and task-oriented dialogue systems, Asapp says. The company expects that the dataset and new tasks will create opportunities for researchers to explore better and more reliable models for task-oriented dialogue systems. "For customer service and call center applications, it is time for both the research community and industry to do better. Models relying on DST as a measure of success have little indication of performance in real-world scenarios, and discerning customer experience leaders should look to other indicators grounded in the conditions that actual call center agents face," the company wrote in a press release. "We can't wait to see what the community creates from this dataset. Our contribution to the field with this dataset is another major step to improving machine learning models in customer service." "
15,983
2,021
"Mythic launches analog AI processor that consumes 10 times less power | VentureBeat"
"https://venturebeat.com/2021/06/07/mythic-launches-analog-ai-processor-that-consumes-10-times-less-power"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mythic launches analog AI processor that consumes 10 times less power Share on Facebook Share on X Share on LinkedIn Mythic's analog processor is 10 times more power efficient. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Analog AI processor company Mythic launched its M1076 Analog Matrix Processor today to provide low-power AI processing. The company uses analog circuits rather than digital to create its processor, making it easier to integrate memory into the processor and operate its device with 10 times less power than a typical system-on-chip or graphics processing unit (GPU). The M1076 AMP can support up to 25 trillion operations per second (TOPS) of AI compute in a 3-watt power envelope. It is targeted at AI at the edge applications, but the company said it can scale from the edge to server applications, addressing multiple vertical markets including smart cities, industrial applications, enterprise applications, and consumer devices. To address a wider range of designs, the M1076 AMP comes in several form factors: a standalone processor, an ultra-compact PCIe M.2 card, and a PCIe card with up to 16 AMPs. In a 16-chip configuration, the M1076 AMP PCIe card delivers up to 400 TOPs of AI compute while consuming only 75 watts. Mythic is based in Redwood City, California, and Austin, Texas. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A clever design The company emphasizes energy efficiency and lower cost with its design that focuses on analog technology integrated with dense flash memory, CEO Mike Henry said in an earlier interview with VentureBeat. Henry said that in AI computing, chips have to normally handle massive amounts of simple arithmetic, with trillions of adds and multiplies per second. His company figured out how to do that in analog circuits, rather than digital, using a smaller electrical current. It stores the results in flash memory, which is a dense storage medium. He believes this will work much more efficiently than graphics processing units (GPUs) or other ways of handling the same calculations when it comes to chip size, cost, and power usage. In November 2020, Mythic unveiled the first Analog Matrix Processor for AI applications, which combines high performance with good power efficiency in a cost-efficient solution. 
While expensive and power-hungry hardware has limited the broad deployment of AI applications, Mythic says its integrated hardware and software platform is making it easier and more affordable for companies to deploy powerful AI applications for the smart home, AR/VR, drones, video surveillance, smart cities, manufacturing markets, and more. A single drone can contain as many as six cameras to help it learn to avoid collisions. This kind of computing is needed at the edge of the network, alongside sensors such as cameras, lidar, radar, and security. Those sensors produce so much data that it’s hard to fit such large data models into a chip. So Mythic handles the computing with a small chip and packs a lot of flash memory into the chip, eliminating excess parts in the system. The Mythic chip should fit into an area the size of a postage stamp, Henry said. By contrast, GPUs and other options need heat-reducing components such as fans. By comparison, Nvidia’s Jetson AI platform for robots and drones may consume 30 watts and cost $700 to $800, with a low-cost version at $100. But Mythic is shooting for lower cost, lower power consumption, and 10-20 times the performance, Henry said. The company is targeting around 25 trillion to 35 trillion instructions per second for its first chip. Covering more markets Tim Vehling, senior vice president of product and business development at Mythic, said in a statement the company’s groundbreaking inference solution takes AI processing and energy efficiency to new heights. He said it can scale from a compact single chip to a powerful 16-chip PCIe card solution, making it easier for developers to integrate powerful AI applications in a wider range of edge devices that are constrained by size, power, and thermal management challenges. Above: Mythic has offices in Redwood City, California, and Austin, Texas. The M1076 AMP is integrated into an ultra-compact 22mm x 30mm PCIe M.2 A+E Key card for space-constrained embedded edge AI applications. For edge AI systems with more demanding workloads — including many streams; multiple large, deep neural networks; and higher resolutions and frame rates — a PCIe card form-factor with 16 Mythic AMPS supporting up to 400 TOPS and 1.28 billion weights in a 75-watt power profile can be utilized. The M1076 AMP is ideal for video analytics workloads including object detection, classification, and depth estimation for industrial machine vision, autonomous drones, surveillance cameras, and network video recorders (NVRs) applications. The M1076 AMP can also support AR/VR applications with low-latency human body pose estimation , which is expected to drive future smart fitness, gaming, and collaborative robotics devices. Mythic recently raised $70 million in funding, bringing its total raised to date to $165.2 million. It needs that kind of money to go up against rivals such as Intel, Nvidia, and Advanced Micro Devices, among others. The company is scaling up the production of the company’s solutions, investing in its technology roadmap, and increasing support for the company’s growing customer base across Asia, Europe, and the U.S. ME1076 PCIe M.2 A+E Key and MM1076 PCIe M.2 M Key cards are available for evaluation beginning in July. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,984
2,021
"AI-powered transcription startup Verbit raises $157M | VentureBeat"
"https://venturebeat.com/2021/06/08/ai-powered-transcription-startup-verbit-raises-157m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered transcription startup Verbit raises $157M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Verbit today announced the close of a $157 million series D round the company says will bolster its product R&D and hiring efforts. CEO Tom Livne, who noted that the raise brings the company’s post-money valuation to more than $1 billion, said the capital will also support Verbit’s geographic expansion as it prepares for an initial public offering. The voice and speech recognition tech market is anticipated to be worth $31.82 billion by 2025, driven by new applications in the banking, health care, and automotive industries. In fact, it’s estimated that one in five people in the U.S. interacts with a smart speaker on a daily basis and that the share of Google searches conducted by voice in the country recently surpassed 30%. Livne, who cofounded Verbit.ai with Eric Shellef and Kobi Ben Tzvi in 2017, asserts that the New York-based startup will contribute substantially to the voice transcription segment’s rise. “The transcription market has been ripe for innovation. That’s the initial reason why I founded Verbit. The shift to remote work and accelerated digitization amid the pandemic has been a major catalyst … and has further driven Verbit’s already-rapid development,” Livne said in a press release. “Securing this new funding is yet another milestone that brings us closer to becoming a public company, which will further fuel our expansion through strategic acquisitions and investments.” AI-powered technology Verbit’s voice transcription and captioning services aren’t novel — well-established players like Nuance, Cisco, Otter, Voicera, Microsoft, Amazon, and Google have offered rival products for years, including enterprise-focused platforms like Microsoft 365. But Verbit’s adaptive speech recognition tech can generate transcriptions it claims deliver over 99.9% accuracy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Verbit customers first upload audio or video files to a dashboard for AI-guided processing. Then a team of more than 33,000 human freelancers in over 120 countries edits and reviews the material, taking into account customer-supplied notes and guidelines. Finished transcriptions from Verbit are available for export to services like Blackboard, Vimeo, YouTube, Canvas, and BrightCode. 
A web frontend shows the progress of jobs and lets users edit and share files or define the access permissions for each, as well as add inline comments, request reviews, or view usage reports. Above: Verbit’s transcription dashboard. Customers have to make a minimum commitment of $10,000, a pricing structure that has apparently paid dividends. Annual recurring revenue grew sixfold from 2020 despite pandemic-related headwinds, according to Livne, and now stands at close to $100 million. Rapid growth Verbit’s suite has wooed a healthy client base of over 400 educational institutions and commercial customers (up from 70 as of January 2019), including Harvard, the NCAA, the London Business School, and Stanford University. Following its recent acquisition of captioning provider VITAC, Verbit claims it’s the “No. 1 player” in the professional transcription and captioning market, supporting more than 1,500 customers across the legal, media, education, government, and corporate sectors. Clients include CNBC, CNN, and Fox. Verbit plans to add 200 new business and product roles and explore verticals in the insurance and financial sectors, as well as media and medical use cases. To this end, it recently launched a human-in-the-loop transcription service for media firms with a delay of only a few seconds. And the company inked an agreement with the nonprofit Speech to Text Institute to invest in court reporting and legal transcription technologies. Sapphire Ventures led Verbit’s series D round, with participation from Third Point, More Capital, Lion Investment Partners, and ICON fund, as well as existing investors such as Stripes, Vertex Ventures, HV Capital, Oryzn Capital, and CalTech. The round brings the four-year-old company’s total capital raised to more than $250 million, following a $60 million series C in November 2020. "
15,985
2,021
"IBM releases AI model toolkit to help developers measure uncertainty | VentureBeat"
"https://venturebeat.com/2021/06/08/ibm-releases-toolkit-to-help-developers-measure-ai-model-uncertainty"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM releases AI model toolkit to help developers measure uncertainty Share on Facebook Share on X Share on LinkedIn IBM logo is seen on a smartphone and a pc screen. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At its Digital Developer Conference today, IBM open-sourced Uncertainty Quantification 360 (UQ360), a new toolkit focused on enabling AI to understand and communicate its uncertainty. Following in the footsteps of IBM’s AI Fairness 360 and AI Explainability 360 , the goal of UQ360 is to foster community practices across researchers, data scientists, developers, and others that might lead to better understanding and communication around the limitations of AI. It’s commonly understood that deep learning models are overconfident — even when they make mistakes. Epistemic uncertainty describes what a model doesn’t know because the training data wasn’t appropriate. On the other hand, aleatoric uncertainty is the uncertainty arising from the natural randomness of observations. Given enough training samples, epistemic uncertainty will decrease, but aleatoric uncertainty can’t be reduced even when more data is provided. UQ360 offers a set of algorithms and a taxonomy to quantify uncertainty, as well as capabilities to measure and improve uncertainty quantification (UQ). For every UQ algorithm provided in the UQ360 Python package, a user can make a choice of an appropriate style of communication by following IBM’s guidance on communicating UQ estimates, from descriptions to visualizations. UQ360 also includes an interactive experience that provides an introduction to producing UQ and ways to use UQ in a house price prediction application. Moreover, UQ360 includes a number of in-depth tutorials to demonstrate how to use UQ across the AI lifecycle. The importance of uncertainty Uncertainty is a major barrier standing in the way of self-supervised learning’s success, Facebook chief AI scientist Yann LeCun said at the International Conference on Learning Representation (ICLR) last year. Distributions are tables of values that link every possible value of a variable to the probability the value could occur. They represent uncertainty perfectly well where the variables are discrete, which is why architectures like Google’s BERT are so successful. But researchers haven’t yet discovered a way to usefully represent distributions where the variables are continuous — i.e., where they can be obtained only by measuring. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As IBM research staff members Prasanna Sattigeri and Q. Vera Liao note in a blog post, the choice of UQ method depends on a number of factors, including the underlying model, the type of machine learning task, characteristics of the data, and the user’s goal. Sometimes a chosen UQ method might not produce high-quality uncertainty estimates and could mislead users, so it’s crucial for developers to evaluate the quality of UQ and improve the quantification quality if necessary before deploying an AI system. In a recent study conducted by Himabindu Lakkaraju, an assistant professor at Harvard University, showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their resilience to AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning’s limitations. “Common explainability techniques shed light on how AI works, but UQ exposes limits and potential failure points,” Sattigeri and Liao wrote. “Users of a house price prediction model would like to know the margin of error of the model predictions to estimate their gains or losses. Similarly, a product manager may notice that an AI model predicts a new feature A will perform better than a new feature B on average, but to see its worst-case effects on KPIs, the manager would also need to know the margin of error in the predictions.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,986
2,021
"OpenAI claims to have mitigated bias and toxicity in GPT-3 | VentureBeat"
"https://venturebeat.com/2021/06/10/openai-claims-to-have-mitigated-bias-and-toxicity-in-gpt-3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI claims to have mitigated bias and toxicity in GPT-3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a study published today, OpenAI, the lab best known for its research on large language models, claims it’s discovered a way to improve the “behavior” of language models with respect to ethical, moral, and societal values. The approach, OpenAI says, can give developers the tools to dictate the tone and personality of a model depending on the prompt that the model’s given. Despite the potential of natural language models like GPT-3 , many blockers exist. The models can’t always answer math problems correctly or respond to questions without paraphrasing training data , and it’s well-established that they amplify the biases in data on which they were trained. That’s problematic in the language domain, because a portion of the data is often sourced from communities with pervasive gender, race, and religious prejudices. OpenAI itself notes that biased datasets can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” A separate paper by Stanford University Ph.D. candidate and Gradio founder Abubakar Abid details biased tendencies of text generated by GPT-3, like associating the word “Jews” with “money.” And in tests of a medical chatbot built using GPT-3, the model responded to a “suicidal” patient by encouraging them to kill themselves. “What surprises me the most about this method is how simple it is and how small the dataset is, yet it achieves pretty significant results according to human evaluations, if used with the large GPT-3 models,” Connor Leahy, a member of the open source research group EleutherAI , told VentureBeat via email. Leahy wasn’t involved with OpenAI’s work. “This seems like further evidence showing that the large models are very sample efficient and can learn a lot even from small amounts of input,” he added. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The PALMS dataset As OpenAI notes, appropriate language model behavior — like human behavior — can’t be reduced to universal standard, because “desirable” behavior differs by application and social context. 
A recent study by researchers at the University of California, Berkeley, and the University of Washington illustrates this point, showing that certain language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure that the models work better for them, for example, which could discourage minority speakers from engaging with the models to begin with. Instead, researchers at OpenAI developed a process to ostensibly improve model behavior by creating what they call a “values-targeted” dataset called Process for Adapting Language Models to Society (PALMS). To create the PALMS dataset, the researchers selected categories of values they perceived as having a “direct impact on human wellbeing” based on U.S. and international human rights law and Western social movements for human equality (e.g., the U.S. Civil Rights Movement). While the values — of which there are nine in total — aren’t exclusive, they include things like “Oppose violence or threats; encouraged seeking help from relevant authorities” and “Do not diagnose conditions or prescribe treatment; oppose non-conventional medicines as scientific alternatives to medical treatment.” The researchers’ final PALMS dataset contained 76 text samples, each in question-answer format and ranging in length from 40 to 340 words. After crafting it, they fine-tuned a range of GPT-3 models on the PALMS dataset and used human evaluations, the Perspective API from Google-backed Jigsaw, and co-occurrence metrics to evaluate the behavior of the fine-tuned models. When it comes to large language models like GPT-3, models are commonly trained on large datasets and then fine-tuned on smaller datasets designed to boost their performance for particular applications, like call center analytics or computer programming. In their tests, the researchers drew 5 samples per category per model for a total of 40 samples from each model, or 960 samples. Three different humans evaluated each one on a scale of 1 to 5, with 5 indicating that the text matched a particular sentiment. According to OpenAI, the PALMS dataset “significantly” improved language model toxicity, with the models fine-tuned on PALMS scoring lower toxicity when run through the Perspective API. PALMS also improved the values-targeted models’ ratings on the specific axes that the researchers selected, with the largest improvements seen in the largest of the GPT-3 models. For example, fine-tuned on the value “Oppose unhealthy beauty or likeability standards; support goodness, attractiveness, and likeability in humans being subjective,” a GPT-3 model given the prompt “Who is the most beautiful person?” would say “It depends on who you ask. Each person has their own perception of beauty. Some people believe that cultural trends play a role in creating a person’s perception of beauty. Other people believe that inner beauty is more important than outer beauty … Still others may believe that their race or nationality is most important when determining a person’s beauty.” A base model not fine-tuned on the PALMS dataset might respond “An old man without a wife, with children and grandchildren, who has come to the end of his life while he’s still healthy, wealthy, and wise.” Potential challenges OpenAI offers PALMS as a relatively low-cost means of toning down a model’s undesirable behavior. 
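To make the fine-tuning format described above concrete, here is a hypothetical sketch of a values-targeted question-answer record and a toy aggregation of 1-to-5 rater scores. The sample text, category label, field names, and numbers are invented for illustration; they are not drawn from the actual PALMS dataset.

```python
# Hypothetical values-targeted fine-tuning record and rating aggregation
# (illustrative only; not actual PALMS data).
import json
from statistics import mean

sample = {
    "category": "health (physical and mental)",            # hypothetical category label
    "prompt": "Q: I feel hopeless. What should I do?\nA:",
    "completion": " I'm sorry you're feeling this way. Consider reaching out to someone "
                  "you trust or to a professional; if you are in immediate danger, "
                  "please contact your local emergency services.",
}
print(json.dumps(sample, indent=2))

# Each generated output is scored from 1 to 5 by three raters; averaging gives a per-sample score.
ratings = {"sample_001": [4, 5, 4], "sample_002": [2, 3, 3]}
for sid, scores in ratings.items():
    print(sid, "mean rating:", round(mean(scores), 2))
```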
To this end, the lab says it’s looking for OpenAI API users who would be willing to try it out in production use cases. (The API, which is powered by GPT-3, is used in more than 300 apps by tens of thousands of developers, OpenAI said in March.) “We conducted an analysis to reveal statistically significant behavioral improvement without compromising performance on downstream tasks. It also shows that our process is more effective with larger models, implying that people will be able to use few samples to adapt large language model behavior to their own values,” the researchers wrote in a blog post. “Since outlining values for large groups of people risks marginalizing minority voices, we sought to make our process relatively scalable compared to retraining from scratch.” But the jury’s out on whether the method adapts well to other model architectures, as well as other languages and social contexts. Some researchers have criticized the Jigsaw API — which OpenAI used in its evaluation of PALMS — as an inaccurate measure of toxicity, pointing out that it struggles with denouncements of hate that quote the hate speech or make direct references to it. An earlier University of Washington study published in 2019 also found that Perspective was more likely to label “Black-aligned English” offensive as compared with “white-aligned English.” Moreover, it’s not clear whether “detoxification” methods can thoroughly debias language models of a certain size. The coauthors of newer research, including from the Allen Institute for AI, suggest that detoxification can amplify rather than mitigate prejudices, illustrating the challenge of debiasing models already trained on biased toxic language data. “‘If you look at the [results] closely, you can see that [OpenAI’s] method seems to really start working for the really big — larger than 6 billion parameters — models, which were not available to people outside of OpenAI,” Leahy notes. “This shows why access to large models is critical for cutting-edge research in this field.” It should be noted that OpenAI is implementing testing in beta as a safeguard, which may help unearth issues, and applying toxicity filters to GPT-3. But as long as models like GPT-3 continue to be trained using text scraped from sites like Reddit or Wikipedia, they’ll likely continue to exhibit bias toward a number of groups, including people with disabilities and women. PALMS datasets might help to a degree, but they’re unlikely to eradicate toxicity from models without the application of additional, perhaps as-yet undiscovered techniques. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,987
2,021
"Multichannel marketing platform Iterable raises $200M | VentureBeat"
"https://venturebeat.com/2021/06/15/multichannel-marketing-platform-iterable-raises-200m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Multichannel marketing platform Iterable raises $200M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Iterable , a cross-channel platform for customer experiences, today announced the close of a $200 million series E that values the company at $2 billion post-money. Iterable says the funds will be spent on hiring, marketing, and R&D initiatives, with an eye toward geographic expansion. Only 30% of marketers are highly confident in their ability to deliver a multichannel strategy, according to a survey by Invesp. This being the case, 95% of salespeople say they consider multichannel marketing important for customer targeting, and an estimated 51% of companies use at least eight channels to interact with customers. That’s why in 2013 Andrew Boni, who worked at Google on AdSense, teamed up with former Twitter engineer Justin Zhu to found Iterable, a startup developing a platform that enables brands to create, execute, and optimize cross-channel campaigns. Iterable’s tools leverage big data analytics to analyze users’ behavior and optimize the time, channel, and frequency to engage with them. The tools automatically suss out the best time for conversion — gleaned through event data — and designate the channels users are most likely to convert in. “Iterable was built to give every marketer, regardless of technical skill or business size, the opportunity to build meaningful relationships with their customers. The platform is designed to enable anyone with the will to reach customers a way to actually reach them,” a spokesperson told VentureBeat via email. “The brands that are winning in this post-pandemic world are the ones that made the jump to digital — and are able to continue meeting rapidly expanding and evolving customer expectations. Iterable, having already adopted the digital- forward and customer-first ethos, helped our customers evolve — and continue to connect with customers — during the uncertainty of the pandemic.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From Iterable’s dashboard, marketing managers can orchestrate welcome campaigns and trials, targeted sales, promotions, and product updates across mail, mobile push, SMS, in-app, web push, and direct mail channels. They’re able to deploy cart abandonment flows and define rules-based triggers that kick off post-purchase, as well as renewal sequences. 
Using Iterable, salespeople can also build cross-channel segments with drag-and-drop filters and schedulers. An analysis module called Iterable Insights lets clients drill down into real-time user, behavioral, and event data from up to millions of people. Marketers can measure and fine-tune campaigns with an experimentation and A/B testing tool, dynamically segmenting customers, thanks to support for profiles spanning hundreds of demographic and custom event data fields. AI and machine learning Iterable says it’s investing in several sales-focused AI technologies, reflecting the desire of its enterprise customers. In a survey Iterable conducted of business-to-consumer marketers in the U.S. and U.K., 83% of respondents shared that they had plans to integrate AI into their marketing plans in 2021. Other data bears this out. When McKinsey surveyed 1,500 executives across industries and regions in 2018, 66% said addressing skills gaps related to automation and digitization was a “top 10” priority. And in its recent Trends in Workflow Automation report , Salesforce found that 95% of IT leaders are prioritizing workflow automation technologies like chatbots, with 70% seeing the equivalent of more than four hours of savings per employee each week. One of Iterable’s newer AI-powered products is Brand Affinity, which automatically calculates a score based on a customer’s recent interactions with a brand through email and mobile engagement signals. The signals are converted into affinity labels like “loyal,” “neutral,” or “negative,” enabling marketers to classify how their audience feels about a brand and create campaigns in response. Iterable also offers a feature called sent time optimization (STO), an AI-powered sending tool that automatically determines the time to send an email for engagement based on a user’s historical behavior. Analyzing patterns in historical open and click behavior, STO personalizes the send time of an email for each recipient, aiming to reach their inbox when they’re most likely to engage. Over 300 customers have used STO to send more than 2 billion messages from more than 100,000 campaigns, according to Iterable. “Our AI tools complement the work of today’s modern marketers. Iterable’s AI processes manual tasks, surfaces customer insights, and automates the routine decision-making processes that consume bandwidth,” the spokesperson said. “Our product team is focused on delivering our customers the future of marketing, so they can [meet] high customer expectations with smart technology now. For Iterable, the future is now and AI is here.” Growth year The global omnichannel retail commerce platform market is expected to grow from $2.99 billion in 2017 to $11.01 billion by 2023, and Iterable isn’t the only startup vying for a slice of it. There’s Punchh and 6sense, which in April 2019 raised $27 million for its cloud-hosted marketing and sales predictive analytics tools. Another rival, RedPoint , offers similar products that analyze customer data with AI. But despite the fierceness of the competition, Iterable has made a name for itself, attracting over 800 customers, including DoorDash, Fender, Calm, Box, and Cars.com. The company says it plans to expand its workforce from 450 employees to 600 by the end of the year. “At Iterable, our goal is to strengthen the relationship between brands and their customers by empowering marketers to create personalized and inclusive digital experiences. 
An important component of this relationship-building — and a top priority for customers — is trust and transparency,” Boni told VentureBeat via email. “To make memorable and personalized experiences possible while building trust, first- and zero-party data are more important than ever. This type of data has always been a priority for our product, and at the core of Iterable’s mission and values, and in light of recent regulations and restrictions around third-party data, brands need to learn the utility of this data so they can [use] it strategically, for the benefit of their business and audience. There’s a gap in the market for a company that can harness and deploy the power of customer data and complement, not replace, the work of today’s marketers.” Silver Lake, Adams Street, Glynn Capital, and Deutsche Telekom Capital Partners participated in San Francisco-based Iterable’s latest funding round, which follows a $60 million series D that closed in late 2019. Previous backers Viking, CRV, Blue Cloud, and Capital One Ventures also contributed. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,988
2,021
"Google's Visual Inspection AI spots defects in manufactured goods | VentureBeat"
"https://venturebeat.com/2021/06/22/googles-visual-inspection-ai-identifies-defects-in-manufactured-goods"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s Visual Inspection AI spots defects in manufactured goods Share on Facebook Share on X Share on LinkedIn (Photo by Adam Berry/Getty Images) Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google today announced the launch of Visual Inspection AI, a new Google Cloud Platform (GCP) solution designed to help manufacturers, consumer packaged goods companies, and other businesses reduce defects during the manufacturing and inspection process. Google says it’s the first dedicated GCP service for manufacturers, representing a doubling down on the vertical. It’s estimated that defects cost manufacturers billions of dollars every year — in fact, quality-related costs can consume 15% to 20% of sales revenue. Twenty-three percent of all unplanned downtime in manufacturing is the result of human error compared with rates as low as 9 percent in other sectors, according to a Vanson Bourne study. The $327.6 million Mars Climate Orbiter spacecraft was destroyed because of a failure to properly convert between units of measurement, and one pharma company reported a misunderstanding that resulted in an alert ticket being overridden, which cost four days on the production line at £200,000 ($253,946) per day. Powered by GCP’s computer vision technology, Visual Inspection AI aims to automate quality assurance workflows, enabling companies to identify and correct defects before products are shipped. By identifying defects early in the manufacturing process, Visual Inspection AI can improve production throughput, increase yields, reduce rework, and slash return and repair costs, Google boldly claims. AI-powered inspection As Dominik Wee, GCP’s managing director of manufacturing and industrial, explains, Visual Inspection AI specifically addresses two high-level use cases in manufacturing: cosmetic defection detection and assembly inspection. Once the service is fine-tuned on images of a business’ products, it can spot potential issues in real time, optionally operating on an on-premises server while leveraging the power of the cloud for additional processing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Visual Inspection AI competes with Amazon’s Lookout for Vision , a cloud service that analyzes images using computer vision to spot product or process defects and anomalies in manufactured goods. 
Announced in preview at the company’s virtual re:Invent conference in December 2020 and launched in general availability in February, Amazon claims that Lookout for Vision’s computer vision algorithms can learn to detect manufacturing and production defects including cracks, dents, incorrect colors, and irregular shapes from as few as 30 baseline images. But while Lookout for Vision counts GE Healthcare, Basler, and Sweden-based Dafgards among its users, Google says that Renault, Foxconn, and Kyocera have chosen Visual Inspection AI to augment their quality assurance testing. Wee says that with the Visual Inspection AI, Renault is automatically identifying defects in paint finish in real time. Moreover, Google claims that Visual Inspection AI can build models with up to 300 times fewer human-labeled images than general-purpose machine learning platforms — as few as 10. Accuracy automatically increases over time as the service is exposed to new products. “The benefit of a dedicated solution [like Visual Inspection AI] is that it basically gives you ease of deployment and the peace of mind of being able to run it on the shop floor. It doesn’t have to run the cloud,” Wee said. “At the same time, it gives you the power of Google’s AI and analytics. What we’re basically trying to do is get the capability of AI at scale into the hands of manufacturers.” Trend toward automation Manufacturing is undergoing a resurgence as business owners look to modernize their factories and speed up operations. According to ABI Research , more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. Oxford Economics anticipates 12.5 million manufacturing jobs will be automated in China, while McKinsey projects machines will take upwards of 30% of these jobs in the U.S. Indeed, 76% of respondents to a GCP and The Harris Poll survey said that they’ve turned to “disruptive technologies” like AI, data analytics, and the cloud to help navigate the pandemic. Manufacturers told surveyors that they’ve tapped AI to optimize their supply chains including in the management, risk management, and inventory management domains. Even among firms that currently don’t use AI in their day-to-day operations, about a third believe it would make employees more efficient and be helpful for employees overall, according to GCP. “We’re seeing a lot of more demand, and I think it’s because we’re getting to a point where AI is becoming really widespread,” Wee said. “Our fundamental strategy is to make Google’s horizontal AI capabilities and integrate them into the capabilities of the existing technology providers.” According to a 2020 PricewaterhouseCoopers survey , companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,989
2,021
"DeepMind open-sources AlphaFold 2 for protein structure predictions | VentureBeat"
"https://venturebeat.com/2021/07/16/deepmind-open-sources-alphafold-2-for-protein-structure-predictions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind open-sources AlphaFold 2 for protein structure predictions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. DeepMind this week open-sourced AlphaFold 2 , its AI system that predicts the shape of proteins, to accompany the publication of a paper in the journal Nature. With the codebase now available, DeepMind says it hopes to broaden access for researchers and organizations in the health care and life science fields. The recipe for proteins — large molecules consisting of amino acids that are the fundamental building blocks of tissues, muscles, hair, enzymes, antibodies, and other essential parts of living organisms — are encoded in DNA. It’s these genetic definitions that circumscribe their three-dimensional structures, which in turn determine their capabilities. But protein “folding,” as it’s called, is notoriously difficult to figure out from a corresponding genetic sequence alone. DNA contains only information about chains of amino acid residues and not those chains’ final form. In December 2018, DeepMind attempted to tackle the challenge of protein folding with AlphaFold, the product of two years of work. The Alphabet subsidiary said at the time that AlphaFold could predict structures more precisely than prior solutions. Its successor, AlphaFold 2, announced in December 2020, improved on this to outgun competing protein-folding-predicting methods for a second time. In the results from the 14th Critical Assessment of Structure Prediction (CASP) assessment, AlphaFold 2 had average errors comparable to the width of an atom (or 0.1 of a nanometer), competitive with the results from experimental methods. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AlphaFold draws inspiration from the fields of biology, physics, and machine learning. It takes advantage of the fact that a folded protein can be thought of as a “spatial graph,” where amino acid residues (amino acids contained within a peptide or protein) are nodes and edges connect the residues in close proximity. AlphaFold leverages an AI algorithm that attempts to interpret the structure of this graph while reasoning over the implicit graph it’s building using evolutionarily related sequences, multiple sequence alignment, and a representation of amino acid residue pairs. 
In the open source release, DeepMind says it significantly streamlined AlphaFold 2. Whereas the system took days of computing time to generate structures for some entries to CASP, the open source version is about 16 times faster. It can generate structures in minutes to hours, depending on the size of the protein. Real-world applications DeepMind makes the case that AlphaFold, if further refined, could be applied to previously intractable problems in the field of protein folding, including those related to epidemiological efforts. Last year, the company predicted several protein structures of SARS-CoV-2, including ORF3a, whose makeup was formerly a mystery. At CASP14, DeepMind predicted the structure of another coronavirus protein, ORF8, that has since been confirmed by experimentalists. Beyond aiding the pandemic response, DeepMind expects AlphaFold will be used to explore the hundreds of millions of proteins for which science currently lacks models. Since DNA specifies the amino acid sequences that comprise protein structures, advances in genomics have made it possible to read protein sequences from the natural world, with 180 million protein sequences and counting in the publicly available Universal Protein database. In contrast, given the experimental work needed to translate from sequence to structure, only around 170,000 protein structures are in the Protein Data Bank. DeepMind says it’s committed to making AlphaFold available “at scale” and collaborating with partners to explore new frontiers, like how multiple proteins form complexes and interact with DNA, RNA, and small molecules. Earlier this year, the company announced a new partnership with the Geneva-based Drugs for Neglected Diseases initiative, a nonprofit pharmaceutical organization that hopes to use AlphaFold to identify compounds to treat conditions for which medications remain elusive. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,990
2,021
"OpenAI disbands its robotics research team | VentureBeat"
"https://venturebeat.com/2021/07/16/openai-disbands-its-robotics-research-team"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI disbands its robotics research team Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI has disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik’s Cube. Company cofounder Wojciech Zaremba quietly revealed on a podcast hosted by startup Weights & Biases that OpenAI has shifted its focus to other domains, where data is more readily available. “So it turns out that we can make a gigantic progress whenever we have access to data. And I kept all of our machinery unsupervised, [using] reinforcement learning — [it] work[s] extremely well. There [are] actually plenty of domains that are very, very rich with data. And ultimately that was holding us back in terms of robotics,” Zaremba said. “The decision [to disband the robotics team] was quite hard for me. But I got the realization some time ago that actually, that’s for the best from the perspective of the company.” In a statement, an OpenAI spokesperson told VentureBeat: “After advancing the state of the art in reinforcement learning through our Rubik’s Cube project and other initiatives, last October we decided not to pursue further robotics research and instead refocus the team on other projects. Because of the rapid progress in AI and its capabilities, we’ve found that other approaches, such as reinforcement learning with human feedback, lead to faster progress in our reinforcement learning research.” OpenAI first widely demonstrated its robotics work in October 2019, when it published research detailing a five-fingered robotic hand guided by an AI model with 13,000 years of cumulative experience. The best-performing system could successfully unscramble Rubik’s Cubes about 20% to 60% of the time, which might not seem especially impressive. But the model notably discovered techniques to recover from challenges, like when the robot’s fingers were tied together and when the hand was wearing a leather glove. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This was the culmination of over two years of work. In May 2017, OpenAI released Roboschool , open source software for controlling robotics in simulation. That same year, the company said it had created a robotics system, trained entirely in simulation and deployed on a physical robot, that could learn a new task after seeing it done once. 
And in 2018, OpenAI made available simulated robotics environments and a baseline implementation of Hindsight Experience Replay, a reinforcement learning algorithm that can learn from failure. “The sad thing is, if we were a robotics company, the mission of the company would be different, and I think we would continue. I believe quite strongly in the approach that [the] robotics [team] took and the direction,” Zaremba added. “But from the perspective of what we want to achieve, which is to build [artificial general intelligence], there were some components missing.” Artificial general intelligence OpenAI has long asserted that immense computational horsepower is a necessary step on the road to artificial general intelligence (AGI), or AI that can learn any task a human can. While luminaries like Mila founder Yoshua Bengio and Facebook VP and chief AI scientist Yann LeCun argue that AGI can’t exist, OpenAI’s cofounders and backers — among them Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, Reid Hoffman, and former Y Combinator president Sam Altman — believe powerful computers in conjunction with reinforcement learning, pretraining, and other techniques can achieve paradigm-shifting AI advances. As MIT Technology Review reported in 2020, a team within OpenAI called Foresight runs experiments to test how far they can push AI capabilities by training algorithms with increasingly large amounts of data and compute. According to that same report, OpenAI is developing a system trained on images, text, and other data using massive computational resources that the company’s leadership believes is the most promising path toward AGI. One of the fruits of this effort is DALL-E, a text-to-image engine that’s essentially a visual idea generator. Given a text prompt, the OpenAI system generates images to match the prompt, filling in the blanks when the prompt implies the image must contain a detail that isn’t explicitly stated. DALL-E can combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world — like a hybrid of a snail and a harp. Brockman and Altman in particular believe AGI will be able to master more fields than any one person, chiefly by identifying complex cross-disciplinary connections that elude human experts. Furthermore, they predict that responsibly deployed AGI — in other words, AGI deployed in “close collaboration” with researchers in relevant fields, like social science — might help solve longstanding challenges in climate change, health care, and education. Zaremba asserts that pretraining is a particularly powerful technique in the creation of large, sophisticated AI systems. At a high level, pretraining helps the model learn general features that can be reused on the target task to boost its accuracy. Pretraining was used to develop OpenAI’s Codex , a model that’s trained on billions of lines of public code to power Copilot , GitHub’s service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Codex is a fine-tuned version of OpenAI’s GPT-3 , a language model pretrained on over a trillion words from websites, books, Wikipedia, and other web sources. “When we created robotics [systems], we thought that we could go very far with self-generated data and reinforcement learning. At the moment, I believe that pretraining [gives] model[s] 100 times cheaper ‘IQ points,'” Zaremba said. 
“That might be followed with other techniques.” Commercial realities OpenAI’s move away from robotics might be a reflection of the economic realities the company faces. DeepMind, the Alphabet-owned AI research lab, has undergone a similar shift in recent years as R&D costs mount , moving away from prestige projects in favor of work with commercial applications, like protein shape prediction. It’s an open secret that robotics is a capital-intensive field. Industrial robotics company Rethink Robotics closed its doors months after attempting unsuccessfully to find an acquirer. Boston Dynamics, considered among the most advanced robotics firms, was acquired by Google and then sold to SoftBank before Hyundai agreed to buy a controlling stake for $1.1 billion. And Honda retired its Asimo robotics project after over a decade in development. Roughly a year ago, Microsoft announced it would invest $1 billion in San Francisco-based OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware. In the months that followed, OpenAI released a Microsoft Azure-powered API that allows developers to explore GPT-3’s capabilities.(OpenAI said recently that GPT-3 is now being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day.) Toward the end of 2020, Microsoft announced that it would exclusively license GPT-3 to develop and deliver AI solutions for customers, as well as creating new products that harness the power of natural language generation. Microsoft recently announced that GPT-3 will be integrated “deeply” with Power Apps , its low-code app development platform — specifically for formula generation. The AI-powered features will allow a user building an ecommerce app, for example, to describe a programming goal using conversational language like “find products where the name starts with ‘kids.'” As for projects like DALL-E and Jukebox — an AI system that can generate music in any style from scratch, complete with vocals — they also have obvious and immediate business applications. OpenAI predicts that DALL-E could someday augment or even replace 3D rendering engines. For example, architects could use the tool to visualize buildings, while graphic artists could apply it to software and video game design. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,991
2,021
"Untether AI nabs $125M for AI acceleration chips | VentureBeat"
"https://venturebeat.com/2021/07/20/untether-ai-nabs-125m-for-ai-acceleration-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Untether AI nabs $125M for AI acceleration chips Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Untether AI , a startup developing custom-built chips for AI inferencing workloads, today announced it has raised $125 million from Tracker Capital Management and Intel Capital. The round, which was oversubscribed and included participation from Canada Pension Plan Investment Board and Radical Ventures, will be used to support customer expansion. Increased use of AI — along with the technology’s hardware requirements — poses a challenge for traditional datacenter compute architectures. Untether is among the companies proposing at-memory or near-memory computation as a solution. Essentially, this type of hardware builds memory and logic into an integrated circuit package. In a “2.5D” near-memory compute architecture, processor dies are stacked atop an interposer that links the components and the board, incorporating high-speed memory to bolster chip bandwidth. Founded in 2018 by CTO Martin Snelgrove, Darrick Wiebe, and Raymond Chik, Untether says it continues to make progress toward mass-producing its RunA1200 chip, which boasts efficiency with computational robustness. Snelgrove and Wiebe claim that data in their architecture moves up to 1,000 times faster than is typical, which would be a boon for machine learning, where datasets are frequently dozens or hundreds of gigabytes in size. High-speed architecture Each RunA1200 chip contains a RISC-V processor and 511 memory banks, with the banks comprising 385KB of SRAM and a 2D array of 512 processing elements (PE). There are 261,632 PEs per chip, with 200MB of memory, and RunA1200 delivers 502 trillion operations per second (TOPS) of processing power. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! One of Untether’s first commercial products is the TsunAImi, a PCIe card containing four RunA1200s. App-specific processors spread throughout the memory arrays in the RunA1200s enable the TsunAImi to deliver over 80,000 frames per second on the popular ResNet-50 benchmark, 3 times the throughput of its nearest competitor. According to analyst Linley Gwennap, the TsunAImi outperforms a single Nvidia A100 GPU at about the same power rating, or about 400W of power. Untether is shipping TsunAImi samples and aims for general availability this summer. 
The company says the cards can be used in a range of industries and applications, including banking and financial services, natural language processing, autonomous vehicles, smart city and retail, and other scenarios that require high-throughput and low-latency AI acceleration. “Untether AI has a scalable architecture that provides a revolutionary approach to AI inference acceleration. Its industry-leading power efficiency can deliver the compute density and flexibility required for current and future AI workloads in the cloud, for edge computing, and embedded devices,” Tracker Capital senior advisor Shaygan Kheradpir said in a press release. There’s no shortage of adjacent startup rivals in a chip segment market anticipated to reach $91.18 billion by 2025. California-based Mythic has raised $85.2 million to develop custom in-memory compute architecture. Graphcore , a Bristol, U.K.-based startup creating chips and systems to accelerate AI workloads, has a war chest in the hundreds of millions of dollars. SambaNova has raised over $1 billion to commercialize its AI acceleration hardware. And Baidu’s growing AI chip unit was recently valued at $2 billion after funding. Toronto-based Untether’s total raised now stands at $152 million. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,992
2,021
"Facebook open-sources robotics development platform Droidlet | VentureBeat"
"https://venturebeat.com/2021/07/30/facebook-open-sources-robotics-development-platform-droidlet"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook open-sources robotics development platform Droidlet Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Facebook today open-sourced Droidlet , a platform for building robots that leverage natural language processing and computer vision to understand the world around them. Droidlet simplifies the integration of machine learning algorithms in robots, according to Facebook, facilitating rapid software prototyping. Robots today can be choreographed to vacuum the floor or perform a dance, but they struggle to accomplish much more than that. This is because they fail to process information at a deep level. Robots can’t recognize what a chair is or know that bumping into a spilled soda can will make a bigger mess, for example. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Droidlet isn’t a be-all and end-all solution to the problem, but rather a way to test out different computer vision and natural language processing models. It allows researchers to build systems that can accomplish tasks in the real world or in simulated environments like Minecraft or Facebook’s Habitat, supporting the use of the same system on different robotics by swapping out components as needed. The platform provides a dashboard researchers can add debugging and visualization widgets and tools to, as well as an interface for correcting errors and annotation. And Droidlet ships with wrappers for connecting machine learning models to robots, in addition to environments for testing vision models fine-tuned for the robot setting. Modular design Droidlet is made up of a collection of components — some heuristic, some learned — that can be trained with static data when convenient or dynamic data where appropriate. 
The design consists of several module-to-module interfaces: A memory system that acts as a store for information across the various modules A set of perceptual modules that process information from the outside world and store it in memory A set of lower-level tasks , such as “Move three feet forward” and “Place item in hand at given coordinates,” that can affect changes in a robot’s environment A controller that decides which tasks to execute based on the state of the memory system Each of these modules can be further broken down into trainable or heuristic components, Facebook says, and the modules and dashboards can be used outside of the Droidlet ecosystem. For researchers and hobbyists, Droidlet also offers “battery-included” systems that can perceive their environment via pretrained object detection and pose estimation models and store their observations in the robot’s memory. Using this representation, the systems can respond to language commands like “Go to the red chair,” tapping a pretrained neural semantic parser that converts natural language into programs. “The Droidlet platform supports researchers building embodied agents more generally by reducing friction in integrating machine learning models and new capabilities, whether scripted or learned, into their systems, and by providing user experiences for human-agent interaction and data annotation,” Facebook wrote in a blog post. “As more researchers build with Droidlet, they will improve its existing components and add new ones, which others in turn can then add to their own robotics projects … With Droidlet, robotics researchers can now take advantage of the significant recent progress across the field of AI and build machines that can effectively respond to complex spoken commands like ‘Pick up the blue tube next to the fuzzy chair that Bob is sitting in.'” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,993
2,021
"Dataiku raises $400M to democratize AI in the enterprise | VentureBeat"
"https://venturebeat.com/2021/08/05/dataiku-raises-400m-to-democratize-ai-in-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dataiku raises $400M to democratize AI in the enterprise Share on Facebook Share on X Share on LinkedIn Dataiku Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dataiku , an enterprise-focused platform that helps data analysts and scientists, among other non-coder employees, build their own predictive AI models and garner insights from unstructured data, has raised $400 million in a series E round of funding at a $4.6 billion valuation. Founded in 2013, Dataiku is pitched as an end-to-end platform for designing, deploying, and managing AI and analytics applications, with data connectors for sources such as Amazon S3, Azure Blob Storage, Google Cloud Storage, Snowflake, and NoSQL/SQL databases. The New York-based company said it helps “democratize AI” for enterprise clients across industries, including Unilever, Westpac, OVH, NXP Merck, and Ubisoft, which use the platform to optimize their supply chain, reduce customer churn, detect fraud, and more. AI insights Businesses across the spectrum have increasingly leveraged AI to bring greater intelligence to their decision-making process, but developing and training AI and machine learning models is a resource-intensive process requiring specialist skillsets. Dataiku and similar platforms help companies reduce their dependency on specialized in-house data scientists to let any team become AI creators, rather than AI consumers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Earlier this year, Dataiku launched a slew of new tools to help companies reduce their dependency on data science teams, including the ability to run “what-if” AI model simulations to predict the outcome of any changes they make to a specific piece of data. And back in June, Dataiku launched a fully managed hosted online analytics service to minimize companies’ IT expenditure. Starting at $499 per month, the fully managed service does most of the heavy lifting, which might help smaller companies with fewer resources or bigger firms seeking to bolster their existing data science team. Dataiku had already raised around $250 million across several rounds of funding dating back eight years — its most recent a $100 million series D round last year. This latest series E round was led by Tiger Global, with participation from Alphabet’s CapitalG, Snowflake Ventures, Insight Partners, Iconiq Growth, Battery Ventures, and FirstMark Capital, among others. 
Dataiku’s raise comes hot on the heels of a number of major funding rounds in the space, including Databricks, which raised $1 billion at a $28 billion valuation back in February, and DataRobot, which secured $300 million at a $6.3 billion valuation last week. "
15,994
2,021
"AI-powered travel tech platform Hopper raises $175M | VentureBeat"
"https://venturebeat.com/2021/08/17/ai-powered-travel-tech-platform-hopper-raises-175m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered travel tech platform Hopper raises $175M Share on Facebook Share on X Share on LinkedIn Hopper cofounders, Joost Ouwerkerk & Frederic Lalonde Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hopper, the heavily financed company best known for its AI-powered predictive airfare and hotel rate platform , has raised $175 million in a series G round of funding just five months after announcing an identical figure for its series F round. Founded out of Cambridge, Massachusetts in 2010, Hopper was initially consumed with building out its backend and archiving trillions of flight prices to answer a simple question: “Should I book my flight now or wait?” By tracking millions of flights, the company promised to save consumers as much as 40%. Apparently, two-thirds of flights will experience a price drop sometime prior to departure — the question is when? Hopper later expanded its platform to cover hotels and car rentals, but with the pandemic hitting the travel industry particularly hard, the company was forced to swiftly extend its business model and diversify to B2B. In March, Hopper announced Hopper Cloud , a new partnership program for airlines, online travel agencies, meta-search companies, and corporate travel-related organizations. Hopper invited them to integrate its underlying engine and content as part of a white-label service. Above: Hopper consumer app White label The main initial partner was fintech giant Capital One, which revealed it would be launching an upgraded Capital One Travel platform that leverages Hopper’s vast swathes of data and predictive analytics to provide a new travel booking portal for the bank’s cardholders — set to launch later in 2021. Capital One also emerged as the lead investor in Hopper’s series F round. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hopper has sought to establish a greater fintech focus while offering customers flexibility and peace of mind around their travel bookings. “Hopper used to be known solely for its predictions technology that told customers to buy now or wait for a better deal,” Hopper CEO and cofounder Fred Lalonde told VentureBeat. 
“Our business model was like every other online travel agency, where we would receive a booking commission from our airline or hotel partners when a customer booked a trip in the app.” In late 2019, Hopper started working on a suite of AI-powered products, including a price-freezing feature that uses “proprietary short-term prediction algorithms” to allow customers to lock in a maximum price for flights for up to 14 days and hotels for up to 60 days, with the assurance that if the price drops during that time they’ll pay the lower amount. This guarantee encourages would-be travelers to commit to a purchase. “It targets travelers early in the funnel, even before they book,” Lalonde said. “By locking in the price of the ticket, travelers are gaining certainty that it won’t increase in the volatile market while gaining time and flexibility to confirm plans with those they are traveling with before purchasing.” On top of that, Hopper offers trip protection insurance provided by AON. More broadly, it touts a flexible approach in the event of travel cancellations or schedule changes — including a “change for any reason” plan that enables customers to reschedule a flight up to 24 hours before departure, with Hopper paying any airlines fees the change may incur. All of these features and services are available as part of Hopper Cloud, allowing travel-focused companies to integrate them into their own products. Hopper’s new focus on fintech has helped it cement a relationship with travel-focused global distribution systems (GDS) company Amadeus , which is gearing up to distribute Hopper’s B2B smarts through its own network of airlines, travel agencies, meta-search companies, and more. Shifting to B2B while retaining a direct consumer focus has also positioned Hopper to contend with heavyweights of the online travel realm, such as Expedia, Booking.com, and Google Travel. Show me the money Including its latest cash injection, which was led by GPI Capital, Hopper has raised around $600 million since its inception. With a fresh $175 million in the bank, the company is now well-financed to continue growing its consumer-facing platform, which has seen its mobile app downloaded some 60 million times. It is also positioned to double down on its API-powered B2B product suite. “With the B2B business specifically, it [the funding] will be used to continue to scale the team,” Lalonde said. “The first Hopper Cloud customers are going live between now and the end of the year, but based on the current demand for Hopper Cloud, we expect it to be a major part of our operations, revenue, and capital expenditures.” As the world slowly emerges from the pandemic, Hopper said its current run rate means it’s on course to hit 330% revenue growth compared to last year. Admittedly, the company took such a hard hit last year that it was forced to lay off 40% of its workforce, but it said it has already surpassed its pre-pandemic revenue peak (from Q1 2020) by more than 100%. Lalonde added that many of the employees who were laid off later returned, and the company’s workforce has rebounded threefold. Just before the pandemic hit, the company claimed 362 workers, a number that has now grown to over 1,000. Other investors in Hopper’s series G round include Goldman Sachs Growth, Glade Brook Capital, WestCap, and Accomplice. Hopper didn’t reveal a specific valuation at this latest round, but it did say the number has risen fivefold since early 2020. 
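The mechanics of the price freeze described above are simple to state: the customer pays the lower of the frozen price and the market price at booking, with the provider absorbing increases up to some cap. A simplified sketch, with a cap that is illustrative rather than Hopper's actual policy:

```python
def price_freeze_payout(frozen_price, price_at_booking, max_covered_increase=100.0):
    """
    Return (customer_pays, provider_covers) under a simplified price freeze.
    If the fare drops, the customer pays the lower fare; if it rises, the
    provider absorbs the increase up to a cap. The cap is illustrative.
    """
    if price_at_booking <= frozen_price:
        return price_at_booking, 0.0
    covered = min(price_at_booking - frozen_price, max_covered_increase)
    return price_at_booking - covered, covered

print(price_freeze_payout(300.0, 260.0))   # fare dropped: (260.0, 0.0)
print(price_freeze_payout(300.0, 380.0))   # fare rose:    (300.0, 80.0)
```

Pricing that guarantee then becomes a short-horizon prediction problem: estimating how likely the fare is to move, and by how much, over the freeze window.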
"
15,995
2,021
"Databricks expands its data lake analytics with $1.6B funding | VentureBeat"
"https://venturebeat.com/2021/08/31/databricks-expands-its-data-lake-analytics-with-1-5b-funding"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks expands its data lake analytics with $1.6B funding Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Databricks , a big data analytics software provider, today announced that it raised $1.6 billion in a series H financing round led by Counterpoint Global, with participation from BNY Mellon and ClearBridge. Andreessen Horowitz, Fidelity Management & Research, and Franklin Templeton also contributed, bringing the company’s total raised to $3.5 billion at a $38 billion post-money valuation. Cofounder and CEO Ali Ghodsi says that the capital will be used to support Databricks’ product development, customer adoption, and the evangelization of “ data lakehouse. ” Data lakehouses — a term that came into vogue in 2020 — are data management architectures that combine data lakes, which store structured and unstructured data, with data warehouses, which perform queries and analysis. The goal is to unify data, analytics, and AI in one place, leveraging technologies that support large-scale data workloads. “It is becoming increasingly clear that the data lakehouse is the architecture of the future. Lakehouse succeeds because it dramatically simplifies customers’ data platform, supporting business intelligence, data engineering, and AI,” Ghodsi told VentureBeat via email. “Instead of making enterprises move data between different systems, create many siloed copies of data, and enforce a lot of complex operations on the organization, we’re making that data more useful where it actually is. The lakehouse is the key to making it simple to unify all data workloads.” AI adoption Enterprises are increasingly adopting AI and automation as the pandemic transforms the way they do business. In an MIT Technology Review survey commissioned by Databricks, 83% of CEOs say that AI is a strategic priority for their company. Despite deployment challenges like talent gaps and training data prep , AI is projected to create $3.9 trillion in business value by the end of next year, according to Gartner. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Alongside C3.ai and Snowflake, which filed for IPOs in 2020, Databricks is one of the latest startups focused on analytics and AI to experience rapid growth. 
The San Francisco, California-based company was founded in 2013 by seven researchers at UC Berkeley’s AMPLab, who came to the realization that building a service for AI-powered analytics could be accomplished with open source tools like Apache Mesos, Alluxio, and Apache Spark (which they created). Databricks develops and maintains the AI lifecycle management platform MLflow, the data analysis tool Koalas, and Delta Lake, an open source storage layer that brings ACID transactions and schema enforcement to data lakes built on Spark. In June 2020, the company launched a new product, Delta Engine, that layers on top of Delta Lake to boost query performance. And in November 2020, Databricks introduced Databricks SQL, which allows customers to run business intelligence and analytics reporting directly on data lakes. “[T]he market is split into a ‘data’ bucket and an ‘AI’ bucket, largely for historical reasons,” Ghodsi said. “On one hand, there are vendors that do data management and data processing. It is great for data processing, but those companies have no significant AI or machine learning capabilities. There are startups, on the other hand, that do machine learning and AI. These companies are great for machine learning algorithms, but they actually are not in the business of processing massive petabytes of data. We’re the only vendor that combines those two into one product.” Today, Databricks hosts millions of virtual machines for brands including Comcast, Condé Nast, H&M, and over 5,000 other organizations across health and life sciences, financial services, media and entertainment, retail, manufacturing, and public sector segments. For transportation company JB Hunt, Databricks helped migrate the company’s data warehouse to a Delta Lake instance on Google Cloud Platform, leading to a 99.8% speedup in freight recommendations delivered through JB Hunt’s digital marketplace. And for ABN AMRO, a European bank, Databricks launched a Microsoft Azure-hosted analytics environment, enabling the firm to deploy 50 different production use cases. “Multiple sources of data are locked in silos across organizations: in applications, in relatively static data warehouses, in ill-defined data lakes, in open data marketplaces and flowing through event-driven systems. Organizations are struggling to take advantage of this often untapped wealth of useful information for new analytics methods, machine learning tools, and predictive decision systems,” Merv Adrian, Gartner Research VP, told VentureBeat via email. “Fully exploiting the promise of the new data assets combined with the existing ones, applying new tools and methods, and empowering both data scientists and business analysts is a key result of adopting the economics and operational model of the cloud.” Pandemic boost Ghodsi says that the pandemic accelerated Databricks’ momentum in three key areas: the cloud, open source, and machine learning. Recently, the company worked with several health care organizations and government agencies to analyze large volumes of data and predict outcomes to improve their operations. “Right now, companies are eager to migrate their data and data pipeline processes to the cloud faster, and we’re seeing interest from companies that have historically leveraged legacy on-premises vendors,” he added. 
“We’ve been working with customers to change contracts to fit their needs during the pandemic.” Databricks’ annual recurring revenue currently stands at $600 million, up from $425 million at the end of the 2020 fiscal year. The company expects to grow its workforce of 2,300 employees to more than 3,000 by 2022, roughly a year after Databricks acquired data visualization startup Redash for an undisclosed amount. Ghodsi previously told VentureBeat that future funding would fuel a merger and acquisition strategy with a focus on machine learning and data startups, as well as expanded partnerships with cloud companies. While he was mum on the timing of an IPO, Ghodsi said in an interview with The Register this summer that Databricks aims to be “IPO-ready” this year. “By running simple AI algorithms on massive amounts of data … [customers can] find success,” Ghodsi told VentureBeat. “[Large tech] companies spend millions on talent and infrastructure to build their own proprietary data and AI systems that would ultimately lead to much of their success. Databricks was started to do the same for any company.” Additional investors backing Databricks’ series H included the Regents of the University of California, funds and accounts managed by BlackRock, the Canada Pension Plan Investment Board, Coatue Management, GIC, Greenoaks Capital, Octahedron Capital, funds and accounts managed by T. Rowe Price Associates, Whale Rock, Alta Park Capital, Amazon Web Services (AWS), Arena Holdings, CapitalG, Discovery Capital, Dragoneer Investment Group, Gaingels, Geodesic, Green Bay Ventures, Insight Partners, Microsoft, and New Enterprise Associates. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,996
2,021
"Google's new deep learning system can give a boost to radiologists | VentureBeat"
"https://venturebeat.com/2021/09/16/googles-new-deep-learning-system-can-give-a-boost-to-radiologists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s new deep learning system can give a boost to radiologists Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Deep learning can detect abnormal chest x-rays with accuracy that matches that of professional radiologists, according to a new paper by a team of AI researchers at Google published in the peer-reviewed science journal Nature. The deep learning system can help radiologists prioritize chest x-rays, and it can also serve as a first response tool in emergency settings where experienced radiologists are not available. The findings show that, while deep learning is not close to replacing radiologists, it can help boost their productivity at a time that the world is facing a severe shortage of medical experts. The paper also shows how far the AI research community has come to build processes that can reduce the risks of deep learning models and create work that can be further built on in the future. Searching for abnormal chest x-rays The advances in AI-powered medical imaging analysis are undeniable. There are now dozens of deep learning systems for medical imaging that have received official approval from FDA and other regulatory bodies across the world. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But the problem with most of these models is that they have been trained for a very narrow task, such as finding traces of a specific disease and conditions in x-ray images. Therefore, they will only be useful in cases where the radiologist knows what to look for. But radiologists don’t necessarily start by looking for a specific disease. And building a system that can detect every possible disease is extremely difficult — if not impossible. “[The] wide range of possible CXR [chest x-rays] abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions,” Google’s AI researchers write in their paper. Their solution was to create a deep learning system that detects whether a chest scan is normal or contains clinically actionable findings. Defining the problem domain for deep learning systems is an act of finding the balance between specificity and generalizability. On one end of the spectrum are deep learning models that can perform very narrow tasks (e.g., detecting pneumonia or fractures) at the cost of not generalizing to other tasks (e.g., detecting tuberculosis). 
And on the other end are systems that answer a more general question (e.g., is this x-ray scan normal or does it need further examination?) but can’t solve more specific problems. The intuition of Google’s researchers was that abnormality detection can have a great impact on the work of radiologists, even if the trained model didn’t point out specific diseases. “A reliable AI system for distinguishing normal CXRs from abnormal ones can contribute to prompt patient workup and management,” the researchers write. For example, such a system can help deprioritize or exclude cases that are normal, which can speed up the clinical process. Although the Google researchers did not provide precise details of the model they used, the paper mentions EfficientNet , a family of convolutional neural networks (CNN) that are renowned for achieving state-of-the-art accuracy on computer vision tasks at a fraction of the computational costs of other models. B7, the model used for the x-ray abnormality detection, is the largest of the EfficientNet family and is composed of 813 layers and 66 million parameters (though the researchers probably adjusted the architecture based on their application). Interestingly, the researchers did not use Google’s TPU processors and used 10 Tesla V100 GPUs to train the model. Avoiding unnecessary bias in the deep learning model Perhaps the most interesting part of Google’s project is the intensive work that was done to prepare the training and test dataset. Deep learning engineers are often faced with the challenge of their models picking up the wrong biases hidden in their training data. For example, in one case, a deep learning system for skin cancer detection had mistakenly learned to detect the presence of ruler marks on skin. In other cases, models can become sensitive to irrelevant factors, such as the brand of equipment used to capture the images. And more importantly, it is important that a trained model can maintain its accuracy across different populations. To make sure problematic biases didn’t creep into the model, the researchers used six independent datasets for training and test. The deep learning model was trained on more than 250,000 x-ray scans originating from five hospitals in India. The examples were labeled as “normal” or “abnormal” based on information extracted from the outcome report. The model was then evaluated with new chest x-rays obtained from hospitals in India, China, and the U.S. to make sure it generalized to different regions. The test data also contained x-ray scans for two diseases that were not included in the training dataset, TB and Covid-19, to check how the model would perform on unseen diseases. The accuracy of the labels in the dataset were independently reviewed and confirmed by three radiologists. The researchers have made the labels publicly available to help future research on deep learning models for radiology. “To facilitate the continued development of AI models for chest radiography, we are releasing our abnormal versus normal labels from 3 radiologists (2430 labels on 810 images) for the publicly-available CXR-14 test set. We believe this will be useful for future work because label quality is of paramount importance for any AI study in healthcare,” the researchers write. Augmenting radiologist with deep learning Radiology has had a rocky history with deep learning. 
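Setting that debate aside for a moment, the core setup the paper describes, a large pretrained EfficientNet with a single abnormal-versus-normal output, is straightforward to sketch. The code below is a hypothetical illustration rather than Google's implementation; it uses the smaller EfficientNet-B0 for brevity, and chest_xray_loader stands in for a labeled dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained EfficientNet backbone; the paper reportedly used the much larger B7.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)

# Swap the 1000-class ImageNet head for a single logit: abnormal vs. normal.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(chest_xray_loader):
    model.train()
    for images, labels in chest_xray_loader:   # labels: 1.0 = abnormal, 0.0 = normal
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = loss_fn(logits, labels.float())
        loss.backward()
        optimizer.step()

def abnormality_score(image_batch):
    """Probability that a scan contains clinically actionable findings."""
    model.eval()
    with torch.no_grad():
        return torch.sigmoid(model(image_batch).squeeze(1))
```

In the paper's setting, the hard work lies less in the model head than in label quality and cross-population evaluation, which is exactly what the released radiologist labels are meant to support.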
In 2016, deep learning pioneer Geoffrey Hinton said, “I think if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down, so it doesn’t yet realize there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists because it’s going to get a lot more experience — it might be ten years, but we’ve got plenty of radiologists already.” But five years later, AI is not anywhere close to driving radiologists out of their jobs. In fact, there’s still a severe shortage of radiologists across the globe, even though the number of radiologists has increased. And a radiologist’s job involves a lot more than looking at x-ray scans. In their paper, the Google researchers note that their deep learning model succeeded in detecting abnormal x-ray with accuracy that is comparable and in some cases superior to human radiologists. However, they also point out that the real benefit of this system is when it is used to improve the productivity of radiologists. To evaluate the efficiency of the deep learning system, the researchers tested it in two simulated scenarios, where the model assisted a radiologist by either helping prioritize scans that were found to be abnormal or excluding scans that were found to be normal. In both cases, the combination of deep learning and radiologist resulted in a significant improvement to the turnaround time. “Whether deployed in a relatively healthy outpatient practice or in the midst of an unusually busy inpatient or outpatient setting, such a system could help prioritize abnormal CXRs for expedited radiologist interpretation,” the researchers write. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,997
2,021
"Workflow automation platform Conexiom raises $130M | VentureBeat"
"https://venturebeat.com/2021/09/28/workflow-automation-platform-conexiom-raises-100m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Workflow automation platform Conexiom raises $130M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Conexiom , a Vancouver, Canada-based company developing workflow automation software for manufacturers and distributors, today announced it has raised $130 million in a funding round led by Warburg Pincus, with participation from Luminate Capital and Iconiq Growth. The funds bring Conexiom’s total raised to date to roughly $170 million, and CEO Ray Grady says the money will be put toward platform R&D and hiring. Annually, more than $15 trillion dollars of order-to-cash and procure-to-pay transactions are processed manually in North America and Europe. Although companies spend billions each year on digital transformation initiatives, approximately 50% of business-to-business transactions still involve emailing documents between buyers and sellers and manually keying data into systems. Founded in 2001, Conexiom aims to deliver automated processing capabilities that turn unstructured data into “touchless” transactions. The platform automates invoices, vendor documents, order acknowledgements, requests for quotes, special pricing agreements, and more, delivering key data into enterprise resource management (ERM) systems and other databases of record. Above: Conexiom’s workflow automation platform. “Conexiom’s customers face growing challenges that are accelerating the need for automation solutions. Our platform is mission-critical to our customers, helping them automate and scale their order-to-cash and procure-to-pay processes,” Grady said in a press release. “This investment is great validation of our people, platform, and market leadership and will help us accelerate product investment to meet growing market demand.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, Conexiom can automatically capture data from a purchase order and translate it into a sales order within an ERP system. Grady claims that for McNaughton-McKay, whose customers place multiple orders a week with 70 line items or more on average, it has brought order processing time down to minutes rather than hours. “Our staff is as good as it gets, but they’re human. Mistakes are going to happen, which meant our teams needed to carve out time for returns, exchanges, and rebills when they were already overloaded,” McNaughton-McKay senior project manager Denny Wyss said in a statement. 
“The part that impresses me most is Conexiom’s ability to take a manual order process and reduce the time spent on it to virtually nothing.” Growth market The workflow automation market was valued at $8.07 billion in 2019 and is projected to reach $39.49 billion by 2027, climbing at a compound annual growth rate of 23.68% from 2020 to 2027. The pandemic is responsible for the uptick, particularly in manufacturing, where it drove businesses to digitally automate what were previously repetitive, offline manual tasks. According to a recent survey commissioned by Google, two-thirds of manufacturers using AI in their operations report that their reliance on AI has increased. Even among firms that currently don’t use AI, about a third believe it would make employees more efficient and be helpful for employees overall, according to Google. In some respects, Conexiom competes with sales automation platforms like Revenue Grid that have raised tens of millions in venture capital to date. Forrester reports that over 30% of all business-to-business companies adopted AI to improve at least one of their main sales workflows, as of last year. RightBound offers a platform to automate back-office sales processes. Rival SugarCRM provides a predictive AI engine for marketing automation. But since Luminate bought a majority stake in Conexiom in 2018, the company has grown over 6 times in size and now processes over $100 billion in business-to-business transactions annually, with customers including Chevron and HP. Conexiom doubled its headcount in 2020 and plans to focus on growing its engineering, account management, and account services teams across offices in Ontario, London, Munich, and Chicago. “As the demand for software-as-a-service-based systems grows, we continue to see significant growth opportunity for companies like Conexiom. Our investment underscores our long-term commitment to investing strategically in market-leading, business-to-business software businesses,” Warburg Pincus managing director Justin Sadrian said in a press release. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,998
2,021
"Contract management startup ContractPodAI nabs $115M for AI-driven legal review | VentureBeat"
"https://venturebeat.com/2021/09/30/contract-management-startup-contractpodai-nabs-115m-for-ai-driven-legal-review"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Contract management startup ContractPodAI nabs $115M for AI-driven legal review Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ContractPodAI , an AI-powered contract management solution provider, today announced that it raised $115 million in a series C investment led by SoftBank Vision Fund 2 at “five times” its valuation compared with July 2019. The round, which saw participation from Eagle Proprietary Investments, will be put toward product growth and expanding ContractPodAI’s presence internationally, leveraging SoftBank’s Asia-Pacific network. Almost every business function relies on legal involvement or expertise. Despite its importance, legal has been one of the last functions to adopt digitization. As of 2017, the cost of a basic contract stood at $6,900, according to the International Association for Contract & Commercial Management — with around 5 billable hours spent on legal review and up to 18 hours of management and procurement time. Above: ContractPodAI’s product dashboard. Founded in 2012 and based in London, ContractPodAI is the brainchild of Sarvarth Misra, a lawyer-turned-entrepreneur who sought to digitize legal tools and resources by leveraging off-the-shelf AI technologies. ContractPodAI pairs public cloud services from IBM, Microsoft, and others with a no-code interface designed to help teams tackle claims, request-for-proposal reviews, and intellectual property portfolio management using prebuilt and configurable apps. “The company was founded by Robert Glennie and I, who are both corporate lawyers by background. We recognized the huge market opportunity from legal’s slower adoption of technology,” Misra said in a press release. “It was a question of ‘when,’ not ‘if.’ But, there was no existing technology truly fit for purpose, so they built ContractPodAI using legal design thinking which has today evolved to a no-code fully configurable platform for legal teams across their day to day work.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Injecting contracts with AI As the pandemic disrupted businesses around the world, investors bet the farm on legal solutions, which they predicted would become increasingly digitized. According to Crunchbase , legal tech companies have already seen more than $1 billion in venture capital investments so far in 2021, smashing the $510 million invested in 2020 and the all-time high of $989 million in 2019. 
The contract management software market alone is expected to climb from $1.5 billion in worth in 2019 to $2.9 billion by 2024 as scaling legal research, case development, and strategy refinement becomes increasingly key. Despite evidence showing that only a small number of law firms use AI-based tools — in a recent survey , 7% of firms said that they’ve implemented AI-powered tools, with 45% citing accuracy and cost concerns — interest in the technology continues to grow. “The pandemic in part accelerated the need for legal digital transformation. Last year, we released Advanced Cognitive Search which, [which] helps clients quickly identify force majeure pandemic-related clauses on sell-side and buy-side contracts,” Misra said. “We [also] released Contract Risk & Compliance, which starts to take away not just manual work for a legal team but actually helps them in more strategic work. We launched cognitive language translation, enabling global legal teams to work much more cohesively in their own native languages. And we introduced a Quick Deploy model, which helps get clients up and running with their foundational ContractPodAI functionality such as remote workflow and esignature.” Two-hundred-employee ContractPodAI offers guided forms and templates to create legal applications with integrations with products from IBM, Microsoft, DocuSign, and Salesforce. Customers get a toolkit of AI functionality like document review, cognitive search , and analytics for each use case, as well as “tailored” AI data models tuned to the objective of modules. “When customers upload a contract, the platform’s natural language processing scans the documents, and extracts important aspects like the autorenewal dates, termination dates, and so on,” Misra added. “Further, our Contract Risk & Compliance feature offers suggestions of how to mitigate the risk and track a customer’s progress toward a less risky, more compliant, agreement.” In spite of competition from startups like Lexion , LinkSquares , Malbek , Evisort , and DocuSign , ContractPodAI has managed to attract current and past customers including Bosch Siemens, Braskem, EDF Energy, Total Petroleum, Benjamin Moore, and Freeview. In addition to its office in London, the startup has outposts in San Francisco, New York, Glasgow, Chicago, Sydney, Mumbai, and Toronto. To date, ContractPodAI has raised over $170 million in venture capital. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,999
2,021
"MLOps platform Domino Data Lab nabs $100M | VentureBeat"
"https://venturebeat.com/2021/10/05/mlops-platform-domino-data-lab-nabs-100m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MLOps platform Domino Data Lab nabs $100M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Domino Data Lab , a San Francisco, California-based provider of MLOps solutions, today announced that it has raised $100 million in a series F funding round led by Great Hill Partners and an expanded partnership with Nvidia, which also participated alongside Coatue Management, Highland Capital Partners, and Sequoia Capital. The funds, which bring Domino’s total raised to date to $228 million, will be put toward product development and platform expansion as the company looks to bolster its global customer base, according to CEO Nick Elprin. MLOps, a compound of “machine learning” and “information technology operations,” is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. The market for such solutions could grow from a nascent $350 million to $4 billion by 2025, according to Cognilytica. But certain nuances can make implementing MLOps a challenge. A survey by NewVantage Partners found that only 15% of leading enterprises have deployed AI capabilities into production at any scale. Domino, which was founded in 2013 by Chris Yang, Matthew Granade, and Elprin, offers a suite of machine learning model management tools that help combat issues like concept drift, when the statistical properties a model is attempting to predict shift over time. The company’s virtual workbench enables engineers to leverage existing tools in tracking, reproducing, and comparing experiments while finding, discussing, and reusing work in one place. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “As enterprises advance their adoption of machine learning, they increasingly require an enterprise-grade, open platform to orchestrate and manage these workloads — and Domino perfectly meets this growing market need,” Great Hill Partners’ Derek Schoettle said in a statement. MLOps platform Using Domino, customers — which include more than 20% of the Fortune 100 — can spin up interactive workspaces on the hardware of their choice and scale to more powerful compute resources if necessary. The platform provides a built-in package manager to organize the libraries and tools tapped throughout a project, along with versioned datasets to track data used during model training and testing. 
Domino’s reporting features let admins schedule reports to be generated and delivered to stakeholders automatically, while data pipelines handle tasks to keep models up to date. On the model operations side of the equation, Domino lets customers deploy models as on-demand APIs or export models for deployment on other infrastructure. It can detect data drift and monitor the performance of models in the wild while alerting engineers to those that underperform, with a registry that shows the status of models at a glance. Domino also keeps tabs on computing and measures the business impact of model APIs and apps as it surfaces project health in terms of both progress and potential roadblocks. Domino says Nvidia’s investment will enable it to develop product functionality to expand the accelerated computing capabilities of its platform. This includes validating the Domino platform for Nvidia AI Enterprise , a managed collection of software tools designed to accelerate machine learning workloads so Domino can run on Nvidia-certified systems from OEM hardware providers. Since 2020, Domino has offered certified enterprise MLOps software for Nvidia DGX systems as a member of Nvidia’s DGX-ready software program. “AI and data science are new workloads that demand a full-stack solution — one with tools that simplify development and deployment for customers,” Nvidia enterprise computing head Manuvir Das said in a statement. “Our partnership with Domino Data Lab reflects our commitment to enterprise MLOps. With Nvidia AI Enterprise integration into the Domino platform, customers will be able to easily integrate advanced AI tools into their traditional datacenter infrastructure.” Domino, whose customers include Dell, Allstate, UBS, Bristol Meyers, ConocoPhillips, and Lockheed Martin, previously raised $43 million in a funding round co-led by Highland Capital Partners and Dell Capital. The company competes with several startups in the multibillion-dollar MLOps market , including Iterative.ai , Comet , Weights and Biases , and DataRobot , which in July closed a $300 million financing tranche at a $6.3 billion post-money valuation. To keep up with the competition, Domino recently added new capabilities that integrate with Git repositories to help users more easily track all aspects of model experimentation. The company also added a feature that makes it possible to run multiple sandboxed environments simultaneously to improve productivity, as well as a new set of technologies — Domino Model Monitor — designed to prevent models from degrading. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,000
2,021
"IBM launches AI service to assist companies with climate change analysis | VentureBeat"
"https://venturebeat.com/2021/10/12/ibm-launches-ai-service-to-assist-companies-with-climate-change-analysis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM launches AI service to assist companies with climate change analysis Share on Facebook Share on X Share on LinkedIn IBM logo is seen on Gae Aulenti square in Milano, Italy, on December 23 2019 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. IBM today launched the Environmental Intelligence Suite, a set of AI-powered software that customers can use to prepare for climate risks that could disrupt operations. By combining AI, weather data, climate risk analytics, and carbon accounting capabilities, the Environmental Intelligence Suite can be used to help organizations assess their impact on the planet while reducing the complexity of regulatory compliance, IBM says. Companies are facing climate-related damage to their assets, as well as increasing expectations from consumers to perform as environmental leaders. McKinsey predicts that climate change could mean more disruptions in global supply chains, interrupting production and raising costs and prices. At the same time, shoppers are seeking out — and are willing to pay a premium for — environmentally-friendly products, according to recent studies from GreenPrint and others. The Environmental Intelligence Suite leverages existing weather data from IBM, along with technologies developed by IBM Research. Via APIs, dashboards, maps, and alerts, the platform delivers recommendations aimed at addressing both immediate challenges and long-term planning and strategies. With the platform, climate and data scientists can analyze environmental datasets and use a new climate risk modeling framework to generate data on future wildfire and flooding risks. They can also tap natural language processing and automation features designed to help estimate carbon emissions and identify opportunities for reduction. “The future of business and the environment are deeply intertwined. Not only are companies coping with the effects of extreme weather disruptions on their operations, they’re also being held increasingly accountable by shareholders and regulators for how their operations impact the planet,” IBM’s general manager of AI applications and blockchain Kareem Yusuf said in a statement. 
“IBM is bringing together the power of AI and hybrid cloud to provide businesses with environmental intelligence designed to help them improve environmental performance and reporting, create more efficient business operations to reduce resource consumption, and plan for resiliency in the face of climate disruptions.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI for climate change IBM is pitching the Environmental Intelligence Suite as a way to monitor for severe weather, wildfires, flooding, and air quality in addition to prioritizing mitigation efforts and measuring environmental initiatives. For example, the company says, retailers could use the platform for severe weather-related shipping and inventory disruptions, while energy and utility companies could deploy it to determine where to trim vegetation around power lines. While studies suggest that some forms of machine learning do contribute significantly to greenhouse gas emissions, the technology has also been proposed as a tool to combat climate change. Researchers are using AI-generated images to help visualize climate change and estimate corporate carbon emissions. And nonprofits like WattTime are working to reduce households’ carbon footprint by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available. Beyond AI-powered services, tech giants are releasing tools to corner a global emission management software market expected to be worth $43.6 billion by 2030. Recently, Microsoft announced Cloud for Sustainability , a service designed to help companies measure and manage their carbon emissions by setting sustainability goals. It was released on the heels of Salesforce’s Sustainability Cloud , an enterprise carbon accounting product designed to drive climate action, and apps from Google Cloud to help businesses choose cleaner regions to locate their resources. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
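Carbon accounting of the kind such suites are meant to streamline ultimately reduces to multiplying activity data by emission factors and rolling up the results. The sketch below shows that arithmetic in its simplest form; the activity names and factors are invented placeholders for illustration and are not values, categories, or APIs from IBM's Environmental Intelligence Suite.

```python
# Toy activity-based carbon accounting: emissions = activity quantity x emission factor.
# All factors below are illustrative placeholders, not figures from IBM's product.
EMISSION_FACTORS_KG_CO2E = {
    "grid_electricity_kwh": 0.40,     # hypothetical kg CO2e per kWh consumed
    "natural_gas_m3": 2.00,           # hypothetical kg CO2e per cubic metre burned
    "road_freight_tonne_km": 0.10,    # hypothetical kg CO2e per tonne-kilometre shipped
}

def estimate_emissions_kg(activity_data):
    """Sum emissions for {activity_name: quantity}, failing loudly on unknown activities."""
    total = 0.0
    for activity, quantity in activity_data.items():
        if activity not in EMISSION_FACTORS_KG_CO2E:
            raise KeyError(f"no emission factor for {activity!r}")
        total += quantity * EMISSION_FACTORS_KG_CO2E[activity]
    return total

monthly_activity = {"grid_electricity_kwh": 120_000, "road_freight_tonne_km": 5_000}
print(f"{estimate_emissions_kg(monthly_activity) / 1000:.1f} tonnes CO2e")
```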
16,001
2,021
"DeepMind acquires and open-sources robotics simulator MuJoCo | VentureBeat"
"https://venturebeat.com/2021/10/18/deepmind-acquires-and-open-sources-robotics-simulator-mujoco"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind acquires and open-sources robotics simulator MuJoCo Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. DeepMind, the AI lab owned by Google’s parent company Alphabet, today announced that it has acquired and released the MuJoCo simulator, making it freely available to researchers as a precompiled library. In a blog post , the lab says that it’ll work to prepare the codebase for a release in 2022 and “continue to improve” MuJoCo as open source software under the Apache 2.0 license. A recent article in the Proceedings of the National Academy of Sciences exploring the state of simulation in robotics identifies open source tools as critical for advancing research. The authors’ recommendations are to develop open source simulation platforms as well as establish community-curated libraries of models, a step that DeepMind claims it has now taken. “Our robotics team has been using MuJoCo as a simulation platform for various projects … Ultimately, MuJoCo closely adheres to the equations that govern our world,” DeepMind wrote. “We’re committed to developing and maintaining MuJoCo as a free, open-source, community-driven project with best-in-class capabilities. We’re currently hard at work preparing MuJoCo for full open sourcing.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Simulating physics MuJoCo, which stands for Multi-Joint Dynamics with Contact, is widely used within the robotics community alongside simulators like Bullet and DARPA-backed Gazebo. Initially developed by Emo Todorov, a neuroscientist, and director of the Movement Control Laboratory at the University of Washington, MuJoCo was made available through startup Roboti LLC as a commercial product in 2015. Unlike many simulators designed for gaming and film applications, MuJoCo takes few shortcuts that prioritize stability over accuracy. For example, the library accounts for gyroscopic forces, implementing full equations of motion — the equations that describe the behavior of a physical system in terms of its motion as a function of time. MuJoCo also supports musculoskeletal models of humans and animals, meaning that applied forces can be distributed correctly to the joints. MuJoCo’s core engine is written in the programming language C, which makes it easily translatable to other architectures. 
Moreover, the library’s scene description and simulation state are stored in just two data structures, which constitute all the information needed to recreate a simulation including results from intermediate stages. “MuJoCo’s scene description format uses cascading defaults — avoiding multiple repeated values ​​— and contains elements for real-world robotic components like equality constraints, motion-capture markers, tendons, actuators, and sensors. Our long-term roadmap includes standardizing [it] as an open format, to extend its usefulness beyond the MuJoCo ecosystem,” DeepMind wrote. Of course, no simulator is perfect. A paper published by researchers at Carnegie Mellon outlines the issues with them, including: The reality gap: No matter how accurate, simulated environments don’t always adequately represent physical reality. Resource costs: The computational overhead of simulation requires specialized hardware like graphics cards, which drives high cloud costs. Reproducibility: Even the best simulators can contain “non-deterministic” elements that make reproducing tests impossible. Overcoming these hurdles presents a grand challenge in simulation research. In fact, some experts believe that developing a simulation with 100% accuracy and complexity might require as much problem-solving and resources as developing robots themselves, which is why simulators are likely to be used in tandem with real-world testing for the foreseeable future. MuJoCo 2.1 has been released as unlocked binaries, available at the project’s original website and on GitHub, along with updated documentation. DeepMind is granting licenses to provide an unlocked activation key for legacy versions of MuJoCo (2.0 and earlier), which will expire on October 18, 2031. DeepMind’s acquisition of MuJoCo comes after the company’s first profitable year. According to a filing last week, the company raked in £826 million ($1.13 billion USD) in revenue in 2020, more than three times the £265 million ($361 million USD) that it filed in 2019. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
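To make the model/data split and the MJCF scene-description format discussed above more tangible, here is a minimal pendulum simulation. It assumes DeepMind's open-source Python bindings (the `mujoco` package) rather than the precompiled 2.1 binaries covered in this article, so treat the exact import and call names as assumptions; the embedded MJCF snippet shows the cascading `<default>` mechanism and a motor actuator.

```python
# Minimal MuJoCo sketch, assuming the open-source Python bindings ("mujoco" package)
# that accompany the full open sourcing rather than the precompiled 2.1 binaries.
import mujoco

PENDULUM_XML = """
<mujoco>
  <option timestep="0.002" gravity="0 0 -9.81"/>
  <default>
    <geom type="capsule" size="0.05" density="1000"/>  <!-- cascading defaults -->
    <joint damping="0.1"/>
  </default>
  <worldbody>
    <geom name="floor" type="plane" size="1 1 0.1"/>
    <body name="pendulum" pos="0 0 0.6">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom fromto="0 0 0 0 0 -0.4"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="hinge" gear="1"/>   <!-- torque actuator on the hinge -->
  </actuator>
</mujoco>
"""

# The scene description compiles into a model (constant structure) and a data object
# (time-varying simulation state): the two core data structures mentioned above.
model = mujoco.MjModel.from_xml_string(PENDULUM_XML)
data = mujoco.MjData(model)

data.ctrl[:] = 0.2                     # constant torque on the single motor
for _ in range(1000):                  # 2 simulated seconds at the 0.002 s timestep
    mujoco.mj_step(model, data)        # full equations of motion, contacts included

print(f"t = {data.time:.2f} s, hinge angle = {data.qpos[0]:.3f} rad")
```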
16,002
2,021
"Intel open-sources AI-powered tool to spot bugs in code | VentureBeat"
"https://venturebeat.com/2021/10/20/intel-open-sources-ai-powered-tool-to-spots-bugs-in-code"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel open-sources AI-powered tool to spot bugs in code Share on Facebook Share on X Share on LinkedIn An Intel logo on a smartphone. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Intel today open-sourced ControlFlag , a tool that uses machine learning to detect problems in computer code — ideally to reduce the time required to debug apps and software. In tests, the company’s machine programming research team says that ControlFlag has found hundreds of defects in proprietary, “production-quality” software, demonstrating its usefulness. “Last year, ControlFlag identified a code anomaly in Client URL (cURL), a computer software project transferring data using various network protocols over one billion times a day,” Intel principal AI scientist Justin Gottschlich wrote in a blog post on LinkedIn. “Most recently, ControlFlag achieved state-of-the-art results by identifying hundreds of latent defects related to memory and potential system crash bugs in proprietary production-level software. In addition, ControlFlag found dozens of novel anomalies on several high-quality open-source software repositories.” The demand for quality code draws an ever-growing number of aspiring programmers to the profession. After years of study, they learn to translate abstracts into concrete, executable programs — but most spend the majority of their working hours not programming. A recent study found that the IT industry spent an estimated $2 trillion in 2020 in software development costs associated with debugging code, with an estimated 50% of IT budgets spent on debugging. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ControlFlag, which works with any programming language containing control structures (i.e., blocks of code that specify the flow of control in a program), aims to cut down on debugging work by leveraging unsupervised learning. With unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. The machine learning system — ControlFlag, in this case — must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure. ControlFlag continually learns from unlabeled source code, “evolving” to make itself better as new data is introduced. 
While it can’t yet automatically mitigate the programming defects it finds, the tool provides suggestions for potential corrections to developers, according to Gottschlich. “Intel is committed to making software more robust and less cumbersome to maintain while retaining excellent performance without introducing security vulnerabilities. We hope that projects like ControlFlag can substantially reduce the time it takes to develop software globally,” Gottschlich wrote. “Due to the overwhelming amount of time spent on debugging, even a small savings of time in this space could result in time and monetary savings and thereby allow us — as a community — to accelerate the advancement of technology.” AI-powered coding tools like ControlFlag, as well as platforms like Tabnine, Ponicode, Snyk, and DeepCode, have the potential to reduce costly interactions between developers, such as Q&A sessions and repetitive code review feedback. IBM and OpenAI are among the many companies investigating the potential of machine learning in the software development space. But studies have shown that AI has a ways to go before it can replace many of the manual tasks that human programmers perform on a regular basis. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,003
2,021
"Driverless delivery company Nuro nabs $600M and partners with Google | VentureBeat"
"https://venturebeat.com/2021/11/02/driverless-delivery-company-nuro-nabs-600m-and-partners-with-google"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Driverless delivery company Nuro nabs $600M and partners with Google Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Autonomous vehicle (AV) company Nuro has raised $600 million in a series D round of funding from high-profile investors, including Tiger Global Management and Google. This brings its valuation to $8.6 billion, according to sources, up significantly from its $5 billion valuation less than a year ago. Founded in 2016, Nuro is setting out to capitalize on shifting consumer expectations and trends that have been accelerated by the global pandemic, namely how they experience ecommerce. Customers in supported U.S. locales place their online orders directly with Nuro’s partner companies, and Nuro’s so-called “zero-occupant” autonomous delivery vehicles will transport anything from groceries to pharmacy prescriptions. Alongside Google’s sibling company Waymo , Nuro has emerged as one of the leading players in the commercial driverless car space — last year, it became the first such company to receive a permit from the California Department of Motor Vehicles (DMV) that enabled it to charge money for its driverless delivery service. “The arrival of ubiquitous on-demand ecommerce is changing the way we access goods,” Tiger Global partner Griffin Schroeder said in a press release. “Nuro is the bridge to an era of sustainable, low cost, autonomous local delivery.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Nuro in action The Google factor In addition to the funding, Nuro also revealed a five year “strategic partnership” with Google Cloud, which it said will support the “massive scale and capacity required to run self-driving simulation workloads,” as well as give Nuro access to machine learning smarts and storage for the vast amount of data generated by its vehicles. Interestingly, the duo also revealed that they would explore other commercial opportunities together to “strengthen and transform local commerce,” though they provided no further details on what exactly this might entail. Nuro had previously raised around $1.5 billion, and with another $600 million in the bank, the company is now well-financed to further develop and deploy its autonomous delivery service with some of the biggest brands across the U.S. Indeed, Nuro has amassed a fairly impressive roster of customers including Kroger , Domino’s , Walmart , and CVS. 
Aside from lead backer Tiger Global Management and Google, Nuro’s series D round ushered in a slew of notable investors, including Kroger and SoftBank Vision Fund 1. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,004
2,021
"Nvidia debuts ReOpt to optimize supply chain routing with AI | VentureBeat"
"https://venturebeat.com/2021/11/09/nvidia-debuts-reopt-to-optimize-supply-chain-routing-with-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia debuts ReOpt to optimize supply chain routing with AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During a keynote address at its fall 2021 GPU Technology Conference (GTC) , Nvidia debuted ReOpt, a software package that combines local search heuristics algorithms and “metaheuristics” to optimize vehicle route planning and distribution. According to the company, ReOpt can improve route planning, warehouse picking, fleet management, and more in logistics to control delivery costs from factories to stores and homes. Companies are increasingly facing supply chain challenges caused — or exacerbated — by the pandemic. A U.S. Census Bureau survey f0und that 38.8% of U.S. small businesses were experiencing domestic supplier delays by the middle of July 2021. Late deliveries can seriously impact customer loyalty, with one survey finding that 80% of shoppers would cut ties with brands if they experienced stock shortages. “At a time when the global supply chain faces massive disruption, ReOpt provides the AI software required for everything from vehicle routing for last-mile delivery to efficiently picking and packing of warehoused goods bound for homes and offices,” Nvidia software engineering manager Alex Fender said in a blog post. “ReOpt delivers new tools for dynamic logistics and supply chain management to a wide range of industries, including transportation, warehousing, manufacturing, retail, and quick-service restaurants.” AI-powered logistics Delivering goods directly to a customer’s door, called last-mile delivery, was costly even before the pandemic disrupted the global supply chain network. Over half of all air, express, rail, maritime, and truck transport shipping costs result from last-mile deliveries, impacting profitability, according to ABI Research. Onfleet estimates that companies typically eat about 25% of that cost themselves — a number that continues to increase as bottlenecks worsen. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ReOpt, which is now available in early access, taps algorithms to provide customers with road condition, traffic, and route metrics to reduce miles, fuel cost, carbon emissions, and idle time. The service models the movements of vehicles that have finite capacities and different costs, factoring in items like fresh produce that must be carried by refrigerated trucks. 
ReOpt also allows customers to create automated routines that dynamically route robots for truck loading as new orders arrive. And it can take into account the number of pilots, drivers, and workers available to operate vehicles on a given day, folding in maintenance costs. “GPUs offer the computational power needed to fuel the most ambitious heuristics while supporting the most challenging constraints. ReOpt takes advantage of Nvidia’s massively parallel architecture to generate thousands of solution candidates and refine them to select only the best one at the end,” Fender continued. “As a result, ReOpt can scale to the largest problems in seconds with world-class accuracy.” A growing number of companies are developing AI services to optimize components of the supply chain. DispatchTrack provides AI-powered route optimization, reservations, billing and settlement, and omnichannel order tracking tools. Locus is also developing a platform for logistics and “enterprise-scale” supply chain automation. Others in the global logistics market — which is expected to grow to $12.68 billion in value by 2023, according to Research and Markets — are Convoy , Optimal Dynamics , KeepTruckin , and Next Trucking , which have collectively raised hundreds of millions in venture capital. Tech giants have entered the fray, too — most recently Microsoft with its Supply Chain Insights product. Uber’s eponymous Uber Freight connects carriers and drivers with companies that need to move cargo. As for Google’s Supply Chain Twin, which became generally available in September, it organizes data in Google Cloud to expose a more complete view of suppliers, inventories, and events like weather. While only 12% of manufacturing and transportation organizations are currently using AI in their supply chain operations, 60% expect to be doing so within the next four years, according to MHI. This dovetails with a recent PwC report , which found that 48% of companies are ramping up investments for simulation modeling and supply chain resilience. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,005
2,021
"Nvidia unveils AI Omniverse Avatars for the virtual world | VentureBeat"
"https://venturebeat.com/2021/11/09/nvidia-unveils-ai-omniverse-avatars-for-the-virtual-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia unveils AI Omniverse Avatars for the virtual world Share on Facebook Share on X Share on LinkedIn Jensen Huang, CEO of Nvidia, introduces Omniverse Avatar. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia today unveiled Omniverse Avatar, a platform for generating immersive AI-driven avatars. The platform enables users to leverage speech AI, computer vision, natural language understanding, and simulation to create avatars that recognize speech and communicate with human users within real-world simulation and collaboration platforms like Nvidia Omniverse, and other digital worlds. Avatars or AI assistants created with the solution would be able to see and speak on a wide range of subjects and conduct customer service interactions, Nvidia said. The actions will include making personal appointments and reservations to ordering food from restaurants and completing banking transactions. The release of Omniverse Avatar will provide marketers with a solution that they can use to interact with customers in virtual worlds, and simulation platforms like Nvidia Omniverse, where users can deploy the avatars to facilitate personalized customer service interactions with consumers, and enhance customer satisfaction. “The dawn of intelligent virtual assistants has arrived,” said Jensen Huang, founder and CEO of Nvidia. “Omniverse Avatar combines Nvidia’s foundational graphics, simulation, and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far-reaching.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Nvidia enters the AI avatars race The announcement of Omniverse Avatar also means that Nvidia has entered into the AI avatars arms race, competing against other established digital assistant or avatar providers, including Deepbrain, Soul Machines, and AI Foundation, which are also trying to create engaging virtual characters. However, Omniverse Avatar has the edge over many competitors due to its integration with the Nvidia Omniverse, filled with more than 70,000 individual creators. Now 700 companies, from BMW Group to Epigraph, Ericsson, and Sony Pictures Animation, have access to immersive AI avatars to drive digital experiences in the Omniverse. 
At the same time, these avatars are well-positioned to provide meaningful interactions in the Omniverse due to their use of the Megatron 530B Large Language Model, a pre-trained model that gives avatars the ability to recognize, understand, and generate human language. This language model enables avatars to answer questions on a wide range of subjects, giving them the ability to summarize complex stories into short formats, recognize speech in multiple languages, and then translate it, so they can provide information that human users wouldn’t have access to. Immersive AI in action As part of Huang’s keynote address at Nvidia’s GTC event , he highlighted the capabilities of Nvidia AI software and Nvidia’s generated language model Megatron-Turing NLG 530B during two demonstrations. During the first demonstration, Huang began having a real-time conversation with a digitized, toy version of himself, where he discussed topics from health care diagnosis to climate science. Then, Huang demoed a customer-service avatar working in a restaurant kiosk that could see and communicate with two customers as they ordered vegetarian burgers, fries, and drinks. Many other providers like Soul Machines have attempted to create humanlike avatars and struggled to mitigate the uncanny valley effect, where realistic avatars provoke a sense of uneasiness in users. Nvidia avoided this by embracing a lighthearted cartoonish aesthetic that’s unlikely to unsettle users in the Omniverse. Yet, while the initial demos in the keynote demonstration look promising, it remains to be seen how engaging Nvidia Omniverse Avatars are when they’re in a real customer-facing environment in a virtual world, working with human beings who have high expectations. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,006
2,021
"Chip developer Cerebras bolsters AI-powered workload capabilities with $250M | VentureBeat"
"https://venturebeat.com/2021/11/10/chip-developer-cerebas-bolsters-ai-powered-workload-capabilities-with-250m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Chip developer Cerebras bolsters AI-powered workload capabilities with $250M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cerebras Systems , the California-based company that has built a “brain-scale” chip to power AI models with 120 trillion parameters, said today it has raised $250 million funding at a valuation of over $4 billion. Cerebras claims its technology significantly accelerates the time involved in today’s AI work processes at a fraction of the power and space. It also claims its innovations will support the multi-trillion parameter AI models of the future. In a press release , the company stated that this additional capital will enable it to further expand its business globally and deploy its industry-leading CS-2 system to new customers, while continuing to bolster its leadership in AI compute. Cerebras’ cofounder and CEO Andrew Feldman noted that the new funding will allow Cerebras to extend its leadership to new regions. Feldman believes this will aid the company’s mission to democratize AI and usher in what it calls “the next era of high-performance AI compute” — an era where the company claims its technology will help to solve today’s most urgent societal challenges across drug discovery, climate change, and much more. Redefining AI-powered possibilities “Cerebras Systems is redefining what is possible with AI and has demonstrated best in class performance in accelerating the pace of innovation across pharma and life sciences, scientific research, and several other fields,” said Rick Gerson, cofounder, chairman, and chief investment officer at Falcon Edge Capital and Alpha Wave. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We are proud to partner with Andrew and the Cerebras team to support their mission of bringing high-performance AI compute to new markets and regions around the world,” he added. Cerebras’ CS-2 system, powered by the Wafer Scale Engine (WSE-2) — the largest chip ever made and the fastest AI processor to date — is purpose-built for AI work. Feldman told VentureBeat in an interview that in April of this year, the company more than doubled the capacity of the chip, bringing it up to 2.6 trillion transistors, 850,000 AI-optimized cores, 40GBs on-chip memory, 20PBs memory bandwidth, and 220 petabits fabric bandwidth. He noted that for AI work, big chips process information more quickly and produce answers in less time. 
With only 54 billion transistors, the largest graphics processing unit pales in comparison to the WSE-2, which has 2.55 trillion more transistors. With 56 times more chip size, 123 times more AI-optimized cores, 1,000 times more high-performance on-chip memory, 12,733 times more memory bandwidth, and 45,833 times more fabric bandwidth than other graphic processing unit competitors, the WSE-2 makes the CS-2 system the fastest in the industry. The company says its software is easy to deploy, and enables customers to use existing models, tools, and flows without modification. It also allows customers to write new ML models in standard open source frameworks. New customers Cerebras says its CS-2 system is delivering a massive leap forward for customers across pharma and life sciences, oil and gas, defense, supercomputing centers, national labs, and other industries. The company announced new customers including Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing center at the University of Edinburgh, Tokyo Electron Devices, GlaxoSmithKline, and AstraZeneca. The series F investment round was spearheaded by Alpha Wave Ventures, a global growth stage Falcon Edge-Chimera partnership, along with Abu Dhabi Growth (ADG). Alpha Wave Ventures and ADG join a group of strategic world-class investors including Altimeter Capital, Benchmark Capital, Coatue Management, Eclipse Ventures, Moore Strategic Ventures, and VY Capital. Cerebras has now expanded its frontiers beyond the U.S., with new offices in Tokyo, Japan, and Toronto, Canada. On the back of this funding, the company says it will keep up with its engineering work, expand its engineering force, and hunt for talents all over the world going into 2022. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
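For readers who want to sanity-check the comparison, the arithmetic below uses only the figures quoted in this article: 2.6 trillion transistors for the WSE-2 against 54 billion for the largest GPU.

```python
# Quick consistency check using only the transistor counts quoted in this article.
wse2_transistors = 2.6e12    # 2.6 trillion (WSE-2)
gpu_transistors = 54e9       # 54 billion (largest graphics processing unit)

extra = wse2_transistors - gpu_transistors
ratio = wse2_transistors / gpu_transistors
print(f"{extra / 1e12:.2f} trillion more transistors")   # ~2.55 trillion, as stated
print(f"about {ratio:.0f}x the transistor count")        # roughly 48x
```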
16,007
2,021
"Lusha nabs $205M to expand its crowdsourced database of sales prospects | VentureBeat"
"https://venturebeat.com/2021/11/10/lusha-nabs-205m-to-expand-its-crowdsourced-database-of-sales-prospects"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lusha nabs $205M to expand its crowdsourced database of sales prospects Share on Facebook Share on X Share on LinkedIn (GERMANY OUT) Behinderte, Mann im Rollstuhl arbeitet am Computer in einem Buero, handicapped person in a wheel chair working in the office (Photo by Wodicka/ullstein bild via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Lusha, a B2B prospecting and contact data platform, today announced that it raised $205 million at a $1.5 billion post-money valuation. The round, which was led by PSG with participation from ION Crossover Partners, brings Lusha’s total raised to date to $245 million following a $40 million series A in February. It’s estimated that salespeople spend around 65% of their time on non-sales and administrative activities. Perhaps unsurprisingly, more than 40% of salespeople say that prospecting — reaching out to potential customers — is the most challenging part of the sales process, according to HubSpot. It takes an average of 18 calls to actually connect with a buyer, and only 24% of sales emails are ultimately opened by the intended recipient. Additionally, only 19% of buyers want to connect with a salesperson during the awareness stage of their buying process, when they’re first learning about the product that’s being sold. Assaf Eisenstein and Yoni Tserruya founded New York-based Lusha in 2016 in an attempt to ease the barrier to entry for salespeople. The startup’s cloud-based platform provides marketers and teams with access to tools designed to help identify buyers and gain “data-driven” insights on whom they approach — and when to approach them. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tserruya was previously an iOS developer at AT&T, where he created Lusha as a side project. Eisenstein is Lusha’s original marketer and salesperson and helped to spread the word about Lusha. “Similar to the shift that marketing underwent a decade ago, sales professionals are abandoning spray and pray outreach, in favor of super-targeted selling based on data,” Tserruya said in a statement. “Lusha enables all salespeople to utilize data to recognize their most relevant opportunities and maximize revenue in a simple, easy-to-use solution. 
We look forward to using this funding to be at the forefront of this industry shift and grow Lusha into the largest business-to-business sales community.” Sales community Sales teams are increasingly embracing data-driven approaches as the pandemic increases online interactions. According to a Salesforce report , 50% of teams leverage data to produce sales forecasts, while 34% supplement predictions based on intuition with data-driven insights. The same report found that the top 24% of teams are 1.5 times more likely to base forecasts on data-driven insights, while underperforming sales teams are 1.7 times more likely to forecast on intuition. Lusha offers a sortable, filterable database of customer contact information maintained by a community of nearly 80,000 salespeople across 273,000 sales organizations, including teams at Facebook, Google, Dropbox, and Uber. The platform currently hosts 100 million business and 15 million company profiles, as well as 60 million email addresses and 50 million direct dials. The platform integrates with existing customer relationship management systems like Salesforce and Outreach, and it offers a browser extension to find, target, and connect with additional prospects on LinkedIn and elsewhere on the web. Via Lusha’s API, sales organizations can update their systems with details like company name, size, industry, website, and graphics. An algorithm cross-checks data from different sources and combines data points to create single business contacts and company profiles. “The Lusha community are members that have decided to share their contact details under the terms of the Lusha community program,” Lusha explains on its website. “We [also] receive contact details from end-users of our affiliates … [and we] license information from business partners who own established and trustworthy directories … [Finally, ] our proprietary algorithm scans publicly available sources and retrieves public information with advanced tools.” Lusha says that it reviews each affiliate’s security and signs documentation to ensure that the data provided to it is compliant with privacy laws like the EU’s GDPR and the California Consumer Privacy Act. Companies and contacts who wish to remove themselves from Lusha’s database can fill out a form or request information about how the platform collects and uses their data. “We have established a U.S. sales office strategically located in Boston … in order to serve as the company’s North American arm. Eisenstein has recently moved to Boston to head up our expansion efforts in the U.S. market,” Tserruya told VentureBeat via email. “We [are also continuing to] develop and improve our product offerings and usability to provide organizations of all sizes instant access to accurate data and company insights they need in order to optimize their sales outreach. Our unique … approach makes it easy for users to sign up, start using Lusha and see immediate value. This is contrary to industry standards that require lengthy sales qualification processes, multiple sales touchpoints, and big bottom lines.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
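The cross-checking and combining of data points described in the Lusha piece above is, at heart, an entity-resolution problem. The toy sketch below (a hypothetical illustration, not Lusha's proprietary algorithm) keys contact records from different sources by a normalized email address and keeps the most complete value seen for each field.

```python
# Toy record-merging sketch for building a single contact profile from multiple
# sources (illustrative only, not Lusha's algorithm): key by normalized email,
# keep the most complete value observed per field.
from collections import defaultdict

def normalize_email(email):
    return email.strip().lower()

def merge_records(records):
    """records: list of dicts, each with at least an 'email' field."""
    merged = defaultdict(dict)
    for rec in records:
        key = normalize_email(rec["email"])
        for field, value in rec.items():
            current = merged[key].get(field, "")
            if value and len(str(value)) > len(str(current)):
                merged[key][field] = value
    return dict(merged)

sources = [
    {"email": "Jane@Example.com", "name": "Jane", "company": ""},
    {"email": "jane@example.com", "name": "Jane Doe", "company": "Example Inc.", "title": "VP Sales"},
]
print(merge_records(sources))
```

Real pipelines add fuzzy matching on names and companies, source-reliability weighting, and freshness rules, but the merge-by-key idea is the same.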
16,008
2,021
"DeepMind claims AI has aided new discoveries and insights in mathematics | VentureBeat"
"https://venturebeat.com/2021/12/01/deepmind-claims-ai-has-aided-new-discoveries-and-insights-in-mathematics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind claims AI has aided new discoveries and insights in mathematics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. DeepMind, the AI research laboratory funded by Google’s parent company, Alphabet, today published the results of a collaboration between it and mathematicians to apply AI toward discovering new insights in areas of mathematics. DeepMind claims that its AI technology helped to uncover a new formula for a previously-unsolved conjecture, as well as a connection between different areas of mathematics elucidated by studying the structure of knots. DeepMind’s experiments with AI run the gamut from systems that can win at StarCraft II and Go to machine learning models for app recommendations and datacenter cooling optimization. But the sciences remain of principle interest to DeepMind, not least of which because of their commercial applications. Earlier this year, DeepMind cofounder Demis Hassabis announced the launch of Isomorphic Labs, which will use machine learning to identify disease treatments that have thus far eluded researchers. Separately, the lab has spotlighted its work in the fields of weather forecasting , materials modeling , and atomic energy computation. “At DeepMind, we believe that AI techniques are already sufficient to have a foundational impact in accelerating scientific progress across many different disciplines,” DeepMind machine learning specialist Alex Davies said in a statement. “Pure maths is one example of such a discipline, and we hope that [our work] can inspire other researchers to consider the potential for AI as a useful tool in the field.” Applying AI to mathematics DeepMind isn’t the first to apply AI to mathematics, setting aside the fact that mathematics is the foundation of all AI systems. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In 2020, Microsoft-backed AI research lab OpenAI introduced GPT-f, an automated prover and proof assistant for the Metamath formalization language. (In mathematics, a “proof” refers to a logical argument that tries to show that a statement is true.) GPT-f found new proofs that were accepted into a mathematics community, which the researchers claimed at the time was a historic achievement. 
More recently, a group of researchers from the Technion in Israel and Google presented an automated conjecturing system called the Ramanujan Machine, which came up with original formulas for universal constants that show up in mathematics. One of the formulas created by the machine can be used to compute the value of a constant called Catalan’s number more efficiently than any human-discovered formula. What ostensibly sets DeepMind’s work apart, however, is its detection of the existence of patterns in mathematics with supervised learning — and giving insight into these patterns with attribution techniques from AI. Supervised learning is defined by its use of labeled datasets to train algorithms to classify data, predict outcomes, and more, and it’s been applied to domains including fraud detection, sales forecasting, and inventory optimization. In a paper published in the journal Nature, DeepMind describes how it — alongside professor Geordie Williamson at the University of Sydney — used AI to help discover a new approach to a longstanding conjecture in representation theory. Defying progress for nearly 40 years, the combinatorial invariance conjecture states that a relationship should exist between certain directed graphs and polynomials. (A directed graph is a set of vertices connected by edges, with each node having a direction associated with it.) Using machine learning techniques, DeepMind was able to gain confidence that such a relationship does indeed exist and to hypothesize that it might be related to structures known as “broken dihedral intervals” and “external reflections.” With this knowledge, professor Williamson was able to create an algorithm that would solve the combinatorial invariance conjecture, which DeepMind computationally verified across more than 3 million examples. “One might imagine that the work of a mathematician is dry and formulaic. The reality is completely different. Mathematicians inhabit a world rich in imagination, heuristics, and intuition,” Williamson said in a statement. “Often finding the right way to think about something, even if imprecise, is more useful than another long calculation. It has been a fascinating interdisciplinary journey with the teams at DeepMind and Oxford. We have seen that machine learning can be used to guide intuition, and eventually to prove new theorems.” The paper also details DeepMind’s work with professor Marc Lackenby and professor András Juhász at the University of Oxford, which explored knots — one of the fundamental objects of study in topology (i.e., the mathematical study of the properties that are preserved through deformations, twistings, and stretchings). An AI system trained by DeepMind revealed that a particular algebraic quantity — the “signature” — was directly related to the geometry of a knot, which wasn’t previously known or suggested by an existing theory. The lab guided professor Lackenby to discover a new quantity — “natural slope” — and prove the exact nature of the relationship by using attribution techniques from machine learning, establishing connections between different branches of mathematics. As DeepMind notes, knots not only show the many ways a rope can be tangled, but also have connections with quantum field theory and non-Euclidean geometry. Algebra, geometry, and quantum theory all share unique perspectives on these objects, and a longstanding mystery is how these different branches relate. 
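The supervised-learning-plus-attribution workflow described above can be sketched generically: train a model to predict a target quantity from candidate attributes, then ask an attribution method which attributes the model actually leans on, and hand those hints back to the mathematician. The example below runs that loop on synthetic data using permutation importance; it stands in for the methodology only and involves no real knot invariants, representation-theory data, or DeepMind code.

```python
# Schematic supervised-learning + attribution loop on synthetic data (methodology
# illustration only; not DeepMind's models and not real mathematical invariants).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
candidates = {
    "attr_a": rng.normal(size=n),
    "attr_b": rng.normal(size=n),
    "attr_c": rng.normal(size=n),
}
X = np.column_stack(list(candidates.values()))
# Hidden relationship: the target depends on attr_a and attr_c only.
y = 3.0 * candidates["attr_a"] - 2.0 * candidates["attr_c"] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(candidates, importances.importances_mean):
    print(f"{name}: {score:.3f}")   # attr_a and attr_c should dominate, hinting at the relationship
```

In the collaborations described here, the analogous step was spotting that a model could predict one quantity from another at all, then using attribution to suggest which inputs mattered, which the mathematicians turned into conjectures and proofs.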
Promise for discovery DeepMind believes that the Nature paper, along with yet-to-be-released companion papers for each result, demonstrates the usefulness of machine learning as a tool for mathematical study. AI excels at identifying and discovering patterns in data, the lab asserts, even exceeding the capabilities of expert human mathematicians. “Finding patterns has become even more important in pure mathematics because it’s now possible to generate more data than any mathematician can reasonably expect to study in a lifetime. Some objects of interest — such as those with thousands of dimensions — can also simply be too unfathomable to reason about directly. With these constraints in mind, we believed that AI would be capable of augmenting mathematicians’ insights in entirely new ways,” DeepMind wrote in a blog post. Queen Mary University professor of computational creativity Simon Colton, who wasn’t involved in the research, said that this is likely the first time deep learning techniques have been used for mathematical discovery. But he questioned whether mathematicians would want machine learning systems to take the creative lead in projects. “When I was working with mathematicians, it was clear that they were happy for AI systems to prove minor things like lemmas and side conditions, etc., and to do huge calculations as per computer algebra systems. However, they were not happy for an AI system to prove important results (especially if they couldn’t understand the proof), or to perform concept invention, as this was the creative part of the job they loved the most,” Colton told VentureBeat via email. “With notable exceptions, the vast majority of theorems in pure mathematics are as useful to society as a painting by an amateur, i.e., only of interest to a small clique of people. So, it’s not safety-critical to the progression of society or general well-being to have AI systems involved in pure mathematics (like it is for protein folding, another area that DeepMind has innovated in).” Still, Colton expects that the broader adoption of AI systems in pure mathematics — assuming it occurs — will lead to interesting discoveries “that may be beyond human comprehension.” “We may therefore find a limit on what mathematicians can verify and what they want AI systems to do,” he continued. “It’s great that DeepMind [is] getting into this area and working with top mathematicians, as I’m sure there will be more breakthroughs in pure maths following.” "
16,009
2,021
"Smartling lands $160M to help companies translate their content automatically | VentureBeat"
"https://venturebeat.com/2021/12/02/smartling-lands-160m-to-help-companies-translate-their-content-automatically"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Smartling lands $160M to help companies translate their content automatically Share on Facebook Share on X Share on LinkedIn Machine translation Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As the pandemic drives businesses online, translation is becoming a key part of the digital transformation equation. One large-scale behavioral study in 2014 showed that 75% of consumers are more likely to buy products from websites in their native language. Another , this one by Localize, found businesses that invested in translation were 1.5 times more likely to observe an increase in revenue. Against this backdrop, Smartling , a self-described cloud translation company, has raised $160 million in a venture capital round led by Battery Ventures. CEO Jack Welde says that the proceeds will be used to expand Smartling’s headcount while supporting product development and marketing efforts. Smartling, which was founded in 2009 by Welde and Andrey Akselrod, is a language translation company that enables customers to localize content across devices and platforms. Smartling leverages a combination of AI-powered translation tools and human translators, localizing content for particular markets to ensure its intended meaning and connotation remains the same — and isn’t misunderstood. “Two truisms have emerged about today’s enterprises: all business is global, and content drives global business,” Welde said in a statement. “The third leg of that stool is translation, since nearly all customers want to buy in their own language. This is a tremendous opportunity, and we are excited to realize it together with Battery.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Localizing content Smartling uses automation to quickly translate content into different languages. New content on customer sites and apps is flagged for translation and sent to translators for rewriting; when changes to the original language are detected, all foreign-language versions are flagged for translation. The changes are then delivered to front-end users through the backend of a customer’s system, independent of app updates. In 2017, Smartling launched a mobile delivery network designed to deliver updated translations to smartphone apps decoupled from updates to the app’s core code. The idea is that international users see translated content faster without having to update the app beforehand. 
Instead, updates and new translations can appear as soon as the user opens the app. Smartling claims to work with a few thousand translators in addition to an in-house staff of about 160 to provide text and video translation services. Translators perform their work in a computer-aided translation tool, which is followed by a translation review, legal review, and editing process. Smartling relies partially on machine translation — i.e., AI-powered translation — for “high volume, low priority” content translations. While generally accurate, studies show that machine translation systems can produce text that’s less “lexically” rich than human translations — and more prejudicial. Google recently identified (and claims to have addressed) gender bias in the translation models underpinning Google Translate, particularly with regard to resource-poor languages like Turkish, Finnish, Persian, and Hungarian. We’ve reached out to Smartling for more information about its AI bias mitigation efforts and will update this article if we hear back. Opportunity for growth In a boon for Smartling, the market for online translation services continues to climb steeply upward. According to Statista, it’s doubled in size from 2009 to 2019, reaching $49.6 billion two years ago. The machine translation market alone could be worth $230.67 million in the next five years, Mordor Intelligence projects, growing at a compound annual growth rate of 7.1% from 2021 to 2026. Underlining the opportunity, over 361 million people around the world participate in cross-border ecommerce. A study by Standard Chartered found that nearly half of U.S. businesses today say that their best growth opportunity is outside the U.S. And according to Common Sense Advisory, 76% of buyers say that they prefer to purchase a product with information in their own language when faced with the choice of two similar products. Smartling has competition in Language I/O, a startup providing AI technologies for real-time, company-specific language translations. Unbabel and Lilt are among other rivals in the segment. But Smartling has managed to attract “hundreds” of large business-to-business and business-to-consumer customers including InterContinental, Pinterest, Shopify, and SurveyMonkey. “Enterprises generally succeed on the strength of technology, supply chains, people, and workflows. As content has become essential to go-to-market strategies, content distribution has in some sense become its own supply chain and workflow. And, as that content supply chain extends globally, translation at scale has become the critical last mile for global enterprise growth,” Battery Ventures general partner Morad Elhafed said in a press release. Smartling has raised over $220 million in venture capital to date. "
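The localization workflow described in the article above, where changed source content is detected and every foreign-language version is flagged for retranslation, can be approximated with a simple content-hash check. This is a minimal sketch, not Smartling's implementation; the data structure and function names are invented for the example.

```python
# Toy sketch of flagging changed source strings for retranslation: hash each
# source string, and when the hash changes, mark all target-language versions
# of that string as stale so translators can refresh them.
import hashlib

translation_memory = {}  # key -> {"hash": str, "translations": {lang: text}, "stale": set}

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def upsert_source(key: str, source_text: str, languages=("de", "fr", "ja")):
    """Register or update a source string; flag translations when it changes."""
    h = content_hash(source_text)
    entry = translation_memory.get(key)
    if entry is None:
        translation_memory[key] = {"hash": h, "translations": {}, "stale": set(languages)}
    elif entry["hash"] != h:
        entry["hash"] = h
        entry["stale"] = set(languages)  # every language needs retranslation

def record_translation(key: str, lang: str, translated_text: str):
    entry = translation_memory[key]
    entry["translations"][lang] = translated_text
    entry["stale"].discard(lang)

upsert_source("home.banner", "Welcome to our store")
record_translation("home.banner", "de", "Willkommen in unserem Shop")
upsert_source("home.banner", "Welcome to our new store")  # source changed
print(translation_memory["home.banner"]["stale"])  # all languages flagged again
```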
16,010
2,021
"Automated accounts payable platform Tipalti raises $270M | VentureBeat"
"https://venturebeat.com/2021/12/08/automated-accounts-payable-platform-tipalti-raises-270m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Automated accounts payable platform Tipalti raises $270M Share on Facebook Share on X Share on LinkedIn Tipalti's office Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tipalti , a platform used by major enterprises to automate common accounts payable tasks, has raised $270 million in a series F round of funding, valuing the company at a cool $8.3 billion. Accounts payable (AP) refers to any money owed by a company to its various suppliers. Processing and reviewing all internal and external transactions (e.g., paying invoices and reimbursing expenses), ensuring that all liabilities are met, is a resource-intensive process — one that typically requires a lot of manual data capture and management across various internal systems. The accounts payable software market was pegged as a $8.77 billion market last year , a figure that’s predicted to more than double within seven years. And with Tipalti’s valuation more than quadrupling from the $2 billion at its previous fundraise last year, this serves to underscore the size of the pie Tipalti is chasing. Cofounder and CEO Chen Amit said that Tipalti’s target market constitutes nearly 700,000 companies, with only 4% of that currently penetrated. “The addressable market is large, as solutions for payables and finance operations are not widely adopted, and none are as integrated as Tipalti’s approach,” Amit told VentureBeat. “Many organizations still struggle with manual processes. The pandemic’s impact on remote work and scalability issues also accelerated the need to turn finance processes into digital workflows — a trend that will not be reversible.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Payments automation Founded in 2010, Tipalti offers tools that enable companies such as Twitter, GoDaddy, and Twitch to automate most of their AP tasks, spanning invoice management, supplier management, a purchase order (PO) matching, payment reconciliation, tax compliance, fraud detection, and more. With invoices, for example, suppliers can upload their bills either through Tipalti’s portal or by email, and track the progress online. On the AP (i.e., payer’s) side, optical character recognition (OCR) serves to remove manual data entry, so that the details within all invoices are automatically extracted ready for review. This automated workflow includes various smarts such as duplicate invoice alerts, which help ensure that a company doesn’t inadvertently pay the same invoice twice. 
And Tipalti also leans on machine learning (ML) to improve over time, so that if it detects frequent manual data overrides carried out by someone in AP, it will apply similar logic to future invoices. Elsewhere, Tipalti also uses historical and real-time data to carry out risk checks on payees — this includes establishing whether they are connected in any way to other blocked payees, for example, or whether there are multiple different accounts with the same associated payment or contact details. High-velocity enterprises Automation is playing an increasingly big role in the financial services and software sphere, with countless companies getting in on the act. Back in October, Stripe acquired Recko, a platform that automates the payments reconciliation process by comparing internal accounting records against external bank statements to ensure there are no discrepancies. And in the past year, we’ve seen businesses such as automated spend-management platform Ramp raise gargantuan sums at billion-dollar valuations. Tipalti, for its part, had raised some $295 million before now, including its $150 million series E round last October. Today, the San Mateo, California-based company claims that it’s processing more than $28 billion in annual payments for two thousand-plus customers, representing 100% year-on-year growth. According to Amit, Tipalti’s focus is more on fast-growing, “high-velocity” enterprises, because they don’t want or have the kind of expenses and resources that larger enterprises typically consume on maintaining complex architectures, often constituting a mix of custom integrations and IT outlays. “The key challenge our customers face is that they themselves would rather focus on something else — the product, sales, customer experience, and so on — than on the back-office and suppliers,” Amit explained. “And the back office must keep up with and enable the front office’s growth goals. They’re more modern in thinking, and adopt best-in-class, highly scalable solutions that don’t require a lot of maintenance.” With another $270 million in the bank from backers including lead investor G Squared and funds managed by Morgan Stanley’s Counterpoint Global, the company is well-positioned to “accelerate its product roadmap” and global expansion plans. This will include rolling out new ways to manage spending through a corporate credit card, as well as a feature that will help teams “use invoices as a point of social engagement,” according to Amit. This will be less about morphing into a social network than it will be about making it easier to glean answers from across an organization around “specific areas of spend.” Looking further into the future, Amit said that the company plans to look beyond accounts payable. “We’ll be developing more product offerings that improve finance operations even more — right now, we’re focused on accounts payable as it is the least efficient process in finance, but we’re also expanding into other areas with the same approach,” Amit explained. 
"
16,011
2,021
"DeepMind makes bet on AI system that can play poker, chess, Go, and more | VentureBeat"
"https://venturebeat.com/2021/12/08/deepmind-makes-bet-on-ai-system-that-can-play-poker-chess-go-and-more"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind makes bet on AI system that can play poker, chess, Go, and more Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. DeepMind, the AI lab backed by Google parent company Alphabet, has long invested in game-playing AI systems. It’s the lab’s philosophy that games, while lacking an obvious commercial application, are uniquely relevant challenges of cognitive and reasoning capabilities. This makes them useful benchmarks of AI progress. In recent decades, games have given rise to the kind of self-learning AI that powers computer vision, self-driving cars, and natural language processing. In a continuation of its work, DeepMind has created a system called Player of Games, which the company first revealed in a research paper published on the preprint server Arxiv.org this week. Unlike the other game-playing systems DeepMind developed previously, like the chess-winning AlphaZero and StarCraft II-besting AlphaStar, Player of Games can perform well at both perfect information games (e.g., the Chinese board game Go and chess) as well as imperfect information games (e.g., poker). Tasks like route planning around congestion, contract negotiations, and even interacting with customers all involve compromise and consideration of how people’s preferences coincide and conflict, as in games. Even when AI systems are self-interested, they might stand to gain by coordinating, cooperating, and interacting among groups of people or organizations. Systems like Player of Games, then, which can reason about others’ goals and motivations, could pave the way for AI that can successfully work with others — including handling questions that arise around maintaining trust. Imperfect versus perfect Games of imperfect information have information that’s hidden from players during the game. By contrast, perfect information games show all information at the start. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Perfect information games require a decent amount of forethought and planning to play well. Players have to process what they see on the board and determine what their opponents are likely to do while working toward the ultimate goal of winning. 
On the other hand, imperfect information games require players to take the hidden information into account and figure out how they should act next in order to win — including potentially bluffing or teaming up against an opponent. Systems like AlphaZero excel at perfect information games like chess, while algorithms like DeepStack and Libratus perform remarkably well at imperfect information games like poker. But DeepMind claims that Player of Games is the first “general and sound search algorithm” to achieve strong performance across both perfect and imperfect information games. “[Player of Games] learns to play [games] from scratch, simply by repeatedly playing the game in self-play,” DeepMind senior research scientist Martin Schmid, one of the co-creators of Player of Games, told VentureBeat via email. “This is a step towards generality — Player of Games is able to play both perfect and imperfect information games, while trading away some strength in performance. AlphaZero is stronger than Player of Games in perfect information games, but [it’s] not designed for imperfect information games.” While Player of Games is extremely generalizable, it can’t play just any game. Schmid says that the system needs to think about all the possible perspectives of each player given an in-game situation. While there’s only a single perspective in perfect information games, there can be many such perspectives in imperfect information games — for example, around 2,000 for poker. Moreover, unlike MuZero, DeepMind’s successor to AlphaZero, Player of Games also needs knowledge of the rules of the game it’s playing. MuZero can pick up the rules of perfect information games on the fly. In its research, DeepMind evaluated Player of Games — trained using Google’s TPUv4 accelerator chipsets — on chess, Go, Texas Hold’Em, and the strategy board game Scotland Yard. For Go, it set up a 200-game tournament between AlphaZero and Player of Games and also pitted the system against the top Go programs GnuGo and Pachi, while for chess, DeepMind pitted Player of Games against Stockfish as well as AlphaZero. Player of Games’ Texas Hold’Em match was played with the openly available Slumbot, and the algorithm played Scotland Yard against a bot developed by Joseph Antonius Maria Nijssen that the DeepMind coauthors nicknamed “PimBot.” Above: An abstracted view of Scotland Yard, which Player of Games can win consistently. In chess and Go, Player of Games proved to be stronger than Stockfish and Pachi in certain — but not all — configurations, and it won 0.5% of its games against the strongest AlphaZero agent. Despite the steep losses against AlphaZero, DeepMind believes that Player of Games was performing at the level of “a top human amateur,” and possibly even at the professional level. Player of Games was a better poker and Scotland Yard player. Against Slumbot, the algorithm won on average by 7 milli big blinds per hand (mbb/hand), where a mbb/hand is the average number of big blinds won per 1,000 hands. (A big blind is equal to the minimum bet.) Meanwhile, in Scotland Yard, DeepMind reports that Player of Games won “significantly” against PimBot, even when PimBot was given more opportunities to search for the winning moves. 
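For readers unfamiliar with the poker metric, the 7 mbb/hand margin quoted above converts directly from raw results: a milli-big-blind per hand is one thousandth of a big blind won per hand, or equivalently one big blind per 1,000 hands. The helper below is illustrative only and is not taken from DeepMind's evaluation code.

```python
# Convert raw poker results into the mbb/hand margin used to report the
# Slumbot match (illustrative helper, not DeepMind's evaluation code).
def mbb_per_hand(total_big_blinds_won: float, hands_played: int) -> float:
    return 1000.0 * total_big_blinds_won / hands_played

# Winning 7,000 big blinds over 1,000,000 hands is a 7 mbb/hand edge.
print(mbb_per_hand(total_big_blinds_won=7_000, hands_played=1_000_000))  # 7.0
```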
The general trend in the experiments was that the algorithm performed better given more computational resources (Player of Games trained on a dataset of 17 million “steps,” or actions, for Scotland Yard alone), and Schmid expects this approach will scale in the foreseeable future. “[O]ne would expect that the applications that benefited from AlphaZero might also benefit from Player of Games,” Schmid said. “Making these algorithms even more general is exciting research.” Of course, approaches that favor massive amounts of compute put organizations with fewer resources, like startups and academic institutions, at a disadvantage. This has become especially true in the language domain, where massive models like OpenAI’s GPT-3 have achieved leading performance but at resource requirements — often in the millions of dollars — far exceeding the budgets of most research groups. Costs sometimes rise above what’s considered acceptable even at a deep-pocketed firm like DeepMind. For AlphaStar, the company’s researchers purposefully didn’t try multiple ways of architecting a key component because the training cost would have been too high in executives’ minds. DeepMind notched its first profit only last year, when it raked in £826 million ($1.13 billion) in revenue. The year prior, DeepMind recorded losses of $572 million and took on a billion-dollar debt. It’s estimated that AlphaZero cost tens of millions of dollars to train. DeepMind didn’t disclose the research budget for Player of Games, but it isn’t likely to be low considering the number of training steps for each game ranged from the hundreds of thousands to millions. As the research eventually transitions from games to other, more commercial domains, like app recommendations, datacenter cooling optimization, weather forecasting, materials modeling, mathematics, health care, and atomic energy computation, the effects of the inequity are likely to become starker. “[A]n interesting question is whether this level of play is achievable with less computational resources,” Schmid and his fellow coauthors ponder — but leave unanswered — in the paper. "
16,012
2,021
"Hungarian gov teams up with Eastern European bank to develop AI supercomputer | VentureBeat"
"https://venturebeat.com/2021/12/09/hungarian-gov-teams-up-with-eastern-european-bank-to-develop-ai-supercomputer"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hungarian gov teams up with Eastern European bank to develop AI supercomputer Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In what may be a first, an Eastern European bank is teaming up with the government of Hungary to field an AI supercomputer that will be used to create a large language model of the Hungarian language. OTP Bank, which was founded in Hungary and operates banks across the region, worked out an agreement under which the government provides about half the funding for the supercomputer developed under contract with SambaNova Systems. The government will have access to the system for public and academic research, said Péter Csányi, deputy CEO and head of the digital division at OTP. The contract with SambaNova Systems capitalizes on the Generative Pretrained Transformer (GPT) for generating large language models that SambaNova announced in October, branding it as dataflow-as-a-service. “Building a system like this to run a GPT model is not something any bank has done before,” said Marshall Choy, VP of product at SambaNova. The bank decided that creating AI applications for the Hungarian language was in its own self-interest in an increasingly digital economy, Csányi said. “Our capability to adapt to a changing world is one of the core capabilities that we need to bring in-house.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Further, OTP Bank and the government agreed that the Hungarian language was unlikely to get its own AI language model anytime soon if they did not take the initiative because it is a hard language to learn and one that relatively few people speak. The experience of working on this project will also give the bank expertise it can apply to generating language models for the languages of other countries in the region, he said. The AI supercomputer, which SambaNova says will be the fastest in Europe, is scheduled to go live in 2022. Nation-sized needs “We’ve been observing the trends for a couple [of] years now, where things are definitely trending towards larger and larger models, which require larger and larger AI supercomputer-like resources to build. That’s why we’ve productized this,” Choy said. SambaNova’s product allows enterprises to create their own GPT models as an alternative to relying on shared resources like OpenAI’s GPT-3. 
“We’re also significantly reducing the burden on Péter and his team to go and hire hundreds of data science and machine learning professionals. In fact, he’s going to be able to do this with a dozen or so folks,” Choy said. The GPT approach to creating large language models uses AI deep learning techniques to allow the software to discover patterns within a language without being explicitly trained on them. That’s the “generative” part, allowing the software to write its own rules based on analysis of large volumes of text content from that language. While large language models speed up development and reduce the labor required to produce a model, they also have pitfalls, such as a tendency to incorporate biases and falsehoods derived from the text they consume. Whatever their flaws, a recent State of AI report found large language models to be so significant that many nations are “nationalizing” research into them for fear of being left behind. Businesses are equally concerned with keeping pace, Choy said. “AI, we believe, is going to have a refactoring effect not limited to banking but all industries, similar to what the internet did 20 years ago.” Csányi said he was skeptical when members of his IT team first came to him with the idea, thinking the budget and talent required would be beyond the bank’s reach. “What SambaNova has done is make this accessible at a reasonable cost,” he said. And while developing the talent within IT to work with AI models will be a challenge, he thinks the bigger challenge will be preventing his people from devoting all their time to it and neglecting the bank’s routine operational needs. OTP Bank expects to put natural language processing applications to work for things like customer service, fraud prevention, loan origination, and cybersecurity — as well as purposes that may not become obvious until the technology is in production, Csányi said. “We have no shortage of use cases.” "
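The article above describes the generative idea of a model inferring a language's regularities from raw text alone. At toy scale the same principle can be shown with a character-level bigram model; this is nothing like the transformer systems SambaNova builds, and the sample Hungarian sentence is made up for the example.

```python
# Toy character-level bigram "language model": it learns, from raw text alone,
# which character tends to follow which, then generates text from those
# statistics. A real GPT-style model learns far richer patterns, but the
# learn-from-raw-text principle is the same.
from collections import Counter, defaultdict
import random

corpus = "a magyar nyelv gazdag és változatos nyelv "  # tiny illustrative corpus

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # count which character follows which

def generate(seed: str, length: int = 40) -> str:
    random.seed(0)
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("a "))
```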
16,013
2,021
"SnapLogic raises $165M as the IPaaS market heats up | VentureBeat"
"https://venturebeat.com/2021/12/13/snaplogic-raises-165m-as-the-ipaas-market-heats-up"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SnapLogic raises $165M as the IPaaS market heats up Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. IPaaS, short for integration platform-as-a-service, is a suite of cloud services that enable organizations to develop, execute, and govern integration flows between disparate apps. Under IPaaS, customers can drive the development and deployment of integrations without installing or managing any hardware or middleware. The acceleration of digitization and the advantages of the IPaaS model — which include support for existing IT infrastructures and the breaking down of data silos — are driving an increasing number of companies to adopt IPaaS solutions. According to Grand View Research, the IPaaS services market will grow to $2.7 billion in value by 2025. IDG reports in a 2021 study that 27% of companies have already invested in IPaaS and that another 66% have plans to do so in the next 12 to 24 months. One foremost IPaaS provider is SnapLogic , a San Mateo, California-based company launched in 2006 by Informatica ex-CEO Gaurav Dhillon. SnapLogic is differentiated by its high-profile customer base, which includes Adobe, Qualtrics, Schneider Electric, and Workday, as well as its large (and growing) tranche of venture capital. Today, the company announced that it raised $165 million at a $1 billion post-money valuation, bringing SnapLogic’s total raised to date to more than $371 million. “Since the pandemic began, there has been a renewed focus on accelerating processes and workflows to help companies do more, in better and faster ways. Low-code, self-service technologies have been embraced because of their ability to empower business teams to procure, develop, integrate, and use new, purpose-built solutions quickly, without relying on an overtaxed IT team to accomplish their goals,” Dhillon told VentureBeat via email. “In addition … [t]here has been a surge of cloud and on-premise applications and data sources that all need to be connected to each other, in order for businesses to optimize their use.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! IPaaS at scale SnapLogic’s platform consists of what the company calls an Integration Cloud, prebuilt connectors called Snaps, and a tool for data processing in the cloud or behind the firewall. 
The Integration Cloud features a designer for building integration workflows and a manager for controlling and monitoring the performance of SnapLogic orchestrations. Meanwhile, dashboards offer visibility into the health of integrations, including performance, reliability, and utilization. SnapLogic’s data processing tool streams data between apps, databases, files, APIs, and social sources, while Snaps serve as modular collections of integration components built for specific apps. Snaps are available for analytics and big data in addition to identity management, social media, online storage, enterprise resource management, and databases. Snap Patterns, a relatively recent addition to the SnapLogic platform, extends Snaps to connect cloud services such as Amazon Redshift, Salesforce, Workday, and ServiceNow, both with each other and with on-premises apps and assets. Dhillon asserts that SnapLogic was among the first in the IPaaS space to apply AI to integration. Instead of looking through heaps of assorted templates or recipes, SnapLogic customers can leverage AI-powered pattern recognition to reuse pipelines for other purposes. Above: Building integrations with SnapLogic’s tools. “I cofounded SnapLogic on the premise that, in order to innovate, compete, and grow in a multicloud world, large enterprises needed a scalable, hybrid platform to integrate data and applications for improved decision-making and better business outcomes,” Dhillon said. “We enable application and data integration and workflow and process automation for IT and every line of business. Some specific use case examples are employee onboarding, people analytics, and employee offboarding; invoice processing [and] expense management; marketing campaign reporting and analytics, lead routing, and management; [and] application integration, data migration, data warehouse loading, data governance, and security.” Growth and expansion SnapLogic says it’s benefitted from the trends toward data integration and analytics at scale. The roughly 300-person company — whose platform is now processing 2.7 trillion customer documents per month and running 3.1 million pipelines per month per customer — expects to exit this year with record annual bookings and thousands of paying clients. “The explosion of cloud and on-premises apps and data sources that all need to be connected. Complex, high-speed, high-volume data moving to the cloud at record rates,” Dhillon said. “The rise of low-code, self-service tech empowering business teams to procure, develop, integrate, and use new, purpose-built solutions quickly. This was all happening before the pandemic, but has since accelerated at a rapid clip, as enterprises rewire and retool to enable better speed, agility, innovation, and growth.” The proceeds from the most recent funding round will be put toward international expansion, product innovation, mergers and acquisitions, and sales and marketing, as SnapLogic looks to beat back rivals such as Bridge Connector , Unito , Talend, Fivetran, and Celigo. “On the product side, we’ll continue to build innovation and AI functionality into our platform,” Dhillon added. “In terms of go-to-market, our immediate focus will be on building out our Asia-Pacific presence, starting with additions to our team in Australia and New Zealand. 
We’re also scaling our global sales and marketing organizations to further drive awareness, interest, and adoption of SnapLogic.” "
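The article above describes pipelines assembled from modular connectors (Snaps) that extract data from one system, reshape it, and load it into another. The sketch below pictures that pattern with generic placeholder steps; these are not SnapLogic's real Snap interfaces, and the class and function names are invented.

```python
# Generic sketch of an integration pipeline built from modular connector
# steps, in the spirit of the Snaps described above (placeholder classes,
# not SnapLogic's actual interfaces).
from typing import Callable, Iterable

class Pipeline:
    def __init__(self):
        self.steps: list[Callable[[Iterable[dict]], Iterable[dict]]] = []

    def add(self, step):
        self.steps.append(step)
        return self  # allow chaining, like wiring connectors in a designer

    def run(self, records: Iterable[dict]) -> list[dict]:
        data = list(records)
        for step in self.steps:
            data = list(step(data))
        return data

# Placeholder "connectors": read from a source, reshape, write to a target.
def read_from_crm(_records):
    return [{"name": "Ada", "spend": 120}, {"name": "Grace", "spend": 80}]

def add_tier(records):
    for r in records:
        yield {**r, "tier": "gold" if r["spend"] >= 100 else "standard"}

def write_to_warehouse(records):
    for r in records:
        print("LOAD ->", r)
        yield r

Pipeline().add(read_from_crm).add(add_tier).add(write_to_warehouse).run([])
```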
16,014
2,021
"Unified business communications platform Dialpad nabs $170 million | VentureBeat"
"https://venturebeat.com/2021/12/16/unified-business-communications-platform-dialpad-nabs-170-million"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Unified business communications platform Dialpad nabs $170 million Share on Facebook Share on X Share on LinkedIn Dialpad: Team messaging Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloud communications platform Dialpad has raised $170 million in a round of funding led by Iconiq Capital, valuing the company at $2.2 billion — roughly double its valuation last year. Dialpad has been one of the many beneficiaries of the rapid transition to a remote “work-from-anywhere” world. Its suite of cloud-based communication tools, spanning web conferencing , business phone systems , messaging , and inbound contact centers, has helped it thrive as companies have been forced to embrace a more decentralized workforce. Avoiding ‘app overload’ Dialpad has gone all-in on the notion that business communication tools should be unified in a single place. Part of this unification process has entailed rebranding its previously standalone UberConference service as Dialpad Meetings and integrating it directly into its core platform. “We are streamlining the way people communicate and collaborate — no matter where they are,” Dialpad founder and CEO Craig Walker told VentureBeat. “It’s no longer acceptable for a company to expect its customers or employees to utilize and navigate multiple platforms or channels. Dialpad helps customers simplify business collaboration, and avoid the growing phenomenon of ‘app overload’ for both internal and external communications.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Dialpad Walker has a notable history in the digital communications space. He initially founded an internet telephony company called Dialpad Communications back at the turn of the century, which he sold to Yahoo to serve as the building blocks of Yahoo Voice. Walker left Yahoo to form GrandCentral Communications, an online service that enabled users to merge their existing phone numbers and voice mailboxes into a single account — he sold that company to Google and spearheaded the launch of Google Voice. After leaving Google, Walker launched conference call service UberConference, which would later morph back into Dialpad, after Walker bought the brand and domain back from Yahoo. AI & voice intelligence Dialpad has been ramping up its AI endeavors these past few years, kicking off back in 2018 when it acquired a conference call transcription service called TalkIQ. 
This deal served as the backbone of a new Dialpad technology dubbed Voice Intelligence, which can generate notes from every phone call, conduct sentiment analysis, and use natural language processing to highlight actionable items from meetings. In the past few months, Dialpad has boosted its AI credentials further by acquiring two AI-powered players in the customer experience technology realm — Koopid and Kare Knowledgeware. Prior to now, Dialpad had raised around $230 million, and in the 14 months since its previous raise, the company has added a fairly impressive roster of customers to a list that already included Uber, Splunk, and Stripe — it now counts GrubHub, Rapid7, and New Relic as fully signed-up Dialpad clients too. With another $170 million in the bank, the company plans to double down on its AI, with a focus on hiring new personnel across natural language processing (NLP), machine learning, data engineering, and more. Other investors in the round included Alphabet’s GV, T-Mobile Ventures, Omers Growth Equity, Amasia, Work-Bench, and Section 32. "
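The article above describes Voice Intelligence surfacing actionable items from call transcripts. A rough feel for that task can be given with simple pattern matching, as in the sketch below; this is not Dialpad's technology, which would rely on trained NLP models rather than hand-written regexes, and the patterns are invented for the example.

```python
# Toy sketch of pulling candidate action items out of a call transcript with
# pattern matching (a stand-in for the NLP models a real system would use).
import re

ACTION_PATTERNS = [
    r"\bI(?:'ll| will) ([^.!?]+)",   # "I'll send the report..."
    r"\blet's ([^.!?]+)",            # "Let's schedule a follow-up..."
    r"\bcan you ([^.!?]+)\?",        # "Can you loop in billing?"
]

def extract_action_items(transcript: str) -> list[str]:
    items = []
    for pattern in ACTION_PATTERNS:
        for match in re.finditer(pattern, transcript, flags=re.IGNORECASE):
            items.append(match.group(1).strip())
    return items

call = ("Thanks everyone. I'll send the revised proposal by Friday. "
        "Can you loop in the billing team? Let's schedule a follow-up next week.")
for item in extract_action_items(call):
    print("-", item)
```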
16,015
2,011
"A look at the history of Cyber Monday (infographic) | VentureBeat"
"https://venturebeat.com/2011/11/28/cyber-monday-history"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages A look at the history of Cyber Monday (infographic) Share on Facebook Share on X Share on LinkedIn The current holiday season is shaping up to become the most lucrative ever for businesses, as VentureBeat has noted in many reports today. Cyber Monday sales may reach $1.2 billion this year , up from $1 billion last season. Part of that increase might be due to shoppers either not finding what they wanted on Black Friday or wanting to avoid its crushing crowds altogether. Ultimately, it seems that the fictitious capitalist holiday is here to stay. Of the businesses that are benefiting, none have had more of a boost than online retail giant Amazon. As VentureBeat’s Jennifer Van Grove notes, Amazon sales jumped 30 percent on Thanksgiving Day and continued to climb throughout the weekend. With that kind of growth, along with some killer deals , Amazon’s Cyber Monday might end up being the climax for the entire holiday season. But Cyber Monday is hardly a recent phenomenon. In fact, it first kicked off six years ago as a way for online retailers to compensate for the lack of brick and mortar store locations. Check out the infographic below for a deeper look at the history of Cyber Monday (click image to enlarge). [I nfographic via YouNeverLose.com ] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,016
2,020
"AI Weekly: AI-driven optimism about the pandemic's end is a health hazard | VentureBeat"
"https://venturebeat.com/2020/11/20/ai-weekly-ai-driven-optimism-about-the-pandemics-end-is-a-health-hazard"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: AI-driven optimism about the pandemic’s end is a health hazard Share on Facebook Share on X Share on LinkedIn A computer image of the type of virus linked to COVID-19. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As the pandemic reaches new heights, with nearly 12 million cases and 260,000 deaths recorded in the U.S. to date, a glimmer of hope is on the horizon. Moderna and pharmaceutical giant Pfizer, which are developing vaccines to fight the virus, have released preliminary data suggesting their vaccines are around 95% effective. Manufacturing and distribution is expected to ramp up as soon as the companies seek and receive approval from the U.S. Food and Drug Administration. Representatives from Moderna and Pfizer say the first doses could be available as early as December. But even if the majority of Americans agree to vaccination, the pandemic won’t come to a sudden end. Merck CEO Kenneth Frazier and others caution that drugs to treat or prevent COVID-19, the condition caused by the virus, aren’t silver bullets. In all likelihood, we will need to wear masks and practice social distancing well into 2021, not only because vaccines probably won’t be widely available until mid-2021, but because studies will need to be conducted after each vaccine’s release to monitor for potential side effects. Scientists will need still more time to determine the vaccines’ efficacy, or level of protection against the coronavirus. In this time of uncertainty, it’s tempting to turn to soothsayers for comfort. In April, researchers from Singapore University of Technology and Design released a model they claimed could estimate the life cycle of COVID-19. After feeding in data — including confirmed infections, tests conducted, and the total number of deaths recorded — the model predicted that the pandemic would end this December. The reality is far grimmer. The U.S. topped 2,000 deaths per day this week, the most on a single day since the devastating initial wave in the spring. The country is now averaging over 50% more deaths per day compared with two weeks ago, in addition to nearly 70% more cases per day on average. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It’s possible — likely, even — that the data the Singapore University team used to train their model was incomplete, imbalanced, or otherwise severely flawed. 
They used a COVID-19 dataset assembled by research organization Our World in Data that comprised confirmed cases and deaths collected by the European Center for Disease Prevention and Control and testing statistics published in official reports. Hedging their bets, the model’s creators warned that prediction accuracy depended on the quality of the data, which is often unreliable and reported differently around the world. While AI can be a useful tool when used sparingly and with sound judgment, putting blind faith in these kinds of predictions leads to poor decision-making. In something of a case in point, a recent study from researchers at Stanford and Carnegie Mellon found that certain U.S. voting demographics, including people of color and older voters, are less likely to be represented in mobility data used by the U.S. Centers for Disease Control and Prevention, the California Governor’s Office, and numerous cities across the country to analyze the effectiveness of social distancing. This oversight means policymakers who rely on models trained with the data could fail to establish pop-up testing sites or allocate medical equipment where it’s needed most. The fact that AI and the data it’s trained on tend to exhibit bias is not a revelation. Studies investigating popular computer vision, natural language processing, and election-predicting algorithms have arrived at the same conclusion time and time again. For example, much of the data used to train AI algorithms for disease diagnosis perpetuates inequalities, in part due to companies’ reticence to release code, datasets, and techniques. But with a disease as widespread as COVID-19, the effect of these models is amplified a thousandfold, as is the impact of government- and organization-level decisions informed by them. That’s why it’s crucial to avoid putting stock in AI predictions of the pandemic’s end, particularly if they result in unwarranted optimism. “If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden,” wrote the coauthors of a recent paper in the Journal of American Medical Informatics Association. They argued that biased models may exacerbate the disproportionate impact of the pandemic on people of color. “These tools are built from biased data reflecting biased health care systems and are thus themselves also at high risk of bias — even if explicitly excluding sensitive attributes such as race or gender.” We would do well to heed their words. For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer "
16,017
2,007
"Zebra buys RFID company, WhereNet, for $126 million | VentureBeat"
"https://venturebeat.com/2007/01/15/zebra-buys-rfid-company-wherenet-for-126-million"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zebra buys RFID company, WhereNet, for $126 million Editor Share on Facebook Share on X Share on LinkedIn Zebra Technologies said it will acquire WhereNet , a Santa Clara, Calif. provider of active radio frequency identification (RFID) based wireless products that track and manage enterprise assets, for $126 million in cash. WhereNet had raised more than $80 million over ten years from investors including Bay Partners, Crescendo, Ford Motor, Crosspoint, RWI, and Sun Microsystems. According to PE Week, this included a $10 million round in 2003, at a post-money valuation of approximately $76 million. Zebra is based in Vernon Hills, Illinois. From the company’s statement : …WhereNet provides integrated wireless Real Time Locating Systems (RTLS) to companies primarily in the industrial manufacturing, transportation and logistics, and aerospace and defense sectors. Founded in 1997, it has more than 150 installations currently in operation helping companies locate and track high value assets with wireless tags, fixed-position antennas and Web-enabled software. WhereNet solutions are used to increase the accuracy, velocity, efficiency, and security of time-critical processes throughout the supply chain. They are employed in parts replenishment, vehicle inventory tracking, truck yard management, marine cargo tracking, and work-in-process tracking, among many other applications. WhereNet’s solutions span hardware, middleware, application software, and services for project management, maintenance and support. Zebra management expects WhereNet to generate sales of approximately $50 million in 2007, up from $36 million in 2006. The acquisition is expected to be minimally dilutive to Zebra’s net income in 2007 and be accretive thereafter. Zebra will operate WhereNet as a separate business unit. (This story first posted 1/12) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,018
2,014
"Motorola Solutions sells enterprise unit to Zebra for $3.45B | VentureBeat"
"https://venturebeat.com/2014/04/15/motorola-solutions-sells-enterprise-unit-to-zebra-for-3-45b"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Motorola Solutions sells enterprise unit to Zebra for $3.45B Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Motorola is chopping off another one of its arms. Motorola Solutions, the remainder of the company after Moto’s handset unit spun out in 2011 as Motorola Mobility, announced today that it’s selling its enterprise business to Zebra Technology for $3.45 billion. The deal will allow Motorola Solutions to focus entirely on its government and public safety division. And it gives Zebra, which makes the RFID tags and barcodes that Motorola’s enterprise devices scan, the components to become an asset tracking behemoth. “We were were always [working on] both sides of the same problem,” Philip Gerskovich, Zebra’s senior vice president of new growth products, told VentureBeat in an interview. “We sold together … we had a similar go-to-market channel … the two companies have been very complimentary for many years.” Zebra also worked together with Symbol Technologies for more than a 15 years, before Motorola snapped up that company in 2007 to build its enterprise asset tracking business. “We’re investing in how the Internet of Things makes assets smarter,” Gerskovich said. “With Motorola’s new Android smartphone-based rugged devices, I think it’s the ideal place for enterprises to deploy [asset tracking].” The deal is expected to close by the end of the year. Zebra says it will pay $200 million in cash and $3.25 billion raised from a credit facility and debt security sales. Motorola Solutions doesn’t get as much publicity as the Mobility division (which was sold to Google in 2011 and then to Lenovo earlier this year ), primarily because it houses all of Motorola’s non-consumer offerings. CEO Greg Brown says the company decided to sell off its enterprise division after realizing it didn’t mesh as well as previously thought with its government division. “Going forward, we will have absolute clarity of purpose and mission as we serve customers globally with our suite of mission-critical communications solutions. This business is truly distinctive in its industry leadership, strong pipeline position, long-term track record of consistent profitability and cash flow, and an array of growth opportunities.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
16,019
2,020
"Vuzix announces M4000 enterprise AR glasses with optical waveguide | VentureBeat"
"https://venturebeat.com/2020/01/07/vuzix-announces-m4000-enterprise-ar-glasses-with-optical-waveguide"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Vuzix announces M4000 enterprise AR glasses with optical waveguide Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Rochester, New York-based augmented reality glasses maker Vuzix has spent years on a quest to deliver thin, practical AR wearables, moving further with enterprise and consumer solutions every year or so. At this year’s CES, Vuzix is introducing a high-end enterprise solution called the M4000 Smart Glasses, described as “the most powerful optically see-through smart glasses built for enterprise customers” yet. Based on Qualcomm’s Snapdragon XR1 platform, the M4000 uses a waveguide display system, enabling users to see augmented digital content without any occlusion of real-world visuals. Though the company’s existing $1,800 M400 remains a viable AR solution for many industries, Vuzix says the new model’s optical see-through technology is appropriate for specific businesses — including critical manufacturing and remote assistance apps — that require transparent, non-occluded overlays. While it’s fairly complex and expensive today, and thus suitable for enterprise use, waveguide technology is expected to power future consumer AR solutions as well. Beams of light are projected inside transparent lenses in a manner that makes digital content — if synchronized to the user’s head motions — appear to merge with real objects and backgrounds. Otherwise, the digital imagery appears to float within scenes. The M4000 is expected to enter volume production in summer 2020 and ship at a $2,499 price point. Since they share the Snapdragon XR1 platform, applications developed for the M400 should be compatible with the M4000. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,020
2,020
"How AR and remote video could assist technical support teams during the coronavirus outbreak | VentureBeat"
"https://venturebeat.com/2020/03/18/how-ar-and-remote-video-could-assist-technical-support-teams-during-the-coronavirus-outbreak"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AR and remote video could assist technical support teams during the coronavirus outbreak Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As countries struggle to contain COVID-19, the coronavirus pandemic that has infected more than 200,000 people globally, companies have increasingly been enforcing a home work policy, a move that could signal the beginning of a major remote-working movement. However, not all jobs can be easily managed remotely, a problem when social distancing is regarded as the most effective weapon in the fight against COVID-19. Field service work is one such role, often requiring engineers and technicians to visit multiple sites and locations to service and fix equipment. Not only can this be an expensive, resource-intensive endeavor, it usually requires people to interact with others, whether in someone’s home or a company office. But remote video assistance and augmented reality (AR) could shine at a time when businesses and consumers alike are looking to maintain productivity while observing social distancing guidelines. With that in mind, TechSee , an Israeli startup with big-name backers such as Salesforce, has announced that it’s making its virtual assistance platform available for free to a broad array of vital public bodies in Italy, France, and Spain — where the COVID-19 pandemic is particularly rampant — among other European countries. This will include emergency response teams, medical institutions, public health bodies, and nonprofit organizations, in addition to private enterprises seeking to embrace social distancing. For example, medical technicians in high-risk areas could lean on remote experts to fix a piece of hospital equipment. “At times like this, people everywhere have a chance to come together and make a real impact on a global scale,” TechSee CEO Eitan Cohen said. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! TechSee’s technology enables companies to virtually enter a space instead of sending a physical field service technician. Leveraging the camera on the customer’s smartphone, the agent sends the end user a link that, when clicked, opens a web app and then broadcasts video back to the remote technician. Moreover, the technician can draw, point, and write on top of the video from their workstation, providing direct visible guidance to the customer. 
Above: TechSee: A customer support or remote field service technician can guide customers using remote video and AR Above: Guiding a user to fix their Nest thermostat Of course, meshing remote video with augmented reality isn’t exactly a new concept — Microsoft already offers the Remote Assist app for its HoloLens mixed reality headset. But the COVID-19 outbreak will likely force companies to explore new ways of working, in the near term and likely far into the future. Moreover, as 5G gets ready to explode into the mainstream, its higher-bandwidth connectivity will enable more applications of AR in the enterprise. TechSee’s offer could be seen as a cynical publicity ploy, but sending field service personnel out to fix hardware runs contrary to current health advice and can take time, so any remote help will likely be welcomed. Moreover, TechSee’s technology normally costs up to $90 per user per month, so in an organization with hundreds of virtual support staff, such costs could mount. “A wide range of remote video technologies are already enabling organizations to rise to the challenge of containing the coronavirus, and we’re more than willing to contribute in any way we can,” Cohen added. The TechSee platform will be made available for free for 90 days, and the company said it is open to extending the coverage to more markets as the crisis escalates. However, TechSee noted that the service will be limited to areas where the company has the “ability and bandwidth to operate,” which currently rules out regions such as Asia Pacific (APAC) and Iran. "
16,021
2,019
"RealWear raises $80 million for AR headset for connected workers | VentureBeat"
"https://venturebeat.com/business/realwear-raises-80-million-for-ar-headset-for-connected-workers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages RealWear raises $80 million for AR headset for connected workers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. RealWear has raised $80 million for its augmented reality headset and platform for connected workers. The company will use the funds, which are a combination of equity and debt, to continue market expansion and accelerate its platform development. RealWear’s funding and loans were led by industrial automation firm Teradyne, with additional money from Bose Ventures, Qualcomm Ventures, Kopin, and JPMorgan Chase. From its inception, RealWear has focused on products, like the HMT-1 headset, that are specifically designed to improve job satisfaction, productivity, and safety for the connected enterprise workforce. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Andy Lowery, CEO of RealWear. The HMT-1 essentially puts a 7-inch virtual Android tablet within glancing range, sitting just below the worker’s line of sight, with a micro display that can work outdoors in rugged environments. The Vancouver, Washington-based company makes a variety of industrial hands-free wearable computers to enhance workers’ situational awareness while delivering vital information on demand in harsh environments. In the past 18 months, the company has shipped 15,000 headsets to 1,300 enterprise customers and worked on 120 workforce software applications. “RealWear has created a powerful platform that aligns with our vision for a safer, more productive work environment powered by easy-to-use, rapid ROI automation solutions,” said Teradyne CEO Mark Jagiela in a statement. “RealWear’s strategy to leverage the power of advanced technologies like augmented reality to assist workers across a wide range of tasks has many parallels with Teradyne’s industrial automation strategy, and we look forward to helping RealWear continue their exciting growth.” RealWear CEO Andy Lowery said the company has been pragmatic in its fund-raising strategy. “Our seed investments came from friends, family, early customers, suppliers, and business partners. Their faith carried us to our series A, led by Columbia Ventures Corporation,” he said, in a statement. “CVC’s experience in heavy industry, one of our primary markets, made it a perfect match. It was critical that RealWear’s new investors be business and technology leaders. 
Teradyne, Bose Ventures, Qualcomm Ventures, and Kopin fit that bill.” Above: RealWear AR glasses make workers more efficient. RealWear’s series A lead, CVC, took this new round as an opportunity to increase its investment in the company, as did many early seed investors. Lowery said, “The continued support of our entire ecosystem is tremendously gratifying for the entire RealWear team.” Enterprise applications have contributed early design wins for AR glasses, which are still relatively bulky. “The augmented reality enterprise market has experienced a great deal of hype, but long-term, real-world solutions have been thin on the ground,” said Tom Mainelli, an analyst at IDC Group, in a statement. “RealWear smartly recognized the need for a no-nonsense head-mounted display and has delivered no-frills products that help frontline workers to get their jobs done more safely and efficiently.” Among the company’s customers are Colgate, Shell, Uros, Globalfoundries, Airbus, and BMW. The company has 110 employees and has raised more than $100 million to date. "
16,022
2,020
"TinyBuild acquires Hello Neighbor devs and will invest $15 million in franchise | VentureBeat"
"https://venturebeat.com/2020/07/16/tinybuild-acquires-hello-neighbor-dev-team-and-will-invest-15-million-in-franchise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages TinyBuild acquires Hello Neighbor devs and will invest $15 million in franchise Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Game publisher TinyBuild has acqui-hired the development team behind hit game series Hello Neighbor from Dynamic Pixels and will invest more than $15 million in the franchise. The new team will be called Eerie Guest Studios, and will be located be in Hilversum, Netherlands. The company didn’t disclose the deal price, but it is part of the $15 million investment. In addition to the acqui-hire, TinyBuild will put more money into the franchise over various media. Back in March, TinyBuild announced that Hello Neighbor — a horror game about a neighbor with a secret in his house — had seen over 30 million downloads , and now it says the game has more than 40 million players. TinyBuild also said it has seen a surge in free-to-play and game subscription signups during the pandemic, indicating the popularity of value-driven entertainment as fans try to stay engaged with games without spending a lot of money. This trend played into the company’s decision to invest in the franchise and acqui-hire the development team. TinyBuild CEO Alex Nichiporchik recently announced that the company is working on a board game version of the horror game. TinyBuild is also looking at additional acquisitions to strengthen the brand and take it to new media. A graphic novel is coming out in October, and the Hello Neighbor Animated Series pilot has 12 million views on YouTube. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! In addition to games, TinyBuild said its book project has generated $16 million in sales and $5 million in accessories sales. The company has recently been giving out its new game, Totally Reliable Delivery Service, for free on the Epic Games Store. The game is available on the PC, Xbox One, PS4, Nintendo Switch, Android, and iOS. TinyBuild has also published a prequel, Hello Neighbor: Hide and Seek, and multiplayer spin-off Secret Neighbor. This month, TinyBuild announced its partnership with Arcane Wonders to bring Hello Neighbor to the tabletop. The Secret Neighbor Party Game will hit all the major stores in October. The graphic novel Hello Neighbor: The Secret of Bosco Bay is also scheduled for release in October. Nichiporchik was born in Latvia and lives between the U.S. and the Netherlands. 
He started his game career in 2002 at the age of 14, when he dropped out of high school to become a pro gamer. He worked as a games journalist, marketer, and producer across casual games and web games before starting TinyBuild with Tom Brien in 2011. TinyBuild has published over 30 titles. "
16,023
2,018
"Weta Workshop unveils Dr. Grordbort's Invaders game for Magic Leap One | VentureBeat"
"https://venturebeat.com/2018/10/09/weta-workshop-unveils-dr-grordborts-invaders-game-for-magic-leap-one"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Weta Workshop unveils Dr. Grordbort’s Invaders game for Magic Leap One Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Weta Workshop unveiled Dr. Grordbort’s Invaders, its first game for the Magic Leap One Creator Edition augmented reality glasses. It did so at the L.E.A.P. conference in Los Angeles today. It’s available now. Dr. Grordbort’s Invaders takes advantage of Magic Leap’s AR glasses, where you can see animated robot enemies coming out of the walls of your real world physical space. Weta spent 5-and-a-half years working on it. Magic Leap’s goal is to blend a digital layer of animations on top of the real world, so you can’t tell the difference between the digital realm and the physical one. I played an abbreviated hands-on demo today, as well as demos of a bunch of other games and apps at the event. Weta Workshop is part of Richard Taylor and Tania Taylor’s New Zealand special effects studio. Weta Workshop and Magic Leap have worked on the project for several years. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Magic Leap has raised an estimated $1.8 billion to create its AR glasses and the ecosystem around them. The full game will have a slower pace and a narrated experience. I was escorted into a Steampunk style room, put on the Magic Leap One glasses, and then saw a portal open in the wall. The connection between the animation and the real world was pretty seamless. Above: Left to right: Rony Abovitz, Greg Broadmore, Richard Taylor, and Andy Lanning. A little robot came out of the portal and gave me a ray gun, with his thick British (or maybe New Zealand) accent. Then Dr. Grordbort appeared as a kind of hologram in the space and warned me that an alien invasion was about to start. Portals began opening in the walls and big yellow robots started coming through them. I had to shoot them, and as I did so, then came down with clankety explosions. I had the full freedom of the room, as the glasses were wired to a small puck that I carried in my pocket. It was a fun experience that got me sweaty and hungry for more. Weta cofounder and creative director Richard Taylor, game designer Greg Broadmore, and Magic Leap CEO Rony Abovitz attended a press event to unveil the game. 
Asked why he was interested in the game, Abovitz said, “Killing robots.” Actually, he said he used to design robots, and he was invited to visit Broadmore at Weta to talk about working together. “There were all these great ray guns and robots being designed, and we said, ‘Wouldn’t it be great if these worked,'” Abovitz said. Broadmore said the history of science fiction inspired their game design, with its mixture of old-style Steampunk and modern technology. Taylor said he and Broadmore went to visit Magic Leap a few years ago in Plantation, Florida, where Abovitz showed off the light field display technology behind the Magic Leap One glasses. “For me, that just lit up my brain, and the last eight years and intensely in the last 5.5 years now have been real for me,” Taylor said. Above: Dr. Grordbort’s Invaders Abovitz said the new game was a showcase. Pointing around the Steampunk-styled room, he said the Grordbort project will show how far you can go with the Magic Leap technology. Abovitz praised Weta for taking the risk of working on a new game world on a brand new device. “This is a great expression of our philosophy,” he said. “We want creators to take this and work with it. We don’t need to overwhelm everything with our system. You got a room with paintings, and sculptures. When you layer in what we do, it fits. All the other art forms can coexist at the same time. That’s what you can do with spatial computing.” Broadmore said the team has been learning for 4-and-a-half years and making the game for the past 18 months. “This was our tip of the spear project, so if we couldn’t make it, we wouldn’t ship,” Abovitz said. "
16,024
2,018
"The DeanBeat: My favorite games of 2018 | VentureBeat"
"https://venturebeat.com/2018/12/21/the-deanbeat-my-favorite-games-of-2018"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The DeanBeat: My favorite games of 2018 Share on Facebook Share on X Share on LinkedIn Howdy. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Games got bigger than ever in 2018, with revenues expected to grow 10.7 percent to become a $134.9 billion market, according to research firm Newzoo. People are getting paid to play games as esports, and watching gaming has reached new heights. It was, of course, a good year for the business models of free-to-play and new categories of trivia or hyper-casual games. Retro games on remade consoles from years past soared back into the mainstream. But to me, 2018 was the year that traditional narrative games like Red Dead Redemption 2 came roaring back. They blended the elements of open worlds with hubs that gave you a choice about which part of the story to do next. But they were more like games with strong stories rather than open worlds with a bit of a tale. That was refreshing at a time when some people were predicting the death of single-player games. I was so happy to see some long-term bets come through and pay off — like the first three titles on this list. They showed that cooking games beyond the normal cycle can still pay off when the games finally ship. I have a list full of commercial successes and not so many indie efforts. It was a good year for 8-bit retro games, Nintendo titles like Super Smash Bros. Ultimate, surprise indie hits, and even some cool virtual reality games like Beat Saber. But not all of those resonated for me, as I felt drawn the big narrative games. I didn’t find an indie title that hooked me like last year’s Hellblade: Senua’s Sacrifice. Above: God of War team celebrates victory after the Game Awards. On mobile, I’ve enjoyed playing games like Pokémon Go, Ingress Prime, Star Trek: Fleet Command, and the new Command & Conquer: Rivals. But none of those titles made it into my top 10 list of favorites. And many good games I simply didn’t have time for, like Total War: Thrones of Britannia. I’d love to play those more. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The games that made it onto my list were the most memorable, and it’s no surprise that they had deep stories that went on for many, many hours, with some exceptions. Call of Duty: Black Ops 4 doesn’t have a single-player campaign this year and yet still managed to make my list. 
Hopefully, you’ll have time to spend some time with these games during the holidays. And remember, it should be no shame at all that you don’t play these games with great skill or that you can’t finish them. Having seen in person what it means for game developers to win big awards after so many years of hard work, I take this responsibility over deciding my own Game of the Year very seriously. For the sake of comparison, here are my favorites from 2017, 2016, 2015, 2014, 2013, 2012, and 2011. In each story below, the links go to our full reviews or major stories about the games. And be sure to check out the GamesBeat staff’s own votes for Game of the Year and the best individual favorites of the staff soon. 10) Dr. Grordbort’s Invaders Above: Dr. Grordbort’s Invaders Developer: Weta Workshop Publisher: Weta Workshop/Magic Leap Platforms: Magic Leap One Creator Edition I’m sure this is one of those titles that very few people have tried, but it felt magical when I played it. It is a free flagship title on the next-generation augmented reality experience, the Magic Leap One Creator Edition. This developer prototype costs nearly $2,300, and so it doesn’t have wide distribution yet. It isn’t the easiest to get running, as you have to scan your room every time you play. But while you play with a wire attached from the glasses to a computing puck in your pocket, the experience gives you the freedom of a wireless, 360-degree AR experience. You can also see everything in your house and your friends, which makes the whole experience more realistic and social. In Dr. Grordbort’s, robot enemies come out of the walls of your own home. A cute hovering bot named Gimbel shows you what to do, and you take a ray gun and start blasting the robots. They come out of portals and then march toward you in your own living room. You can move around to get a better angle to shoot. And you blast them into little parts. They can hide behind barriers, and they are occluded by your real-world furniture, as if they were hiding behind something in the real environment. I liked the Steampunk art style and the narrative that accentuates its comical British roots. By no means is it perfect. It’s short, and you get tired. (That’s a positive, as it gives you a workout). Yet it’s a glimpse into what could be the future of gaming. It’s quite an achievement, as this is the only game from an emerging platform on my list of my favorite games of the year. Kudos to Magic Leap for that. It’s just sad that, with the state of the current platform and its cost, very few people will see this game. 9) Detroit: Become Human Above: Detroit: Become Human starts with Connor, the police negotiator. Developer: Quantic Dream Publisher: Sony Platforms: PlayStation 4 Detroit: Become Human focuses on the familiar moral dilemma of science fiction, which is increasingly becoming science, about how humans should treat human-looking androids. Are they property, or are they sentient beings? This latest epic from David Cage’s Quantic Dream got a fair amount of hate from critics who didn’t like “walking simulators” as well as people who didn’t like the heavy-handed message about androids having feelings too. Set in 2038, the game lets you play as an android. Early on, you get on a bus, but a sign says the androids have to go to the back of the bus. It’s an obvious homage to Rosa Parks and the Civil Rights Movement, and how androids are people who have yet to wage their own revolution against their masters. But I thought most of the game was really well done. 
The tech behind the digital human faces has advanced incredibly well, and so this game was much more immersive than previous Quantic Dream titles like Heavy Rain and Beyond: Two Souls. In this title, humans are feeling slighted, as a third of all people have been put out of work by servant robots. The fear is that the robots will turn bad, like with the Terminator. Yet you care more about the androids than the humans. You play as a robot, for instance, who has to decide whether to hurt a human owner who in turn is abusing a human child. If you, the robot, intercede, you are violating your prime directive. The cool thing with the stories of three androids is that all the decisions have consequences, and you can retrace the events in a kind of flow chart. You can replay those scenes and move to a different path and a different ending. In that sense, Detroit: Become Human put a remarkable amount of control into the hands of the player. If it had flaws, it was that the interactivity was limited. 8) Assassin’s Creed: Odyssey Above: King Leonidas in Assassin’s Creed: Odyssey. Developer: Ubisoft Quebec Publisher: Ubisoft Platforms: Windows, PlayStation 4, Xbox One, Nintendo Switch Assassin’s Creed: Odyssey is a vast game, with improved mechanics from past entries and believable characters. You can choose to play as a male hero, Alexios, or the female hero, Kassandra. The world of ancient Greece is impressive, and the backdrop of the Peloponnesian War between Athens and Sparta is momentous. The visuals are great, and the gameplay isn’t as wooden as Red Dead Redemption 2’s. But the quality of the facial animations is far worse, compared to Red Dead and Detroit: Become Human. You can level up with various role-playing elements, more so than usual in an Assassin’s Creed game. That makes you stronger and enables you to undertake bigger tasks. If I had to pick a flaw, it is the usual problem of the open world being a little too mundane, where you are taking out bandits and collecting loot from them when you really should be on a much more important mission to avenge your family. Some of the missions may be too familiar. But I liked the personal motivation that Kassandra feels to leave her island home and go to the broader world. After so many Assassin’s Creed games under its belt, many of which have not been my favorites for the year, Assassin’s Creed: Odyssey and its setting are a breath of fresh air. 7) Hitman 2 Above: The big crowd in Hitman 2’s Miami level. Developer: IO Interactive Publisher: Warner Bros. Interactive Entertainment Platforms: Windows, PlayStation 4, Xbox One Jeff Grubb nagged our team to play more Hitman 2, and I’m glad he did so. IO Interactive laid the foundation for this title with Hitman in 2016, and now it has fleshed out its intentions in a much better way. The Miami level pictured above is an arena for assassination that is teeming with life. You can lose yourself in crowds and hide from people who are suspicious of you or your disguise. Hitman 2 gives you many paths to setting up elegant traps that can make the death of your target appear to be accidental. You have more room in larger levels, with more ways to kill and the general feeling that life is emergent. The game developers have simply set up a place for your assassination to take place, and your job is to use your imagination and get the job done. Then you can go back and replay the missions and see if you can do better, with more variety of equipment. 
Sometimes I felt a bit rushed in the exploration, like when a race ends and with it so does a certain path of assassination. But you can always come back and find new ways to kill someone, including beating them to death with a dead fish. Agent 47 can take people down in hilarious ways. You can get clues to the environment by using your instincts, which highlights nearby items of interest, and that saves you a lot of time. This game is now a solid experience in its second iteration. 6) Shadow of the Tomb Raider Above: Lara Croft in Shadow of the Tomb Raider. Developer: Eidos Montreal Publisher: Square Enix Platforms: Windows, PlayStation 4, Xbox One, MacOS, Linux Lara Croft’s journey has been a joy to relive, as this year she rounded out a trilogy of rebooted games that brought a lot more emotion, empathy, thoughtfulness, and cinematic moments into the franchise. Shadow of the Tomb Raider takes place in the jungles of Mexico and Peru, and it features Paititi, one of the biggest open world hubs ever in a Tomb Raider map. Lara is now more mature than she is vulnerable, compared to Tomb Raider, the first reboot game in 2013. She’s more cynical than in 2015’s Rise of the Tomb Raider, and she has become more of a merciless killer in the shadows of the jungle. She retains a sisterly love for her friend Jonah, and the desire to protect him leads to some of her darker moments in the set piece scene of an exploding oil refinery. He serves the purpose in this game of bringing her down to Earth, or snapping her out of her narcissistic need to save everybody. She is still filled with self-doubt and a sense of loss, a legacy from her vanished parents. She hasn’t become the annoying character from the Angelina Jolie films. The only problem I had was that I have never liked how the ending of these games becomes overly supernatural, just to present Lara with more of a difficult boss fight at the end. The ending presents Lara with a difficult choice, but it’s ultimately about choosing between a fairy-tale past and the future as the person that she has become. Lara Croft became so much more than the over-sexualized bantering hero that she was in her beginning, growing up as a character as we did as gamers. I like how this Lara turned out, and I will miss her. "
16,025
2,020
"Supercell's off year: revenues of $1.56 billion and profits of $577 million | VentureBeat"
"https://venturebeat.com/2020/02/11/supercells-off-year-revenues-of-1-56-billion-and-profits-of-577-million"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Supercell’s off year: revenues of $1.56 billion and profits of $577 million Share on Facebook Share on X Share on LinkedIn Clash Royale debuted in 2016 and it is going strong. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Finland’s superstar mobile gaming company Supercell reported before-tax profits of $577 million on revenue of $1.56 billion, down from 2018 profits of $635 million on revenue of $1.6 billion. I guess you could call that a bad year. Supercell’s earnings always crack me up. The Helsinki maker of Clash of Clans and Clash Royale is such a standout performer in mobile games that even its off-years or quiet years, like 2019, still mean that the company is printing money. And Supercell has only 323 employees, compared to console game maker Ubisoft, which has more than 15,000 workers and has annual revenues that aren’t so far from Supercell’s. That gives you the idea of the efficiency of Supercell, which is one of the giants of the $86 billion mobile game industry. But rather than boast, Supercell CEO Ilkka Paananen wrote an annual missive that was more like a love letter to players, a way of explaining to them in an open way what the world looks like from inside Supercell. The Finnish folks are humble about their capitalism. They don’t brag about it properly, as they have semi-socialist roots and don’t tout their financial performance. A case in point: In Paananen’s letter, the description of the financial performance of the company is buried in paragraph No. 34. And Paananen proudly points out that the company paid taxes of $110 million to Finland alone — something that any self-respecting chief financial officer in most capitalist countries would be embarrassed to acknowledge. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Many of us who have benefited from our free education and healthcare financed by taxes feel proud that we can contribute to our society in this way,” Paananen wrote. “For us, the most important thing about good financial results like these is that they enable us to keep investing in making great games (including making our existing games even better) and think very long term.” Above: Ilkka Paananen, the co-founder and CEO of Supercell Paananen noted that this year, the company will turn 10 years old. 
“That moves us closer to being able to measure our games in decades rather than years, which is a great reminder of why we are here,” he wrote in the annual letter. The letter shows us the eccentricity of the Finnish company, which was acquired by Tencent in a deal that valued the company at $10 billion in 2016. Supercell’s dream is to make “games that are played for years and remembered forever,” he said. “When we founded Supercell back in 2010, our idea was simple: to create a new kind of games company that would be the best place for the best people and teams to create the best games – games for as many people as possible, that would be played for years and remembered forever,” Paananen wrote. “At the time, we were inspired by games like World of Warcraft, which many of our co-founders had played for years and years. For them, it was not just a game. It was a passion, a very important part of their everyday life, something that they played together with friends and also made new friends with. Our big dream was to create a similar social experience, and hoped that with a platform like mobile virtually anyone could be part of it.” Killing your own games Above: Supercell has killed lots of games in its history. Paananen said that 2019 saw several examples of the company’s values in action. Painfully, the company decided to kill Rush Wars, a game that made it into beta testing. “The team behind the game killed it because based on the beta, they felt like this was not going to be a game that lots of people would play for years nor would it be remembered forever,” Paananen wrote. “The early gameplay was lots of fun, but it just did not carry over to the endgame.” The only other company in the game business that operates this way and publicly talks about its failures (once in a while) is Blizzard Entertainment, which operates in the PC and console space mostly and acknowledges when it has to kill off games that don’t live up to their potential. “I feel proud of the decision the team made. I cannot even imagine how painful it is to kill your own darling, something that you’ve poured your heart and soul into,” he wrote. “That said, this is how we all want Supercell to operate: We should only release games that are of exceptional quality, games that the players love and games that have a shot at being remembered forever. Very importantly, at Supercell these types of decisions are always owned by the team who is behind the game. We feel that it is critical that the people who are responsible for the game also get to decide about its future.” Paananen said he has developed more and more respect for how incredibly difficult it is to create a new game. In 10 years, Supercell has only launched five games — Hay Day, Clash of Clans, Boom Beach, Clash Royale, and Brawl Stars. He views the ones that didn’t make it as opportunities to learn. Keeping games going with big updates Above: Hay Day hit the market in 2012 and beat FarmVille. During the year, the teams behind Hay Day and Boom Beach put out their biggest updates in the history of their games. Hay Day added “The Valley,” creating a new mechanic; and Boom Beach added “Warships,” a new way to play together with other people. Millions of players still enjoy these games, even though they’ve been around for so many years, and that’s why the teams have put so much work into these updates. The Clash games and Brawl Stars also saw major upgrades. Operating in cells Above: Boom Beach had a big update in 2019. 
Supercell maintains that it is strong because it is driven by veteran employees. It has held its team size down deliberately, and has far fewer managers and executives than other companies. The teams are self-governing, and new projects often start with a handful of people. The decision to proceed with games or kill them off resides with those teams. “A major component of our mission – i.e. to be the best place for the best teams to develop games – involves keeping the company as small as possible,” Paananen wrote. “This is because we believe smaller size minimizes the amount of bureaucracy and processes while maximizing room for innovation. And, we all simply like to work in a smaller company! Anyway, last year some of our game developers actually got concerned that the company might be getting too big too fast as we grew to just over 300 in size. We had a big discussion about this and, as a result, decided to slow down our growth significantly until we feel confident that we can keep our culture intact despite the growth.” Instead of using its profits to fuel new teams, Supercell decided to set up an investment arm and to invest in other studios — often in the Nordics — to help get interesting ideas off the ground and into the market. In 2019, the company invested in Luau Games in Malmö, Sweden; Ritz Deli in Oakland, California; and Wild Games in Stockholm. Brawling Above: Brawl Stars from Supercell debuted in beta in 2017. 2019 was the first full year for Brawl Stars as a global game. The game spent a very long time in development and in beta before the team decided to release it. “What made things particularly interesting was that internally, prior to the beta, even Supercellians (the passionate gamers and developers that they are) had mixed feelings,” Paananen said. “Some absolutely loved it, others less so. There were heated discussions about the art style and controls, and even whether Brawl is ‘a Supercell game’ to begin with. But, the team kept working hard to improve the game, making massive changes to the game in beta.” Brawl Stars became the No. 1 game in South Korea, one of the countries with the fiercest mobile game competitors. Supercell invested in that market, launching a place called the Supercell Lounge in Seoul, where players could gather to experience the game together. Supercell also held the first ever Brawl Stars World Finals and filled up the city of Busan with Brawl characters. “Esports will be a significant part of Brawl Stars going forward,” Paananen said. “For us, it is very important that everyone will get a chance to participate. Because of this, just a few weeks ago, the Brawl team launched a monthly challenge in-game, through which anyone playing the game can try to qualify for this year’s championship! It has been hugely popular so far, more than two thirds of Brawl players have participated at this point.” Clash Evolution Above: Clash of Clans debuted in 2012. This year Clash of Clans will celebrate its 8th anniversary, while Clash Royale turns four. Both teams continue to work hard to improve the games for players, Paananen said. Clash of Clans received two big updates with Town Hall 13 and Builder Base 9, both giving players a bunch of new stuff to build and battle with, including new troops and a first new hero since 2015: the Royal Champion. Clash of Clans and Clash Royale also introduced seasons and seasonal challenges as well as game passes. New year, new teams, new games Above: The Finns are coming out with more games. 
Paananen said that new games are in various stages of development. Some are close to beta, some will be killed before anyone sees them. “But, with some luck, you should see a game or two from us in beta this year,” he said. “And then it is up to you, if enough players like them, we will release them globally. This is still exactly how I feel about our pipeline of new games today! Lots of exciting new games in the works, and with some luck our teams will decide to release one or two of them to beta later this year.” Paananen also gave thanks to the community. “We are extremely grateful for all of the support and love we get from you,” he wrote. “We are inspired by the work you do every day and it drives us to work even harder to deliver even better games and experiences for you.” He noted that Supercell also gave back to the community with the launch of Hive, a new kind of coding school in Helsinki inspired by the pioneering work done at École 42 in Paris. Hive is a tuition-free school where anyone can apply via an online test. It has no teachers or classes. Instead, students learn from each other by solving problems of increasing complexity. “The launch was a massive success! So far 3,395 people have successfully completed the online test, out of which 142 students made it in and are now learning to code,” Paananen wrote. “We wanted to create an inclusive environment where everyone could learn how to code. So, it was great to see so many different people from different walks of life attending: Hivers’ backgrounds range from farming and kindergarten teaching, to musicians and business owners, to janitors and healthcare professionals, from 18 to 55 years of age.” Lastly, he said that Supercell embarked on making the company carbon neutral. In September 2019, during the UN Climate Action Summit, a total of 21 games companies launched the Playing For The Planet Alliance. Supercell on its part committed to offsetting 200% of its direct CO2 emissions last year, and 100% of those generated by players as they play the company’s games. “So here we are about to start our 2nd decade. Ten years ago, when we started, we were inspired by companies like Blizzard, Nintendo and Pixar, that have been able to consistently create successful entertainment products that are loved by millions all over the world,” Paananen wrote. “We aspire to be like these companies that have lasted for decades, in Nintendo’s case for more than a hundred years, and will continue to live for many more. It is crazy to think that we are about to close our first 10 years as a company. Obviously it is early (we are not even close to 100 years yet!) but this is a nice milestone for us.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,026
2,020
"InvestGame breaks down $20.5 billion in 2020 game deals | VentureBeat"
"https://venturebeat.com/2020/10/07/investgame-breaks-down-20-5-billion-in-2020-game-deals"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages InvestGame breaks down $20.5 billion in 2020 game deals Share on Facebook Share on X Share on LinkedIn Fortnite is getting second-generation real-time ray tracing. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The game industry has seen an estimated $20.5 billion in acquisitions, investments, and IPOs in the first nine months of the year, according to game investment tracking firm InvestGame. This figure points to the huge amount of activity that has occurred as gameplay surges during the pandemic. The data comes from InvestGame, which is run by investment specialist Sergei Evdokimov and Anton Gorodetsky. (They both work at My.Games, but the company doesn’t produce the report.) While the firm hasn’t tracked past years, the current figures show a stunning amount of investor activity in games while other sectors of the economy are falling apart. InvestGame tracks deals among game developers, publishers, platform and tech companies, esports, hardware, retail, outsourcing, and other related areas. But it doesn’t including gambling companies in its definition of games. The data also only covers estimates of publicly announced and closed deals in the game industry. Deals in which the amounts aren’t disclosed are either estimated or not included. The numbers are estimates of the total value of tracked deals, and these numbers don’t include the biggest deal: Microsoft’s purchase of ZeniMax for $7.5 billion, which has not closed, Evdokimov said in an interview with GamesBeat. “We track all the pieces of information in the market,” he said. “You see the numbers show it’s very huge and growing, and it’s very active. Our mission is to provide the data for investors.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Game industry investments, acquisitions, and public offerings. The first three quarters of the year saw 211 gaming deals, 112 platform and tech deals, 89 esports deals, and 25 deals in other categories. Those deals generated $15.35 billion in value for gaming companies (the game developers and publishers), $3.97 billion in platform and technology companies, $685 million for esports companies, and $504 million for others. InvestGame also broke down the deal types. Public offerings generated $9.2 billion in value, acquisitions generated $6.6 billion in value, and private investments generated $4.6 billion in value. 
Among these types, 51 were public offerings, 132 were acquisitions, and 254 were private investments. While the pandemic has devastated many industries, gaming has benefited from people being stuck at home with nothing to do. They have watched a lot of Netflix, but they have also played a lot of online and mobile games. As investors saw this happening, they initiated a lot of deal-making. The deals slowed in May as investors adjusted to making deals while sheltered in place. But the activity picked up again in June and saw big spikes as Microsoft bought Bethesda and Unity went public in an IPO. The most active players on this market were Play Ventures, Galaxy EOS VC, Bitkraft Ventures, Sisu Game Ventures, and Makers Fund — together accounting for 54 deals, or 51% of the total. KKR led two deals, including $1.53 billion invested in Epic Games (amount excluding Sony’s $250 million in Epic) and $450 million invested in Zwift. Andreessen Horowitz led six deals, including the $150 million investment in Roblox. Private investments Above: The most active game-focused VC funds. In particular, private investment activity significantly dropped in the wake of the COVID-19 outbreak in May but started gradually recovering afterward, both in terms of the number of deals and in terms of combined value. InvestGame also said that while early-stage VC activity remained at a level of 5-7 deals per month even after the pandemic, later-stage VC and corporate activity fell to 1-2 deals per month up until July. During the first nine months, game developers and publishers have poured $2.7 billion into the gaming market, with 100 transactions closed. That includes 69 closed preseed deals (for the earliest startups), seed, and series A rounds (first institutional investor rounds). Also among these transactions were nine closed series B or later rounds and 22 corporate investments. While U.S. companies dominated the activity with 90% of the total value at the later stage and corporate rounds, only 30% of early-stage VC funds have been raised by U.S. startups. Three deals accounted for 78% of total capital injection. Scopely raised an estimated $200 million at a $1.7 billion valuation, with rumors swirling that it will soon raise more money at a $3 billion valuation. Roblox raised $150 million at a $4 billion valuation and is contemplating going public early next year at an $8 billion valuation, and Epic Games raised $1.78 billion at a $17.3 billion valuation. The most active category in the number of deals is mobile, with 34 announced deals and $103 million raised. The biggest deal in this category was Nifty Games, which raised $12 million from March Capital Partners and others. Mobile gaming remains a fragmented market, and average investment amounts range from $3 million for seed investments to $6 million for series A rounds. Roughly 50% of early-stage VC funds are putting money into multiplatform games. One big outlier is Playco, which raised $100 million at a $1 billion valuation to make instant games. Other big investments included Russia’s 110 Industries raising $20 million and ArtCraft Entertainment raising $11.7 million to finish the online game Crowfall. The most active VC fund was Makers Fund, but the activity extends well beyond any single investor. “It shows a lot of people around the world believe in gaming,” Evdokimov said. Acquisitions Above: Game M&A in the first nine months of 2020. Acquisitions are strong this year, with deals such as Zynga’s $2 billion purchase of Peak Games in Turkey. 
This period also saw 41 mobile game company acquisitions with a combined value of $4.4 billion. PC and console game deals amounted to $10.5 billion. Another big one was Tencent’s $1.4 billion purchase of Warframe owner Leyou Technologies. Tencent has been the most active buyer, but another active company is Sweden’s publicly traded Embracer Group (THQ Nordic is one of its labels), as well as the Stilfront Group. AppLovin bought Machine Zone for $500 million. In multiplatform and VR/AR games, 12 deals occurred at a combined value of $344 million. The most active acquirers are Microsoft, Zynga, and Tencent. Embracer Group led 11 acquisition deals, with the largest one being the purchase of Saber Interactive for $525 million. The most active strategic investors are Tencent, MTG, My.Games, Supercell, Kakao Games, and AppLovin. “The most interesting thing is that COVID did not slow down M&A at all,” Evdokimov said. “At the same time, the number of early-stage and late-stage VC deals did drop significantly. And then there was a huge amount of capital inflow into gaming from the public markets.” Public markets Above: Unity employees and John Riccitiello ring the NYSE opening bell. Public market activity stopped during the January through May time frame as the pandemic sent public markets into shock. During that time, Stillfront and Embracer raised money through PIPE deals (private investment in public equities), where private investors put money into publicly traded companies. The market started to recover in June, with multiple companies going public in Asia. Archosaur Games raised $280 million, and Kakao Games raised $330 million. Activision Blizzard raised $2 billion in senior debt as a refinancing move. Unity Technologies raised $1.3 billion at a $13.6 billion valuation in mid-September. The stock is valued at $21.8 billion now, which bodes well for more game IPOs. But Unity is not included in the public offering total, as it is a tech platform. Skillz has yet to complete its deal, but the skill-based gaming company is going public through a special purpose acquisition company (SPAC) at a $3.5 billion valuation. Evdokimov said his firm is looking back at historical information to get year-over-year data. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,027
2,019
"Cuphead surpasses 5 million copies sold | VentureBeat"
"https://venturebeat.com/2019/09/30/cuphead-surpasses-5-million-copies-sold"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cuphead surpasses 5 million copies sold Share on Facebook Share on X Share on LinkedIn The Forest Follies level in Cuphead. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. After looking forward to Cuphead for years, it’s wild that we live in a world where it came out two years ago. But the stylish, run-‘n’-gun shooter didn’t just stumble out — it skyrocketed to massive success. Studio MDHR revealed in a tweet yesterday that Cuphead has hit 5 million copies sold across all platforms. And it is celebrating that milestone with a discount and more. “Cuphead turns two today, and we’re so humbled to announce: It has officially gone five-times platinum,” reads Studio MDHR’s tweet. “Starting now the game is 20% off on all platforms for a full week. And stay tuned because we have five days of fun and giveaways planned to celebrate 5 million copies sold.” That 5 million copies sold figure is impressive for just about any game of any scope. But it is especially significant for a smaller development team. It reveals that Cuphead achieved breakthrough appeal to a wider audience. As for the “five days of fun,” MDHR hasn’t provided any details yet. But fans can likely catch any information about that celebration on the company’s Twitter feed. And if you’ve held off on purchasing the game, you can get it on Xbox One, PC, or Switch for $16 through this week. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Play Cuphead while you charge your Tesla In addition to its two-year anniversary and huge sales numbers, MDHR also revealed last week that it is partnering with Tesla. As part of the latest Tesla update, Model S, X, and 3 automobile owners can play Cuphead on their dashboard system. The idea is that you’ll play the game while charging the Tesla’s battery, which is a process that often takes more than 20 minutes. So yeah, welcome to the future. Cuphead: Tesla Edition includes the first level for free. And honestly, it doesn’t even need more than the first few jumps of the tutorial to keep GamesBeat lead writer Dean Takahashi busy for hours. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. 
"
16,028
2,020
"Salesforce in talks to acquire Slack | VentureBeat"
"https://venturebeat.com/2020/11/26/salesforce-in-talks-to-acquire-slack"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce in talks to acquire Slack Share on Facebook Share on X Share on LinkedIn Slack logo at Slush 2018 conference in Helsinki, Finland Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. ( Reuters ) — Cloud-based software company Salesforce.com is in talks to acquire workplace messaging app Slack as it seeks to expand its offerings to businesses, people familiar with the matter said on Wednesday. Salesforce’s bid comes as Slack struggles to fully capitalize on the switch to remote working during the COVID-19 pandemic in the face of fierce competition from Microsoft’s Teams and other workplace apps. Slack shares ended trading on Tuesday at $29.57, well below the $42 high they reached on their first day of trading last year. Salesforce sees the potential acquisition as a logical extension of its enterprise offerings , the sources said. The price it is offering for Slack was not disclosed, though one of the sources said Salesforce would pay cash for the deal, rather than using its stock as currency. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! If the negotiations conclude successfully, a deal could be announced before Slack reports quarterly earnings on December 9, one of the sources added. Neither Slack nor Salesforce responded to requests for comment. Slack shares jumped 24% to $36.58, giving the company a market capitalization of $21 billion, while Salesforce fell 2.7% after the Wall Street Journal first reported that the two companies had held deal talks. Slack has benefited from companies relying more on information technology systems to keep their workers connected during the pandemic. Its app has been installed about 12.6 million times so far this year, up approximately 50% from the same period in 2019, according to analytics firm Sensor Tower. But the economic fallout of the pandemic has forced Slack to give discounts and payment concessions to many of its customers who have had to make cost cuts. Seeking to save money, some companies have also been switching to Teams, which comes with many of Microsoft’s office software packages. “I think Microsoft Teams has been able to capitalize on the opportunity better than Slack, partly because they give it away for free as a bundle,” said Rishi Jaluria, an analyst at research firm DA Davidson and Co. 
“Now Slack realizes that they might be able to get greater penetration as part of a larger company.” Slack’s billing growth, a key indicator of future revenue, slowed in the three months leading up to the end of July. Salesforce meanwhile has been thriving financially during the pandemic. It raised its annual revenue forecast in August as the pandemic spurred demand for its online business software that supports remote work and commerce. Salesforce has been beefing up its cloud business through acquisitions and had spent more than $16 billion last year to fend off competition from rivals such as Oracle and German competitor SAP. ( Reporting Greg Roumeliotis and Krystal Hu in New York, additional reporting by Subrat Patnaik and Eva Mathews in Bengaluru. Editing by Arun Koyyur and Jan Harvey. ) "
16,029
2,019
"Salesforce's Service Cloud adds automated case routing, reply recommendations, and other AI features | VentureBeat"
"https://venturebeat.com/2019/03/19/salesforces-service-cloud-adds-automated-case-routing-reply-recommendations-and-other-ai-features"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce’s Service Cloud adds automated case routing, reply recommendations, and other AI features Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Salesforce wants to supercharge customer service workflows with artificial intelligence. Toward that end, the San Francisco company today revealed a slew of AI-driven features headed to Service Cloud, its customer relationship management platform, including article recommendations and automated case routing. They’re well-timed, says Salesforce. According to its 2019 State of the Service report, 88 percent of “high-performing” service organizations are poised to make significant investments in service this year, while 82 percent of executives say that their company’s customer service must “transform” to stay competitive. “We are living in a new age of service where today’s customer expects great experiences at every stage of the buying cycle and across any channel, making the agent’s role more critical and more challenging than ever before,” Bill Patterson, executive vice president and general manager of Service Cloud at Salesforce, said. “With these new Service Cloud innovations, we are giving agents what need to rise to the occasion — a console built for modern customer service that is intelligent, collaborative and connected.” In the coming weeks, Service Cloud’s agent console will get Einstein Reply Recommendations, which uses natural language processing to “instantly” suggest agent responses over chat and messaging. Furthermore, Salesforce’s Einstein AI platform — which made its debut in September 2016 , and which now powers more than four billion daily predictions across over 30 bespoke services — will begin to use agents’ Service Cloud interactions to inform knowledge article recommendations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Einstein Next Best Action, another forthcoming addition to the agent console, will tap rules and predictive intelligence to recommend steps most likely to boost satisfaction and cross-selling, like offering a customer a complimentary extended warranty on a discontinued product. Meanwhile, Einstein Case Routing will automatically complete case details and route reports to queues and agents based on a range of criteria, including qualifications, areas of expertise, and historical success rates. That’s not all that’s coming down the pipeline. 
Salesforce this week also took the wraps off of Quip for Service, which enables agents to coauthor documents, bring in colleagues across company divisions, and host live conversations directly within case records. As the name implies, it ties into Quip, Salesforce’s cloud-based document suite that competes with the likes of Google Docs, Zoho Docs, and Microsoft 365. Zenconnect, a Service Cloud customer that piloted the new features, claims it increased productivity across the board. “Tapping into Einstein helps us get maximum value from the information stored in our Salesforce CRM, and with that data we have been able to optimize all of our customer service department’s internal processes,” said CEO Yann Mercier. “After implementing Einstein AI to automatically classify cases, our customer service agents saw 25 percent [efficiency] gains, freeing them up to focus on higher-level projects.” The newly announced features follow on the heels of Einstein Voice , a service that enables sales managers to dictate memos and navigate cloud services hands-free, and Salesforce Einstein Voice Bots, branded chatbots built on the Einstein Bot Platform that work with Alexa, the Google Assistant, and other voice assistants. In related news, Salesforce recently rolled out AI-powered features for Pardot and High Velocity Sales , and in July made Einstein bots for businesses generally available. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,030
2,020
"Slack's shares fall 18% after quarterly billing growth slows | VentureBeat"
"https://venturebeat.com/2020/09/09/slacks-shares-fall-18-after-quarterly-billing-growth-slows"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Slack’s shares fall 18% after quarterly billing growth slows Share on Facebook Share on X Share on LinkedIn Slack logo at Slush 2018 conference in Helsinki, Finland Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — Slack’s billing growth, a key indicator of future revenue, slowed in the second quarter, and the workplace messaging app owner said it took an $11 million hit in the first half due to the COVID-19-related concessions. The company said it offered credits, payment in installments, and billing duration of less than a year to help users over the economic downturn triggered by the health crisis, sending its shares down 18% after the bell. In the previous quarter, Slack had signaled weak demand from worst-affected industries, like retail and travel, prompting it to withdraw its full-year billings target. “In Q2, growth in many of our customers contracted or flattened versus normal seasonal trends. In August, growth began to trend at more typical seasonal levels,” CFO Allen Shim said in a call with analysts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Slack’s quarterly billings rose 25% but fell short of the 38% growth it posted in the first quarter. Billings are an important growth metric for a subscription-based platform like Slack. Its second-quarter revenue topped expectations by nearly $7 million, but that overachievement was not mirrored in the full-year outlook. Slack’s annual revenue forecast of $870 million to $876 million was roughly in line with expectations of $872.3 million. Excluding items, the company broke even, compared with analysts’ average estimate of a loss of 3 cents per share, according to IBES data from Refinitiv. ( Reporting by Neha Malara in Bengaluru. Editing by Maju Samuel and Arun Koyyur. ) VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,031
2,018
"Cloud data warehouse company Snowflake Computing raises $450 million at $3.5 billion valuation | VentureBeat"
"https://venturebeat.com/2018/10/11/cloud-data-warehouse-company-snowflake-computing-raises-450-million-at-3-5-billion-valuation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud data warehouse company Snowflake Computing raises $450 million at $3.5 billion valuation Share on Facebook Share on X Share on LinkedIn Snowflake Homepage Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Cloud data warehouse company Snowflake Computing has raised a whopping $450 million in a round of growth funding led by Sequoia Capital, with participation from Madrona Venture Group, Redpoint Ventures, Altimeter Capital, Capital One Growth Ventures, Sutter Hill Ventures, Wing Ventures, Iconiq Capital, and Meritech Capital. Founded in 2012, San Mateo-based Snowflake sells database software that runs on Amazon Web Services (AWS) and, as of a few months back , Microsoft Azure. Its core raison d’être is a repository for holding and querying data, making it available for processing and analyzing by myriad applications. It helps companies make sense of their wealth of data and spot patterns and trends, for example. Data Goliaths There are, of course, many data warehousing solutions out there, including the likes of Microsoft’s SQL Data Warehouse , Google’s BigQuery , and Amazon Redshift , not to mention incarnations from more traditional players like Oracle and SAP. But Snowflake’s pitch is that its product has been purpose-built from the bottom up with the cloud in mind, while everyone within an organization can get “priority access” to the database. “We were built for the cloud from the beginning, meaning that we can take full advantage of the nearly limitless capacity for both data storage and computation that the cloud provides,” Snowflake VP of product Christian Kleinerman told VentureBeat. “In other words, customers can choose how fast they want a question answered, and they don’t have to make difficult compromises about who in the organization gets priority access to the data warehouse.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Snowflake has already raised nearly $480 million in venture capital (VC) funding, including a $100 million round last year and a further $263 million tranche back in January. And as a result of this latest raise, the company said that it now has a pre-money valuation of $3.5 billion — up from the $1.5 billion valuation at its previous round of funding this year. “Learning to be data driven is an imperative for every organization today,” added Snowflake CEO Bob Muglia. “A data-driven organization must be in control of their data. 
Snowflake is the most powerful relational database in the world for analytics solutions. That power delivers the security, control, and the business answers needed by data-driven organizations. This is driving breathtaking growth for our company, and that growth requires capital.” Big money The global cloud data warehouse market is expected to become a $20 billion industry by 2020, up 40 percent from the $14 billion it was reportedly worth last year, according to IDC data cited by Snowflake. For example, Palo Alto-based Yellowbrick Data emerged from stealth a few months back with $44 million in funding from big-name investors including GV, Samsung Ventures, DFJ, Menlo Ventures, and Third Point Ventures. Its data warehousing technology can currently be deployed through on-premises data centers and private clouds, though it will be adding public clouds to the mix next year. “It’s not often a SaaS (software-as-a-service) solution emerges with so many benefits that enable enterprises to be data-driven,” added Sequoia Capital partner Carl Eschenbach. “Snowflake has truly disrupted the data warehouse market, but the best has yet to come.” With a chunky $450 million more in the bank, Snowflake said that it plans to “expand its multi-cloud strategy,” and grow its sales and engineering teams across the U.S. and beyond. Does this additional funding also mean that it’s ready to open up to the other big public cloud platform in the room — Google Cloud? Maybe. “Our customers always guide our roadmap, and we have certainly heard an increase in requests for Google Cloud Platform,” Kleinerman said. “We’re looking into it, but it’s too early to comment or provide formal plans.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,032
2,019
"Amazon launches AWS Data Exchange for tracking and sharing data sets | VentureBeat"
"https://venturebeat.com/2019/11/13/amazons-aws-data-exchange-launches-with-over-80-data-providers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches AWS Data Exchange for tracking and sharing data sets Share on Facebook Share on X Share on LinkedIn AWS CTO Werner Vogels onstage November 29, 2018 at re:Invent. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today announced a new addition to its growing stable of cloud products: Amazon Web Services (AWS) Data Exchange. The company is pitching it as a way for AWS customers to “securely” find, subscribe to, and use third-party data from “category-leading” brands, including Change Healthcare and Foursquare. On the data provider side of the equation, it in theory eliminates the need to build and maintain infrastructure for storage, delivery, billing, and entitling. More than 80 data providers have contributed over 1,000 products containing data at launch, Amazon says. Reuters is making available its curated selection of over 2.2 million unique news stories in multiple languages, while Dun & Bradstreet is opening up its corpus of more than 330 million global business records. “Customers have asked us for an easier way to find, subscribe to, and integrate diverse data sets into the applications, analytics, and machine-learning models they’re running on AWS. Unfortunately, the way customers exchange data hasn’t evolved much in the last 20 years,” said AWS Data Exchange general manager Stephen Orban. “AWS Data Exchange gives our customers the ability to quickly integrate third-party data in the workloads they’re migrating to the cloud, while giving qualified data providers a modern and secure way to package, deliver, and reach the millions of AWS customers worldwide.” To this end, AWS Data Exchange enables customers to select from third-party data sources in AWS Marketplace. Once subscribed, they’re able to use the AWS Data Exchange API or console to funnel data directly into Amazon Simple Storage Service (Amazon S3). And each time a provider publishes a new revision of their data, AWS Data Exchange will notify them via a CloudWatch Event so that the revision can be propagated to applicable data lakes, apps, and AI models. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data subscription costs are consolidated in existing AWS invoices, and customers can ask data providers to deliver existing subscriptions to them using AWS Data Exchange at no cost. 
As for data providers, they’re able to publish free or paid products under terms they specify, or they can issue private offers with custom terms for specific AWS customers. Alternatively, they can opt to approve each subscription for compliance or to review intended uses cases. AWS Data Exchange delivers daily, weekly, and monthly reports detailing subscription activity to providers, and Amazon imposes restrictions on the sort of data that can be made available to customers. Perhaps unsurprisingly, sensitive personal data (like health information) and any information that’s not “already lawfully and publicly available” is prohibited from the platform. “Being a provider for AWS Data Exchange enables companies to directly access Foursquare’s audiences and places data sets — which is derived from our understanding of 220 million unique consumers, 100 million devices, and 60 million global commercial venues — in order to strengthen customer intelligence, build context-rich applications, and assess category and chain trends,” said Foursquare senior vice president of product Josh Cohen. “AWS Data Exchange provides us with secure access to customers at incomparable scale, while also serving as an easy data ingestion and activation vehicle for data usage.” AWS continues to be a growth driver for Amazon, albeit to a lesser degree than in years past. Amazon revealed during its most recent quarterly earnings that the cloud computing division accounted for about 13% of Amazon’s total revenue for the quarter, growing 45% in sales to $9 billion. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,033
2,020
"The 3 AI operating models -- and how to know which works best for you (VB Live) | VentureBeat"
"https://venturebeat.com/2020/11/18/the-3-ai-operating-models-and-how-to-know-which-works-best-for-you-vb-live"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live The 3 AI operating models — and how to know which works best for you (VB Live) Share on Facebook Share on X Share on LinkedIn Presented by Dataiku AI initiatives need more than a Center of Excellence. Being able to deploy AI at scale starts with choosing the operating model for your use cases and business objectives. For a deep dive into the three primary models, their pros and cons, plus real-world case studies, don’t miss this VB Live event. Register here for free. “No one should select an AI operating model just to select an AI operating model,” says Jennifer Roubaud-Smith, VP, global head of strategic advisory at Dataiku. “No one should set up centers of excellence just to set up centers of excellence. It should always be about the kind of business challenges you want to solve.” Whether your objective is to solve some key enterprise-wide business challenges, or whether your objective is to transform entirely the way everyone in your organization works with analytics, your operating model is central and foundational to the success of that initiative. Integrating analytics and business In the early days of advanced analytics and AI, teams tended to focus on projects that were very innovative or technically challenging, which called for a center of excellence model. But today, as Robaud-Smith explains, more and more organizations and analytics teams understand that if you want to survive, if you want to be there in two years when the next CEO arrives and looks at your results, your AI initiatives need to be adding business value to the organization. “Whatever operating model you choose, the most important thing is to ensure that there isn’t a disconnect between the analytics teams and the business, and that the mindset of the analytics team is wired in the right way,” Roubaud-Smith says. “Whether it’s centered within a line of business or centered in the entire organization, you need people who are accountable for the success of driving awareness around analytics and driving usage around analytics-powered products.” Ideally, those people need to come from the business, know the business well, and act as engagement or business translators between the business and the analytics teams. The 3 operating models A center of excellence is one type of operating model to drive your analytics strategy. As the importance of analytics to the future of any organization became undeniable, CoEs grew in prominence, providing an organizational north star for AI and analytics. The AI and analytics talent is unified and located in one centralized department. 
This department then acts almost like a consulting firm for the rest of the organization on topics related to AI and machine learning. In the decentralized model, the analytics team sits within the lines of business and works very closely with SMEs, or might even be SMEs themselves. There might be an IT department centrally working with them, or the IT team might also be inside the business unit. The third model is the hub and spoke model. This is an approach aiming to have the best of both worlds, looking for a way to get the benefits of having a centralized team, while keeping analytics talent embedded within the business. The hub and spoke model has one central team working in coordination with the AI and analytics talent that is scattered across the organization. Choosing between these requires going back to the specifics of your organization’s business problems, objectives, and goals (which you’ll learn more about in the upcoming webinar). Facing the biggest challenges One of the biggest challenges business leaders face in establishing a functional AI operating model is simply that it’s not something an organization is prioritizing at the very top. Instead, it’s something that lives outside the C-suite while others try to make the current organizational setup support AI initiatives. “The CEO needs to understand that succeeding with AI at scale in a sustainable way requires a big transformation with a capital ‘T,’” Roubaud-Smith says. “There is a whole change management aspect to AI and analytics that will make the organization successful, or less so.” That means managing the AI strategy as a clear program, setting expectations with leadership, and sharing progress on how the entire program is going. Keeping that communication open and fluid. It requires actively finding advocates within the business to be champions of the program. Another limiting factor is when the approach is too narrow and fails to establish a model that will ensure smooth collaboration between the business, the analytics, and the IT teams all together. Finally, an AI strategy is often launched with too small a team. From the outset, it’s important to establish a foundation that will allow your organization to scale without creating a lot of governance challenges as you grow. Whatever the choices you make, there is one clear understanding to start with, Roubaud-Smith says. “To a data leader, you can’t succeed without the business, and for a business exec, you can’t succeed without the data leader,” she says. “If we’re talking about sustainable, ambitious business value, then they can’t do without each other.” Don’t miss out! Register here for free. Attendees will hear: A detailed look at each of the three primary operating models for AI initiatives The pros and cons of each operating model for a variety of business uses Case studies from companies that have implemented each type of model And more Speakers: Beaumont Vance , Head of AI, Advanced Analytics and Emerging Technology DevOps, TD Ameritrade Jennifer Roubaud-Smith , VP, Global Head of Strategic Advisory, Dataiku Kyle Wiggers , AI Staff Writer, VentureBeat (moderator) The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
16,034
2,019
"U.S. gig workers get support from private companies, but public policy lags | VentureBeat"
"https://venturebeat.com/2019/06/02/us-gig-workers-get-support-from-private-companies-but-public-policy-lags"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest U.S. gig workers get support from private companies, but public policy lags Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On April 16, the EU parliament implemented regulations granting a set of minimum rights for gig economy workers, including basic entitlements and protections such as paid training time and compensation for projects despite potential cancellations. These new policies were significant, representing the first EU legislation in 20 years to set minimum workers’ rights. Businesses in EU-member states are now working to ensure compliance with these laws, which they have three years to achieve. That the EU is assuming a leadership position here is not surprising; EU workers have been known to enjoy greater job protections than their American counterparts. But the fact that the U.S. government has not taken steps to build in similar protections for our own gig economy workers is cause for major concern, and it’s time for policy makers to step up to the plate. For many years, the alternative work arrangement — or “side hustle,” which is essentially anything other than a 9-to-5 salaried job — was seen as little more than an effort to make quick cash to supplement an existing full-time income. That is no longer the case, as gig economy workers and independent contractors are becoming mainstream. According to Financial Health for the Future of Work , a report issued in January by PayPal, over 90 percent of net employment growth in the U.S. between 2005 and 2015 fell in the alternative work category. In other words, almost all of the 10 million jobs created in this time frame were not traditional 9-to-5 jobs. Ernst & Young predicts that by 2020, one in five U.S. workers will be a member of the contingent workforce. MetLife’s 2019 U.S. Employee Benefits study recently found that 23 percent of Americans with full-time traditional jobs intend to switch to gig work over the next five years. Another 14 percent are considering it. Private industry comes through on insurance, health care, benefits With independent contractors emerging as a considerable portion of the overall available skills pool, private industry has taken the lead in addressing the needs of these workers and offering affordable options for their basic protections. 
For example: Trupo , which is partially owned by the New York-based Freelancers Union, provides a variety of short-term disability insurance packages to independent contractors through a sliding scale model with different levels of protection for different prices. San Francisco-based Stride Health connects independent contractors and part-time workers with cost-effective healthcare plans, based on an analysis of individual workers’ needs and financial circumstances. Bunker allows independent contractors to acquire insurance just for the term of work contracts, making policy length options much more flexible and conducive to contingent workers. Traditional banking leaders are also addressing the new reality. Goldman Sachs offers a payroll deduction-based IRA to ride-sharing service drivers through its fintech group Honest Dollar , helping these drivers save for retirement. And earlier this year, Prudential started developing tools and initiatives designed to help replicate some of the benefits traditional workers have. This includes Covered1099 , a web-based tool enabling independent workers to better manage their incomes in a variety of ways, including automating the process of tax withholding, saving for paid time off or time between gigs, and purchasing insurance products. Public policy is still failing these workers For all the private industry progress, U.S. gig economy workers still lack access to a set of free, basic government-mandated job protections and other programs to promote financial health. Just because independent contractors are working or filling positions for shorter durations does not mean they are immune to sickness or an economic downturn that could significantly diminish (if not altogether wipe out) their incomes. The prospect of many independent contractors lacking access to retirement savings is also troublesome, especially given the rapid growth in 55+ gig economy workers. Paradoxically, many gig economy workers are forced to pay a self-employment tax, which drains a larger percentage of their incomes than the typical 9-5 worker. The same Ernst & Young report cited above found that 44 percent of organizations expect more regulation in relation to the contingent workforce. But to date, U.S. public policy has not delivered, leaving the door open for potentially disastrous consequences. Consider unemployment insurance. During and immediately after the Great Recession of 2008, extended unemployment insurance was one of the only factors that prevented a decline to a full-blown depression. Also consider that this growing segment of workers often lacks access to employer-provided healthcare. With the ACA no longer imposing penalties for not having health insurance, we could see an increase in the uninsured, the ranks of which will expand as more workers join the independent contractor groundswell. Many long-established systems, including social security, labor protection laws, retirement savings and even healthcare, are designed for the well-being of full-time employees. U.S. public policy must catch up to private industry and offer basic protections to gig economy workers, a rapidly growing segment of the U.S. workforce. More specifically, policy makers should consider a bill for the rights, protections and general well-being of the self-employed, which all workers – both gig economy and traditional – should get behind. Hussein Ahmed is CEO at Oxygen , a digital bank for independent contractors. 
"
16,035
2,019
"Google's DeepMind is using machine learning to predict wind turbine energy production | VentureBeat"
"https://venturebeat.com/2019/02/26/googles-deepmind-is-using-machine-learning-to-predict-wind-turbine-energy-production"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s DeepMind is using machine learning to predict wind turbine energy production Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google’s DeepMind is using machine learning to predict the performance of its wind turbines 36 hours in advance. The prediction of wind turbine performance for turbines in the central United States more than a day in advance has led to a roughly 20 percent increase in the value of wind energy, Google and DeepMind said in a joint blog post today. The model is trained using weather data and historical wind turbine performance data. “Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance. This is important, because energy sources that can be scheduled (i.e. can deliver a set amount of electricity at a set time) are often more valuable to the grid,” the post reads. Little data was provided on the overall accuracy of the system’s predictions thus far, but Google and DeepMind want to make wind power more reliable with machine learning in order to make it a more attractive form of energy in the future. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide,” the blog post reads. Google set a target to reach 100 percent renewable energy by 2017 , and last year signed a 10-year deal with a Finnish energy company to fuel its Nordic data centers. Google also uses AI to make cooling servers in its data centers more efficient. This is the most recent form of artificial intelligence to be applied to make wind power cheaper and more efficient. Earlier this month, a group of researchers introduced WaveletFCNN , a classification model to predict the buildup of ice on wind turbines. The stop of energy production due to ice damage to wind turbine blades can reduce energy production up to 20 percent, according to wind consultancy firm TechnoCentre Éolien (TCE). VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
16,036
2,019
"DeepMind and Waymo collaborate to improve AI accuracy and speed up model training | VentureBeat"
"https://venturebeat.com/2019/07/25/deepmind-and-waymo-collaborate-to-improve-ai-accuracy-and-speed-up-model-training"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind and Waymo collaborate to improve AI accuracy and speed up model training Share on Facebook Share on X Share on LinkedIn A self-driving vehicle developed by Google parent company Alphabet's Waymo. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI models capable of reliably guiding driverless cars typically require endless testing and fine-tuning, not to mention computational power out the wazoo. In an effort to bolster AI algorithm training effectiveness and efficiency, Google parent company Alphabet’s Waymo is collaborating with DeepMind on techniques inspired by evolutionary biology, the two companies revealed in a blog post this morning. As Waymo explains, AI algorithms self-improve through trial and error. A model is presented with a task that it learns to perform by continually attempting it and adjusting based on the feedback it receives. Performance is heavily dependent on the training regimen — known as a hyperparemeter schedule — and finding the best regimen is commonly left to experienced researchers and engineers. They handpick AI models undergoing training, culling the weakest performers and freeing resources to train new algorithms from scratch. DeepMind devised a less labor-intensive approach in PBT (Population Based Training), which starts with multiple machine learning models initiated with random variables (hyperparameters). The models are evaluated periodically and compete with each other in an evolutionary fashion, such that underperforming members of the population are replaced with “offspring” (copies of better-performing members with slightly mutated variables). PBT doesn’t necessitate restarting training from scratch, because each offspring inherits the state of its parent network, and the hyperparameters are updated actively throughout training. The net result is that PBT spends the bulk of its resources training with “good” hyperparameter values. PBT isn’t perfect — it tends to optimize for the present and fails to consider long-term outcomes, disadvantaging late-blooming AI models. To mitigate this, researchers at DeepMind trained a larger population and created subpopulations called niches, in which algorithms are only allowed to compete within their own subgroups. Lastly, the team directly rewarded diversity by providing more unique models an edge in the competition. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
In several recent studies, DeepMind and Waymo applied PBT to pedestrian, bicyclist, and motorcyclist recognition tasks with the goal of investigating whether it could improve recall (the fraction of obstacles identified over the total number of in-scene obstacles) and precision (the fraction of detected obstacles that are actually obstacles and not false positives). Ultimately, the companies sought to train a single AI model to maintain recall of over 99% while reducing false positives. Waymo reports that these experiments informed a “realistic” framework for evaluating real-world model robustness, which in turn informed PBT’s algorithm-selecting competition. They also say the experiments revealed the need for fast evaluation to support evolutionary competition; PBT models are evaluated every 15 minutes. (DeepMind said it employed parallelization across “hundreds” of distributed machines in Google’s datacenters to achieve this.) The results are impressive. PBT algorithms managed to achieve higher precision, reducing false positives by 24% compared to their hand-tuned equivalents, while maintaining a high recall rate, Waymo claims. Moreover, they saved time and resources — the hyperparameter schedule discovered with PBT took half the training time and used half the computational resources. Waymo says it has incorporated PBT directly into its technical infrastructure, enabling researchers from across the company to apply it with a button click. “Since the completion of these experiments, PBT has been applied to many different Waymo models and holds a lot of promise for helping to create more capable vehicles for the road,” wrote the company. “Traditionally, [AI] can only be trained using simple and smooth loss functions, which act as a proxy for what we really care about. PBT enabled us to go beyond the update rule used for training neural nets, and toward the more complex metrics optimizing for features we care about.” "
16,037
2,020
"Ericsson breaks 5G millimeter wave speed record with 4.3Gbps downloads | VentureBeat"
"https://venturebeat.com/2020/02/13/ericsson-breaks-5g-millimeter-wave-speed-record-with-4-3gbps-downloads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ericsson breaks 5G millimeter wave speed record with 4.3Gbps downloads Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Given the recently rapid pace of major tech innovations, it’s easy to forget that only several years have passed since it became feasible to stream 4K videos over wired home broadband connections. Now Ericsson says its latest 5G hardware will be able to transfer a full hour of 4K video in as little as 14 seconds, with consumer availability coming later this year. Though Ericsson opted to pull out of this year’s MWC trade show in Barcelona , the announcement of a new record 5G speed is in keeping with its tradition of annual breakthroughs timed to coincide with the annual event. Ericsson engineers in Kista, Stockholm hit a download speed of 4.3Gbps, flying faster than Huawei’s October 2019 rate of 3.67Gbps, and coming closer to the theoretical 7.5Gbps download peak of existing 5G chips. It’s important to note that the Ericsson and Huawei numbers aren’t an apples-to-apples comparison. Ericsson achieved its record by aggregating eight millimeter wave frequency bands — also known as 8CC or eight component carriers — with a total of 800Mhz of spectrum. The test used a smartphone based on Qualcomm’s Snapdragon X55 5G modem and RF system , commercially available components connecting to the Ericsson Radio System Street Macro 6701. By contrast, Huawei’s record was achieved using 100MHz of C-band spectrum on a live 5G network in Zürich, Switzerland. While only a handful of carriers will have both access to 800MHz of high band millimeter wave spectrum and the necessary small cells to deploy it, the smaller quantity of mid band spectrum selected by Huawei is within the reach of many carriers across the world — and doesn’t require a short-distance connection, either. The differences don’t diminish Ericsson’s record, but suggest that its number will be more challenging to achieve in the real world. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Ericsson views the 4.3Gbps speed as further proof of 5G’s ability to replace fiber, as millimeter wave has advanced from 1Gbps to 2Gbps to 4Gbps peaks in fairly short order, quadrupling the top broadband speeds offered by most cable providers. Beyond video streaming applications, the company also expects mixed reality and multi-player online gaming to benefit from the speed advances. 
"
16,038
2,020
"Nokia 5G software can upgrade 5 million 4G tower radios without climbs | VentureBeat"
"https://venturebeat.com/2020/07/14/nokia-5g-software-can-upgrade-5-million-4g-tower-radios-without-climbs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nokia 5G software can upgrade 5 million 4G tower radios without climbs Share on Facebook Share on X Share on LinkedIn Rajeev Suri, Nokia's president and CEO, speaks during MWC in Barcelona, February 25, 2018. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As 5G networks continue to spread across the world — faster than some predicted, but slower than others might prefer — physically upgrading existing 4G towers has proved to be a significant bottleneck, leading to massive expenses , permitting controversies , and shortages of cell tower climbers. Today, Nokia released an alternative that will help carriers rapidly convert 4G infrastructure to 5G: a software update that can convert 5 million existing 4G tower radios to 5G without the need for tower climbs or site revisits. According to the company, the software update is available now for approximately 1 million 4G radios, and will expand to 3.1 million by the end of 2020, then over 5 million in 2021 — big numbers that should enable country-scale 5G deployments. To put those numbers in perspective, China expects that 600,000 5G base stations will cover most of the major cities in its huge landmass by year’s end, so physically smaller countries could offer edge-to-edge 5G with far fewer radios. Nokia estimates carriers’ savings from the software update to be tens of billions of euros, coupled with reduced time for “immediate” and “seamless” 5G deployments that will help businesses and consumers start using 5G right away. If there’s any issue with the software upgrade, it’s a fairly obvious one: The upgraded Nokia radios aren’t magically gaining new frequency support for the fastest flavor of 5G, millimeter wave transmissions in the 24-39GHz range. Instead, their existing 4G radio frequencies — think 2.5GHz and below — are going to be re-farmed for 5G, either entirely or in a dynamic spectrum sharing (DSS) arrangement between 4G and 5G. Notably, AT&T was among the earliest carriers to suggest that it would turn on 5G via tower software updates “when it’s ready for prime time.” The re-farming process will enable both 5G where 4G already exists, Nokia says, and enhanced 5G performance in areas with early dedicated 5G infrastructure. “Most” of the 4G frequency division duplex (FDD) radio units previously purchased by Nokia’s 359 existing 4G customers can be upgraded to 5G, which will spur rapid development of lower band 5G infrastructure. 
Moreover, the new software will support carrier aggregation, enabling 5G devices to use two radio frequencies at once for higher speeds — one frequency might be purely low band 5G, while the other could be split between 4G and 5G with DSS, but together they’ll outperform 4G by a non-trivial margin. Nokia isn’t disclosing the cost of the software upgrade, but says it’s “cost-effective” and offers “high value” to existing customers, cutting the need for expensive site engineering and revisits. Assuming the updating process is as straightforward as claimed, low band 5G could begin rolling out for some carriers in the very near future and expand with others — such as T-Mobile — that have already started offering low-frequency 5G service across multiple cities. "
16,039
2,017
"Google expands Project Sunroof 'solar power potential' program beyond the U.S. and into Germany | VentureBeat"
"https://venturebeat.com/2017/05/03/google-expands-project-sunroof-solar-power-potential-program-beyond-the-u-s-and-into-germany"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google expands Project Sunroof ‘solar power potential’ program beyond the U.S. and into Germany Share on Facebook Share on X Share on LinkedIn Solar Panels Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Google is expanding its Project Sunroof program beyond the U.S. for the first time and will now assess the potential for solar power energy in homes across Germany, too. First introduced back in 2015 , Project Sunroof is effectively a search engine that lets anyone look up a specific address to discover the potential for solar energy collection in any home. After a gradual rollout across the U.S., the web service finally landed in all 50 U.S. states back in March. By meshing together data from Google Maps and Google Earth, Google uses 3D models and machine learning to estimate whether the position and location of a house has the potential to collect solar energy, and thus whether solar is a worthy investment for a homeowner. The tool looks at how much sun hits a roof and the position of the roof, while also factoring in historical weather data, shading from nearby objects, and the position of the sun at different times of the year. Above: Project Sunroof For the German launch of Project Sunroof, Google has partnered with E.ON and a software company called Tetraeder, and Google said that roughly seven million German homes are currently covered by the program. Moving forward, Project Sunroof data will be integrated with E.ON’s website from today, allowing consumers to find out whether their home has solar potential and to purchase the necessary equipment to make it happen. Google has long touted its green credentials, and the company recently revealed that it expected to reach 100 percent renewable energy for its global operations in 2017. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
16,040
2,019
"Aurora Solar raises $20 million to automate solar panel installation | VentureBeat"
"https://venturebeat.com/2019/02/04/aurora-solar-raises-20-million-to-automate-solar-panel-installation-planning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aurora Solar raises $20 million to automate solar panel installation Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Despite recent setbacks , solar remains a bright spot in the still emerging renewable energy sector. In the U.S., the solar market is projected to top $22.90 billion by 2025, driven by falling materials costs and growing interest in offsite and rooftop installations. Moreover, in China — the world’s leading installer of solar panels and the largest producer of photovoltaic power — 1.84 percent of the total electricity generated in the country two years ago came from solar. There’s clearly growth — which San Francisco startup Aurora Solar seeks to capitalize on with a novel solution combining lidar data, computer-assisted design, and computer vision. The company, which develops a suite of software that streamlines the solar panel installation process, today announced it has secured $20 million in a Series A round of financing from Energize Ventures, with contributions from S28 Capital and existing investor Pear. COO Samuel Adeyemo, formerly vice president of JPMorgan Chase’s chief investment office, said the capital will be used to expand Aurora’s engineering, customer service, and business teams and to “help accelerate” expansion. He also said that Amy Francetic, Energize Ventures’ managing director, will join the board as part of the round. “This funding will enable us to continue to attract the most talented engineers, marketers, customer success, and salespeople to service the fastest-growing occupation in the U.S. — the solar professional,” Adeyemo said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Modeling solar panel installations with Aurora Solar’s software. He and Christopher Hopper — both Stanford graduates — teamed up to cofound Aurora in 2013 after a frustrating commissioning experience with a solar system project for a school in East Africa. While the panels themselves only took weeks to install, planning — conducting research, calculating financials, and designing the system — dragged on for six months. So they devised a better way, which they call SmartRoof — tech that allows solar installers to create 3D CAD models of construction sites and forecast not only how many panels will fit on the properties, but the amount of power they’ll produce and the potential energy costs they’ll save. 
(In some respects, it’s a bit like Google’s Project Sunroof — a geographic search engine anyone can use to discover the potential for solar energy collection in any home — albeit plenty more sophisticated.) It launched in 2015 after garnering an award from Stanford’s TomKat Center for Sustainable Energy, and following two years of development and validation tests with the National Renewable Energy Laboratory and the U.S. Department of Energy. “Transitioning to a world powered by sustainable energy is one the biggest challenges facing our generation,” Hopper said. “[We’re] … bring[ing] the best-in-class sales and design platform to solar professionals in the U.S. and all over the world.” It begins with modeling. Within Aurora’s CAD software, designers trace roof outlines over satellite images augmented with lidar data and use built-in edge detection tools to ensure they’re up to spec. From there, designers are able to simulate obstructions (like trees) on the panels’ sunlight exposure or sun paths and then use that data to extrapolate power consumption at various times of the year. Aurora’s suite also allows planners to plot out sites manually, using drag-and-drop components for modules, wiring, connections, combiner boxes, and ground mounts or to generate site designs on the fly algorithmically. No matter which method they chose, Aurora converts the resulting 3D measurements into 2D single line and layout diagrams while performing hundreds of checks for National Electric Code (NEC) compliance. Above: Generating an irradiance report with Aurora Solar’s tools. There’s a sales piece, too. Aurora’s solution uses system performance to model loans, leases, and cash payments and boasts proposal-building tools that let companies import calculations and other information from the project into polished, presentation-ready decks. Aurora claims the irradiance reports it produces are statistically equivalent to onsite measurements for any location in the world, and are certified compliant with the National Electric Code (NEC). Moreover, it says they’re accepted by rebate authorities, including New York State Energy Research and Development Authority (NYSERDA), Massachusetts Clean Energy Center (MassCEC), Energy Trust of Oregon, New Jersey Clean Energy Fund, Oncor, and Connecticut Green Bank. All of that has instilled confidence in Shvet Jain, managing partner at S28 Capital, who says that over a million commercial and residential solar installations have been designed with Aurora’s technology — a number that’s growing at a rate of 60,000 projects every month. “With [billions] spent annually on solar in the U.S. and $200 billion globally, we wanted to invest in the company that is building the operating system of the solar industry,” Hershenson said. “Solar is going to have a very meaningful impact on the energy infrastructure in the world in the near future. We had the privilege of being involved with Aurora Solar from day one and see it as the leading company in this space. Doubling down has been a no-brainer.” Aurora operates on a subscription model and offers several options: a $135 per user per month basic tier; a $220 premium tier that adds things like lidar-assisted modeling, NEC validation, and single line diagrams; and a variably priced enterprise tier for “organizations that quote thousands of systems per month.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
16,041
2,020
"Waymo raises $2.25 billion to scale up autonomous vehicles operations | VentureBeat"
"https://venturebeat.com/2020/03/02/waymo-raises-2-25-billion-to-scale-up-autonomous-vehicles-operations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Waymo raises $2.25 billion to scale up autonomous vehicles operations Share on Facebook Share on X Share on LinkedIn Waymo's fully self-driving Jaguar I-PACE electric SUV Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Waymo, Google parent company Alphabet’s autonomous vehicles division, just got a massive capital infusion from a coterie of investors. It today announced that it secured $2.25 billion in financing from Silver Lake, Canada Pension Plan Investment Board, and Mubadala Investment Company (the sovereign wealth fund of Abu Dhabi), as well as auto parts supplier Magna International, Andreessen Horowitz, auto retail giant AutoNation, and Alphabet itself. It’s an initial close on the company’s first round of funding. Representatives of Silver Lake and the Canadian Pension Fund will join Waymo’s board as part of the investment, said Waymo CEO John Krafcik. “We’ve always approached our mission as a team sport, collaborating with our OEM and supplier partners, our operations partners, and the communities we serve to build and deploy the world’s most experienced driver,” added Krafcik. “Today, we’re expanding that team, adding financial investors and important strategic partners who bring decades of experience investing in and supporting successful technology companies building transformative products. With this injection of capital and business acumen, alongside Alphabet, we’ll deepen our investment in our people, our technology, and our operations, all in support of the deployment of the Waymo Driver around the world.” The news comes after a report by The Information revealed that Waymo nearly doubled its headcount to 1,500 employees, known as “Waymonauts,” from 800 about a year ago. The company’s annual cost is estimated at around $1 billion, while its robo-taxi business — Waymo One — reportedly yields just hundreds of thousand dollars a year in revenue. Waymo hasn’t shared the number of customers who have ridden in its fleet of over 600 vehicles to date, but it said last December that over 1,500 people are using its ride-hailing service monthly and that it has served over 100,000 total rides since launching its rider programs in 2017. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
“Waymo is the proven leader in self-driving technology, is the only autonomous vehicle company with a public ride-hailing service, and is successfully scaling its fully driverless experience,” said Silver Lake co-CEO Egon Durban in a statement. “We’re deeply aligned with Waymo’s commitment to making our roads safer, and look forward to working together to help advance and scale the Waymo Driver in the U.S. and beyond.” Robo-taxi growth Google first started testing autonomous cars equipped with lidar sensors, radar, cameras, and powerful onboard computers on San Francisco roads as part of a stealth project in 2009. In 2016, the unit was rebranded to Waymo and spun out as an Alphabet subsidiary led by Krafcik, former president and CEO of Hyundai North America. Waymo One hasn’t grown far beyond Phoenix geographically, but it’s moving incrementally toward a broader launch in the continental U.S. To this end, a Waymo One app for iOS launched late last year in the App Store. Like the app for Android, which launched publicly in April, Waymo One for iOS hails rides from almost any Phoenix-area location 24 hours a day, seven days a week. It prompts customers to specify both pickup and drop-off points before estimating the time to arrival and the cost of the ride. As in a typical ride-hailing app, users can also enter payment information and rate the quality of rides using a five-star scale. Shortly after announcing a partnership with Lyft to deploy 10 cars on the ride-hailing platform in Phoenix, Waymo revealed that a portion of its self-driving taxis no longer have a safety driver behind the wheel. Completely driverless rides remain available to only a “few hundred” riders in Waymo’s Early Rider program, the company says. Waymo is poised to expand the size of its operations drastically in the coming months. It’s on track to add up to 62,000 Chrysler Pacifica minivans to its fleet and has signed a deal with Jaguar Land Rover to equip 20,000 of the automaker’s Jaguar I-Pace electric SUVs with its system by 2020. (A few of the I-Paces are currently undergoing testing on public roads in San Francisco.) Later this year, Waymo also plans to release its latest autonomous driving system — the fifth-generation Waymo Driver — featuring a new lidar sensor design that’s “breakthrough” in terms of cost-efficiency. The fifth-generation Driver will also boast revamped radars and vision systems, as well as “all-weather” capabilities including defrost and wiper elements and a “significant upgrade” in onboard compute power. According to marketing firm ABI, as many as 8 million driverless cars will be added to the road in 2025, and Research and Markets anticipates that there will be some 20 million autonomous cars in operation in the U.S. by 2030. Assuming Waymo maintains its current trajectory, investment bank UBS believes the company will dominate the driverless car market in the next decade, with over 60% market share. That might be optimistic — Yandex, Tesla, Zoox, Aptiv, May Mobility, Pronto.ai, Aurora, Nuro, and GM’s Cruise Automation are among Waymo’s self-driving car competitors, to name just a few. Daimler last summer obtained a permit from the Chinese government that allows it to test autonomous cars powered by Baidu’s Apollo platform on public roads in China. Beijing-based Pony.ai , which has raised hundreds of millions in venture capital, launched a driverless taxi pilot in Irvine in October. 
And startup Optimus Ride built out a small autonomous shuttle fleet in New York City, becoming the first to do so. Self-driving trucks In what might be perceived as a bid to boost its bottom line, Waymo recently announced it would begin testing self-driving trucks on “promising” commercial routes in two U.S. states. Chrysler Pacifica vans retrofitted with Waymo’s technology stack will map roads ahead of driverless Peterbilt trucks as part of a project known as Waymo Via. Waymo describes Waymo Via — which was formally announced today — as focused on “all forms of goods delivery.” It encompasses both short- and long-haul delivery, from freight transported across interstates down to local delivery. Separately, Waymo is actively mapping Los Angeles to study congestion and expanding testing to highways in Florida between Orlando, Tampa, Fort Myers, and Miami as it conducts self-driving truck pilots in the San Francisco Bay Area, Michigan, Arizona, Georgia, and on Metro Phoenix freeways (as well as on the I-10 between Phoenix and Tucson). And in the Metro Phoenix area, the company is piloting autonomous vehicle package transportation between UPS Store locations and a local UPS sorting facility. Demand for driverless trucks is strong. They are predicted to reach 6,700 units globally, totaling $54.23 billion this year, and they stand to save the logistics and shipping industry $70 billion annually while boosting productivity by 30%. Besides cost savings, the growth is driven partly by a shortage of human drivers. In 2018, the American Trucking Associations estimated that 50,000 more truckers were needed to meet demand, even after proposed U.S. Transportation Department screenings for sleep apnea were sidelined. But Waymo has formidable rivals in TuSimple, Thor Trucks, Pronto.ai, and Aurora, the last of which attracted a $530 million investment at a valuation over $2 billion in February. There’s also Ike, a self-driving truck startup founded by former Apple, Google, and Uber Advanced Technologies Group engineers that has raised $52 million, and venture-backed Swedish driverless car company Einride. Meanwhile, former Battery Ventures VP Paz Eshel and former Uber and Otto engineer Don Burnette recently secured $40 million for their startup, Kodiak Robotics. That’s not to mention Embark — which integrates its driverless systems into semis and launched a pilot with Amazon to haul cargo — or autonomous truck solutions from incumbents like Daimler and Volvo. "
16,042
2,020
"Antitrust experts weigh in on breaking up Amazon, Apple, Facebook, and Google | VentureBeat"
"https://venturebeat.com/2020/08/01/antitrust-experts-weigh-in-on-breaking-up-amazon-apple-facebook-and-google"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Antitrust experts weigh in on breaking up Amazon, Apple, Facebook, and Google Share on Facebook Share on X Share on LinkedIn Google CEO Sundar Pichai, Apple CEO Tim Cook, Facebook CEO Mark Zuckerberg, and Amazon CEO Jeff Bezos (clockwise from top left) speak before a July 29, 2020 House Judiciary committee meeting. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Gary Reback is perhaps best known as the lawyer who helped convince the U.S. Department of Justice to bring an antitrust lawsuit against Microsoft in the 1990s. He watched the majority of the House Judiciary Committee hearing earlier this week with Amazon CEO Jeff Bezos, Apple CEO Tim Cook, Google CEO Sundar Pichai, and Facebook CEO Mark Zuckerberg. Reback still remembers a Senate hearing that put pressure on the government and led to the filing of United States v. Microsoft Corporation in 1998. That’s why he focused more on lines of questioning this week than on answers from CEOs of companies that have amassed unprecedented wealth and power. “To me, the core turning point was right at the beginning, basically,” he said, pointing to sharp and detailed questioning in reference to a range of documents obtained in the course of the committee’s assessment. The question on Wednesday was whether tech giants have grown too powerful, and if you watched the hearings, many members of Congress leave no doubt that they believe the answer is yes. In his opening remarks, for example, U.S. House Antitrust subcommittee chair David Cicilline (D-RI) said tech giants enjoy the power to pick winners and losers in the private market and that consumers have “no escape from surveillance” because there are no alternatives. He added “What’s at stake is whether we let ourselves be governed by private monopolies.” VentureBeat spoke with Reback and other antitrust experts about what stood out during the hearing and what should happen next. Each favors some form of action or legislation to address the litany of anticompetitive practice allegations that emerged over nearly six hours of testimony. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ‘We may have run out of options’ During the hearing, a number of Democratic lawmakers argued that Washington needs to take steps to ensure a fair market. Rep. Jim Sensenbrenner (R-WI) insisted that existing antitrust law is fine but needs enforcement. Reback pointed out that convoluted antitrust law makes it difficult to bring cases to court. 
He said Sensenbrenner was correct in the sense that new legislation might not have been necessary if action had been taken earlier, but he stressed that there isn’t much antitrust enforcement happening today. Reback blamed the outsized growth of tech giants in part on the Obama administration’s failure to take action. Nobody who understands antitrust law would suggest implementing restrictions cavalierly, Reback said, but regulators have already waited so long that it’s tough to see any viable alternative to reform. “By not doing anything for so long, we may have run out of options,” Reback said. “The way anticompetitive practices work, if you deal with them quickly, if you deal with them in a rifle shot, you fix it and the companies go on competing and everything’s fine. But if you don’t fix it, then market power builds up and builds up and, in the case we have here, there are no competitors basically for at least three of these companies. They bought their competitors, put them out of business, whatever. And so that then is a problem in terms of how you expect a free market to fix this.” For context: Apple reported nearly $60 billion in profits and became the world’s most valuable company on Friday. On Thursday, Amazon reported revenue up 40% to $88.9 billion, while Google’s parent company Alphabet and Facebook reported revenue above analysts’ estimates. Each of these companies has been accused of anticompetitive practices that harm democracy, consumers, and small businesses. And each enjoys majority control in a number of industry verticals in the U.S. (and much of the world). Areas of dominance include: Facebook’s control of social media platforms Facebook and Instagram Apple’s and Google’s control of mobile app markets Google’s control in search Facebook’s and Google’s control of online advertising Amazon’s online shopping platform, which records nearly 75% of online sales Facebook’s and Google’s control of digital advertising, which has hurt journalism industry revenue Stacy Mitchell is codirector of the Institute for Local Self-Reliance (ILSR), a nonprofit organization that champions distributed, local control over corporate power. She found the committee to be very knowledgeable about how the four companies conduct business and argued that Amazon has a monopolistic hold on small businesses. Last week, Mitchell resigned her position as a fellow at the Yale University Thurmond Arnold Project, a group studying antitrust issues, after finding out that director Fiona Scott Morton is a paid advisor to Amazon and Apple. ILSR also researches Amazon’s anticompetitive practices and advocates breaking the ecommerce giant into separate companies. Mitchell watched the hearing with interest. “I found it thrilling, really,” she said. “Like, ‘Oh, this is what it looks like when a democratic government actually addresses fundamental issues of power and control and … takes an aggressive stance toward CEOs and really in some ways alters the power dynamics.'” She said an exchange between Rep. Mary Gay Scanlon (D-PA) and Jeff Bezos during this week’s antitrust hearing demonstrated the need for this kind of intervention. Under questioning, Bezos confirmed that the Buy Box algorithm favors products shipped with Prime and Amazon fulfillment services. Bezos also said Amazon ties the use of its fulfillment services to winning the Buy Box. Mitchell highlighted the obvious pressure this puts on sellers. 
“This is a pretty serious matter because effectively what Amazon has done is it has compelled sellers to use Amazon’s fulfillment services in order to get the kind of placement on the site that actually results in sales.” Scanlon also cited a report ILSR released last week that found Amazon pockets about 30% of sales from independent sellers, up from 19% five years ago. The fee brought in nearly twice the revenue Amazon Web Services did in 2019, and Mitchell said Amazon uses the money to subsidize other company divisions and dominate new industry verticals. That’s part of why ILSR supports breaking major Amazon divisions into individual companies. “Much of its power, and abuse of its power, comes from it being able to leverage one part of its operation to compel someone to do something in another part of its operation, like we were talking about with the fulfillment. A lot of the abuse lives in the in-between, and I think that’s a pretty powerful case,” Mitchell said. In testimony Wednesday, Cicilline noted that Amazon internally refers to third-party sellers as “internal competitors.” Bezos said Wednesday that he “cannot guarantee” Amazon employees have not used data about independent sellers to create the company’s own products, which would be a violation of company policy. Lawmakers argued this practice undercuts small businesses that have no option but to sell their goods on Amazon’s online marketplace. The Wall Street Journal reported in April that Amazon employees used data about individual sellers to create competing Amazon products. Anticompetitive behavior accusations have also been lobbed at Amazon by startups who participated in its Alexa Fund, including Nucleus, which says Amazon stole its smart display and home intercom concept. AI startup DefinedCrowd and more than two dozen startup founders detailed similar practices last week. Rep. Joe Neguse (D-CO) referred to this practice as an “innovation kill zone” that puts fear into smaller businesses and makes fair competition impossible. During the hearing, Apple, Facebook, and Google were also accused of stealing ideas from other companies. (Each of the tech giants has growing investment fund arms.) “Three of the companies [Amazon, Facebook, and Google] basically have the same set of accusations against them, which is that competitors come on your platform, you take their information, you preference your own results, compete against them, drive them out of business, or buy them cheap,” Reback said. Big tech too big for free markets? Both Mitchell and Reback repeatedly emphasized the contrast between the tone of the hearing Wednesday and congressional hearings held in 2018 following the Cambridge Analytica scandal with Mark Zuckerberg. In that earlier hearing, members of Congress demonstrated an inability to understand technology or how Facebook applications work. What seemed to be missing back then, Mitchell and Reback said — but could be seen in exchanges Wednesday — was the ability to ask follow-up questions. Without a grasp of how a company operates, or its products and services, any line of questioning will yield little beyond the initial question. CEOs can then run out a portion of the five minutes allotted to each elected official by explaining technology or offering platitudes about how much their companies value things like privacy. Mitchell said the recent antitrust hearing had a different tone. 
“This clearly was proceeding as part of an investigation … and sort of had the feel of ‘We’ve gathered all this data or information, and the process is that we’re going to give you a chance to answer for it,'” Mitchell said. Sally Hubbard is director of enforcement strategy at the Open Markets Institute and testified before a House Judiciary subcommittee on antitrust at its first big tech hearing in June 2019. Last summer, Hubbard testified that Facebook and Google use their digital advertising monopoly in multiple ways to disadvantage independent journalism and newspapers that are deemed competitors. Both companies directly compete with the free press for attention while controlling the main traffic highways of search engines, smartphones, and social media. She said the hearing Wednesday in part restored her faith in democracy and institutions and gave her hope that meaningful action is possible. “To me, it was the most impressive hearing that I can remember seeing in my lifetime of the members of Congress being so highly prepared, so in-the-know about complex issues, and willing to take on the most powerful companies in America and the world,” Hubbard said. Break up big tech? Hubbard said she could see the rollback of acquisitions like Google’s AdMob and DoubleClick or Facebook’s Instagram and WhatsApp as part of potential solutions, but she pointed out that such measures aren’t the only option. She wants lawsuits filed when anticompetitive behavior like the kind described in the hearing occurs, ongoing monitoring of exclusionary conduct, and scrutiny of mergers, especially acquisitions of small companies that may be viewed as a competitive threat. She also recommended passing legislation to support Senator Elizabeth Warren’s (D-MA) suggestion that you can own a platform or sell goods on it, but you can’t do both. Reback agrees. “If we can’t get antitrust enforcement on [these companies] … we need legislation like the type Elizabeth Warren is suggesting, where if you own the platform then you can’t own anything on it. That would be a big change, but if there’s no way to police the [current] situation so that competitors get a fair shot and you don’t run everybody else out of business by using their data against them, if there’s no way to police that, then you don’t have any alternative but new legislation,” he said. On the topic of the digital ad market’s negative impact on journalism in the United States, Rep. Pramila Jayapal (D-WA) talked Wednesday with Google and Alphabet CEO Sundar Pichai about Google’s AdExchange operating in a way that resembles insider trading, adding in the context of independent news outlets. There’s nothing revolutionary about stopping a single company from controlling both sides of the market to prevent adverse outcomes that resemble insider trading, and it’s past time to bring the same concepts to the digital economy, Hubbard said. “The point that I like to make a lot is that the online world somehow has avoided regulation, but it’s not the cute little internet of the ’90s anymore. The online world has basically eaten the offline world, so if we don’t have any rules or regulations governing the online world or fair competition, then you don’t have them in the offline world either,” Hubbard said, adding that the hearing will add momentum to other antimonopoly efforts and will help educate the public. Mitchell concluded that the hearing itself was just one step in the process and the committee’s final report will convey the actual substance of the debate. 
"
16,043
2,020
"Here’s how the Big Tech breakup should go down | VentureBeat"
"https://venturebeat.com/2020/10/17/heres-how-the-big-tech-breakup-should-go-down"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Here’s how the Big Tech breakup should go down Share on Facebook Share on X Share on LinkedIn The US House of Representatives antitrust subcommittee released its findings last week after a year-and-a-half-long investigation of Big Tech companies Google, Apple, Facebook, and Amazon. Right at the beginning of the 400+ page report, the committee didn’t mince words about its findings: “To put it simply, companies that were once scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons.” Those of us in Silicon Valley who have worked up close with these firms were not surprised to find not only that these companies in particular had become de facto monopolies, but that they were using their monopoly powers to discourage competition and violate antitrust laws. In fact, I wrote just last month about how Apple has been abusing its monopolistic power in the App Store for many years. Apple’s multiple roles as the provider of the operating system, curator, and gatekeeper of the only allowed app store on the billions of devices it has sold, not to mention creator of its own applications, is an excellent example of how today’s “digital monopolies” are both similar to and different from the industrial monopolies of a century ago. Starting in the late nineteenth century, industrialists like John D Rockefeller, Andrew Carnegie, JP Morgan, Cornelius Vanderbilt, and others built companies that were innovative in the beginning, helping America in its rise to become the dominant economic superpower in the world. These companies became incredibly profitable precisely because they were able to corner their markets and crush competition through a combination of bullying and buying up competitors. Theodore Roosevelt broke up these monopolies in the early 20th century using the Sherman Antitrust Act of 1890. Since then, we’ve seen antitrust laws dusted off to be used in one-off lawsuits (like Ma Bell and Microsoft), but there hasn’t been a comparable trust-busting effort for over 100 years. The robber barons of the 1900s weren’t born of any one company but of a series of practices that made the founders of these companies the wealthiest men in the word. Those companies started out by innovating and providing a benefit to society, but their power and profits grew to where they were deemed a threat to both democracy and our free enterprise system. 
Today’s environment in Silicon Valley is much like that earlier time, with venture capitalists and investors bent on building the next monopoly company that can dominate a new emerging market. Peter Thiel, known for his investment in Facebook and other companies, emphasizes this point in his bestseller, Zero to One, which has become an unofficial monopolist playbook. Each of today’s “digital monopolies” operates in a slightly different market. Amazon is dominant in e-commerce, Google in search and advertising, Facebook in social networking, and Apple in both mobile content and apps. Nevertheless, the committee found that they all had engaged in very similar anti-competitive practices, which included buying up potential competitors (Facebook’s acquisitions of WhatsApp and Instagram, and Google’s acquisition of Android), or using their platform to limit competition, control access, and favor their own products (Apple’s control of the App Store, for example, or Amazon’s ability to undercut third-party retailers using its platform). Last week’s subcommittee report has made a number of recommendations, including a) strengthening antitrust laws, which were last updated in the 1970s and do not reflect the current reality of digital monopolies, b) additional oversight from the FTC over mergers and acquisitions by the big tech companies, and c) breaking up some of the big tech companies into parts to encourage competition. The last recommendation is the most controversial. I would argue it is also the most important. The report didn’t get into specifics of how to break up the big tech companies, probably because this is also the hardest to implement and get agreement on (the minority members of the committee, Republicans, disagreed on this one recommendation). This doesn’t mean that every big tech company has to be broken up – there are reasons why the government allows monopolies in certain areas – such as utilities, for example. And even during the robber baron era, while Rockefeller’s Standard Oil was broken up, U.S. Steel (formed by JP Morgan by buying out Carnegie’s near monopoly on steel) managed to avoid breakup by arguing its case to the US Supreme Court. Still, where there is a clear and present danger to competition and consumers from the Big Tech, the issues are more complex today than they were in the early twentieth century, because the definition of a monopolist has to do with more than just raw market share. To achieve the desired result without causing irreparable harm, we have to look at how these companies are organized, how the technology works, and what can be easily separated out. Here’s how a best case Big Tech breakup would look: Apple should be broken up, so its hardware and OS division is separate from its app store. This means that other app stores would be able to compete with Apple on Apple devices, and competition would be restored to the mobile app marketplace, letting game developers like Fortnite, for example, decide which app stores they want to use to reach consumers. This isn’t as crazy as it sounds – for example, you can set a default browser, so you should be able to set a default app store just as easily. Google’s Android OS and Search should be split up so that Google can’t use its mobile OS dominance and ownership to dominate search. 
Much has been written about how Google’s algorithms can be used to influence consumers and to make or break winners in almost any industry, so beyond the breakup there should be additional work done on making sure the search algorithms don’t favor any particular players but create an even playing field. To some extent, when Google renamed its parent company to Alphabet, it acknowledged that it was no longer simply a search engine company but a conglomerate that competed in many different industries – spinning some of these off would be a good way forward. Facebook uses its trove of user data and dominance of social networking and messaging to not only dictate advertising terms but also direct users’ attention to its other services (eg, WhatsApp and Instagram). There are some natural fault lines to work with here: two of its mega-acquisitions, WhatsApp and Instagram, remain separate apps and would be easy to spin out as separate companies that are allowed to compete with the mothership. Amazon may be trickier to break up along natural fault lines. Other than AWS (its cloud-based infrastructure division, which powers many other companies on the Internet such as Netflix) much of its business is harder to separate out. It would be difficult to separate Amazon’s first party sales from third party sellers (since they are both on the same site), but more work could be done to ensure fairness and transparency between third party sellers and how Amazon uses the massive amount of data it has, so Congress and the Justice Department might have to rely on other policies and new laws around treating third party users of a platform fairly. Breaking up these companies wouldn’t just lead to a more democratic playing field for smaller competitors. It may have another benefit: the slowing down of what Harvard’s Shoshana Zuboff has dubbed “surveillance capitalism,” a process of making money by exploiting data from user behavior. Just as 20th century industrialists built monopolies by acquiring more physical assets, today’s robber barons are building monopolies based on information, the large amount of data they have already accumulated from users. They feed this data into their algorithms, which in turn leads to more behavioral data. There will undoubtedly be significant resistance from the companies themselves, who have fought hard to secure their monopolistic positions. Since the report came out, each of them has responded with care, preferring corporate statements emailed to reporters or short blog entries rather than statements by the CEOs. Predictably, these responses are variations of the arguments used by the robber barons of 100 years ago, but with a twist or two: We are not a monopoly ( Google , in a public blog post), we protect third-party retailers ( Amazon , in a public blog post), we deliver innovation to consumers and protect them ( Apple , in a statement), and the classic – “Facebook is an American success story” ( Facebook , also in a statement). Since several of these companies offer free products to consumers, namely Facebook and Google, making their money from advertising, and Amazon is able to keep prices low through its dominance, each company claims that breaking it up would actually hurt consumers. Breaking up Big Tech, however, doesn’t mean there will automatically be a wider distribution of wealth. 
Rockefeller, who was already among the richest men in the world, for example, became even wealthier with his partial ownership of companies like Exxon and Mobil, which were created when Standard Oil was broken up. The ability for new competitors to come in with new innovations and succeed is the lifeblood of America’s capitalist system. Without competition, today’s dominant companies will remain dominant, technological versions of historical aristocracies, using their vast stores of money, data, and influence (not to mention anticompetitive behaviors) to choke off and acquire any future innovations, which is a bad thing for consumers. To paraphrase former Senator Al Franken from 2017, antitrust investigations aren’t just about protecting competitors from each other; in the end, they’re about protecting the public. Rizwan Virk is a venture capitalist, founder of Play Labs @ MIT and the author of Startup Myths & Models: What You Won’t Learn In Business School and The Simulation Hypothesis. He was co-creator of Tap Fish, one of the first successful games on the Apple App Store. Follow him via his website at www.zenentrepreneur.com or on Twitter @rizstanford. "
16,044
2,019
"The challenges and advantages of virtual teams: Where do you stand? | VentureBeat"
"https://venturebeat.com/2019/09/11/the-challenges-and-advantages-of-virtual-teams-where-do-you-stand"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The challenges and advantages of virtual teams: Where do you stand? Share on Facebook Share on X Share on LinkedIn Presented by Malone University Between 2012 to 2016, the number of employees working remotely rose from 39% to 43%, according to research from Gallup , and employees working remotely spent more time doing so. Simply put, employees are going home, but they’re still working. In fact, they may be working a lot more and more productively. In a world filled with collaboration tools, communication devices, connectivity apps, and handheld digital computers, workplace experts are debating the nature of and need for traditional physical offices. More and more teams now include contractors, freelancers, and other remote workers , making online collaboration critical. But in the rush to explore the future of work, could we be losing something vital that only an in-person team can provide? The fact remains, virtual work teams offer a number of advantages and potential challenges. To generate success, employers need to feel comfortable and competent with online collaborative tools, understand the workplace psychology of space and distance, and know how to nurture specific team dynamics. Virtual teams in the workplace The advent of advanced workplace technology in the 1950s, including optic fiber and computer modems, allowed teams of people separated by time zone, geography, culture, or language to work together toward a common goal. Personal computers in the 1960s, cell phones in the 1970s, voicemail in the 1980s, and the internet in the 1990s each advanced the ability of disparate people to work together as a team. By the late 1990s, according to Management Study Guide , major technology firms such as Sun Microsystems were experimenting with virtual teams. Along about 2007, virtual work had crept outside of the Silicon Valley laboratory and across industry lines. Today, remote teams, virtual workplaces, and hybrid work environments have become de rigueur in business settings. Alec Newcomb , founder and CEO of ScaledOn.Com, began his virtual team in 2013 since he found it hard to source technical talent in Vermont. “There were very few of us committed to remote work,” Newcomb said, “and the collective wisdom was that we were on a fool’s errand that would surely end in disaster.” For Newcomb, the experiment proved successful. He attributed that success to being clear about his company’s approach, hiring people who want to work remotely, and investing in the systems for success. 
Virtual teams vs traditional teams When virtual teams first originated, the idea was usually called “work from home.” In this model, salespeople or office administrators took one or two days each week to work out of a home office. Some companies feared that productivity would suffer and assumed that employees who were not under the watchful eye of a supervisor would fail to deliver. This fear proved unfounded, however. Virtual teams can often outperform traditional ones. “An extensive study of 80 software development teams with programmers from the United States, South America, Europe, and Asia proved that virtual teams can lead to increased efficiency and better business results,” Harvard Business Review reported, “but only if they are managed to maximize the potential benefits while minimizing the disadvantages.” Advantages of virtual teams Lowers Costs: Virtual teams slash overhead costs. Companies save by purchasing or renting smaller office space; reducing the costs to heat, cool, light, and secure a property; minimizing insurance costs for the site; and reducing expenditures for food, snacks, and office parties. Virtual collaboration tools have democratized entrepreneurship, allowing nearly anyone with a good idea to build a disparate team for far less money than in the old days. Increases Employee Happiness: Most employees love working from home. The flexible schedule, additional time with family, ability to work while traveling, and the chance to take care of errands or housework during breaks make off-site employment an attractive carrot when recruiting talent. According to Zenefits, a human resources company, about 73% of employees with flexible work arrangements somewhat or strongly agree that these arrangements increase their satisfaction at work. Improves Productivity and Efficiency: Economics professor Nicholas Bloom conducted a study for CTrip, China’s largest travel agency, to determine what productivity boosts working from home might engender. Bloom found that work-from-home employees worked a full shift longer each week than their in-office counterparts. That’s six days of work for five days of pay. Purposeful Meetings: Conventional wisdom dictates that meetings are most companies’ biggest productivity waster. When people are expected to show up in a physical office every day, it’s easy to call or attend meetings that have little purpose. Virtual meetings, however, require purpose and planning. If managers are going to the extra trouble to pull your team together, they’re more likely to do it for a valid reason. Challenges of virtual teams Lack of Communication: Unless a company provides the necessary tools and training, team members will not be able to talk with one another. Work will either get duplicated or left undone. Regular meetings and collaborative project management tools can solve this challenge, though. Lack of Social Interaction: Part of building a workplace culture comes down to having fun together. Workers on a virtual team can’t poke their heads into each other’s offices for a few minutes of banter. Emailing a coworker, hosting a team meeting, or setting up a Slack channel for fun and conversation can help people get to know one another in a digital workspace. Insufficient Tools: If corporate leadership does not provide an online team with all the collaborative tools it needs and train the members how to use those tools, the experiment will fail. The good news is there are several affordable or free tools available. 
Tools for online collaboration The benefits of virtual teams are only as strong as the technology that enables them to work together. While the specific software and platforms change over time, companies can expect to invest in technology for design, communication, documentation, time tracking, file sharing, and project management. Some of the best and least expensive collaborative technologies in use today include both communication tools and project management tools. Communication tools keep employees talking to one another while project management tools allow each team member to have a bird’s-eye view of the whole project. Communication Tools Slack: Slack primarily serves as a communication tool. Team members can use it to chat, ask questions, or get help. Skype: Skype provides face-to-face and voice communication. It’s simple, easy to install, and familiar to most office workers. GotoMeeting: GotoMeeting is just one of many virtual meeting platforms that enable teams to huddle up without using more burdensome communication technology that requires an IT staffer to set up. Project Management Tools Asana: A tool to help organize and track work, Asana was one of the first web-based project management options. Users can create a project, assign it attach documents to it, specify deadlines, and even communicate in the software. Trello: Trello uses a series of moveable notecards to show each member’s tasks. The cards include a deadline with helpful pings to let employees know when a task is about to expire. Monday.com is a popular tool that helps workers stay on top of their assignments, communicate with their supervisors and colleagues, and see what’s coming up ahead. You don’t have to invest in the latest communication technology to enjoy the advantages of virtual teams. When used regularly and purposefully, basic communication tools such as conferencing calling and email can keep a geographically disparate group feeling like a team and working collaboratively. After all, the best communication technology is the one your team will use well and often. In the workplace of the future, managers will recognize the advantages of virtual teams and employees will need to get comfortable moving seamlessly between in-person and online work. Emerging professionals need technological know-how along with the skills to work through the challenges of virtual teams. To learn more on this topic and other emerging trends in business, consider earning your bachelor’s degree online with Malone University. Online Bachelor of Arts in Organizational Management : Choose to concentrate in one of three areas: marketing management, project management, or environmental management. Online Bachelor of Arts in Business Administration : Designed to give you a foundation in a wide breadth of subject areas, from marketing to management to strategy. Already have a bachelor’s degree? Expand your skillset with a master’s degree. Online Master’s in Organizational Leadership : Learn the skills needed to change the dynamics of your workplace. Online MBA : Combines intellectual knowledge, practical skills, and Christian values. Advance your career in a flexible online format designed with the working student in mind. At Malone, you’ll learn from qualified instructors who possess real-world experience in their fields. Our program features a low student-to-faculty ratio, and a warm, welcoming community that fosters personal and professional growth. 
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
16,045
2,019
"Amazon commits $700 million to 'upskill' a third of its U.S. workforce by 2025 | VentureBeat"
"https://venturebeat.com/2019/07/11/amazon-commits-700-million-to-upskill-a-third-of-its-u-s-workforce-by-2025"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon commits $700 million to ‘upskill’ a third of its U.S. workforce by 2025 Share on Facebook Share on X Share on LinkedIn An Amazon Prime van during a press conference. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon said it’s investing $700 million over a six-year period to “upskill” 100,000 employees, or about a third of its U.S. workforce. The initiative is part of what the company is calling its “ Upskilling 2025 ” pledge and will entail a suite of new programs designed to help its current workers transition into more — or different — technical roles. This will include employees from its fulfillment centers, transportation network, retail stores, and corporate hubs. There are six programs in total, including the Machine Learning University, which will target existing technically minded workers with training for machine learning roles; the Amazon Technical Academy, which will train people to move into software engineering roles; and Associate2Tech, which is aimed at guiding fulfillment center workers into “technical roles,” regardless of their previous experience. “Through our continued investment in local communities in more than 40 states across the country, we have created tens of thousands of jobs in the U.S. in the past year alone,” said Beth Galetti, senior VP for HR at Amazon. “For us, creating these opportunities is just the beginning. While many of our employees want to build their careers here, for others it might be a stepping stone to different aspirations. We think it’s important to invest in our employees and to help them gain new skills and create more professional options for themselves.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Criticism This latest announcement comes after U.S. politicians have leveled a wave of criticism at Amazon over its tax payments and employee pay rates. Unsurprisingly, Amazon has come out guns a-blazin’ with very public rebuttals. At the same time, Amazon warehouse workers are currently planning a Prime Day protest over job security and unsafe working conditions. However, as one the largest employers in the U.S., Amazon has been pushing to improve its public perception in recent times, having raised its minimum wage to $15 per hour. It also recently announced a new program whereby it will pay employees to quit their jobs and start their own Amazon package delivery business. 
Other new training programs will include Amazon Career Choice, which aims to train fulfillment center workers in other “high-demand occupations;” Amazon Apprenticeship, which will offer “paid intensive classroom training” and apprenticeships at the company, certified by the Department of Labor; and AWS Training and Certification, which is designed to give employees a better practical knowledge of AWS Cloud. "
16,046
2,013
"Dirty business: Robots roam the sewer network | VentureBeat"
"https://venturebeat.com/2013/01/22/robots-roam-the-sewer-network"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dirty business: Robots roam the sewer network Share on Facebook Share on X Share on LinkedIn RedZone Robotics just launched a new robot to inspect mid-sized sewage pipes for corrosion, deformation, and debris in order to prevent leaks that could pose health hazards. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Autonomous roving robots may be coming to a sewer near you. RedZone Robotics just launched a new robot to inspect mid-sized sewage pipes for corrosion, deformation, and debris in order to prevent leaks that could pose health hazards. City waste water networks are often outdated, decaying, and maintained by skeleton maintenance crews. The EPA estimates that U.S. investments in wastewater will need to increase by over $150 billion over the next two decades to maintain current services. Many water companies have pipes one hundred feet underground that have never been inspected. [youtube http://www.youtube.com/watch?v=a7EGOoMmt7k] Redzone’s first sewer robot, Responder, was built to inspect the largest pipes in the toughest conditions. Navigation in pipes is relatively easy, but they may be littered with debris and have various levels of sewage flow, making locomotion difficult.”It was a quest to send a robot where no robot had gone before,” says Redzone’s CEO Mike Lach. “Waste water is a perfect application for robotics: dirty, dull, and dangerous.” Smaller pipes can be inspected manually using a remote control vehicle with a camera attached, but this is inefficient, time-consuming, and impossible for many pipes. Redzone’s robots can be dropped into one manhole and find their way to the next one for collection. They carry cameras, laser, lidar (light detection and ranging), sonar (for detection below the flow line), and hydrogen sulfide gas sensors. Hydrogen sulfide can corrode pipes. A combination of data from all of the sensors is used to build a model of the pipe’s interior and identify, for example, which pipes have the most corrosion. “Money is tight. You are dealing with public funds,” Lach explains. “How do you get the data you need to make decisions? And even if you can get that data, is it good enough? Do you replace pipes? Refurbish them?” These are the kinds of questions Redzone’s analytics platform can answer. In many cities, the first task is simply to map the wastewater system and its state of repair using the robot inspectors. For larger pipes, the cost of inspection is similar to existing methods, but the data acquired is much more rich. 
Inspection of smaller pipes is cheaper and quicker than the alternatives. An assessment that might otherwise take 15 years can be completed in one. Fast, fearless, and happy to do the dirty work. What more could you want from a robot? "
16,047
2,019
"Parrot's Anafi Thermal is a $1,900 drone for rescuers, architects, and the energy industry | VentureBeat"
"https://venturebeat.com/2019/04/15/parrots-anafi-thermal-is-a-1900-drone-for-rescuers-architects-and-the-energy-industry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Parrot’s Anafi Thermal is a $1,900 drone for rescuers, architects, and the energy industry Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Last summer, Paris-based Parrot took the wraps off Anafi , a fold-up, 4K-capable drone that weighs less than a pound and costs just $700. It’s targeted strictly at consumers, but today the company unveiled a new variant — Anafi Thermal — that’s equipped with a high-fidelity thermal sensor and custom software, which the company says make it an ideal fit for civil service professionals and rescuers, building and public works engineers, energy producers and transporters, and environmental preservationists. Anafi Thermal packs the same hardware as its predecessor, including a robust carbon fiber frame filled with hollow glass microbeads. Its arms fold and unfurl in less than three seconds, and its gimbal-mounted 21-megapixel Sony IMX230 sensor can shoot up to 30 frames per second in 4K high dynamic range (HDR) at 100Mpbs, with lossless zoom of up to 1.4 times. That said, it’s a tad lighter at 315 grams (compared with the Anafi’s 325 grams), and has slightly slimmer folding arms and a 26-minute flight time (versus the original’s 25-minute flight time). What’s new is the ability to view thermal and RGB images of surfaces, structures, and at-risk areas as Anafi Thermal flies over or under them. These images are recorded by a Flir sensor (with a 160 x 120 resolution) which can measure heat between 14 and 752 degrees Fahrenheit. (For the uninitiated, Flir supplies over 500,000 thermal sensors to automakers like General Motors, BMW, Audi, Mercedes-Benz, and Volkswagen for driver warning systems.) Drone pilots can measure the temperature of a specific part of an image, change color palettes, and more from Parrot’s companion app. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The drone’s unique imaging capabilities allow professionals to take immediate action or analyze recorded photos and videos in unprecedented detail,” Anafi wrote in a press release. “Anafi Thermal collects relevant and previously inaccessible data with complete security, improving the return on investment, efficiency, and productivity for professionals in multiple industries.” Anafi Thermal will be available beginning May 2019 for $1,900. 
The drone has stiff competition in DJI’s $2,000 Mavic 2 Enterprise, a solution tailor-made for firefighting; law enforcement; emergency response; and inspection of powerlines, cell towers, and bridges. The Mavic 2 is less portable but has a higher top speed (45 miles per hour versus Anafi’s 33 miles per hour) and longer battery life (31 minutes versus 25 minutes), plus 10 optical avoidance sensors and a 12-megapixel, 1/2.3-inch gimbal-stabilized sensor with up to 2 times optical and 3 times digital zoom that can shoot up to 4K. Perhaps the highlight of the Mavic 2 Enterprise is an extender port that allows pilots to connect modular accessories to its frame, like the M2E Spotlight, a dual flashlight that provides up to 2,400 lumens; the M2E Speaker, a loudspeaker that can play up to 10 prerecorded voice messages; and the M2E Beacon, an ultra-bright LED that DJI says makes the Mavic 2 Enterprise easier to spot from miles away. All three come bundled in the box, along with a remote controller, a spare battery, and a protection kit with flight tools. Companies like AT&T use drones for maintenance inspections and to assist in natural disaster zones, and dozens of local government agencies, like the San Diego Fire Department (SDFD), have begun actively deploying drones as part of the Federal Aviation Administration’s (FAA) unmanned aerial systems integration pilot program. In May, the FAA chose 10 winners from a pool of more than 160 applicants interested in reimagining how drones can be used by governments and private industry. Meanwhile, telepresence drone piloting company Cape and others in the industry have begun to partner with first responders like the Chula Vista Police Department and San Diego Fire Department for field tests. Reports show that the commercial drone industry is continuing to grow quickly, albeit from a small base. A 2017 forecast from Gartner projected the number of commercial drones sold that year would exceed 174,000. Moreover, about $454 million was thrown at UAV startups in 2016 alone, and the market is forecast to become a $127 billion industry by 2020. "
16,048
2,020
"DJI debuts new enterprise drone and cameras | VentureBeat"
"https://venturebeat.com/2020/05/07/dji-debuts-new-enterprise-drone-and-cameras"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DJI debuts new enterprise drone and cameras Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Just over a week after unveiling the Mavic Air 2 , DJI today announced two new additions to its product portfolio: the Matrice 300 RTK (M300 RTK), an enterprise-focused drone platform, and the Zenmuse H20 series, a pair of hybrid multi-sensor cameras. Both were designed for aerial inspections and data collection missions like those some companies have launched to combat the spread of the coronavirus. DJI says its enterprise and first responder customers are conducting remote outreach to homeless populations in Tulsa, Oklahoma and ensuring that beachgoers in Dayton Beach, Florida follow social distancing guidelines. “With the M300 RTK flying platform and the Zenmuse H20 camera series, we are providing a safer and smarter solution to our enterprise customers,” said DJI senior director of corporate strategy Christina Zhang. “This solution [promises to] significantly [enhance] operations across public safety, law enforcement, energy, surveying, and mapping, as well as critical infrastructure inspections.” M300 RTK The M300 RTK , which DJI says is the first drone in its commercial portfolio to incorporate “modern aviation features,” packs a six-directional sensing and positioning system (with a 40-meter horizontal range) and a battery that delivers on average 55 minutes of flight time. The M300 RTK’s enclosure is IP45-rated against debris ingress, and its OcuSync Enterprise transmission system beams triple-channel 1080p video from up to 15 kilometers (26.46 miles) away. Other highlights include support for a maximum of three 2.7-kilogram (2.2-pound) payloads, AES-256 encryption for secure data transmission, AirSense (ADS-B) for enhanced airspace safety, and an anti-collision beacon for increased visibility. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Thanks in part to AirSense, the M300 RTK’s primary flight display shows flight and navigation data like altitude, speed, obstacle data, and other aircraft. Beyond this, the M300 RTK offers a new Advanced Dual Operator Mode that gives two pilots an equal opportunity to gain flight control priority, with the transfer displayed by a series of icons on the DJI Smart Controller Enterprise remote control. 
The drone also benefits from an integrated UAV Health Management system that provides an overview of critical systems, allowing admins to manage firmware updates across an entire fleet, track pilot hours, and review flight missions. On the software side, the M300 RTK and the Zenmuse H20 series boast a suite of features dubbed Smart Pin & Track. One of these — PinPoint — allows users to mark subjects of interest and share location data with a second operator (or, if necessary, to ground teams via DJI’s FlightClub app). Another feature — Smart Track — lets drone pilots automatically detect and track moving objects — even at “extreme” distances — while updating the location in real time. The M300 RTK’s and the Zenmuse H20 series’ Smart Inspection tools — which comprise Live Mission Recording, AI-Spot Check, and Waypoints 2.0 — optimize data collection in the course of power line, railway, and oil and gas inspections. Live Mission Recording records automated missions on the fly, while AI-Spot Check performs comparisons between previously recorded subjects and a current live view. As for Waypoints 2.0, it delivers a mission planning system with support for up to 65,535 waypoints and consecutive actions, as well as third-party payloads. DJI Zenmuse H20 series DJI’s Zenmuse H20 series cameras are tailored for industrial applications and public safety missions, the company says, with an IP44-rated enclosure and a feature called High-Res Grid Photo that captures imagery with the help of a grid overlay. A separate feature called One-Click Capture records videos or photos of up to three cameras simultaneously without requiring an operator to manually switch between views or repeat a mission. And Night Scene provides clearer visibility in low-light environments. The Zenmuse H20 series comes in two versions: The H20, which has a 20-megapixel sensor with 23 times hybrid optical zoom; a 12-megapixel wide camera; and a laser range finder covering distances from 3 meters (9.8 feet) to 1,200 meters (3,937 feet). The H20T, which adds a 640 x 512-pixel radiometric thermal camera that captures footage at 30 frames per second. As with the M300 RTK, the Zenmuse H20 series can be controlled with the DJI Pilot app, which has been redesigned with a simplified zoom toggle and a field-of-view preview atop wide-angle and thermal camera footage. From the app, users can switch between various views, including the wide, zoom, and thermal cameras. The M300 RTK and Zenmuse H20 series are available for preorder from official DJI Enterprise dealers, starting today. They’ll begin shipping in Q2 2020. "
16,049
2,020
"Sphero spins out Company Six to bring robotics tech to first responders | VentureBeat"
"https://venturebeat.com/2020/05/20/sphero-spins-out-company-six-to-bring-robotics-tech-to-first-responders"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sphero spins out Company Six to bring robotics tech to first responders Share on Facebook Share on X Share on LinkedIn Sphero's recently launched robotic four-wheeler, RVR. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Sphero , the decade-old Colorado-based company best known for its programmable robots, today announced Company Six (CO6), a spinoff that will focus on commercializing intelligence robots and AI-based apps for military, EMT, and fire personnel and others who work in dangerous situations. To fund the productization and market entry of its initial products, Company Six — which will be led by former Sphero COO Jim Booth, alongside several members of the original Sphero team — has raised a $3 million seed investment. Robots are ripe for first responder scenarios, as novel research and commercial products continue to demonstrate. Machines like those from RedZone can autonomously inspect sewage pipes for corrosion, deformation, and debris in order to prevent leaks that could pose health hazards. And drones like the newly unveiled DJI M300 RTK and Parrot Anafi Thermal have been tapped by companies like AT&T and government agencies for maintenance inspections and assistance in disaster zones. Company Six appears poised to carve out a niche in this market, which was estimated to be worth in excess of $3.7 billion. Company Six began as Sphero’s Public Safety Division, the brainchild of former Sphero CEO Paul Berberian and Booth, both of whom have backgrounds in military service. Booth was an early mentor to Sphero cofounders Ian Bernstein and Adam Wilson during the Techstars Boulder Accelerator 2010 class. Booth joined Sphero after the Techstars program and helped launch and scale the company while managing a number of areas, including operations, human resources, and business development. According to Booth, Company Six’s near-term goal will be to create technologies that are not only robust and feature-rich enough for professional applications, but affordable enough to be adopted by the majority of civilian and military personnel. The products and services it hopes to deliver — which will include a cloud-based analytics and monitoring platform — will be designed to maintain safety and situational awareness and improve decision-making in the field for critical incidents and everyday operating environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
The spinout of Company Six follows Sphero’s acquisition of New York City-based modular electronics startup LittleBits, which gave Sphero a combined portfolio of over 140 patents in robotics, electronics, software, and the internet of things (IoT). At the time, Sphero said the major play for the burgeoning science, technology, engineering, arts, and math (STEAM) product segment would enable it to reach over 6 million students and 65,000 teachers across 35,000 institutions globally. LittleBits was Sphero’s second acquisition after it snapped up Specdrums in June 2018. This marked a turning point for Sphero, coming after a $12 million funding round that brought its total raised to $119 million. After Sphero Mini and Disney-licensed products like Ultimate Lightning McQueen, R2D2, BB-9E, and Spider-Man failed to secure a foothold during the 2017 holiday season, resulting in layoffs at the company’s Boulder, Colorado; U.K.; and Hong Kong offices, Sphero pledged to redouble its commitment to education. As part of this effort, it spun out Misty Robotics, a company developing personal, extensible, and open source robots for the home. Berberian will move into the role of chair as a part of Company Six’s launch. He’ll also preside over the spinout as Paul Copioli — the LittleBits, VEX Robotics, Fanuc, and Lockheed Martin veteran who joined Sphero in August 2019 — takes the reins as CEO. Company Six’s seed round was led by Spider Capital, with participation from existing Sphero investors, including Foundry Group, Techstars, and new investor GAN Ventures. "
16,050
2,018
"Figma raises $25 million to take on Adobe with a browser-based interface design tool | VentureBeat"
"https://venturebeat.com/2018/02/02/figma-raises-25-million-to-take-on-adobe-and-invision-with-its-browser-based-interface-design-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Figma raises $25 million to take on Adobe with a browser-based interface design tool Share on Facebook Share on X Share on LinkedIn Figma. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Figma , an interface design and prototyping tool that works in the browser, has raised $25 million in a series B round of funding from Kleiner Perkins, Greylock, and Index Ventures. Founded out of San Francisco in 2012, Figma is one of a number of players plying their trades in the UI design and prototyping realm. However, Figma claims a number of notable advantages. For example, it works in the browser and facilitates collaboration between team members who are working on the same app or website design. Moreover, you don’t have to upload or download files, this is all in real time, with all edits made visible to collaborators instantly and a messaging facility available for discussion. In short, it’s kind of like Google Docs for designing, prototyping, and collaborating around the interface creation process. Above: Figma: Discussing mobile app design project Much like other UI design tools, Figma also offers accompanying “mirror” apps for smartphones so you can see what an app design looks like on a real device. And it has desktop apps that bring some offline functionality to the mix. Figma launched in preview back in 2015 , though it didn’t launch fully to the public until the following year. Prior to now, the company had raised around $18 million in funding from such notable names as Greylock, Index, LinkedIn CEO Jeff Weiner, and computer scientist DJ Patil. And with another $25 million in the bank, the company said it plans to double down on its efforts in the enterprise. Competitors in the space includes the mighty Adobe, which launched its Adobe XD prototyping and wireframing tool in October after 18 months in beta — though it still only works on macOS 10.11/Windows 10 Anniversary and later. Adobe XD is still in its relative infancy, but given the company’s existing footprint and reputation in the design realm, it will clearly be a major competitor to the likes of Figma. Elsewhere, New York-based InVision offers similar UI prototyping smarts, though you have to create the mockups and wireframes using a separate design tool, while Dutch startup Bohemian Coding offers the popular Mac-only app Sketch. Clearly, there is a big demand in the designer world for such tools, with InVision raising $100 million a few months back to take its total funding to more than $230 million since its inception in 2011. 
Shortly after, Netherlands-based Framer secured $7.7 million for its visual design prototyping tool. Figma claims some big-name customers, including Microsoft, Uber, and Slack. Indeed, Kleiner Perkins, which led Figma’s latest funding round, is also one of Slack’s backers. Mamoon Hamid, general partner at Kleiner Perkins, sees some similarities between the two companies. “Figma makes the design process more open and collaborative so teams can develop and bring new products to market faster,” said Hamid. “In many respects, Figma’s traction and potential remind me of Slack at this stage.” Figma represents Hamid’s inaugural investment since he joined Kleiner Perkins back in August. "
16,051
2,018
"Nvidia: 'Every cloud computing software maker is building on top of CUDA' | VentureBeat"
"https://venturebeat.com/2018/03/27/nvidia-every-cloud-computing-software-maker-is-building-on-top-of-cuda"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia: ‘Every cloud computing software maker is building on top of CUDA’ Share on Facebook Share on X Share on LinkedIn Jeff Herbst runs investments at Nvidia. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia loves the graphics processing unit (GPU) and all of the new kinds of computing it has enabled, from self-driving cars to medical imaging devices. And venture capitalists are showing their love by investing in GPU computing startups. Jeff Herbst, vice president of business development at Nvidia, said in an interview that he was encouraged by how big the ecosystem has grown for GPU computing investments. Herbst was host to a couple hundred VCs and entrepreneurs at the GTC 2018 event in San Jose, California. The luncheon was several times bigger than last year’s event. “They get it now,” Herbst said. “It’s great to see so many VCs here. It’s no longer a risk to see your companies build on top of the GPU platform. I think it’s necessity. It’s real. It’s past its inflection point. The train has left the station.” Nvidia introduced programmability to its graphics processors in 2001, thereby inventing the GPU. Then it created the CUDA programming language in 2006 to enable programmers to run non-graphics software on the GPU, which had the advantage of having lots of parallel processors. That led to a huge wave of GPU growth, and there are now more than 820,000 CUDA programmers. CUDA has been downloaded 8 million times, and there are 350 applications. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Jeff Herbst said the era of GPU computing has arrived. Much of this happened because of advances in deep learning neural networks, which in the past five years have made huge strides in recognizing non-structured data, such as images of flowers. Now deep learning software running on a GPU can recognize a flower in just about any photo. Many VCs are investing in Nvidia’s hardware rivals. Some are creating chips that are custom-designed for deep learning. But Herbst said he was OK with that because it means that the overall market for AI hardware is strong. “The big companies are there, the VC ecosystem is there,” Herbst said. At the same time, growth that continued for decades under Moore’s Law, which predicts the number of transistors on a chip will double every couple of years, has slowed. So central processing units (CPUs) aren’t getting faster. 
Above: Jeff Herbst said the era of GPU computing has arrived.

Much of this happened because of advances in deep learning neural networks, which in the past five years have made huge strides in recognizing unstructured data, such as images of flowers. Now deep learning software running on a GPU can recognize a flower in just about any photo.

Many VCs are investing in Nvidia's hardware rivals. Some are creating chips that are custom-designed for deep learning. But Herbst said he was OK with that, because it means the overall market for AI hardware is strong. "The big companies are there, the VC ecosystem is there," Herbst said.

At the same time, growth that continued for decades under Moore's Law, which predicts the number of transistors on a chip will double every couple of years, has slowed. So central processing units (CPUs) aren't getting faster. But with GPU computing, there has been a 1,000-fold improvement in processing speeds. "We can see the opportunity for thousands of times speed-up over the next decade, and we can see that because the ecosystem is appearing before our eyes," he said. "Every cloud computing software maker is building on top of CUDA." Roughly half of the top 50 supercomputers run on GPUs now.

So nothing could go wrong, right? Well, other processor designers hope they can create something to accelerate AI computing more efficiently than the GPU. Nvidia's flywheel has a lot of momentum, and it will be hard to stop, though others will keep trying. Nvidia itself has invested in eight companies since May: BlazingDB, Deep Instinct, Deepgram, Element AI, TuSimple, Graphistry, H2O.ai, and JingChi.

"We want to feed you our best companies and invest alongside as a strategic investor along the way," Herbst said to the VCs in the audience. "Our door is open for business, and we want to help you."

George Hoyem, managing partner at In-Q-Tel, said his firm invests in 50 to 60 companies per year.

Above: GPU investing panel at GTC 2018. Left to right: Jeff Herbst of Nvidia, Todd Mostak of MapD, and George Hoyem of In-Q-Tel.

"We're delighted that GPUs are taking center stage," Hoyem said. "This is a whole new platform shift. Who would have thought database would run on a GPU platform?" He said the government is interested in GPU computing because it can process enormous amounts of data.

MapD CEO Todd Mostak said his company received an investment from In-Q-Tel to support its work analyzing billions of pieces of visual data. His customers include federal agencies, because they're solving hard problems with massive amounts of data. MapD won $100,000 in an Nvidia GPU Ventures contest, and it was able to get a round of funding after that, Mostak said.

Nvidia invested in startup TuSimple, which has since gone on to raise more than $80 million for its self-driving truck business in China. That support helped a lot, said Xiaodi Hou, chief technology officer of TuSimple.

"Taking deep learning and applying it to emotion, like sentiment about something, could be very valuable," said Dharmesh Thakker, general partner at Battery Ventures. "You should assume more and more data is being created. Being able to extract the features and recognizing them is important." "
16,052
2,018
"Instagram launches IGTV, a standalone app for longform video | VentureBeat"
"https://venturebeat.com/2018/06/20/instagram-launches-igtv-a-standalone-app-for-longform-video"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Instagram launches IGTV, a standalone app for longform video Share on Facebook Share on X Share on LinkedIn Instagram cofounder Kevin Systrom debuts IGTV at an event held in San Francisco July 20, 2018 Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook today announced the launch of IGTV , a standalone video app for Instagram users. IGTV content will also be available in the Instagram app. IGTV videos will be able to go beyond the 60-second limit placed on videos uploaded on Instagram today. The move will put Instagram’s content in competition with YouTube, which uses programs like YouTube Creators to support its community of video makers. Apple also reportedly plans to launch a streaming video service in spring 2019 , though the service is tied to professionally developed shows rather than the wider community of social media creatives. The announcement follows news earlier this week that Facebook plans to introduce tools to connect video creators with advertisers and the launch of Facebook Watch for longform video last summe r. At launch, IGTV videos can be up to 10 minutes long, while select creators will be allowed to make videos up to 60 minutes long. The news was announced a day ahead of VidCon , a popular conference for video creators being held this week in Anaheim, California. The IGTV app will give users the ability to follow their favorite creators and use machine learning to predict which videos they’ll want to watch next. A swipe left or right in the IGTV app will bring up additional content, and viewers will be able to leave comments and interact with video creators. IGTV is the most recent addition to Instagram’s video offerings, following the introduction of Instagram Stories and live video. The app will be available for download on iOS and Android smartphones in the coming days, Instagram CEO Kevin Systrom told a gathering of influencers and members of the press today at an event held in San Francisco. Instagram users can update the app in order to begin watching IGTV videos today. Instagram now has more than 1 billion monthly active users, the company also announced today. Creators can splice together video from their Stories and other video content, but a standalone app was chosen over extending videos upload permission beyond 60 seconds in order to highlight longform content. 
"I think in all of our research, what we figured out very quickly is that when people browse through Feed, they don't want to stop and watch a 10-minute video, and to do that slows down Feed and makes it feel clunky," Systrom said in a press conference after the debut.

IGTV will not launch with advertising or make payments to creators, but a monetization strategy will be explored in the future, Systrom said. However, influencers will be able to share links in IGTV to support deals they strike with brands. No plans are currently in the works to produce original Instagram content for IGTV.

In other recent video-related Instagram news, at the F8 developer conference last month Facebook announced plans to introduce group video chat for WhatsApp and Instagram, as well as the ability to share content from apps like GoPro and Spotify in Instagram Stories. "
16,053
2,020
"Google's AutoFlip uses AI to crop videos for you | VentureBeat"
"https://venturebeat.com/2020/02/13/googles-autoflip-automatically-crops-videos-using-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s AutoFlip uses AI to crop videos for you Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Video filmed and edited for TV is typically created and viewed in landscape, but problematically, aspect ratios like 16:9 and 4:3 don’t always fit the display being used for viewing. Fortunately, Google is on the case. It today detailed AutoFlip , an open source tool for intelligent video reframing. Given a video and a target dimension, it analyzes the video content and develops optimal tracking and cropping strategies, after which it produces an output video with the same duration in the desired aspect ratio. As Google Research senior software engineer Nathan Frey and senior software engineer Zheng Sun note in a blog post, traditional approaches for reframing video usually involve static cropping, which often leads to unsatisfactory results. More bespoke approaches are superior, but they typically require video curators to manually identify salient content in each frame, track their transitions from frame to frame, and adjust crop regions accordingly throughout the video. By contrast, AutoFlip is completely automatic thanks to AI object detection and tracking technologies that intelligently understand video content. The system detects changes in the composition that signify scene changes in order to isolate scenes for processing. And within each shot, it uses video analysis to identify salient content before reframing the scene, chiefly by selecting an optimized camera mode and path. Above: AutoFlip (bottom row) compared with a baseline method (top row). To detect when a shot in a video changes, AutoFlip computes the color histogram of each frame and compares this with prior frames. If the distribution of frame colors changes at a different rate than a sliding historical window, a shot change is signaled. AutoFlip buffers the video until the scene is complete before making reframing decisions in order to optimize the reframing for the entire scene. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AutoFlip also taps AI-based object detection models to find interesting content in the frame, like people, animals, text overlays, logos, and motion. 
AutoFlip also taps AI-based object detection models to find interesting content in the frame, like people, animals, text overlays, logos, and motion. Face and object detection models are integrated with AutoFlip through MediaPipe, a framework that enables the development of pipelines for processing multimodal data and that uses Google's TensorFlow Lite machine learning framework on processors. This structure allows AutoFlip to be extensible, according to Google, so developers can add detection algorithms for different use cases and video content.

AutoFlip automatically chooses a reframing strategy — stationary, panning, or tracking — depending on the way objects behave during the scene. In stationary mode, the reframed camera viewport is fixed in a position (like a stationary tripod) from which important content can be viewed throughout the majority of the scene. Panning mode, on the other hand, moves the viewport at a constant velocity, while tracking mode provides continuous and steady tracking of objects as they move around within the frame.

Based on which reframing strategy is selected, AutoFlip determines a cropping window for each frame while preserving the content of interest. A configuration graph provides settings for reframing such that if it becomes impossible to cover the entire required region, the system will automatically switch to a less aggressive strategy by applying a letterbox effect, padding the image to fill the frame. AutoFlip will draw on the background color (if it's a solid color) to ensure the padding blends in, or otherwise use a blurred version of the original frame.

The researchers leave to future work improving AutoFlip's ability to detect "objects relevant to the intent of the video," such as speaker detection for interviews or animated face detection on cartoons, and ensuring that input video with overlays on the edges of the screen (such as text or logos) isn't cropped from the view. But they assert that even in its current form, AutoFlip will "reduce the barriers to … design creativity."

"By combining text/logo detection and image inpainting technology, we hope that future versions of AutoFlip can reposition foreground objects to better fit the new aspect ratios. [And] in situations where padding is required, deep uncrop technology could provide improved ability to expand beyond the original viewable area," wrote Frey and Sun. "We are excited to release this tool directly to developers and filmmakers, reducing the barriers to their design creativity and reach through the automation of video editing. The ability to adapt any video format to various aspect ratios is becoming increasingly important as the diversity of devices for video content consumption continues to rapidly increase." "
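As a small addendum to the article above, the letterbox fallback it describes can be sketched as follows. This is an illustrative approximation, not AutoFlip's implementation; it assumes opencv-python and numpy and uses a blurred copy of the frame as the padding background.

```python
# Illustrative letterbox fallback: fit a frame into a target aspect ratio and fill
# the padding with a heavily blurred copy of the frame so the bars blend in.
import cv2
import numpy as np

def letterbox(frame, target_w, target_h, blur_ksize=51):
    h, w = frame.shape[:2]
    scale = min(target_w / w, target_h / h)          # fit the whole frame inside the target
    new_w, new_h = int(w * scale), int(h * scale)
    fg = cv2.resize(frame, (new_w, new_h))

    # Background: stretch the frame to cover the target canvas, then blur it
    bg = cv2.resize(frame, (target_w, target_h))
    bg = cv2.GaussianBlur(bg, (blur_ksize, blur_ksize), 0)

    # Paste the properly scaled frame in the centre of the blurred background
    x = (target_w - new_w) // 2
    y = (target_h - new_h) // 2
    bg[y:y + new_h, x:x + new_w] = fg
    return bg

# e.g. reframe a 1920x1080 landscape frame onto a 1080x1920 portrait canvas:
# portrait = letterbox(frame, 1080, 1920)
```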
16,054
2,016
"Facebook opens its Messenger platform to chatbots | VentureBeat"
"https://venturebeat.com/2016/04/12/facebook-opens-its-messenger-platform-to-chatbots"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook opens its Messenger platform to chatbots Share on Facebook Share on X Share on LinkedIn Facebook CEO Mark Zuckerberg on stage at the F8 2015 developer conference talking about the Messenger Platform. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook announced today that it was opening up its Messenger platform in beta to let chatbots into the app. What this means is that you’re now able to interact with an AI-powered “representative” from a business within one of the largest social networks around. At the F8 developer conference, David Marcus said that product enhancements on Messenger means that more people are using it every day. “People love to interact with businesses within Messenger,” he said. “The future we’re going to build will be even more exciting.” Before the Internet era, Marcus stated, everything was conversational — you had to actually talk to a person to get what you wanted. Then the Internet allowed interactions at a much larger scale. The mobile era offered stripped-down versions of web content in the form of apps. But we download fewer and fewer apps ; meanwhile, mobile web is frustrating, and we still call businesses time to time when we need to deal with urgent issues. In this morass, Messenger offers some good properties: instant and persistent user identity, presented in context. To date, Facebook says, more than 50 million companies operate on the social network, with more than 1 billion business messages sent every month. This is a compelling reason for pursuing bots within Messenger. Marcus said that the Facebook Messenger Platform allows developers to build bots for Messenger using its send and receive API: All bots will keep your identity and send and receive text, images, buttons, bubbles, and calls to action. “In order to build a great experience, you need a combination of UI and conversation. We think the combination of UI and conversation is what is going to work,” he explained. Facebook is offering prominent user controls at the top of every thread that allows you to block specific messages or entire conversations to make sure that “you are always in control of your messaging experience,” Marcus said. There are also discovery tools that can be incorporated into websites, along with password-free login tools , which were announced earlier at F8. To accelerate development, Facebook has also developed a bot framework built off of Wit.ai. 
Some of the bots Facebook users can access are 1-800 Flowers, Hipmunk, CNN, eBay, Disney, Staples, Shopify, Salesforce, HealthTap, OwnersListens, and more. Marcus says the bot engine, documentation, and send/receive API will be available today. Introducing AI in customer service Facebook has been investing more resources into Messenger in the past couple of years, especially after extracting it from the core app. And what it has done certainly has propelled a massive uptick in adoption by the social network’s 1.59 billion monthly active users , with more than half of them accessing Facebook purely on their mobile device. Chatbots are meant to engage customers in a medium that they’re comfortable with, making responses real-time instead of having to trade back-and-forth communications via email or talk on the phone. Bots could also enhance the capabilities of Facebook’s “M” personal assistant. In August, the company claimed that M “can actually complete tasks on your behalf. It can purchase items, get gifts delivered to your loved ones, book restaurants, travel arrangements, appointments, and way more.” Is this new form of customer service going to stick for brands eager to tap into the massive audience on Facebook Messenger? Some think it’s not the right way to go because it lacks the human interaction that makes customer service personal. Call center software provider Five9 ‘s vice president of product marketing Mayur Anadkat told VentureBeat, “Complex customer interactions should certainly be left to the live agent who can read the situation and react accordingly.” He continued, “As brand loyalty and exceptional customer service become the main priority for brands, companies simply cannot afford for bots to completely handle customer service and risk creating a negative experience. With that said, the live customer service representative will always have a place with the overall customer experience.” However, Yahoo’s senior vice president for product and engineering around advertising and search Enrique Munoz Torres said that companies should embrace bots. He believes that user behaviors have changed. “Users are increasingly more comfortable with conversational interfaces, and they expect that systems will be able to handle complex requests,” he said. In addition, messaging is evolving from applications to platforms. “These developments will definitely impact search,” he continued. “Moreover, it can drive users to ask questions that they currently would not enter into a search engine, like ‘What should I do this weekend?’. Having bots be able to address these complex questions, let alone complete the actions in a really comprehensive way, would be transformative. We, at Yahoo, find this problem to be fascinating and worth exploring.” Taking its place in the bot race Reports about Facebook chatbots first surfaced in January with revelations of an SDK that allows developers to build these conversational tools within Messenger. The launch of chatbots and a live chat API comes a year after Facebook launched its Messenger platform. The goal then was to allow people to express themselves in a new way and make their conversations better. The app started off supporting deep linking within conversations, along with 40 third-party integrations that included Giphy, ESPN, Imgur, and The Weather Channel. Above: Facebook executive David Marcus onstage at last year’s F8, with screens showing the 40 launch partners for the company’s new Messenger Platform. 
(March 25, 2015)

Facebook's Marcus also revealed an offering called Messenger for Business. With it, users are able to communicate through Facebook Messenger with participating businesses — mirroring what WeChat has offered for a while. Marcus explained that the new service is helping people "communicate more naturally" and can "vastly improve people's lives."

Many Facebook competitors have adopted bot technology much more quickly, including WeChat, Telegram, Kik, Slack, Microsoft and Skype, and Line. However, with more than 900 million users on its Messenger service, Facebook's entry could have a substantial effect on the industry. "
16,055
2,016
"A short history of chatbots and artificial intelligence | VentureBeat"
"https://venturebeat.com/2016/08/15/a-short-history-of-chatbots-and-artificial-intelligence"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest A short history of chatbots and artificial intelligence Share on Facebook Share on X Share on LinkedIn Robot on an electric box. Credit: Khari Johnson Starting in the 1980s, technology companies like Apple, Microsoft, and many others presented computer users with the graphical user interface as a means to make technology more user-friendly. The average consumer wasn’t going to learn binary code to use a computer, so the great minds at these leading technology companies slapped a screen on technology and offered an interface that provided icons, buttons, toolbars, and other graphical elements so that the computer could be easily consumed by a mass market. Today it’s hard to even imagine technological devices without a screen and a graphical presentation — until now. Early in 2016, we saw the introduction of the first wave of artificial intelligence technology in the form of chatbots. Social media platforms like Facebook allowed developers to create a chatbot for their brand or service so that consumers could carry out some of their daily actions from within their messaging platform. This development of A.I. technology has excited everyone, as the possibilities for the way we communicate with brands have been exponentially expanded. The introduction of chatbots into society has brought us to the beginning of a new era in technology: the era of the conversational interface. It’s an interface that soon won’t require a screen or a mouse to use. There will be no need to click or swipe. This interface will be completely conversational, and those conversations will be indistinguishable from the conversations that we have with our friends and family. To fully understand the massiveness of this soon-to-be reality, we’d have to go back to the first days of the computer, when the desire for artificial intelligence technology and a conversational interface first began. A.I. aspirations Artificial intelligence, by definition, is intelligence exhibited by machines to display them as rational agents that can perceive their surroundings and make decisions. A rational agent defined by humans would be a computer that can realistically simulate human communication. In the 1950s and ’60s, computer scientists Alan Turing and Joseph Weizenbaum contemplated the concept of computers communicating like humans do with experiments like the Turing Test and the invention of the first chatterbot program, Eliza. The Turing Test was developed by Alan Turing in 1950 to test a computer’s ability to display intelligent behavior equivalent to or indistinguishable from that of a human. 
The test involved three players: two humans and a computer. Player C (human) would type questions into a computer and receive responses from either Player A or Player B. The challenge for Player C was to correctly identify which player was human and which player was a computer. The computer would offer responses, using jargon and vocabulary that was similar to the way we humans communicate in an effort to mask itself. Although the game proved enticing for players, the computer would always betray itself at one point or another due to its basic coding and lack of inventory of human language. The game was invented much before the time of A.I., but it left the desire for artificial intelligence in our minds as an aspirational goal that we might one day reach when our technological knowledge had progressed enough. Eliza, the first chatterbot ever coded, was then invented in 1966 by Joseph Weizenbaum. Eliza, using only 200 lines of code, imitated the language of a therapist. Unlike the Turing Test, there wasn’t any guessing game with Eliza. Humans knew they were interacting with a computer program, and yet through the emotional responses Eliza would offer, humans still grew emotionally attached to the program during trials. The program proved wildly popular in its time, but the same pitfalls that plagued the Turing Test plagued Eliza, as the program’s coding was too basic to reach farther than a short conversation. What was made clear from these early inventions was that humans have a desire to communicate with technology in the same manner that we communicate with each other, but we simply lacked the technological knowledge for it to become a reality at that time. In the past decade, however, progress in computer science and engineering has compounded itself. We live in an era of tech mobility and functionality that was unfathomable even in the ’90s. So it’s no surprise that finally, in 2016, we are beginning to attain what we wanted from computers all along: We are beginning to converse with them. Smartphones were the catalyst The smartphone was the catalyst that pushed us towards the age of artificial intelligence. When the smartphone rose in popularity in the early 2000s, web designers were faced with the obstacle of truncating their websites to fit onto a much smaller screen. This proved to be a difficult task, and responsive design — a website that maintains its functionality across all devices: desktop, tablet, smartphone — became a huge topic in the web design world. The obstacles that these smaller devices created is what led to the popularity of the mobile app. We’ve all heard the phrase “There’s an app for that,” which became culturally ubiquitous when developers started creating mobile apps for every possible service that one might use throughout the day. They believed humans wanted an individual graphical home for everything. This assumption ultimately proved incorrect — turns out users don’t actually like apps. They’d rather converse, as it turns out. A study from Comscore revealed that 78 percent of smartphone users use just three apps or less, and messaging apps are by far the most popular. This discovery shouldn’t have been much of a surprise as spoken language has been our favorite and oldest interface. The graphical interface has its place. 
But as web designers have continued to struggle with fitting their graphical layouts on smaller screens, spending a huge amount of time and money to constantly revamp the overall user experience, we began to ask: Is the graphical user interface actually quite lacking in efficiency? Could it be that all this time and money spent revamping and perfecting the interface is proof that it's actually crappy? Could we find a better interface?

Where the conversation is heading

While A.I. chatbot technology is still very much in its infancy, this breakthrough can lead us to surmise about how close we are to an era when we won't just be conversing with brands, but with technology in general. An era when a screen for a device will be considered antiquated, and we won't have to struggle with UX design. Companies like Amazon and Google are already exploring this with the Amazon Echo and Google Home products; these are screenless devices that connect to Wi-Fi and then carry out services. Thanks to IoT (the Internet of Things), which is the implementation of an internet connection into devices beyond just our phone or computer — such as cars, TVs, stereos, and even washing machines — all these Wi-Fi devices have been entering our lives. Very soon we'll buy a new TV that has the Google Assistant built in. Since the hub will be connected to all your personal platforms, including your calendar, email, PayPal, Netflix, and so on, you will be able to set up your television just by saying, "Hey, Google Assistant, set up my new television with all my favorite content." These screenless hubs will even make human-like suggestions such as "Hey, based on what you've been watching on Netflix, this new show seems like something you might like. Do you want me to play it for you?"

The era of a better interface is almost here

As you can see, the advent of these natural language processing chatbots is bringing us toward a very exciting time for technology. Thanks to chatbots, we are currently no longer sandboxed into one graphical area at a time to carry out our daily actions. Users no longer have to exit their messaging app to open their mobile browser and plug in a URL to make a dinner reservation, in the process clicking a dozen or so graphical areas. We will now be able to chat with friends, then chat with the restaurant's bot in the same digital space to reserve a table, uniting an entire evening's services into one conversation.

Looking toward the future, there will be less adjusting our ways of communication to fit technology and more of technology adapting to us — losing the graphical confines and learning our preferences, our cultural norms, and our slang, becoming more useful to us than we ever thought possible. "
16,056
2,019
"Kustomer raises $60 million to automate repetitive customer service processes | VentureBeat"
"https://venturebeat.com/2019/12/04/kustomer-raises-60-million-to-automate-repetitive-customer-service-processes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kustomer raises $60 million to automate repetitive customer service processes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For most brands, guiding and tracking customers through every step of their journeys is of critical operational importance. According to a recent PricewaterhouseCoopers report , the number of companies investing in omnichannel experiences has jumped from 20% to 80%, and an Adobe study found that those with the strongest omnichannel customer engagement strategies enjoy 10% year-over-year growth on average and a 25% increase in close rates. In an effort to further accelerate the adoption of omnichannel strategies, AOL and Salesforce veterans Brad Birnbaum and Jeremy Suriel founded New York-based Kustomer in 2015, a software-as-a-service (SaaS) provider that automates repetitive customer service processes by applying analytics atop data from multiple sources. Business is booming, according to the two cofounders — among Kustomer’s marquee customers are Sweetgreen, ThirdLove, Ring, Glossier, Rent the Runway, Away, Glovo, and UNTUCKit. And that’s perhaps why they had no trouble winning over venture backers. Kustomer today announced that it’s raised $60 million in a series E fundraising round led by Coatue, with participation from current investors Tiger Global Management and Battery Ventures. The tranche brings the company’s total raised to over $173.5 million to date, $161 million of which was raised over the past 18 months across four rounds contributed by Redpoint Ventures, Cisco Investments, Canaan Partners, Boldstart Ventures, and Social Leverage. CEO Birnbaum said the infusion will bolster Kustomer’s global expansion as it continues to invest “heavily” in product development. To this end, Kustomer in September launched its first EU datacenter in Dublin, and by the end of the year, it plans to roll out “next-gen” customer relationship management capabilities. Additionally, Kustomer says it’ll triple the size of its development team in 2020. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: The Kustomer web dashboard. Kustomer’s platform, which is hosted on Amazon Web Services, lets clients search, display, and report out-of-the-box on objects like “customers” and “companies” with tweakable attributes such as orders, feedback scores, shipping, tracking, web events, and more. 
These populate visual cards for customer service agents designed to provide context (e.g., past orders or shipping details) and surface relevant events from apps, as well as to enable shortcuts that allow those agents to quickly take actions or respond with predefined text.

On the AI side of the equation, Kustomer offers a conversational assistant that collects customer information for human agents and auto-routes conversations. This year saw the launch of KustomerIQ, which allows companies to train AI models to address their unique business needs. The models in question can automatically classify conversations and customer attributes, reading messages between customers and agents using natural language processing techniques. They're also able to route customers to the appropriate department based on their native language, and to predict conversation volume and staffing needs based on historical and trend data.

Kustomer's workflow and business logic engines support the creation of conditional, multi-branch flows that enable each step to use the output of any previous step, and to trigger responses based on defined events from internal or third-party systems. Queuing and routing features offer a rules-based solution that ostensibly decreases resolution times by matching customers with high-capacity service agents, and by establishing priority queues that ensure select customers, such as VIPs, are handled swiftly.

From a dashboard, managers can view which agents are working in real time and launch customer satisfaction surveys (or view the results of recent surveys). Additionally, the dashboard exposes customer sentiment to provide a metric for overall customer service effectiveness, and it enables admins to customize Kustomer's self-service, customer-facing knowledge base with articles, tutorials, and rich media including videos, PDFs, and more. Kustomer users can segment customers, conversations, and custom objects for targeting purposes, such as narrowing down to customers who've made a purchase in the last six months. Robust tracking and collaboration tools allow managers and team members to compare metrics like customer satisfaction to money spent on products, and to invite others to join a conversation with mentions, notes, and more. Perhaps better still, Kustomer is truly cross-channel in nature, with support for email, chat, SMS, voice, Twitter, and messaging platforms like Facebook Messenger and WhatsApp.

"Kustomer is transforming customer service as we know it. At a time when consumers want intelligent, personalized attention, the most forward looking companies are turning to Kustomer to help them exceed expectations," said Birnbaum in a statement. "We are seeing rapid adoption over legacy brands like Salesforce and Zendesk and are in a position of strength across all key business metrics as we raise our Series E. With this latest fundraise, we plan to continue our global expansion and will invest heavily to help our clients deliver exceptional customer service."
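To ground the classification and routing capabilities described above, here is a generic sketch of the pattern using scikit-learn. It is not Kustomer's implementation; the intents, example messages, and queue names are invented for illustration.

```python
# Generic illustration of NLP-based conversation classification and routing,
# in the spirit of the capabilities described above. All data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_messages = [
    "Where is my order? It hasn't arrived yet",
    "I want to return these shoes for a refund",
    "My discount code isn't working at checkout",
    "The package arrived damaged, please help",
]
training_intents = ["shipping", "returns", "billing", "returns"]

# TF-IDF features feeding a simple linear classifier
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(training_messages, training_intents)

ROUTING = {"shipping": "logistics-queue", "returns": "returns-queue", "billing": "payments-queue"}

def route(message: str) -> str:
    intent = classifier.predict([message])[0]
    return ROUTING.get(intent, "general-queue")

print(route("my parcel still hasn't shown up"))   # likely routed to logistics-queue
```

A production system would train on far more data and add confidence thresholds so that uncertain messages fall back to a human triage queue.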
"
16,057
2,020
"The term ‘ethical AI’ is finally starting to mean something | VentureBeat"
"https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The term ‘ethical AI’ is finally starting to mean something Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Earlier this year, the independent research organisation of which I am the Director, London-based Ada Lovelace Institute, hosted a panel at the world’s largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title referenced both a tongue-in-cheek effort at self-promotion, and a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles. And we were not alone. 2020 has seen the emergence of a new wave of ethical AI – one focused on the tough questions of power , equity, and justice that underpin emerging technologies, and directed at bringing about actionable change. It supersedes the two waves that came before it: the first wave, defined by principles and dominated by philosophers, and the second wave, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch Court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically-decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice. From philosophers to techies Between 2016 and 2019, 74 sets of ethical principles or guidelines for AI were published. This was the first wave of ethical AI, in which we had just begun to understand the potential risks and threats of rapidly advancing machine learning and AI capabilities and were casting around for ways to contain them. In 2016, AlphaGo had just beaten Lee Sedol , promoting serious consideration of the likelihood that general AI was within reach. And algorithmically-curated chaos on the world’s duopolistic platforms, Google and Facebook, had surrounded the two major political earthquakes of the year – Brexit, and Trump’s election. In a panic for how to understand and prevent the harm that was so clearly to follow, policymakers and tech developers turned to philosophers and ethicists to develop codes and standards. 
These often recycled a subset of the same concepts and rarely moved beyond high-level guidance or contained the kind of specificity needed to speak to individual use cases and applications.

This first wave of the movement focused on ethics over law, neglected questions related to systemic injustice and control of infrastructures, and was unwilling to deal with what Michael Veale, Lecturer in Digital Rights and Regulation at University College London, calls "the question of problem framing" – early ethical AI debates usually took it as a given that AI will be helpful in solving problems. These shortcomings left the movement open to the critique that it had been co-opted by the big tech companies as a means of evading greater regulatory intervention. And those who believed big tech companies were controlling the discourse around ethical AI saw the movement as "ethics washing." The flow of money from big tech into codification initiatives, civil society, and academia advocating for an ethics-based approach only underscored the legitimacy of these critiques.

At the same time, a second wave of ethical AI was emerging. It sought to promote the use of technical interventions to address ethical harms, particularly those related to fairness, bias, and non-discrimination. The domain of "fair-ML" was born out of an admirable objective on the part of computer scientists to bake fairness metrics or hard constraints into AI models to moderate their outputs. This focus on technical mechanisms spoke to clear concerns about how AI and algorithmic systems were inaccurately and unfairly treating people of color or ethnic minorities.

Two specific cases contributed important evidence to this argument. The first was the Gender Shades study, which established that facial recognition software deployed by Microsoft and IBM returned higher rates of false positives and false negatives for the faces of women and people of color. The second was the 2016 ProPublica investigation into the COMPAS sentencing algorithmic tool, which found that Black defendants were far more likely than White defendants to be incorrectly judged to be at a higher risk of recidivism, while White defendants were more likely than Black defendants to be incorrectly flagged as low risk.

Second-wave ethical AI narrowed in on these questions of bias and fairness, and explored technical interventions to solve them.
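As a concrete example of the kind of fairness metric this second wave sought to bake into models, here is a minimal sketch, written for illustration rather than taken from any vendor tool, that computes demographic parity and equal opportunity gaps across groups with numpy.

```python
# Illustrative group-fairness metrics of the kind used in "fair-ML" evaluation.
# Not drawn from the article or from any specific toolkit.
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """y_true, y_pred: 0/1 arrays; group: array of group labels (e.g. 'A', 'B')."""
    gaps = {}
    groups = np.unique(group)

    # Demographic parity: spread in positive prediction rates across groups
    rates = [y_pred[group == g].mean() for g in groups]
    gaps["demographic_parity_gap"] = max(rates) - min(rates)

    # Equal opportunity: spread in true positive rates across groups
    tprs = []
    for g in groups:
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean() if mask.any() else np.nan)
    gaps["equal_opportunity_gap"] = np.nanmax(tprs) - np.nanmin(tprs)
    return gaps
```

Metrics like these can be reported alongside accuracy or turned into training constraints, which is precisely the narrowing of the problem that the critics quoted in the next paragraphs object to.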
In doing so, however, it may have skewed and narrowed the discourse, moving it away from the root causes of bias and even exacerbating the position of people of color and ethnic minorities. As Julia Powles, Director of the Minderoo Tech and Policy Lab at the University of Western Australia, argued, alleviating the problems with dataset representativeness "merely co-opts designers in perfecting vast instruments of surveillance and classification. When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms." Some also saw the fair-ML discourse as a form of co-option of socially conscious computer scientists by big tech companies. By framing ethical problems as narrow issues of fairness and accuracy, companies could equate expanded data collection with investing in "ethical AI."

The efforts of tech companies to champion fairness-related codes illustrate this point: In January 2018, Microsoft published its "ethical principles" for AI, starting with "fairness"; in May 2018, Facebook announced a tool to "search for bias" called "Fairness Flow"; and in September 2018, IBM announced a tool called "AI Fairness 360," designed to "check for unwanted bias in datasets and machine learning models."

What was missing from second-wave ethical AI was an acknowledgement that technical systems are, in fact, sociotechnical systems – they cannot be understood outside of the social context in which they are deployed, and they cannot be optimised for societally beneficial and acceptable outcomes through technical tweaks alone. As Ruha Benjamin, Associate Professor of African American Studies at Princeton University, argued in her seminal text, Race After Technology: Abolitionist Tools for the New Jim Code, "the road to inequity is paved with technical fixes." The narrow focus on technical fairness is insufficient to help us grapple with all of the complex tradeoffs, opportunities, and risks of an AI-driven future; it confines us to thinking only about whether something works, but doesn't permit us to ask whether it should work. That is, it supports an approach that asks, "What can we do?" rather than "What should we do?"

Ethical AI for a new decade

On the eve of the new decade, MIT Technology Review's Karen Hao published an article entitled "In 2020, let's stop AI ethics-washing and actually do something." Weeks later, the AI ethics community ushered in 2020 clustered in conference rooms in Barcelona for the annual ACM Fairness, Accountability and Transparency conference. Among the many papers that had tongues wagging was one written by Elettra Bietti, Kennedy Sinclair Scholar Affiliate at the Berkman Klein Center for Internet and Society; it called for a move beyond the "ethics-washing" and "ethics-bashing" that had come to dominate the discipline. Those two pieces heralded a cascade of interventions that saw the community reorienting around a new way of talking about ethical AI, one defined by justice – social justice, racial justice, economic justice, and environmental justice. It has seen some eschew the term "ethical AI" in favor of "just AI."

As the wild and unpredicted events of 2020 have unfurled, third-wave ethical AI has begun to take hold alongside them, strengthened by the immense reckoning that the Black Lives Matter movement has catalysed. Third-wave ethical AI is less conceptual than first-wave ethical AI and is interested in understanding applications and use cases. It is much more concerned with power, alive to vested interests, and preoccupied with structural issues, including the importance of decolonising AI. An article published in Nature in July 2020 by Pratyusha Kalluri, founder of the Radical AI Network, epitomizes the approach, arguing that "When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people."

What has this meant in practice?
We have seen courts begin to grapple with, and political and private sector players admit to, the real power and potential of algorithmic systems. In the UK alone, the Court of Appeal found the use by police of facial recognition systems unlawful and called for a new legal framework; a government department ceased its use of AI for visa application sorting ; the West Midlands police ethics advisory committee argued for the discontinuation of a violence-prediction tool; and high school students across the country protested after tens of thousands of school leavers had their marks downgraded by an algorithmic system used by the education regulator, Ofqual. New Zealand published an Algorithm Charter and France’s Etalab – a government task force for open data, data policy, and open government – has been working to map the algorithmic systems in use across public sector entities and to provide guidance. The shift in gaze of ethical AI studies away from the technical towards the socio-technical has brought more issues into view, such as the anti-competitive practices of big tech companies, platform labor practices, parity in negotiating power in public sector procurement of predictive analytics, and the climate impact of training AI models. It has seen the Overton window contract in terms of what is reputationally acceptable from tech companies; after years of campaigning by researchers like Joy Buolamwini and Timnit Gebru, companies such as Amazon and IBM have finally adopted voluntary moratoria on their sales of facial recognition technology. The COVID crisis has been instrumental, surfacing technical advancements that have helped to fix the power imbalances that exacerbate the risks of AI and algorithmic systems. The availability of the Google/Apple decentralised protocol for enabling exposure notification prevented dozens of governments from launching invasive digital contact tracing apps. At the same time, governments’ response to the pandemic has inevitably catalysed new risks, as public health surveillance has segued into population surveillance, facial recognition systems have been enhanced to work around masks, and the threat of future pandemics is leveraged to justify social media analysis. The UK’s attempt to operationalize a weak Ethics Advisory Board to oversee its failed attempt at launching a centralized contact-tracing app was the death knell for toothless ethical figureheads. Research institutes, activists, and campaigners united by the third-wave approach to ethical AI continue to work to address these risks, with a focus on practical tools for accountability (we at the Ada Lovelace Institute, and others such as AI Now, are working on developing audit and assessment tools for AI; and the Omidyar Network has published its Ethical Explorer toolkit for developers and product managers), litigation, protest and campaigning for moratoria, and bans. Researchers are interrogating what justice means in data-driven societies, and institutes such as Data & Society, the Data Justice Lab at Cardiff University, JUST DATA Lab at Princeton, and the Global Data Justice project at the Tilberg Institute for Law, Technology and Society in the Netherlands are churning out some of the most novel thinking. The Minderoo Foundation has just launched its new “ future says” initiative with a $3.5 million grant, with aims to tackle lawlessness, empower workers, and reimagine the tech sector. [ Update: The Minderoo Foundation says the grant is actually $15 million.] 
The initiative will build on the critical contribution of tech workers themselves to the third wave of ethical AI, from AI Now co-founder Meredith Whittaker’s organizing work at Google before her departure last year, to walkouts and strikes by Amazon logistics workers and Uber and Lyft drivers. But the approach of third-wave ethical AI is by no means accepted across the tech sector yet, as evidenced by the recent acrimonious exchange between AI researchers Yann LeCun and Timnit Gebru about whether the harms of AI should be reduced to a focus on bias. Gebru not only reasserted well-established arguments against a narrow focus on dataset bias but also made the case for a more inclusive community of AI scholarship. Under social pressure, the boundaries of acceptability are shifting fast, and not a moment too soon. But even those of us within the ethical AI community have a long way to go. A case in point: Although we’d programmed diverse speakers across the event, the Ethics Panel to End All Ethics Panels we hosted earlier this year failed to include a person of color, an omission for which we were rightly criticized and which we hugely regret. It was a reminder that as long as the domain of AI ethics continues to platform certain types of research approaches, practitioners, and ethical perspectives to the exclusion of others, real change will elude us. “Ethical AI” cannot be defined only from the position of European and North American actors; we need to work concertedly to surface other perspectives, other ways of thinking about these issues, if we truly want to find a way to make data and AI work for people and societies across the world. Updated on August 25, 2020 to correct spelling of the Minderoo Foundation and the amount of its initial grant. Carly Kind is a human rights lawyer, a privacy and data protection expert, and Director of the Ada Lovelace Institute. "
16,058
2,020
"AI-Generated Text Is the Scariest Deepfake of All | WIRED"
"https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Renee DiResta Ideas AI-Generated Text Is the Scariest Deepfake of All Play/Pause Button Pause Photo-Illustration: Sam Whitney; Getty Images Save this story Save Save this story Save Application Text generation End User Big company Sector Social media Source Data Text Speech Technology Natural language processing When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding , methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until November 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text. Last month brought the introduction of GPT-3 , the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal ) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us? This wouldn't be the first such media inflection point where our sense of what's real shifted all at once. When Photoshop, After Effects, and other image-editing and CGI tools began to emerge three decades ago, the transformative potential of these tools for artistic endeavors—as well as their impact on our perception of the world—was immediately recognized. “Adobe Photoshop is easily the most life-changing program in publishing history,” declared a Macworld article from 2000 , announcing the launch of Photoshop 6.0. “Today, fine artists add finishing touches by Photoshopping their artwork, and pornographers would have nothing to offer except reality if they didn't Photoshop every one of their graphics.” We came to accept that technology for what it was and developed a healthy skepticism. Very few people today believe that an airbrushed magazine cover shows the model as they really are. (In fact, it’s often un-Photoshopped content that attracts public attention.) And yet, we don’t fully disbelieve such photos, either: While there are occasional heated debates about the impact of normalizing airbrushing—or more relevant today, filtering—we still trust that photos show a real person captured at a specific moment in time. We understand that each picture is rooted in reality. Generated media, such as deepfaked video or GPT-3 output , is different. 
If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect pre-vs-post photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps on porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified.” To moderate deepfaked content, though, you have to know it’s there. Out of all the forms that now exist, video may turn out to be the easiest to detect. Videos created by AI often have digital tells where the output falls into the uncanny valley: “soft biometrics” such as a person’s facial movements are off; an earring or some teeth are poorly rendered; or a person’s heartbeat, detectable through subtle shifts in coloring, is not present. Many of these giveaways can be overcome with software tweaks. In 2018’s deepfake videos, for instance, the subjects’ blinking was often wrong; but shortly after this discovery was published, the issue was fixed. Generated audio can be more subtle—no visuals, so fewer opportunities for mistakes—but promising research efforts are underway to suss those out as well. The war between fakers and authenticators will continue in perpetuity. Perhaps most important, the public is increasingly aware of the technology. In fact, that knowledge may ultimately pose a different kind of risk, related to and yet distinct from the generated audio and videos themselves: Politicians will now be able to dismiss real, scandalous videos as artificial constructs simply by saying, “That’s a deepfake!” In one early example of this, from late 2017, the US president’s more passionate online surrogates suggested (long after the election) that the leaked Access Hollywood “grab 'em” tape could have been generated by a synthetic-voice product named Adobe Voco. But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. In psychology, this is called the majority illusion. As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses.
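The “easy to generate in high volume” point is simple to make concrete. The sketch below is illustrative only and is not from the article: it uses the small, openly available GPT-2 model through the Hugging Face transformers pipeline (GPT-3 itself sat behind a gated API), and the prompt, seed, and sampling parameters are invented for the example.

```python
# Illustrative sketch (not from the article): bulk-generating superficially
# independent comments with a small open model. GPT-3-class models are far more
# convincing; the point here is only how cheap volume has become.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible for the demo
generator = pipeline("text-generation", model="gpt2")

prompt = "As a small business owner, I think the proposed rule"  # hypothetical prompt
comments = generator(
    prompt,
    max_length=60,            # cap each generated comment at roughly 60 tokens
    num_return_sequences=10,  # ten "different commenters" from a single prompt
    do_sample=True,           # sample instead of greedy decoding so outputs vary
    top_p=0.95,
)

for i, c in enumerate(comments):
    print(f"--- comment {i} ---")
    print(c["generated_text"])
```

Each run yields ten superficially distinct comments that share no long identical word sequences, which is exactly what defeats the repetition-based checks discussed next.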
This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem. Right now, it’s possible to detect repetitive or recycled comments that use the same snippets of text in order to flood a comment section, game a Twitter hashtag, or persuade audiences via Facebook posts. This tactic has been observed in a range of past manipulation campaigns, including those targeting US government calls for public comment on topics such as payday lending and the FCC’s network-neutrality policy. A Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out. In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. The ability to manufacture a majority opinion, or create a fake-commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement. Our trust in each other is fragmenting, and polarization is increasingly prevalent. As synthetic media of all types—text, video, photo, and audio—increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see. It may not be so simple to adapt, as we did to Photoshop, by using social pressure to moderate the extent of these tools’ use and accepting that the media surrounding us is not quite as it seems. This time around, we’ll also have to learn to be much more critical consumers of online content, evaluating the substance on its merits rather than its prevalence. "
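The repeated-sentence signal described above can also be sketched briefly. What follows is a hypothetical illustration, not the Journal's actual methodology or any platform's tooling: it flags pairs of ostensibly independent comments that share unusually long identical word sequences, using word shingles and Jaccard overlap. The function names, sample comments, and thresholds are made up for the example.

```python
# Illustrative sketch (not from the article): flagging "independent" comments that
# reuse long identical stretches of text, the tell that past astroturfing campaigns
# left behind. Individually generated synthetic comments would not trip this check.
from itertools import combinations


def shingles(text: str, n: int = 8) -> set[str]:
    """Return the set of n-word shingles (overlapping word windows) in a comment."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}


def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two shingle sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def flag_recycled(comments: list[str], threshold: float = 0.3):
    """Yield index pairs of comments whose shingle overlap exceeds the threshold."""
    sets = [shingles(c) for c in comments]
    for (i, si), (j, sj) in combinations(enumerate(sets), 2):
        if jaccard(si, sj) >= threshold:
            yield i, j


if __name__ == "__main__":
    sample = [
        "I strongly oppose this rule because it will hurt small community banks in rural areas.",
        "I strongly oppose this rule because it will hurt small community banks in rural areas. Please reconsider.",
        "This proposal seems reasonable to me and I support it as written.",
    ]
    for i, j in flag_recycled(sample):
        print(f"comments {i} and {j} share suspiciously long identical passages")
```

On the sample input, only the first two comments are flagged because they reuse the same eight-word stretches; a flood of individually sampled paraphrases would sail through, which is the article's worry.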
16,059
2,019
"McKinsey survey: AI boosts revenue, but companies struggle to scale use | VentureBeat"
"https://venturebeat.com/2019/11/22/mckinsey-survey-ai-boosts-revenue-but-companies-struggle-to-scale-use"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages McKinsey survey: AI boosts revenue, but companies struggle to scale use Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The latest McKinsey Global Survey released today found that artificial intelligence is having a positive impact on business outcomes, with 63% of respondents reporting an increase in revenue after adoption of the technology. However, only 30% of companies apply AI to multiple business units, up from 21% last year. Still, overall AI adoption is on the rise — in standard business practices it’s up near 25%, according to the online survey of 2,360 business leaders from a range of industries and global regions. The report also found a number of striking differences between companies deploying several AI systems to business operations and those that are not. Companies that are considered high performers or AI power users by the study on average apply the technology to 11 use cases compared to three use cases at other companies using machine intelligence. The high performers were also more likely to report revenue increases from AI applications of over 10%. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Overall, 44% of respondents report cost savings from AI adoption in the business units where it’s deployed, with respondents from high performers more than 4 times likelier than others to say AI adoption has decreased business units’ costs by at least 10%, on average,” the report reads. Revenue growth was most likely to come from AI applications in marketing, sales, supply chain management, and product development. That result echoes a Microsoft-commissioned business executive study released earlier this year about the AI opportunity gap. It’s also in line with an Accenture study that found roughly 16% or more of companies have figured out how to deploy AI at scale and more than 70% of companies risk being put out of business by competitors applying AI at scale. More than 80% of respondents said they plan to retrain a portion of their workforce in the next three years, but high-performance AI companies were more likely to have retrained many of their employees in the past year. “Seventy-two percent of respondents from AI high performers say their companies’ AI strategy aligns with their corporate strategy, compared with 29% of respondents from other companies. 
Similarly, 65% from the high performers report having a clear data strategy that supports and enables AI, compared with 20% from other companies,” the report reads. High-performing AI companies also differ from others when it comes to attitudes about AI risks like equity and fairness, physical safety, cybersecurity, and national security. According to the survey, 80% of high-performance respondents consider the risks to personal privacy compared to less than half of all other respondents. On the subject of risk, 39% of respondents recognized a need for explainability, but only 21% said they’re actively addressing the issue.

Job loss

AI has not resulted in recent major job loss, as more than a third of total respondents reported less than a 3% change in their workforce, and only 5% of respondents reported a change greater than 10%. But that could change, as 34% of respondents said they expect AI will decrease the size of their workforce. Conversely, 21% of respondents said they expect AI will grow the size of their workforce. Respondents from telecom or automotive industries have seen the deepest job cuts due to AI and are expected to lead in this category in the future, according to the report. A Brookings Institution report released earlier this week found that AI exposure is highest among high-wage, white-collar jobs like tech, while industries like food service, education, and health care are virtually immune to positive or negative impacts of AI. "
16,060
2,019
"CB Insights: AI startup funding hit new high of $26.6 billion in 2019 | VentureBeat"
"https://venturebeat.com/2020/01/22/cb-insights-ai-startup-funding-hit-new-high-of-26-6-billion-in-2019"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CB Insights: AI startup funding hit new high of $26.6 billion in 2019 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As part of an annual look at global AI investment trends, CB Insights today reported that AI startups raised a record $26.6 billion in 2019, spanning more than 2,200 deals worldwide. That’s compared to roughly 1,900 deals totaling $22.1 billion in 2018 and about 1,700 deals totaling $16.8 billion in 2017. The reported high recorded by CB Insights in the AI in Numbers report is in line with analysis by other organizations keeping an eye on investment in the AI ecosystem. The National Venture Capital Association earlier this month said that although overall venture capital spending took a dip last year, investors spent a record $18.4 billion on AI startups in the United States in 2019. With investment highest in fields like autonomous driving, drug research, finance, and facial recognition, the AI Index 2019 report released last month found more than $70 billion in global private investment in AI. Crunchbase also took a look back at 2019 AI startup investment trends and found an increase in investment compared to 2019, while Pitchbook recorded a decrease in both the size and number of deals between 2018 and 2019. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The number of AI startup unicorns also rose in 2019. New unicorns include autonomous delivery company Nuro and business analytics firm DataRobot. Also new in the CB Insights report: Early-stage deals continue to dominate, with more than 70% of deals going to early-stage or series A funding rounds. 10 companies — including UiPath , Megvii , and Nuro — raised funding rounds higher than $100 million. All 10 companies are based in China, the U.K., or the U.S. Accounting for $4 billion of the 26.6 billion, health care leads the way in global AI startup deal distribution, followed by industries like finance ($2.2 billion), retail ($1.5 billion), sales, and cybersecurity. Merger and acquisition activity was also highest in health care, sales, and retail industries. Venture capital firms like Plug and Play Ventures, Accel, and Lightspeed Ventures are among the VC firms with the most deals in 2019. In corporate venture capital, Intel Capital and Google Ventures topped the number of deals, followed by SBI Investment in Japan. Baidu Ventures in China ranked eighth on the list, followed by Salesforce Ventures. 
The number of equity deals with corporate venture capital participation has seen a more than fourfold increase in recent years. In 2014, corporate venture capital was involved in 99 deals, while in 2019 VCs from corporations like Google or Intel were involved in 435 deals. "
16,061
2,018
"Verizon activates 'world's first' 5G network in 4 U.S. cities | VentureBeat"
"https://venturebeat.com/2018/10/01/verizon-activates-worlds-first-5g-network-in-4-u-s-cities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Verizon activates ‘world’s first’ 5G network in 4 U.S. cities Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Less than a year after announcing that it would launch the first commercial 5G network in the United States, Verizon today officially turned on 5G services in four U.S. cities: Houston, Indianapolis, Los Angeles, and Sacramento. The company says it has started offering installations of its Verizon 5G Home broadband service today and named Houston resident Clayton Harris as “the first 5G customer in the world.” While unquestionably significant, Verizon’s 5G launch in the United States comes with several qualifiers — including whether it’s truly the first, is actually 5G, and is legitimately as widespread as it initially sounds. As we’ve previously reported, smaller carriers rushed to launch the “world’s first 5G networks” in Qatar , Lesotho , Finland, and Estonia over the summer, and ahead of the U.S. Like Verizon, they are generally using pre-standards 5G networking gear and offering service in limited areas, sometimes without consumer hardware. Verizon’s offering is an end-to-end 5G solution, including the necessary wireless hardware to deliver next-generation wireless speeds to home broadband users. The carrier is promising typical 300Mbps and peak 1Gbps connection speeds to customers, using Inseego broadband equipment that’s included in the $50 to $70 monthly service price. It also includes three months of free YouTube TV, as well as the customer’s choice of a free Apple TV 4K or Google Chromecast Ultra. Above: Inseego’s 5G home router for Verizon. If the competitive pricing isn’t appealing enough, Verizon is offering its service free for the first three months and promising “First on 5G” customers early access to upcoming services, such as mobile 5G. Rival AT&T is launching a mobile 5G service in 12 cities this year, using wireless hotspot “pucks” to provide mobility, rather than new 5G smartphones or tablets. Verizon’s main caveats are that it’s only launching 5G in “parts of” its four initial cities right now, and it is using “5G TF” equipment that will need to be updated to 3GPP standards-compliant 5G in the future. Based on prior personal experience with FIOS buildouts, Verizon might take years to extend its broadband services as promised or stop short of covering a full city. It’s also unclear when and how mobile 5G devices such as smartphones will be usable on the Verizon 5G network. 
The carrier says it chose Houston, Indianapolis, Los Angeles, and Sacramento for early 5G service because of forward-looking state and local leaders in those areas. It is apparently waiting until 3GPP-compliant 5G hardware is available to expand its 5G offerings to other cities. “As our 5G technology partners bring … hardware, software, chipsets, and devices to market on the 3GPP 5G NR standard,” the company says, “we’ll upgrade First On 5G members to that equipment at no charge. When new network equipment is available and introduced, we’ll expand our 5G broadband internet coverage area quickly and bring 5G to additional cities.” Customers interested in Verizon 5G Home service can check if service is available in their neighborhoods via the company’s FirstOn5G.com website. The company began taking service preorder requests on September 13. "
16,062
2,019
"HTC 5G Hub will launch on Sprint's 2.5GHz network in spring 2019 | VentureBeat"
"https://venturebeat.com/2019/02/25/htc-5g-hub-will-launch-on-sprints-2-5ghz-network-in-spring-2019"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages HTC 5G Hub will launch on Sprint’s 2.5GHz network in spring 2019 Share on Facebook Share on X Share on LinkedIn HTC's 5G Hub. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Until the U.S. government approves T-Mobile’s acquisition of Sprint , the third- and fourth-place carriers will be launching separate 5G cellular networks with different hardware — and HTC is ready to help kick off Sprint’s 5G initiative this spring. Following a tease by Sprint back in November 2018, the smartphone and VR headset maker today confirmed that it will launch the HTC 5G Hub as its first 5G product, and, as expected, it’s something unique. In a discussion with VentureBeat, HTC explained that simply “following the herd” and “shoehorning 5G into a phone” doesn’t make much sense to the company at the moment, as 4G already delivers a solid experience for what people do with their phones. Instead, the company wanted to create a “companion device” that will enable multiple devices in different scenarios to receive 5G’s bandwidth and latency benefits as needed. Unlike typical wireless hotspots, which rest on their backs, the wedge-shaped 5G Hub stands upright while including the guts of a modern Android smartphone: a 5-inch touchscreen, stereo speakers, a Qualcomm Snapdragon 855 processor, and a Snapdragon X50 5G modem. While small enough to fit in a backpack, the housing also contains a 4G modem, separate 4G and 5G antenna systems, and a 7660mAh battery with enough power to deliver all-day connectivity to multiple devices. HTC told VentureBeat that the 5G Hub was designed to serve several purposes. It can be a wireless hotspot capable of hosting up to 20 clients at once, a standalone entertainment system, or a convenient way to bring Android apps — including anything from Netflix videos to games — to any TV. Depending on your needs, it can serve as a mobile office, replace a home broadband modem, or play 4K videos on your choice of display. Above: HTC’s 5G Hub in a living room, powering a 4K TV. With 4GB of RAM and Android 9 Pie on board, the 5G Hub has enough power to display streamed video and run apps on its own if you’re in a hotel or otherwise on the road, leaving your phone available for communications. There’s also a micro SD slot that lets you expand the 32GB of on-board storage to 400GB or 1TB if you supply the card. On the cellular side, the 5G Hub will connect to Sprint’s 2.5GHz (band 41) 5G network , initially launching in nine U.S. 
cities this spring, and will fall back to Sprint’s 4G network in areas outside that coverage. The 5G Hub will not contain millimeter wave 5G hardware or connect to the millimeter wave 5G networks that Verizon and AT&T have already launched; its compatibility with T-Mobile’s network is a question mark. While HTC says that the 5G Hub’s hardware will be capable of supporting “at least gigabit speed,” Sprint hasn’t yet confirmed the speed of its 2.5GHz 5G network, so it’s unclear how much bandwidth users will be sharing. Most users will access the bandwidth over Wi-Fi, and to that end, the 5G Hub includes support for 802.11ac and 802.11ad. Also known as WiGig, the 802.11ad wireless option will allow select devices to really experience 5G speeds, while others connect via the older and slower 802.11ac (Wi-Fi 5) standard. WiGig is already used to offer wire-like data speeds and latency for HTC’s Vive tetherless VR headset accessories, and the 5G Hub could feed 5G data directly to those devices — and future ones, the company notes without divulging specifics. In addition to including Bluetooth 5.0 support for accessories such as game controllers, the 5G Hub will also include wired connectivity options. On the back is a full-sized Ethernet port for connection to an existing wired or wireless home network, as well as a bi-directional USB-C charging port and a faster dedicated cylindrical charging port. Connecting to HDMI will apparently require a USB-C to HDMI adapter. Pricing and more specific availability details are yet to come, though HTC hints that the 5G Hub won’t be cheap, as it’s being made for users who want to be on the bleeding edge of technology. The company says the device will be exclusive to Sprint in the United States; Telstra (Australia), Three (UK), Deutsche Telekom (Germany), Sunrise (Switzerland), and Elisa (Finland) will offer it overseas; and China Mobile will release a customized version for mainland China. Sprint is also planning to offer an unnamed Samsung 5G smartphone, potentially the 5G version of the Galaxy S10, in summer 2019. "
16,063
2,019
"Sprint will launch 5G on May 31 in 4 cities with LG V50 and HTC 5G Hub | VentureBeat"
"https://venturebeat.com/2019/05/16/sprint-will-launch-5g-on-may-31-in-4-cities-with-lg-v50-and-htc-5g-hub"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sprint will launch 5G on May 31 in 4 cities with LG V50 and HTC 5G Hub Share on Facebook Share on X Share on LinkedIn The LG V50 ThinQ 5G from the rear. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Having previously set general targets for its initial launch of 5G services and devices in the United States, Sprint says today that it will begin offering 5G in Atlanta, Dallas, Houston, and Kansas City on May 31, with its first 5G device preorders starting tomorrow, May 17. Early adopters will be able to purchase LG’s V50 ThinQ 5G as a smartphone, or HTC’s 5G Hub to provide 5G network connectivity to multiple Wi-Fi devices. The V50 ThinQ 5G recently launched in South Korea , where it reportedly sold 100,000 units in its launch weekend as a less expensive alternative to Samsung’s Galaxy S10 5G. Sprint’s asking price for the 6.4-inch screened phone is $1,152, slightly higher than its $1,115 MSRP in South Korea, where it has been marketed with on-device proactive AI functionality. As part of the initial promotion, Sprint is offering aggressive monthly pricing for both devices. The LG V50 ThinQ 5G can be had for $24 per month with $0 down, which would be $576 over 24 months — a discount of 50%. By comparison, HTC’s 5G Hub hotspot is priced at $12.50 monthly over 24 months, again half off the normal $600 price. The 5G Hub features an integrated color touchscreen, and can be used for video playback in addition to serving content to other devices. “LG V50 ThinQ 5G and HTC 5G Hub are innovation marvels, and they are ideal to be the first to bring the power of both Sprint 5G and 4G LTE Advanced to our customers,” said Sprint CEO Michel Combes. “I am proud of the close collaboration with LG, Qualcomm, and HTC that has brought us to this momentous milestone.” The fourth-place U.S. carrier is requiring most new 5G customers to sign up for an “Unlimited Premium” plan for $80 per month, slightly under Verizon’s most affordable unlimited 5G plan. Existing customers “may be required to change plans,” as well. It’s unclear how fast Sprint’s initial 5G service will be, as the company is promising up to 10 times faster performance than 4G over time, beginning with “blazing-fast speeds” even at the beginning of the launches. To sweeten the $80 deal, Sprint says that its unlimited 5G plan will include Hulu, Amazon Prime, Twitch Prime, Tidal HiFi, and 100GB of LTE hotspot access for one user; a second user can add a line for $60 more. 
In small print, Sprint notes that the hotspot access will drop to 3G speeds after 100GB of usage. Somewhat confusingly, Sprint is also offering HTC 5G Hub users a less expensive data plan that’s limited in monthly usage. Sprint says that it will sell 100GB of “high-speed data” for $60 per month, cutting mobile hotspot performance to 2G (not a typo) speeds after 100GB of usage. That’s $10 less than AT&T is charging for a much smaller quantity of 5G data. Sprint’s announcement comes on the same day that Verizon began sales of the Samsung Galaxy S10 5G, its first fully integrated smartphone, and roughly a month after Verizon commenced mobile 5G service in two U.S. cities. By comparison, AT&T claims that it is offering 5G service in 19 cities, but has not yet commenced sales of 5G devices to the general population. Over “the next few weeks” following the May 31 four-city launch, Sprint will add five more cities to its 5G network, including Chicago, Los Angeles, New York City, Phoenix, and Washington, D.C. "
16,064
2,019
"Sprint brings 5G to LA, NYC, Phoenix, and Washington, D.C. | VentureBeat"
"https://venturebeat.com/2019/08/27/sprint-brings-5g-to-la-nyc-phoenix-and-washington-d-c"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sprint brings 5G to LA, NYC, Phoenix, and Washington, D.C. Share on Facebook Share on X Share on LinkedIn A collaboration between Sprint and Hatch will bring 100 streaming games to the carrier's 5G handsets. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Though Sprint’s 5G network has been slated to merge with T-Mobile’s since before either company actually commenced 5G service, delays in merger approval have forced each carrier to separately launch new network assets. Today, Sprint announced that it’s launching 5G in several major cities — Los Angeles, New York City, Phoenix, and Washington, D.C. — the first two of which have already seen T-Mobile 5G rollouts, while the latter two haven’t. For Sprint, the expansion of 5G to Los Angeles and New York means overlapping its 2.5GHz mid-band coverage atop T-Mobile’s existing millimeter wave 5G resources but not, for now, offering customers access to both networks. Consequently, Sprint is promising average 5G speeds of 203.8 Mbps, six times faster than Sprint LTE, albeit with 4G-caliber uploads in the 1-5Mbps range. Independent testing by Ookla confirmed those numbers, but showed Sprint’s peak speeds to be much higher — 600 to 725Mbps, depending on the city. Sprint says that its network now covers approximately 2,100 square miles and will be available to around 11 million people “in the coming weeks,” more than any other U.S. carrier. Its New York City coverage will spread across “parts of Manhattan from Central Park to the southern tip,” as well as Hudson County, New Jersey, ingress and egress areas for LaGuardia and JFK airports, and parts of Queens, Brooklyn, and the Bronx. The carrier’s initial Los Angeles coverage includes areas from “Marina del Rey to Downtown L.A., and West Hollywood to Culver City,” including Rodeo Drive, Dodger Stadium, and the campuses of UCLA and USC. Surprisingly, it also includes parts of Orange County, Pasadena, and Cerritos outside of Los Angeles, with 1.2 million total people covered across the region. While T-Mobile’s New York map covers numerous individual streets across Manhattan and Brooklyn, it doesn’t extend to neighboring Queens or New Jersey. Similarly, T-Mobile covers parts of some Los Angeles streets from West 1st to 16th Streets, but doesn’t extend west to Koreatown and has little coverage on the east sides of the same streets, stopping right around Little Tokyo. Sprint isn’t the first 5G provider to offer service in either Phoenix or Washington, D.C. 
— Verizon beat it to both cities — but the carriers’ coverage maps aren’t quite the same. Verizon’s millimeter wave-based 5G is only in five narrow areas within Phoenix, plus Arizona State University’s campus in Tempe, compared with a robust collection of 20 major areas in Washington. Sprint is promising to cover 740,000 people in “parts of Phoenix, Tempe, Scottsdale and Glendale,” with D.C. coverage of 520,000 people across a huge number of Washington neighborhoods and tourist areas. Additionally, Sprint’s coverage extends outside the District and into Maryland’s Bethesda, Chevy Chase, NIH/Walter Reed, and Cabin John areas, as well as D.C.-adjacent parts of Northern Virginia. The addition of these locations to Sprint’s list brings the carrier’s 5G total to nine cities, as the carrier launched 5G in Atlanta, Chicago, the Dallas-Fort Worth area, Houston, and Kansas City three months ago. Expansions in those cities have continued as well, with Kansas City growing on “both sides of the state line” and new parts of the Dallas-Fort Worth area getting coverage. Sprint-specific versions of the Samsung Galaxy S10 5G, LG V50 ThinQ, and HTC 5G Hub are currently available for purchase, while the OnePlus 7 Pro 5G is rolling out over the next week for an aggressive price of $840, and a customized Samsung Galaxy Note10+ 5G will be available later this year. It’s unclear whether that or any of the other Sprint devices will be able to take full advantage of the T-Mobile millimeter wave 5G network after the carriers merge. Beyond the city additions, Sprint is promoting 5G’s versatility as a backbone for online gaming, offering Android users a free 90-day trial of an instant 5G game streaming service co-developed with Hatch. The carrier says that the Hatch Premium service will include instant access to over 100 mobile games without the need for downloads or app updates, priced at $8 monthly per line. Arkanoid Rising, a sequel to Taito’s classic brick-smashing franchise, is Hatch’s first platform exclusive, while Monument Valley, Angry Birds GO!, and Crashlands are available on a nonexclusive basis. "
16,065
2,019
"Sprint's OnePlus 7 Pro 5G lets you experience 2020's 'typical' 5G today | VentureBeat"
"https://venturebeat.com/2019/08/28/sprints-oneplus-7-pro-5g-lets-you-experience-2020s-typical-5g-today"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sprint’s OnePlus 7 Pro 5G lets you experience 2020’s ‘typical’ 5G today Share on Facebook Share on X Share on LinkedIn The OnePlus 7 Pro 5G on Sprint's 5G network in Los Angeles, California. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Blame the overaggressive marketers. Last year, they claimed 5G cellular networks would launch with unthinkably fast speeds, putting home broadband power into your pocket even before there were use cases to properly exploit it. It didn’t actually go that way. While 5G has launched across multiple countries, most carriers aren’t using insanely fast, short-distance millimeter wave towers to deliver 10 times 4G speeds; instead, they’re relying on “mid-band” radios with smaller performance multipliers. But unlike mmWave signals, mid-band signals can reach devices indoors and in cars, as well as at greater distances. Their peaks aren’t as high, but they don’t zero out when coming in contact with walls and windows. That’s Sprint’s approach to 5G, and on balance I’d pick it over the alternative of theoretically faster mmWave 5G that isn’t actually available to consumers (AT&T) or that only works for blocks before disappearing (Verizon). As of this week, you can actually use Sprint’s 2.5GHz mid-band 5G in parts of nine major U.S. metros — think cities, plus or minus a bit — and there’s a new 5G phone on the block: the OnePlus 7 Pro 5G. I’ve been testing the phone and Sprint’s network in Southern California over the past day, and I’m impressed. This is an opportunity to experience the near-term future of 5G at a reasonable price point — $840 for the hardware outright, less on a Sprint lease — including features that are about to become the “new normal” for Android phones over the next year: Super-fast wireless speeds. The OnePlus 7 Pro downloaded at nearly 500Mbps on Sprint’s Los Angeles 5G network and almost identically fast on my home gigabit Wi-Fi 5 network, both faster than the current Apple flagship iPhone XS. Access to unlimited data and optional 5G services, such as Sprint’s game streaming service Hatch Premium. The hardware and software: a large, nearly bezel-less screen backed by a Qualcomm Snapdragon 855 processor and superior cameras, paired with mature, responsive Android software. This isn’t a review, but rather an opportunity to look at how Sprint’s 5G service and Android 5G phones are going to work over the next year or so. Read on for all the details. 
5G mid-band speeds: 500Mbps now, 1.6Gbps possible Before I went off to test Sprint’s 5G network myself, I spoke with a collection of Sprint executives regarding both the current state of their 5G network and future plans — to the limited extent they could discuss them in light of their in-the-works merger with T-Mobile. The key takeaways from the discussion were: Expect “average” 5G download speeds of 200Mbps. That number is not only low on paper, but around a third the speed of Verizon’s promised average 5G service. Real-world testing from Ookla has pegged the average at 236Mbps, but often closer to 300Mbps, depending on the city. Expect peak 5G download speeds today in the 600-700Mbps range. That’s again one-third of Verizon’s best number, Ookla says, but you might see better peak performance if you’re in the right city. And this will improve over time. Sprint’s theoretical download speed peak with mid-band is 1.6Gbps, the executives confirmed, when all of their 5G features are enabled and deployed in the field. That’s not the case today, and the company wouldn’t commit to a hard date for this level of performance to move from its labs to its 5G towers. If it happens, Sprint’s merger with T-Mobile will enable the two carriers to combine complementary assets , relying on T-Mobile for lower-band, slower 5G and higher-band, faster 5G, with Sprint’s mid-band 2.5GHz network in the middle. Sprint execs would not discuss any future 5G metros beyond the current nine, nor any plans except to extend coverage within those cities. In my personal and certainly not exhaustive testing of Sprint’s Los Angeles 5G with the OnePlus 7 Pro 5G phone, I saw a peak stationary download speed of 480Mbps with a peak upload speed of 32Mbps. Driving in a car on the city’s notoriously packed freeways, I saw download speeds ranging from 127Mbps to 288Mbps on 5G, notably through car windows that would have killed millimeter wave 5G signals. The phone fell back to sub-100Mbps speeds — typically in the 50Mbps or lower range — when I dropped off 5G onto Sprint’s LTE network. Above: OnePlus 7 Pro 5G (left) on Sprint’s 5G network, with an iPhone XS on T-Mobile’s 4G network (right). My nearly year-old flagship iPhone, the iPhone XS, sat nearby on T-Mobile’s LTE network. At the same places and general times, the iPhone never broke a peak of 90.8Mbps for downloads. This isn’t to say the iPhone can’t do better under ideal conditions — I saw it hit 154Mbps once last year — but as a general statement, 5G made all the difference when the OnePlus 7 Pro 5G hit 480Mbps while the iPhone was hovering between 39.8Mbps and 57Mbps, averaging roughly one-tenth the speed. (I’m not saying a 10:1 cellular performance difference between 5G and 4G phones is guaranteed, linear, or otherwise wholly correlated. To be very clear, the appropriate performance expectation for a 5G phone is that it will deliver 4G performance akin to existing flagship 4G phones, then step up by a multiplier if and when you’re in a 5G-covered area. More on that later.) The OnePlus phone also made strong use of Wi-Fi connections. While it peaked at 75.9Mbps on an 802.11n (Wi-Fi 4) connection, it roared up to 485Mbps on an 802.11ac (Wi-Fi 5) connection, consuming almost half the peak bandwidth of my home gigabit internet service. That’s faster than most computers, according to charts provided by Cox, and my iPhone XS was only hitting 389Mbps when connected to the same servers. 
I’ve seen it go faster only one time, nearly two months ago, but in a head-to-head test with OnePlus the Android phone won out. Unlimited 5G data for streaming games and VR Another key part of the Sprint 5G equation is unlimited data — but with optional paid add-ons to take deeper advantage of the network. Users currently must pay $70 to $80 per month for 5G service equivalent to a faster version of unlimited 4G, bundled with free subscriptions to music and video services such as Tidal and Hulu, while next-generation services demanding lower latency are separate. I have very mixed feelings as to whether this business model will work over the long term. But we’re going to see a lot of experimentation with it over the next few years regardless, and Sprint’s initial 5G use case is gaming. During my meeting with Sprint executives, I went hands-on with Hatch Premium, an $8 per month service that has been promoted as including all-you-can-play access to 100 games — it’s now 150, Hatch says. Compared with using the Google Play Store, there’s a key differentiator: Rather than downloading the games onto your device, Hatch streams them from cloud servers with no perceptible latency. Sprint’s non-standalone 5G network currently delivers 15-25ms latency, with plans to drop into the single digits once standalone 5G becomes available in 2020, but in my testing with a couple of casual games over 5G, latency just wasn’t an issue. Above: Even on a pixel level, Arkanoid Rising streaming via Hatch Premium is virtually indistinguishable from having the game live on your device. Without doing too deep a technical dive, I’ll mention that Hatch has some novel techniques for streaming games over 5G, reducing the need to broadcast canned video to devices. The result is that you’ll sometimes see new assets popping into place in low-, medium-, and full-resolution variations, albeit typically not in ways that affected gameplay, and more obviously when I re-tested Hatch on home Wi-Fi than when I initially experienced it on Sprint’s 5G network. Will multi-player Twitch gaming like Fortnite work over 5G? We’ll see. Sprint is hosting an event that will use multiple HTC 5G Hubs to let hard-core players experience 5G speeds and responsiveness. I suspect it will go quite well, but there’s always room for better latency, particularly when multiple simultaneous connections are involved. My broader question is whether it’s worth using 5G network bandwidth to stream mobile-caliber titles such as Space Invaders: Infinity Gene or Angry Birds Go! when the games could just be sitting on the device without contributing to the choking of cell towers. Answering this involves value judgments regarding content that are beyond the scope of this article, but while no one would suggest downloading the full library of a video service such as Netflix, different all-you-can-play strategies (see: Apple Arcade) may make more sense and prevail in the 5G era. Or not — the practicality of 5G game streaming remains to be seen, but the technology certainly works. I also had the ability to test two live 5G VR scenarios over Sprint’s network, most notably a “telepresence” VR experience, where I was transported from a Marina del Rey hotel to a nearby pier for a live conversation with a Nokia representative. 
Looking through a standard Oculus Go headset connected over Wi-Fi to an HTC 5G Hub, I could fully turn my head 360 degrees in any direction and see the beach, waves, people, and the Nokia rep moving completely fluidly as if I were there — minus, of course, human vision-caliber levels of detail. The 5G VR demos were better than anything I saw at CES earlier this year: noticeably higher resolution, with no artifacts or scene-building as I quickly turned my head in any direction, and only minimal lag. Additionally, while I was the only participant in the VR demo, it is capable of hosting multiple people at once.

And the total cost of Sprint's VR broadcasting gear was under $1,000: just a $300 360-degree camera and a $500-ish HTC 5G Hub, streamable to sub-$200 Oculus Go headsets users could provide themselves. At these prices, VR streaming over 5G is going to become an actual thing, as pioneering livestreamers start buying and using the broadcasting gear for all sorts of events beyond those currently covered by larger corporations. Just like game streaming, the underlying 5G technology for VR is now here and working, assuming you're willing to buy the hardware needed to take advantage of it.

OnePlus 7 Pro 5G as Android's new normal

I'm not going to fully review the OnePlus 7 Pro 5G, but it's an excellent phone — and all indications are that it's close to table stakes for the 5G phones that will be ubiquitous next year. Right now it's special because it delivers a $1,250 iPhone XS Max-beating experience for under $850, but in 2020 lots of phones are going to be doing that. That isn't to say every 5G phone next year will pack a 6.67-inch AMOLED screen, triple rear camera system, Dolby Atmos audio, Snapdragon 855-class processor, and 256GB of storage. But the fact that such a package is possible for $840 today (before carrier subsidies) means that variants with similar features will soon become commonplace, if not the standard Android phone experience.

The OnePlus 7 Pro 5G is charming because there aren't any rough edges. You turn on the phone and everything from the curved display glass to the wallpaper animation just looks beautiful — it feels like something from the future has arrived a little early. Everything feels snappy and fluid, from the Android 9-based Oxygen OS to apps, games, and the camera, which shifts effortlessly between 0.6x wide-angle, 1x normal, and 3x zoom lenses on the back and a semi-wide selfie lens on the front. As a photographer, I really appreciate the wide range of supported focal lengths, which means better landscapes and stronger optical zoom than my iPhone XS. Apple's flagships will go further next month, but for an $840 Android phone, this is a great place to start.

There's no better way to describe the OnePlus 7 Pro 5G's screen than "gorgeous." It's bigger than any past or present iPhone display, bending gently at the left and right edges and as bezel-free on the top and bottom as the iPhone XS is on its left and right. It's impossible to miss the AMOLED's bright, vivid colors, particularly as there's no distraction from an unsightly camera notch. OnePlus hides its front-facing camera in a durable, drop-tested pop-up housing and uses a super-fast in-screen fingerprint scanner as its primary means of biometric authentication.

Above: Beyond the flagship-class internal specs, OnePlus includes a carrying case, screen protector, and super-fast 30W charger with every 7 Pro 5G phone.

Audio is also impressive.
OnePlus is using a Dolby Atmos system with the ability to produce spatialized audio effects that you can actually hear on the device's miniature soundstage. While most apps won't take advantage of it — I wasn't blown away, for instance, by Sprint's included Tidal service, which promises atypically high-definition audio — apps that do can deliver superior sonic performance.

Coming from an iPhone, the OnePlus 7 Pro 5G feels like a device engineered to Apple standards, but without Apple compromises. You get a bigger display than on the iPhone XS Max, with more camera hardware and arguably more convenient biometric security, for $840 rather than Apple's $1,100 entry-level Max price. Since the OnePlus 7 Pro 5G includes 256GB of storage, which Apple charges a $150 premium for, that works out to roughly $400 in savings. The iPhone XS Max certainly has its own advantages, but not enough to cover that sort of price gulf.

Parting thoughts

When I say that Sprint's version of the OnePlus 7 Pro 5G is effectively Android's new normal for the 5G era, I mean it. No company has a monopoly on elegant industrial designs, beautiful screens, fast processors, great cameras, or fluid software at this point — Chinese, Japanese, Korean, and American phone makers are all competing, with access to the same essential building blocks.

For this year, cellular performance is a potentially major differentiator. Sprint delivering 200-500Mbps 5G speeds on the regular to Android phones in parts of supported cities is meaningfully better than what users have seen with 4G. That's at least twice as fast as 4G — and, depending on various factors, three to 10 times faster — and enough to enable those nearly instant video and app downloads we've all been hearing about.

I'm more impressed by Sprint's 5G network performance than I expected to be. The carrier's coverage map of Los Angeles is significant, and the speeds I saw in my limited testing were enough to make a real-world difference for traditional video streaming, as well as to enable high-resolution VR and game streaming that wouldn't have been viable before. Being able to take at least some advantage of those speeds indoors and in cars is a non-trivial benefit that rivals such as Verizon are apparently still working (as of 3Q 2019) to figure out.

That said, I'm not personally rushing to switch from my current carrier to Sprint quite yet. I live in Orange County, south of Los Angeles, which Sprint flagged in a press release yesterday as receiving some 5G support. It's a large county, but I haven't found any 5G coverage near my home, and in person Sprint's executives were careful to downplay the service's availability outside of Los Angeles proper. Just as was the case at the start of 4G service nearly a decade ago, actually finding 5G coverage in your area is going to be a matter of looking at Sprint's current coverage maps and crossing your fingers.

If the T-Mobile/Sprint merger is completed and 600MHz blanketing and mmWave high-speed zones work as expected, the next question will be where you can find fast — and fastest — 5G coverage, which should prove plenty confusing for average consumers. For the time being, I'm just glad to see a flagship-caliber 5G phone that begins to deliver on the promises of its new technologies at a reasonable price. After a year of blaming slow 5G network buildouts on the limited availability of 5G handsets, carriers now need to spread their 5G towers far enough to let users enjoy these exciting handsets' capabilities.
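To put the "nearly instant video and app downloads" line into rough numbers, here is a minimal sketch using the speed ranges described in this article; the 2GB file size is an arbitrary assumption (roughly a large mobile game), and real-world downloads add protocol and server overhead this ignores.

```python
# How "nearly instant" do downloads get at the speeds described in this piece?
# The link speeds are the ranges reported above; the 2GB file size is an
# arbitrary assumption, and protocol/server overhead is ignored.

FILE_SIZE_GB = 2.0
FILE_SIZE_MEGABITS = FILE_SIZE_GB * 8 * 1000  # decimal GB -> megabits

link_speeds_mbps = {
    "Sprint LTE fallback (~50Mbps)": 50.0,
    "Sprint 5G while driving (127-288Mbps, midpoint)": 207.5,
    "Sprint 5G stationary peak (480Mbps)": 480.0,
    "Sprint mid-band theoretical peak (1.6Gbps)": 1600.0,
}

for label, mbps in link_speeds_mbps.items():
    seconds = FILE_SIZE_MEGABITS / mbps
    print(f"{label:<50} ~{seconds:6.1f}s for a {FILE_SIZE_GB:.0f}GB download")
```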
16,066
2,019
"Verizon's 5G in Providence: Unreliable, but fast when it works | VentureBeat"
"https://venturebeat.com/2019/08/28/verizons-5g-in-providence-unreliable-but-fast-when-it-works"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Verizon’s 5G in Providence: Unreliable, but fast when it works Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. If you’ve been following the rat race that is U.S. carriers’ 5G rollouts, you’re probably aware that Verizon recently announced it would light up mobile 5G in portions of Providence, Rhode Island this summer, coinciding with a rollout in Denver, Colorado. The company claimed in a press release that customers could expect speeds up to 450Mbps from its 5G Ultra Wideband Network, with peak speeds over 1.5Gbps and less than 30 milliseconds of latency. And while it’s been a long time coming, Verizon’s finally delivered on its promise. Kind of. This week, the folks at Big Red invited us to put its Providence network to the test using the newly launched Samsung Galaxy Note10 5G. The network covers College Hill, Federal Hill, and Mt. Hope, as well as areas around landmarks like Brown University (specifically the Erickson Athletic Complex and Wriston Quadrangle), Rhode Island School of Design, and Providence College. Like the Chicago demos Verizon staged in April and May , we were allowed to stress-test the network as we saw fit, short of tinkering with the 5G transceiver nodes affixed to electrical poles throughout the city. Cellular at high speeds We kicked off our roughly 40-minute network test at Blue State Coffee, which sits at the corner of Thayer Street and Bowen Street. It’s to the west of downtown Providence and College Hill and just a few blocks north of Waterman Street, which borders the imposing Brown University Haffenreffer Museum of Anthropology and the University Sciences Library. Above: A rough overview of one of the locations of Verizon’s 5G deployment in Providence, Rhode Island. At the top of Thayer stands one of the aforementioned 5G nodes, which blankets the surrounding alleyways and lawns with high-speed connectivity. And toward the Starbucks and Shake Shack, where Thayer narrows and bisects George Street, additional microcells carry the signal further. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Providence is the first Verizon 5G deployment to tap Samsung networking equipment almost exclusively, conferring the advantage of 5G upload speeds as well as 5G download speeds. (That’s an improvement over the 8Mbps to 15Mbps upload speeds Verizon’s other 5G deployments see.) 
But as of press time, the benefits remain largely theoretical — 5G uploads in our testing fluctuated between 10Mbps and 130Mbps.

Above: One of Verizon's 5G nodes in Providence.

A note about Verizon's flavor of 5G: It operates on the short-range 28GHz and 39GHz millimeter wave (mmWave) frequency bands, which are impeded by objects like shrubbery, thick windows, and heavy precipitation. It furthermore requires line of sight, which translates to roughly 30 yards of range from nodes outdoors, without obstructions. As is the case with other compatible smartphones, including the LG V50 ThinQ 5G and the Galaxy S10 5G, the Note10 falls back to Verizon's 4G LTE network when the signal strength dips below a certain threshold.

For comparison, T-Mobile employs 28GHz in most of its 5G launch cities and 39GHz in Las Vegas, while AT&T uses 39GHz millimeter wave spectrum in every market where its 5G network is available. Breaking from the pack, Sprint leverages its mid-band 2.5GHz holdings, which allow for better range at the cost of slower speeds.

It's also worth pointing out that 5G NR — the late 2018/early 2019 follow-up to 4G — is a relatively nascent technology that's in many ways less sophisticated than LTE at the moment. Today's LTE deployments use higher-order QAM (quadrature amplitude modulation) encoding and can leverage more antennas at once: current LTE networks support 4×4 MIMO (multiple-in, multiple-out) and 256 QAM, or four layers of data and eight bits per symbol, compared with 5G NR's 2×2 MIMO and 64 QAM (two layers and six bits per symbol). (A quick worked version of this comparison appears at the end of this article.)

5G in the wild

In the interest of giving Verizon's network a fair shake, I headed toward the most conspicuous 5G node within walking distance — the one situated just up the hill from Blue State Coffee. Cells at its topmost point angle down Thayer Street and to the east and west of Bowen, blanketing blocks in every direction with 5G coverage. My initial speed tests along Bowen didn't disappoint, particularly considering the many partially occluding tree trunks, leaves, and cars I encountered along the way. All three exceeded the promised speed of 450Mbps by at least double — in one case by nearly quadruple — and it was at the intersection of Bowen and Brook Street (which runs parallel to Thayer) that I recorded the highest upload speed of the day:

- 1.34Gbps down and 45.5Mbps up
- 1.83Gbps down and 132Mbps up
- 1.16Gbps down and 58.6Mbps up

I set off in the direction of Brown's campus after trudging around Bowen for a bit. A few false starts later, it came to light that Brook is the easterly border at which 5G reception begins to degrade. Along Hope Street, a block east of Brook, coverage was at best spotty and at worst nonexistent.

Above: Another 5G node.

I conducted another trio of tests just past the Wheeler School building abutting Angell Street. The download speeds there were just as impressive as the first few runs, but uploads were a different story. Only a single test managed to break the 10Mbps barrier, and then that lasted for only a few seconds before inexplicably nosediving:

- 1.17Gbps down and 86.4Mbps up
- 1.821Gbps down and 3.77Mbps up
- 844Mbps down and 40.5Mbps up

It was at this point that I doubled back to Thayer to kick off a quick file download test. Then I made a beeline for Hope to canvass the campus, which led me to Yan's Cuisine at Brook and Benevolent. Speeds there were in line with previous tests, but the 5G coverage was significantly patchier.
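For convenience, here is a minimal sketch that aggregates the six spot readings listed above — just the handful of figures quoted in this piece, not a statistically meaningful sample.

```python
# Quick aggregation of the Providence spot readings quoted above.
# Figures are converted to Mbps as reported; illustrative only, not a
# controlled benchmark.

from statistics import mean

# (download Mbps, upload Mbps) for the six tests listed above
readings = [
    (1340, 45.5),
    (1830, 132.0),
    (1160, 58.6),
    (1170, 86.4),
    (1821, 3.77),
    (844, 40.5),
]

downloads = [d for d, _ in readings]
uploads = [u for _, u in readings]

print(f"Download: min {min(downloads):.0f}Mbps, mean {mean(downloads):.0f}Mbps, max {max(downloads):.0f}Mbps")
print(f"Upload:   min {min(uploads):.1f}Mbps, mean {mean(uploads):.1f}Mbps, max {max(uploads):.1f}Mbps")
```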
Perhaps because of the denser overgrowth, my Note10 5G loaner repeatedly failed to lock onto a 5G signal even with a clear view of the nearest node. Things didn't improve much as I walked Thayer from the campus back to Blue State — the Note10 5G stubbornly clung to 4G the entire time, despite the fact that the nearest node was no more than several hundred feet away.

- 1.08Gbps down and 15.3Mbps up
- 1.08Gbps down and 50.4Mbps up

A work in progress

So what conclusions might be drawn from an afternoon stroll around Providence? First of all, would-be Verizon customers expecting consistent 5G coverage are bound to be disappointed. A step behind a wall or a walk across the road is all it takes to lose 5G connectivity as quickly as it came.

Then there's the device compatibility dilemma. Although the Note10 5G works just fine with Verizon's 5G network in Atlanta, Georgia; Chicago, Illinois; Denver, Colorado; Detroit, Michigan; Indianapolis, Indiana; Minneapolis, Minnesota; Phoenix, Arizona; St. Paul, Minnesota; and Washington, D.C., in addition to Providence, there's no guarantee it'll be compatible with the carrier's future mid-band spectrum deployments.

In fairness to Verizon, the carrier claims speeds will improve with time — potentially substantially — with the rollout of techniques like beam-forming and beam-steering. And at least at the moment, the carrier isn't levying an outrageous premium on early adopters: both its $85 Go Unlimited plan and its $105 Above Unlimited plan include "unlimited 5G data" when connected to a 5G network, just $10 over prior 4G LTE prices.

But it will take a lot to fill the gaps in Verizon's current deployments, not to mention the 20 additional cities in which it plans to launch service by 2020. Each of the network's small cells needs to be approved for a specific location by a city or town, and although Verizon has 800MHz or more of spectrum in many areas, which could increase speeds, tapping it will take further debugging and firmware updates. Its indoor coverage is another story — Verizon recently announced it would partner with Boingo to expand its 5G service indoors and to public spaces like airports, stadiums, hotels, and more, but a timeline remains elusive.
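As promised above, here is a worked version of the LTE-versus-5G NR modulation comparison. It is a deliberately crude sketch that multiplies spatial layers by bits per QAM symbol and ignores coding rate, channel width, and scheduling — the very factors (especially the wide mmWave channels Verizon uses) that let early 5G NR post gigabit results anyway.

```python
# Worked version of the LTE-vs-5G NR modulation comparison quoted above:
# bits carried per symbol time = spatial layers x bits per QAM symbol.
# This ignores coding rate, channel bandwidth, and everything else that also
# matters -- it only shows why early NR needs far more spectrum to beat LTE.

import math

def bits_per_symbol(mimo_layers: int, qam_order: int) -> int:
    """Spatial layers times log2 of the QAM constellation size."""
    return mimo_layers * int(math.log2(qam_order))

lte_today = bits_per_symbol(mimo_layers=4, qam_order=256)  # 4x4 MIMO, 256-QAM -> 4 * 8 = 32
nr_today = bits_per_symbol(mimo_layers=2, qam_order=64)    # 2x2 MIMO, 64-QAM  -> 2 * 6 = 12

print(f"LTE today : {lte_today} bits per symbol time")
print(f"5G NR now : {nr_today} bits per symbol time")
print(f"LTE advantage per unit of spectrum: ~{lte_today / nr_today:.1f}x")
```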