Dataset schema (column name, dtype, observed value range):
id: int64 (0 – 17.2k)
year: int64 (2k – 2.02k)
title: string (lengths 7 – 208)
url: string (lengths 20 – 263)
text: string (lengths 852 – 324k)
15,367
2,019
"Spur raises $8 million to simplify human resource management | VentureBeat"
"https://venturebeat.com/2019/04/16/spur-raises-8-million-to-simplify-human-resource-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Spur raises $8 million to simplify human resource management Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Hourly workers fill about 60% of all jobs (or 78 million) in the U.S., and the average HR department devotes as much as 80% of their work schedule to engaging with and onboarding them. That’s a hefty chunk of time each week, but the alternative — dispensing with people management altogether — is far worse. Fortunately, there’s a third choice in Spur , a Huntsville, Alabama startup developing a cloud-hosted human resource management platform. It today announced that it’s raised $8 million in a series A funding round led by Third Prime with participation from Mark Bezos (who’ll join the board) and Blue Ridge Capital’s John Griffin. Spur CEO Glenn Clayton says the money will help to accelerate its growth within the hospitality industry, bolster its executive and corporate team, and expand its service to Dallas and additional metro areas. “We started Spur with a mission to provide workers greater access to opportunity and ultimately improve their quality of life,” said Clayton. “We’re doing that by partnering with businesses and other organizations to take on all the responsibilities associated with managing HR and payroll for their hourly workforces and then ensuring those workers are well taken care of over the course of their employment.” Spur’s software-as-a-service (SaaS) suite provides hotels, restaurants, and other businesses with an app-based scheduler they can use to slot employees into weekly schedules and approve timesheets. Workers see post-shift earnings within the app. Spur becomes the official employer of record, enabling it to extend benefits and services like health care plans, savings accounts, and paid sick leave, as well as facilitate skills and background checks, screener questionnaires, and W-4 and I-9 verification. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Additionally, Spur hosts a training library and an online jobs portal that matches workers with side gigs, the latter of which outlines qualifications (i.e., permit requirements), pay rates, locations, and other pertinent job details. Spur says that since launching in 2017, it’s added “thousands” of workers to its platform across multiple cities including Atlanta, Georgia; Orlando, Florida; Houston, Texas; Nashville, Tennessee; and Birmingham, Montgomery, and Huntsville, Alabama. 
“We believe most businesses want to treat their workers well — we’re just making it easier and more cost-effective than ever before to deliver on that goal,” Clayton said. “Third Prime, Mark Bezos, and our other investors joining the Spur mission further validates the importance of what we have built and the enormous market demand for better ways to manage employment for America’s growing hourly workforce.”
Spur is in some ways like Jitjatjo, a New York-based startup that offers an end-to-end web staffing platform with a two-sided marketplace through which businesses can book workers with as little as one hour’s notice. Jyve also pairs customers with workers from a “talent marketplace” — albeit strictly for retail customers. Talent-finding app Pared is a closer match, in that it similarly targets the service and hospitality industry, but it’s currently only available in the San Francisco Bay Area and New York City. Despite the competition, though, Third Prime managing partner Wes Barton contends that Spur is well-positioned for growth.
“Spur is redefining how businesses manage and provide for their hourly workforce,” Barton said. “We have been impressed with Spur’s leadership, platform, unique value proposition, and unwavering dedication to helping improve the quality of life for millions of hard-working Americans. We are looking forward to continued momentum as more companies and their workers experience Spur firsthand.” In addition to its Huntsville headquarters, Spur has offices in New York and Chicago. "
15,368
2,021
"Timesheet tracking app When I Work raises $200M | VentureBeat"
"https://venturebeat.com/2021/11/01/timesheet-tracking-app-when-i-work-raises-200m-in-major-growth-round"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Timesheet tracking app When I Work raises $200M Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. When I Work , a job scheduling and timesheet platform, today announced that it secured $200 million in a majority growth investment from Bain Capital Tech Opportunities with participation from Arthur Ventures. The company says that the strategic funding — which brings its total raised to $224 million — will enable When I Work to expand its product suite as it pursues new merger and acquisition opportunities. There were more than 10 million open jobs in the U.S. in August, according to the U.S. Bureau of Labor Statistics — but 5 million fewer people working than before the pandemic. Beyond a lack of benefits and difficult customers, as well as low pay and potential exposure to the coronavirus, former members of the workforce tell surveyors that they desire greater workplace flexibility. According to a recent MyWorkChoice study , 18% of hourly workers reported that a lack of flexible scheduling is the reason they quit their jobs. Gig market Founded in 2010 in Minneapolis, Minnesota, by Chad Halvorson and Daniel Olfelt, When I Work provides shift-based workforce management software that handles scheduling, communication, and tracking the time employees take to complete jobs. Halvorson previously served as a managing partner at Meditech Communications, a marketing agency for the medical device industry, while Olfelt was the senior lead engineer at Meditech. “The idea for When I Work grew out of founder Chad Halvorson’s frustrations with the tedious paper scheduling system at his high school grocery store job,” CEO Martin Hartshorne told VentureBeat via email. “The company was created to build a workforce management software specifically aimed at shift-based workplaces and focused on making employee scheduling easier and more streamlined for both employers and employees alike … With When I Work, employees get flexibility and more control over their schedules, resulting in happier, more productive teams while employers gain agility and a means to attract and retain workers. 85% of employees using When I Work engage with the app weekly.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When I Work lets admins manage time off, switch shifts, and message with or view the availability for a single team and location — or across hundreds. 
The platform, which integrates with payroll providers like ADP, can automatically match shifts with employee qualifications and availability and prevent overtime with alerts, thresholds, and shift limits. A time clock app with GPS capabilities facilitates clocking in and clocking out across devices, and employers can also create an employee clock on an iPad, computer, or mobile device so that remote or offsite workers can clock in and out from their devices. “When the scheduling and attendance software are used together, [if employees are] late or forget to clock out, both the manager and employee are notified with a mobile alert. You can also get mobile alerts if employees clock in at the wrong job site by using the built-in GPS location services,” When I Work explains on its website. “By enabling GPS clock in [and live maps], companies can ensure that employees are punching in at their workplace or job site address based on GPS location on their phones.” Privacy concerns While When I Work customers cite the need for protection against time theft — according to one source, employers lose about 4.5 hours per week per employee to time theft — some workers might feel differently about the platform’s location-tracking capabilities. Privacy experts worry that workplace tracking tools will normalize greater levels of surveillance, capturing data about workers’ movements and empowering managers to punish employees perceived as unproductive. As a piece in The Atlantic points out, there’s no federal privacy law to keep businesses from tracking their employees with GPS — and only a handful of states impose restrictions on it. Each state has its own surveillance laws, but most give wide discretion to employers as long as any equipment and software they use to track employees is plainly visible. A recent survey by ExpressVPN found that that surveillance of employees has increased during the pandemic, with 78% of the companies surveyed reporting that they’re now using monitoring software. In a separate survey , only 5% of workers said that they completely agree with the practice of employee activity and productivity monitoring, and more than half said that they didn’t know if their employer was monitoring them or not. For its part, When I Work says it only records an employee’s location upon clock in and clock out if location restrictions are enabled. Employees’ locations aren’t gathered throughout their shift, and employees aren’t clocked in and out automatically when they’re at or leave a scheduled location. (They must clock in and out using one centralized time clock.) “When I Work is a unique software platform that solves a very real pain point for hourly employees and their employers,” Bain Capital Tech Ventures managing director Phil Meicler said in a press release. “It was built to suit the needs of today’s workforce by elegantly automating a time-consuming and manual process, generating real productivity gains and offering a solution that is beloved by the employees who use it daily.” Despite competition from companies such as Deputy, QuickBooks Time, Hubstaff, and Timely, When I Work added customers across health care, retail, and manufacturing this year, bringing the number of workplaces that use its platform to more than 200,000. Name-brand clients include Dunkin’, Ace Hardware, Ben & Jerry’s, and Kenneth Cole, some of which uses When I Work’s ancillary services like labor reports and analytics. 
“Since the company’s founding in 2010, more than 10 million employees have worked more than 100 million shifts on the platform … When I Work aims to triple the value of the company in the next few years,” Hartshorne said. “This investment will enable When I Work to deepen and expand its suite of employee-first software solutions, pursue opportunistic acquisitions and enhance its category-defining position. Our number one priority is to use these funds to help support additional research and development, so our platform can keep providing our customers with increased breadth and depth of functionality for their business needs.” The global time-tracking software market was valued at $425.32 million in 2019 and is expected to reach $1.78 billion by 2026, at a compound annual growth rate of 22.36%. "
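The article above describes GPS-restricted clock-ins: a punch is only accepted if the phone reports a position near the job site. A minimal sketch of that kind of geofence check follows; it is illustrative only and not When I Work's code or API, and every name in it is hypothetical.

```python
# Illustrative geofenced clock-in check; not When I Work's implementation.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two lat/lon points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def can_clock_in(employee_pos, site_pos, radius_m=150):
    """Accept the punch only if the reported position lies inside the job-site geofence."""
    return distance_m(*employee_pos, *site_pos) <= radius_m

# Example: an employee roughly 50 m from the site entrance is allowed to clock in.
print(can_clock_in((44.9778, -93.2650), (44.9781, -93.2655)))  # True
```

Note that, as the article says, such a check runs only at clock-in and clock-out; it does not imply continuous location tracking during a shift.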
15,369
2,016
"Microsoft launches Azure Monitor public preview, H-Series VM instances | VentureBeat"
"https://venturebeat.com/2016/09/26/microsoft-launches-azure-monitor-public-preview-h-series-vm-instances"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft launches Azure Monitor public preview, H-Series VM instances Share on Facebook Share on X Share on LinkedIn A Microsoft Azure Monitor dashboard. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft today announced the launch of a new type of compute-intensive virtual machine (VM) instances for its Azure public cloud, along with a public preview for a new cloud infrastructure monitoring service called Azure Monitor. Azure Monitor gives Azure customers a tool for checking the status of Azure resources they’re using, including VMs. The service provides an activity log and metrics, and it lets users create shareable dashboards and sign up for notifications. It’s accessible from the Azure portal, but data can also be accessed through application programming interfaces (APIs), Azure Monitor senior program manager Ashwin Kamath wrote in a blog post. Now Azure has its own first-party monitoring tool, just like public cloud market leader Amazon Web Services (AWS) has CloudWatch and Google Cloud Platform has StackDriver. Microsoft does already offer cloud-based monitoring tools, and some of them can be used as extensions of Azure Monitor. “Azure Monitor enables you to easily stream metrics and diagnostic logs to OMS [Operations Management Suite] Log Analytics to perform custom log search and advanced alerting on the data across resources and subscriptions,” Kamath wrote. “Azure Monitor metrics and logs for Web Sites and VMs can be easily routed to Visual Studio Application Insights, unlocking deep application performance management within the Azure portal.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! On top of that, Azure Monitor has integrations with monitoring and log analysis tools from other companies, including AppDynamics, Atlassian, Cloudyn, DataDog, New Relic, PagerDuty, Splunk, and Sumo Logic. The new H-Series instances are now available only through the South Central US Azure region, with more geographical availability to come, Azure compute principal program manager Tejas Karmarkar wrote in a blog post. The instances have 8-16 cores, 56-224 GiB of RAM, and 1-2 TiB of solid-state local disk space. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
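The article notes that Azure Monitor data is reachable through APIs as well as the portal. The sketch below shows the general shape of pulling a resource metric over the Azure Monitor REST API; the subscription, resource, token, and api-version values are placeholders, and the current request format should be checked against the Azure documentation rather than taken from this sketch.

```python
# Rough sketch of querying Azure Monitor metrics over REST; placeholder values throughout.
import requests

ARM = "https://management.azure.com"
resource_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
token = "<Azure AD bearer token>"  # e.g. obtained out of band via Azure AD / the Azure CLI

resp = requests.get(
    f"{ARM}{resource_id}/providers/Microsoft.Insights/metrics",
    params={"api-version": "2018-01-01", "metricnames": "Percentage CPU"},  # api-version is an assumption
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for metric in resp.json().get("value", []):
    # Each entry carries the metric name plus its timeseries of datapoints.
    print(metric["name"]["value"])
```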
15,370
2,021
"Microsoft: Open source is now the accepted model for cross-company collaboration | VentureBeat"
"https://venturebeat.com/2021/01/14/microsoft-open-source-is-now-the-accepted-model-for-cross-company-collaboration"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft: Open source is now the accepted model for cross-company collaboration Share on Facebook Share on X Share on LinkedIn Microsoft "loves" open source t-shirt Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft said that it has learned a lot from its increased engagement with the open source world, adding that open source is also now the “accepted model” for collaboration between companies. Microsoft was once one of the purest purveyors of proprietary software, but it has gone some way toward shedding that image over the past decade. Spearheaded in large part by Satya Nadella , who oversaw. NET’s open-sourcing , Microsoft’s joining of the Linux Foundation , and the open source initiative , Microsoft has pushed hard to convince the world that it’s “ all-in on open source. ” The year 2020 continued on a similar footing, with Microsoft open-sourcing more of its own technologies. The company also created (and joined) the Open Source Security Foundation (OSSF) alongside old foes Google and IBM, and it emerged as the top external contributor to Google’s open source Chromium project. In a blog post published today, Microsoft said that an industry-wide embrace of open source technology has encouraged cross-company collaboration, particularly among the tech giants, which can bypass much of the lawyering to join forces in weeks rather than months. This highlights the role that open source plays in bringing the big tech behemoths of the world together. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “A few years ago if you wanted to get several large tech companies together to align on a software initiative, establish open standards, or agree on a policy, it would often require several months of negotiation, meetings, debate, back and forth with lawyers … and did we mention the lawyers?,” wrote Sarah Novotny, Microsoft’s open source lead for the Azure Office of the CTO. “Open source has completely changed this: it has become an industry-accepted model for cross-company collaboration. 
When we see a new trend or issue emerging that we know would be better to work on together to solve, we come together in a matter of weeks, with established models we can use to guide our efforts.” The company highlighted several ways that it’s learning from its investments in open source, including the importance of listening to community feedback; the need to help employees find a balance between autonomy and adhering to company policy; and why “over communicating” can help remove uncertainty and stress.
The rise of open source
A quick peek across the open source sphere over the past few years reveals how pivotal open source now is to businesses of all sizes, with IBM snapping up open source software maker Red Hat for $34 billion, Salesforce buying Mulesoft for $6.5 billion, and Microsoft itself doling out $7.5 billion for GitHub. Moreover, all the big technology companies these days rely on — and contribute to — open source projects, while simultaneously making many of their own tools available under an open source license. In a year fraught with challenges for most of the global workforce as they rapidly transitioned to remote work, Microsoft said the world can learn a lot from the open source realm, which has always had to embrace a remote-first and “digital-first” mindset due to its inherent global distribution. “For those of us who have been deeply engaged in open source, remote work has been our norm for many years because open source communities are large, globally distributed, and require effective collaboration from developers around the world,” Novotny said. "
15,371
2,021
"BrowserStack, a cross-browser web testing platform for DevOps, raises $200M | VentureBeat"
"https://venturebeat.com/2021/06/16/browserstack-a-cross-browser-web-testing-platform-for-devops-raises-200m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages BrowserStack, a cross-browser web testing platform for DevOps, raises $200M Share on Facebook Share on X Share on LinkedIn BrowserStack: Local testing Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. BrowserStack , a website testing platform for developer operations (DevOps) teams, has raised $200 million in a series B round. The funding, which gives the company a valuation of $4 billion, comes as businesses across every sector have had to embrace digital transformation due to the global pandemic. This has created the need for more tools to test software and accelerate the rate at which features go to market, with the cloud playing an integral role in the process. “Enterprises today need to release software with speed and quality to remain competitive,” BrowserStack cofounder and CEO Ritesh Arora told VentureBeat. “We replace the need for teams to own and manage an in-house test infrastructure. This means development teams can focus on building quality software at speed, rather than maintaining an in-house testing infrastructure that is complex to build and impossible to scale.” How it works Founded in 2011, BrowserStack helps developer and quality assurance (QA) teams test their software on thousands of device, browser, and operating system combinations to identify any bugs — both manually and automatically. The company, which has amassed an impressive roster of customers, including Amazon, Google, Microsoft, Twitter, and Spotify, says it has 15 datacenters around the world, ensuring developers benefit from minimal latency wherever they are. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The crux of the issue is this: A website might work fine on the latest version of Chrome installed on Samsung’s most recent Android flagship, but what about on Firefox on an old version of Windows? All the potential hardware and software configurations make it difficult for developers and testers to check that their software will work well for all users. This is where BrowserStack’s cloud-based testing platform comes into play, with automated Selenium testing for desktop and mobile browsers. Above: BrowserStack For companies looking to test prototype or other work-in-progress web and mobile apps away from the public stage, BrowserStack also supports testing in local development environments. “We support every development and testing environment used by developers,” Arora said. 
“Local testing allows developers to test applications hosted behind firewalls by creating a secure tunnel between the developer’s environment and BrowserStack’s platform.” On the enterprise side, BrowserStack offers advanced administrative controls, single sign-on support for user authentication, and data governance via network controls. On top of this, BrowserStack offers fairly deep analytics, which can be used to evaluate the performance of software testing automation, for example, while businesses can dig down into metrics around build times and coverage to ensure they’re testing for the right device/browser combinations.
Above: BrowserStack: Usage analytics
Testing times
The broader software testing market was estimated to be worth nearly $46 billion last year, a figure that’s set to more than double within six years. BrowserStack raised $50 million in its series A round of funding more than three years ago, and its latest cash injection was spearheaded by Bond, with participation from Insight Partners and Accel. Elsewhere, BrowserStack rival LambdaTest secured $16 million in funding earlier this month, a few months after another notable competitor called Sauce Labs upped its investment from asset firm TPG. So what is driving demand for these platforms? It seems the pandemic has left an indelible mark on just about every company and industry, and BrowserStack and its cloud-based competitors are no different. “COVID has forced every single organization globally to look at work-from-home and remote working options,” Arora said. “This led to a large number of companies looking at cloud solutions to replace their on-premise infrastructure. COVID has also led to the acceleration of digital transformation across sectors, as companies look for ways to scale and increase velocity without relying on their in-house systems.” "
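The BrowserStack article above describes the core pattern: tests run locally while the browser runs on a remote grid. A minimal sketch of that pattern with Selenium follows; the credentials are placeholders and the `bstack:options` capability format is an assumption to be checked against BrowserStack's own documentation.

```python
# Minimal sketch of remote cross-browser testing on a Selenium grid such as BrowserStack's.
# Credentials are placeholders; the capability layout is an assumption, not official usage.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Capabilities tell the grid which browser/OS combination to provision for this session.
options.set_capability("browserName", "Chrome")
options.set_capability("bstack:options", {"os": "Windows", "osVersion": "11"})  # assumed format

driver = webdriver.Remote(
    command_executor="https://<USERNAME>:<ACCESS_KEY>@hub-cloud.browserstack.com/wd/hub",
    options=options,
)
try:
    driver.get("https://example.com")
    # The same assertions you would run against a local browser now run against the cloud one.
    assert "Example" in driver.title
finally:
    driver.quit()
```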
15,372
2,021
"Grafana Labs acquires load-testing startup K6 | VentureBeat"
"https://venturebeat.com/2021/06/17/grafana-labs-acquires-load-testing-startup-k6"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Grafana Labs acquires load-testing startup K6 Share on Facebook Share on X Share on LinkedIn Office with software developers Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Observability platform Grafana Labs today announced it has acquired K6 , a Stockholm-headquartered startup that’s building an open source load testing tool for engineering teams. Grafana says it and K6 will work together on an integrated offering as part of Grafana’s tech stack, providing a way to monitor and connect logs, metrics, and traces to diagnose app performance issues. “When we first spoke with the team at K6, we were immediately impressed by the incredible similarities between them and Grafana, including our passion for open source and how they modernized load testing,” Grafana cofounder and CEO Raj Dutt said in a press release. “In the past, load testing required costly infrastructure investments to run, resulting in only the most well-funded efforts reaping the benefits. These days, compute is available on demand and microservices are ephemeral, so it is far less costly to run test simulations. Because K6 is open source and also has a cloud offering, developers can realize the benefits much more rapidly.” In 2000, K6’s founding team was working on a massively multiplayer online role-playing game (MMORPG), with the goal of supporting hundreds of players simultaneously. The need for early load testing arose, and the team opted to open a consultancy to support organizations including the European Space Agency. In 2008, the team pivoted to bring its load testing tool to market as a website benchmarking product. And in 2016, K6 started working on a new open source load testing tool for automating performance tests. Above: K6’s load testing platform. “Grafana is the unquestioned leader in the observability space, and [they] have done an incredible job building an open and composable observability stack for their users — both on the open source and enterprise fronts,” K6 CEO Robin Gustafasson said. “Joining the Grafana family will accelerate our ability to give modern engineering teams better ways to observe and build reliable applications.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Platform expansion The K6 acquisition comes a week after Grafana detailed a series of product updates, including the general availability of Grafana 8.0 and Grafana Tempo 1.0, as well as new machine learning capabilities. 
The latest release of Grafana includes enhanced visualizations, a unified alerting system, and the debut of Grafana Tempo distributed tracing. More than 740,000 people are actively using the Grafana platform to visualize data from smart home devices, business apps, and more, according to Grafana cofounder Torkel Ödegaard. And Grafana has over 1,000 paying customers, including Bloomberg, JP Morgan Chase, eBay, PayPal, and Sony. A comprehensive software monitoring solution, along with a number of other technical practices, has the potential to positively contribute to continuous delivery. According to VMware Tanzu, 92% of businesses are using observability tools to enable more effective business decision-making. Given the large number of metrics collected about the behavior of distributed app environments, real-time business insights can emerge with the use of observability tools, creating value for stakeholders. "
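For readers unfamiliar with load testing as described in the K6 article above: the essence is many concurrent "virtual users" exercising an endpoint while response times are recorded. K6 scripts are written in JavaScript; the Python sketch below is only a rough stand-in for that idea, not K6's API or Grafana's integration, and the target URL is hypothetical.

```python
# Rough illustration of a load test: concurrent virtual users, latency percentiles at the end.
import time
import statistics
import concurrent.futures
import urllib.request

URL = "https://example.com/"              # hypothetical target
VIRTUAL_USERS, REQUESTS_PER_USER = 10, 5

def user_session(_):
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

with concurrent.futures.ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    all_latencies = [t for result in pool.map(user_session, range(VIRTUAL_USERS)) for t in result]

print(f"p50={statistics.median(all_latencies)*1000:.0f} ms, "
      f"max={max(all_latencies)*1000:.0f} ms over {len(all_latencies)} requests")
```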
15,373
2,021
"Polar Signals open-sources Parca to optimize code and cut cloud bills | VentureBeat"
"https://venturebeat.com/2021/10/08/polar-signals-open-sources-parca-to-optimize-code-and-cut-cloud-bills"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Polar Signals open-sources Parca to optimize code and cut cloud bills Share on Facebook Share on X Share on LinkedIn Polar Signals: Improve performance using profiling data collected over time Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. “Continuous profiling” might not be a familiar concept to every company, but in a world increasingly drawn to cloud software and infrastructure , it’s something that most should be aware of. It’s a signal of sorts that belongs to the broader software monitoring category known as observability, which is concerned with measuring the internal state of a system by analyzing the data outputs — this can tell companies how their software is performing, and identify issues. Continuous profiling, specifically, is all about monitoring the resources that an application is using, such as CPU or memory, giving engineers deeper insights into what code — down to the line number — is consuming the most resources. A common use case is in helping companies reduce their cloud bill, given that most of the major cloud platform providers charge on a consumption basis: the more consumption, the higher the cost. So continuous profiling is basically optimizing codebases to save on cloud costs. Google was one of the early champions of the practice, detailing it in a 2010 white paper titled, Google-Wide Profiling: A Continuous Profiling Infrastructure for Data Centers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! There are several notable players in the space, such as software monitoring giant Datadog , while Andreessen Horowitz-backed Optimyze, which develops the closed source Prodfiler , does something similar. And newcomer Polar Signals has officially thrown its hat into the ring today with the launch of a new continuous profiling open source project called Parca , which is available on GitHub now. Additionally, Polar Signals today announced it has raised $4 million in seed funding from Alphabet’s venture capital arm GV and Lightspeed. Backbone Founded in 2020 by Frederic Branczyk , a former Red Hat senior principal engineer and prominent figure in the Prometheus and Kubernetes open source ecosystems, Polar Signals is designed for large-scale infrastructure, which means it’s gunning for the enterprise segment in a sizable way. 
Parca is the backbone of Polar Signals, and as an open source project, it’s designed to bring the power of continuous profiling to developers from all businesses. It packs a bunch of features out of the box, including capabilities for collecting, storing, and making profiles available for query over time — this includes CPU profiling to determine the amount of time a CPU needs to execute a specific piece of code. Above: Parca: Point-in-time CPU metrics Polar Signals has been designed from the get-go to play nicely with all the usual observability tools, such as Jaeger and Prometheus , the latter now being the “defacto standard” for monitoring any Kubernetes environment. “We have taken special care that Parca and Polar Signals integrate particularly well with those environments,” Branczyk told VentureBeat. The Parca agent is deployed into each Kubernetes cluster node, with the workloads automatically profiled with “super-low overhead,” Branczyk added. “We have prepared lots of pre-baked deployment options and tutorials to make this as easy as possible — users can then choose to run the storage themselves, or purchase the hosted version from us.” The commercial hosted Polar Signals product launched in beta back in February, and there it shall remain until next year. Branczyk said that the company will eventually offer additional enterprise-grade features, such as automatic recommendations to address infrastructure configuration and code. Polar Signals’ early user base includes companies that “run foundational pieces of the internet,” such as content delivery networks (CDNs), SaaS companies, database platforms, and even ecommerce companies such as Zalando. “Our early users find it most useful for saving cost on a cloud bill, and it has shown that most companies are leaving an easy 20% savings on the table because they don’t have insight into what to optimize,” Branczyk said. Big bills To some, saving on cloud costs might sound like something that younger, cash-strapped startups would be most interested in. But as Branczyk points out, it’s actually larger companies that stand to benefit the most. “Typically the larger the company, the larger their cloud bill is, so those companies have more to gain, therefore medium to large enterprises are our perfect customers,” Branczyk explained. “Small companies with cloud bills tend to be early-stage startups that don’t really care about their cloud bill efficiency — yet — so those are less likely to be our target customer base.” Of course, continuous profiling isn’t purely about saving cloud costs — customers expect software to be fluid and fast, so it’s ultimately about improving the overall user experience too. With $4 million in the bank from big-name backers like GV and Lightspeed, Polar Signals is now well-financed to double down on Parca development and prepare the core commercial hosted product for launch in early 2022. “Our mission is to not just observe but to truly understand production systems,” Branczyk added. “We feel continuous profiling shines a light on aspects that have been lacking in the observability space, and we have many ideas to further extend our understanding of running systems beyond continuous profiling. We want observability to become understandability.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
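Parca, described above, does fleet-wide, always-on profiling with low overhead. As a much smaller stand-in for the underlying idea — measure where CPU time actually goes, attribute it to functions, then optimize the hot spots — the sketch below uses Python's built-in profiler; it is not Parca and makes no claim about how Parca's agent works.

```python
# Scaled-down illustration of profiling-driven optimization using Python's standard library.
import cProfile
import pstats

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)          # quadratic-time string building: the "hot spot" a profiler exposes
    return s

def fast_concat(n):
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(50_000)
fast_concat(50_000)
profiler.disable()

# The report ranks functions by time spent, pointing at what is worth optimizing first.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```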
15,374
2,021
"Inside Microsoft's open source program office | VentureBeat"
"https://venturebeat.com/2021/10/22/inside-microsofts-open-source-program-office"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Inside Microsoft’s open source program office Share on Facebook Share on X Share on LinkedIn Microsoft's OSPO team pictured in 2019. Stormy Peters is second from left. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Microsoft hasn’t always been a bastion of open source software (OSS) — former CEO Steve Ballmer once even went so far as to call OSS ‘a cancer’. But it’s changed days at the technology giant, with Ballmer’s successor Satya Nadella going to great lengths to convince the world that it was wrong about open source. Seven years in the hot seat and counting, Nadella has overseen Microsoft joining the Linux Foundation , the open source initiative (OSI), and the open source security foundation (OSSF). The company has also open-sourced many of its own technologies, including the .NET framework. Elsewhere, Microsoft is a top contributor to third-party projects such as Google’s Chromium , and let’s not forget that it doled out $7.5 billion for GitHub , the de facto code hosting and collaboration platform for open source projects. Earlier this year, Sarah Novotny, Microsoft’s open source lead for the Azure Office of the CTO, wrote that open source software is now the “accepted model” for cross-company collaboration, enabling Big Tech rivals to quickly join forces for the greater good. Underpinning much of this is the humble open source program office (OSPO), which has emerged as an integral part of business operations, ranging from venture capital-backed startups to the tech giants of the world. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! OSPOs bring formality and order to companies’ open source endeavors, helping them align project goals with key business objectives, set policies, manage license and compliance issues, and more. VentureBeat caught up with Stormy Peters , director of Microsoft’s open source programs office, to get the lowdown on Microsoft’s OSPO, its evolution since its launch back in 2014, and the role it plays in helping Microsoft manage its myriad open source efforts. Above: Stormy Peters: Director of Microsoft’s open source programs office (OSPO) The open source factor The benefits of open source software are well understood — it lowers the barrier to entry and gives companies greater control over their technology and data stack. 
But perhaps more than that, engaging and collaborating with the open source community is a focal point for most of the big tech companies because it helps them compete for top technical talent. “These are exciting times as more and more organizations are engaging more with open source,” Peters said. “It’s also just as important to developers to be able to use open source in their work — jobs that involve open source are more likely to retain developers.” However, the growing threat of software supply chain attacks and other security issues, not to mention all the license and compliance complexities, puts considerable pressure on developers and engineers when all they really want to be doing is building products. And that, ultimately, is what the OSPO is all about. “OSPOs help make sure your developers can move quickly,” Peters said. “Without an OSPO, teams across Microsoft would probably have to do a lot more manual compliance work, and they would all have to reinvent the wheel when it comes to understanding open source licenses, compliance, best practices, and community — we know they’d do well, but we want to help them do even better and faster by learning from each other and using tools standard across the company.” OSPO evolution Open source program offices have evolved greatly through the years, according to Peters, with two specific changes standing out in terms of scope and industry adoption. “OSPOs no longer focus solely on license compliance and intellectual property concerns — we now help with best practices, training, outreach, and more,” Peters explained. “And, it’s no longer just tech companies that have OSPOs.” Indeed, a recent survey from TODO Group , a membership-based organization for collaborating and sharing best practices around open source projects, found that while OSPO adoption is still at its highest in the tech industry, other industries such as education and the public sector are gaining steam. “The types of Microsoft customers interested in creating OSPOs that I’ve spoken to range from a large retail business in North America, to a bank in South America, to a car manufacturer in Europe,” Peters added. Above: OSPO adoption by Industry Microsoft’s OSPO tracks all the open source it uses internally, while it engages with its development teams that are looking to open-source their own software. It also figures out all the license issues to ensure they remain compliant, instigates any necessary legal and business reviews where required, provides training, and more. “The OSPO works across the company to collaborate with different open source experts and leaders to help curate guidance and policy,” Peters said. “We want to reduce friction and make it easier for employees to use open source — that includes using and contributing to open source software, as well as launching projects in the community.” Despite the extensive scope of the work, Microsoft’s OSPO team remains relatively lean with just eight people, though that doesn’t account for all those across the business and beyond that they actively engage with, from engineering through security, legal, marketing, and more. There is also a group of more than “100 open source champions” from across its global divisions who regularly meet with the OSPO to help pass on knowledge down the chain and through their own networks. “Our job is to help make it easier for employees to use and contribute to open source,” Peters explained. 
“We work with all the groups to help set policy, empower employees with knowledge and tools, and consult different groups across Microsoft and others in the industry on their open source strategy.” "
15,375
2,021
"Quantum venture funding dipped 12% in 2020, but quantum investments rose 46% | VentureBeat"
"https://venturebeat.com/2021/02/11/quantum-venture-funding-dipped-12-in-2020-but-quantum-investments-rose-46"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantum venture funding dipped 12% in 2020, but quantum investments rose 46% Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Sorting through the hype surrounding quantum computing these days isn’t easy for enterprises trying to figure out the right time to jump in. Skeptics say any real impact is still years away, and yet quantum startups continue to seduce venture capitalists in search of the next big thing. A new report from CB Insights may not resolve this debate, but it does add some interesting nuance. While the number of venture capital deals for quantum computing startups rose 46% to 37 in 2020 compared to 2019, the total amount raised in this sector fell 12% to $365 million. Looking at just the number of deals, the annual tally has ticked up steadily from just 6 deals in 2015. As for the funding total, while it was down from $417 million in 2019, it remains well above the $73 million raised in 2015. There’s a couple of conclusions to draw from this. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! First, the number of startups being drawn into this space is clearly rising. As research has advanced, more entrepreneurs with the right technical chops feel the time is now to start building their startup. Second, the average deal size for 2020 was just under $10 million. And if you include the $46 million IQM raised , that squeezes the average for most other deals down even further. That certainly demonstrates optimism, but it’s far from the kind of financial gusher or valuations that would indicate any kind of quantum bubble. Finally, it’s important to remember that startups are likely a tiny slice of what’s happening in quantum these days. A leading indicator? Perhaps. But a large part of the agenda is still being driven by tech giants who have massive resources to invest in a technology that may have a long horizon and could be years away from generating sufficient revenues. That includes Intel, IBM , Google , Microsoft , and Amazon. Indeed, Amazon just rolled out a new blog dedicated to quantum computing. Last year, Amazon Web Services launched Amazon Braket , a product that lets enterprises start experimenting with quantum computing. Even so, AWS quantum computing director Simone Severini wrote in the inaugural blog post that business customers are still scratching their heads over the whole phenomenon. 
“We heard a recurring question, ‘When will quantum computing reach its true potential?’ My answer was, ‘I don’t know,’” he wrote. “No one does. It’s a difficult question because there are still fundamental scientific and engineering problems to be solved. The uncertainty makes this area so fascinating, but it also makes it difficult to plan. For some customers, that’s a real issue. They want to know if and when they should focus on quantum computing, but struggle to get the facts, to discern the signal from all the noises.” "
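The "just under $10 million" average deal size cited in the article above follows directly from the report's figures; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the averages cited in the article.
total_raised_2020 = 365_000_000   # dollars, per CB Insights
deals_2020 = 37
print(total_raised_2020 / deals_2020)                        # ~9.86M: "just under $10 million"

# Setting aside the single $46M IQM round squeezes the remaining average further, as the article notes.
print((total_raised_2020 - 46_000_000) / (deals_2020 - 1))   # ~8.86M
```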
15,376
2,021
"Quantum computing startup Quantum Machines raises $50M | VentureBeat"
"https://venturebeat.com/2021/09/06/quantum-computing-startup-quantum-machines-raises-50m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantum computing startup Quantum Machines raises $50M Share on Facebook Share on X Share on LinkedIn Quantum Machines' cofounders (left to right): Drs. Nissim Ofek, Yonatan Cohen, and Itamar Sivan. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Quantum Machines , a company that’s setting out to “bring about useful quantum computers,” has raised $50 million in a series B round of funding as it looks to fund expansion into quantum cloud computing. Founded out of Tel Aviv in 2018, Quantum Machines last year formally launched its Quantum Orchestration Platform , pitched as an extensive hardware and software platform for “performing the most complex quantum algorithms and experiments” and taking quantum computing to the next level by making it more practical and accessible. Based on principles from quantum mechanics , quantum computing is concerned with quantum bits ( qubits ) rather than atoms. While still in its relative infancy, quantum computing promises to revolutionize computation by performing in seconds complex calculations that would take the supercomputers of today years or longer. The societal and business implications of this are huge and could expedite new drug discoveries or enhance global logistics in the shipping industry to optimize routes and reduce carbon footprints. Quantum Machines is focused on developing a new approach to controlling and operating quantum processors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Quantum processors hold the potential for immense computational power, far beyond those of any classical processor we could ever develop, and they will impact each and every aspect of our lives,” Quantum Machines CEO Dr. Itamar Sivan said in a press release. Quantum leap Venture capital (VC) investments in quantum computing have been relatively modest , but Ionq became the first such company to go public via a SPAC merger in March. And a few months back, PsiQuantum closed a $450 million round of funding to develop the first “commercially viable” quantum computer, with big-name backers that included BlackRock and Microsoft’s M12 venture fund. Microsoft also launched its Azure Quantum cloud computing service, which it first announced back in 2019 , in public preview. So quantum computing appears to be gaining momentum, as evidenced by Quantum Machines’ latest raise. 
The company had previously raised $23 million, including a $17.5 million series A from last year, and its series B round was led by Red Dot Capital Partners, with participation from Samsung Next, Battery Ventures, Valor Equity Partners, Exor, Claridge Israel, Atreides Management LP, TLV Partners, and 2i Ventures, among others. The company said it plans to use its fresh capital to help implement an “effective cloud infrastructure for quantum computers.”"
15,377
2,021
"Amazon announces Graviton3 processors for AI inferencing | VentureBeat"
"https://venturebeat.com/2021/11/30/amazon-announces-graviton3-processors-for-ai-inferencing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon announces Graviton3 processors for AI inferencing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At its re:Invent 2021 conference today, Amazon announced Graviton3, the next generation of its custom ARM-based chip for AI inferencing applications. Soon to be available in Amazon Web Services (AWS) C7g instances, the company says that the processors are optimized for workloads including high-performance compute, batch processing, media encoding, scientific modeling, ad serving, and distributed analytics. Alongside Graviton3, Amazon unveiled Trn1, a new instance for training deep learning models in the cloud — including models for apps like image recognition , natural language processing , fraud detection, and forecasting. It’s powered by Trainium , an Amazon-designed chip that the company last year claimed would offer the most teraflops of any machine learning instance in the cloud. (A teraflop translates to a chip being able to process 1 trillion calculations per second.) As companies face pandemic headwinds including worker shortages and supply chain disruptions, they’re increasingly turning to AI for efficiency gains. According to a recent Algorithmia survey, 50% of enterprises plan to spend more on AI and machine learning in 2021, with 20% saying they will be “significantly” increasing their budgets for AI and ML. AI adoption is, in turn, driving cloud growth — a trend of which Amazon is acutely aware, hence the continued investments in technologies like Graviton3 and Trn1. Graviton3 AWS CEO Adam Selipsky says that Graviton3 is up to 25% faster for general-compute workload and provides two times faster floating-point performance for scientific workloads, two times faster performance for cryptographic workloads, and three times faster performance for machine learning workloads versus Graviton2. Moreover, Graviton3 uses up to 60% less energy for the same performance compared with the previous generation, Selipsky claims. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Graviton3 also includes a new pointer authentication feature that’s designed to improve overall security. Before return addresses are pushed onto the stack, they’re first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they’re validated before being used. 
An exception is raised if the address isn't valid, blocking attacks that work by overwriting the stack contents with the address of harmful code. As with previous generations, Graviton3 processors include dedicated cores and caches for each virtual CPU, along with cloud-based security features. C7g instances will be available in multiple sizes, including bare metal, and Amazon claims that they're the first in the cloud industry to be equipped with DDR5 memory, up to 30Gbps of network bandwidth, and elastic fabric adapter support.

Trn1

According to Selipsky, Trn1, Amazon's instance for machine learning training, delivers up to 800Gbps of networking bandwidth, making it well-suited for large-scale, multi-node distributed training use cases. Customers can leverage clusters of up to tens of thousands of Trn1 instances for training models containing upwards of trillions of parameters. Trn1 supports popular frameworks including Google's TensorFlow, Facebook's PyTorch, and MXNet, and it uses the same Neuron SDK as Inferentia, the company's cloud-hosted chip for machine learning inference. Amazon is quoting 30% higher throughput and 45% lower cost-per-inference compared with standard AWS GPU instances."
15,378
2,021
"Medical digital twins secure COVID-19 data | VentureBeat"
"https://venturebeat.com/2021/10/31/medical-digital-twins-secure-covid-19-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Medical digital twins secure COVID-19 data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dell has partnered with the i2b2 tranSMART foundation to create privacy-preserving digital twins to treat the long-haul symptoms of COVID-19 patients. The project hopes to improve treatment for the 5% of COVID-19 patients who develop chronic health issues. The new tools integrate de-identified data — which refers to data from which all personally identifiable information has been removed — AI, and sophisticated models that allow researchers to perform millions of treatment simulations based on genetic background and medical history. This initiative is part of Dell’s long-term goal to bring digital transformation across the healthcare industry. Jeremy Ford, Dell vice-president of strategic giving and social innovation, told VentureBeat, “AI-driven research and digital twins will support hospitals and research centers globally and contribute to Dell’s goal to use technology and scale to advance health, education and economic opportunity for 1 billion people by 2030.” The i2b2 TranSMART foundation (Informatics for Integrating Biology at the Bedside) is an open-source open-date community for enabling collaboration for precision medicine. The group is focused on projects to facilitate sharing and analysis of sensitive medical data in a way that benefits patients and protects privacy. The partnership between Dell and i2b2 promises to create best practices for applying privacy-enhanced computation (PEC) to medical data. I2b2 chief architect Dr. Shawn Murphy told VentureBeat that medical digital twins are essential because they enable “patients like me” comparisons across very large cohorts of similar medical twins. This will help identify things like biological markers for diseases and compare treatment options for patients who share similar features like age, gender, underlying conditions, and ethnicity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Multiple sources and types of data go into constructing the medical twins, including a patient’s Electronic Health Record (EHR), consultation information directly from the patient, and waveform data from cardiac monitors, ventilators, and personal fitness tracking devices. 
“They could be used in the future to help researchers perform millions of individualized treatment simulations to identify the best possible therapy option for each patient, based on genetic background, medical history, and a greater overall knowledge of the long-term treatment effects,” Murphy said.

Privacy required for adoption of medical digital twins

Privacy is a crucial requirement for the widespread adoption of medical digital twins, which require combining sensitive medical data to create the best models. “There is a significant amount of work to collect, harmonize, store and analyze the different forms of data coming from multiple locations while maintaining patient privacy and data integrity,” Murphy said.

Dell is focused on providing data management hardware, software, and integration services for the project. The data enclave was designed to provide the computational, artificial intelligence, machine learning, and advanced storage capabilities needed for this work. It consists of Dell EMC PowerEdge servers, PowerStore and PowerScale storage systems, and VMware Workspace ONE. Researchers are still in the early days of identifying vulnerabilities in these architectures and balancing them against performance and workflow bottlenecks.

With secure enclaves, sensitive data from various sources is encrypted in transit to a secured server, then decrypted and processed together. Of the PEC technologies, this approach offers the best performance and the most streamlined workflows, but it also requires extensive security analysis because the data is processed in the clear. Other PEC approaches, such as homomorphic encryption, can process data while it remains encrypted, but they run much slower and are more challenging to integrate.

Murphy said additional infrastructure would be required to support new locations and expand the data pool to include research centers in minority institutions and hospitals outside the U.S. “This is particularly critical for the full representation of diversity in digital twins,” he said.

Building a common language

The digital twin research started with the creation of the 4CE Consortium, an international coalition of more than 200 hospitals and research centers including data collaboratives across the U.S., France, Germany, Italy, Singapore, Spain, Brazil, India, and the United Kingdom. The 4CE Consortium brings together all the sources and types of data to create a ‘common language’ that enables comparisons between different sample populations. This makes it possible to compare medical digital twins that share similar biological markers to see what therapies work most effectively for other patients in the real world.

In theory, researchers should be able to pull in data from the EHR, which is designed to manage all the medical history, including treatment options, medical appointments, diagnostic tests, and resulting treatments and prescriptions. However, in practice, Murphy said EHRs are prone to inaccuracies and missing information. For example, in the U.S., the code for rheumatoid arthritis is used in error four out of ten times when the code for osteoarthritis should be used. “This is why we need to aggregate multiple sources and types of data which in unison will spell out the condition of the patient,” Murphy explained. The real value of the EHR comes when it is combined with real-world patient interviews and other forms of data to create medical digital twins and drive population-level insights.
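The “patients like me” comparison Murphy describes can be sketched as a simple similarity search over de-identified records. The example below is a toy illustration under assumed fields (age, gender, condition codes), not the 4CE Consortium's actual matching method.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    pseudo_id: str
    age: int
    gender: str
    conditions: frozenset

def similarity(a: Patient, b: Patient) -> float:
    """Score 0..1 from matching age band and gender plus condition overlap (Jaccard)."""
    same_band = a.age // 10 == b.age // 10
    same_gender = a.gender == b.gender
    union = a.conditions | b.conditions
    jaccard = len(a.conditions & b.conditions) / len(union) if union else 0.0
    return 0.25 * same_band + 0.25 * same_gender + 0.5 * jaccard

def patients_like_me(target: Patient, cohort: list, k: int = 3) -> list:
    """Return the k most similar medical twins in the cohort."""
    return sorted(cohort, key=lambda p: similarity(target, p), reverse=True)[:k]

me = Patient("abc123", 57, "F", frozenset({"long_covid", "hypertension"}))
cohort = [
    Patient("p1", 55, "F", frozenset({"long_covid", "hypertension", "asthma"})),
    Patient("p2", 31, "M", frozenset({"asthma"})),
    Patient("p3", 59, "F", frozenset({"long_covid"})),
]
for p in patients_like_me(me, cohort):
    print(p.pseudo_id, round(similarity(me, p), 2))
```

A production system would compute comparisons like this across millions of records and many more features, which is where the data-harmonization and privacy work described above becomes the hard part.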
The technology used to understand long-term COVID-19 symptoms can also help create high-resolution, disease-specific medical digital twins that can be used by physicians and researchers for many other applications in the healthcare system."
15,379
2,021
"Kodiak Robotics to expand autonomous trucking with $125M | VentureBeat"
"https://venturebeat.com/2021/11/10/kodiak-robotics-to-expand-autonomous-trucking-with-125m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kodiak Robotics to expand autonomous trucking with $125M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Kodiak Robotics , a startup developing self-driving truck technologies, today announced that it raised $125 million in an oversubscribed series B round for a total of $165 million to date. The tranche — which includes investments from SIP Global Partners, Lightspeed Venture Partners, Battery Ventures, CRV, Muirwoods Ventures, Harpoon Ventures, StepStone Group, Gopher Asset Management, Walleye Capital, Aliya Capital Partners, and others — will be put toward expanding Kodiak’s team, adding trucks to its fleet, and growing its autonomous service capabilities, according to CEO Don Burnette. “Our series B drives us into hyper-growth so we can double our team, our fleet, and continue to scale our business,” Burnette said in a statement. “With [it], we will further accelerate towards launching our commercial self-driving service with our partners in the coming years to help address these critical challenges.” While autonomous trucks could face challenges in commercializing at scale until clearer regulatory guidelines are established, the technology has the potential to reduce the cost of trucking from $1.65 per mile to $1.30 per mile by mid-decade, according to a Pitchbook analysis. That’s perhaps why in the first half of 2021, investors poured a record $5.6 billion into driverless trucking companies, eclipsing the $4.2 billion invested in all of 2020. The semi- and fully autonomous truck market will reach approximately $88 billion by 2027, a recent Acumen Research and Consulting estimates, growing at a compound annual growth rate of 10.1% between 2020 and 2027. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Kodiak technology Kodiak, which was cofounded by Burnette and former venture capitalist Paz Eshel, emerged from stealth in 2018. After leaving Google’s self-driving project for Otto in early 2016, Burnette briefly worked at Uber following the company’s acquisition of Otto in 2016 at a reported $680 million valuation. “I was very fortunate to be an early member of and software tech lead at the Google self-driving car project, the predecessor to Waymo. I spent five years there working on robotaxis, but ultimately came to believe that there were tons of technical challenges for such applications, and the business case wasn’t clear,” Burnette told VentureBeat via email. 
“I realized in those early days that long-haul trucking represented a more compelling use case than robotaxis. I wanted a straight-forward go-to-market opportunity, and I saw early on that autonomous trucking was the logical first application at scale.”

Kodiak's self-driving platform uses a combination of light detection and ranging sensors (lidar), cameras, and radar hardware. A custom computer processes the sensor data and plans the truck's path. Overseen by a safety driver, the brakes, steering column, and throttle are controlled by the computer to move the truck to its destination.

Kodiak's sensor suite collects raw data about the world around the truck, processing it to locate and classify objects and pedestrians. The onboard computer reconciles the data with lightweight road maps, which are shipped to Kodiak's fleet over the air and contain information about the highway, including construction zones and lane changes. Kodiak claims its technology can detect shifting lanes, speed changes, heavy machinery, road workers, construction-specific signs, and more in rain or sunshine. Moreover, the company says its truck can merge on and off highways and anticipate rush hour, holiday traffic, and construction backups, adjusting its braking and acceleration to optimize for delivery windows while maximizing fuel efficiency.

“Slower-moving vehicles, interchanges, vehicles on the shoulder, and even unexpected obstacles are common on highways. The Kodiak Driver can identify, plan, and execute a path around obstacles to safely continue towards its destination,” Kodiak says on its website. “The Kodiak Driver was built from the ground up specifically for trucks. Trucks run for hundreds of thousands of miles, in the harshest of environments, for extremely long stretches. Our focus has always been on building technology that's reliable, safe, automotive-grade, and commercial ready.”

The growing network of autonomous trucking

In the U.S. alone, the American Trucking Associations (ATA) estimates that there are more than 3.5 million truck drivers on the roads, with close to 8 million people employed across the segment. Trucks moved more than 80.4% of all U.S. freight and generated $791.7 billion in revenue in 2019, according to the ATA. But the growing driver shortage remains a strain on the industry. Estimates peg the shortfall of long-haul truck drivers at 80,000 in the U.S., a gap that's projected to widen to 160,000 within the decade.

Chasing after the lucrative opportunity, autonomous vehicle startups focused on freight delivery have racked up hundreds of millions in venture capital. In May, Plus agreed to merge with a special purpose acquisition company in a deal that — while terminated this week — would've been worth an estimated $3.3 billion. Self-driving truck maker TuSimple raised $1 billion through an initial public offering (IPO) in March. Autonomous vehicle software developer Aurora filed for an IPO last week. And Waymo, which is pursuing driverless truck technology through its Waymo Via business line, has raised billions of dollars to date at a valuation of just over $30 billion.

Other competitors in the self-driving truck space include Locomation and Pony.ai. But Kodiak points to a minority investment from Bridgestone to test and develop smart tire technology as one of its key differentiators. BMW i Ventures is another backer, along with South Korean conglomerate SK, which is exploring the possibility of deploying Kodiak's vehicle technology in Asia.
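The perceive, reconcile-with-maps, and plan loop described above can be summarized in a deliberately simplified skeleton. The sketch below is a generic illustration of that architecture, not Kodiak's software; the labels, distance thresholds, and maneuvers are invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "vehicle", "road_worker", "construction_sign"
    distance_m: float
    lane: int

def perceive(lidar_points, camera_frames, radar_tracks) -> List[Detection]:
    """Fuse raw sensor data into classified objects (stubbed with fixed detections here)."""
    return [Detection("vehicle", 42.0, lane=1), Detection("construction_sign", 120.0, lane=0)]

def reconcile_with_map(detections: List[Detection], map_tile: dict) -> dict:
    """Combine live detections with a lightweight map tile (construction zones, lane info)."""
    return {"detections": detections, "construction_ahead": map_tile.get("construction", False)}

def plan(world: dict, current_lane: int) -> str:
    """Pick a simple maneuver: slow for construction, change lanes around slow traffic."""
    if world["construction_ahead"]:
        return "reduce_speed"
    if any(d.label == "vehicle" and d.lane == current_lane and d.distance_m < 50
           for d in world["detections"]):
        return "change_lane"
    return "keep_lane"

world = reconcile_with_map(perceive(None, None, None), {"construction": True})
print(plan(world, current_lane=1))   # -> "reduce_speed"
```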
“Kodiak was founded in April 2018 and took delivery of its first truck in late 2018. We completed our first closed-course test drive just three weeks later, and began autonomously moving freight for [12] customers between Dallas and Houston in the summer of 2019,” Burnette said. “Our team is the most capital-efficient of the autonomous driving companies while also having developed industry-leading technology. We plan to achieve driverless operations at scale for less than 10% of what Waymo has publicly raised to date, and less than 25% of what TuSimple has raised to date.”

Kodiak, whose headcount now stands at 85 people, recently said that it plans to expand freight-carrying pilots to San Antonio and other cities in Texas. The company is also testing trucks in Mountain View, California. In the next few months, Kodiak plans to add 15 new trucks to its fleet, for a total of 25.

“We are at a pivotal moment in the autonomous vehicle industry. It's not a question of will autonomous trucking technology happen — it's when is it going to happen,” Burnette continued. “That being said, logistics is an $800 billion-per-year industry with a lot of room for many players to be successful.”"
15,380
2,021
"LinkedIn launches Sales Insights to provide real-time data on business opportunities | VentureBeat"
"https://venturebeat.com/2021/02/18/linkedin-launches-sales-insights-to-provide-real-time-data-on-business-opportunities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn launches Sales Insights to provide real-time data on business opportunities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. LinkedIn has launched a new data analytics platform designed to give sales teams real-time insights into potential opportunities based on data generated by more than 700 million members on the social network. Called LinkedIn Sales Insights , the product is part of LinkedIn’s sales solutions unit, which includes Sales Navigator, a tool that helps sales teams find prospects by harnessing the vast swathes of business and engagement data on LinkedIn. Just last week, LinkedIn added a new account mapping feature to Sales Navigator, enabling users to visualize all the key stakeholders in a customer account and identify the right people to build relationships with. LinkedIn Sales Insights builds on that to help sales teams better understand prospects by mapping the relationships between companies, employees, skills, jobs, and more. The Microsoft-owned company first debuted LinkedIn Sales Insights back in December , but today it’s pushing the product into general availability. Above: LinkedIn Sales Insights At its core, LinkedIn Sales Insights helps sellers segment their customers and glean an up-to-date overview of the size of specific departments or job titles, how fast they’re growing, and how big an opportunity this might represent. Moreover, it can help sales teams compare opportunities across markets, locations, and segments while also displaying how well their sellers are connected to existing or prospective customers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From all this data, businesses can generate sales reports with specific delineations, such as “engineering managers at software companies with more than 1,000 employees.” Above: LinkedIn Sales Insights: Creating a new report According to LinkedIn, the sales insights platform leans on a range of AI and data mining techniques. For example, it uses character-level language models to determine the legitimacy of a company page, such as whether the company it purports to represent is a real company. 
It can also detect and connect LinkedIn pages that are part of the same company, such as “JP Morgan Chase” and “Morgan Chase Bank.” Elsewhere, the underlying data mining smarts can scan for companies' addresses on their websites and import those into the LinkedIn Sales Insights database, as well as match a sales team's existing CRM records with those on LinkedIn to add supplementary data."
15,381
2,021
"AI holds the key to even better AI | VentureBeat"
"https://venturebeat.com/2021/01/28/ai-holds-the-key-to-even-better-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest AI holds the key to even better AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For all the talk about how artificial intelligence technology is transforming entire industries, the reality is that most businesses struggle to obtain real value from AI. 65% of organizations that have invested in AI in recent years haven’t yet seen any tangible gains from those investments, according to a 2019 survey conducted by MIT Sloan Management Review and the Boston Consulting Group. And a quarter of businesses implementing AI projects see at least 50% of those projects fail, with “lack of skilled staff” and “unrealistic expectations” among the top reasons for failure, per research from IDC. A major factor behind these struggles is the high algorithmic complexity of deep learning models. Algorithmic complexity refers to the computational complexity of building and running these models in production. Faced with prolonged development cycles, high computing costs, unsatisfying inference performance, and other challenges, developers often find themselves stuck in the development stage of AI adoption, attempting to perfect deep learning models through manual trial-and-error, and nowhere near the production stage. Alternatively, data scientists rely on facsimiles of other models, which ultimately prove to be poor fits for their unique business problems. If human-developed algorithms inevitably run up against barriers of cost, time, manpower, and business fit, how can the AI industry break those barriers? The answer lies in algorithms that are designed by algorithms – a phenomenon that has been confined to academia to date but which will open up groundbreaking applications across industries when it is commercialized in the coming years. This new approach will enable data scientists to focus on what they do best – interpreting and extracting insights from data. Automating complex processes in the AI lifecycle will also make the benefits of AI more accessible, meaning it will be easier for organizations that lack large tech budgets and development staff to tap into the technology’s true transformative power. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! More of an art than a science Because the task of creating effective deep learning models has become too much of a challenge for humans to tackle alone, organizations clearly need a more efficient approach. 
With data scientists regularly bogged down by deep learning's algorithmic complexity, development teams have struggled to design solutions and have been forced to manually tweak and optimize models – an inefficient process that often comes at the expense of a product's performance or quality. Moreover, manually designing such models dramatically prolongs a product's time-to-market.

Does that mean that the only solution is fully autonomous deep learning models that build themselves? Not necessarily. Consider automotive technology. The popular dichotomy between fully autonomous and fully manual driving is far too simplistic. Indeed, this black-and-white framing obscures a great deal of the progress that automakers have made in introducing greater levels of autonomous technology. That's why automotive industry insiders speak of different levels of autonomy – ranging from Level 1 (which includes driver assistance technology) to Level 5 (fully self-driving cars, which remain a far-off prospect). It is plausible that our cars can become much more advanced without needing to achieve full autonomy in the process.

The AI world can (and should) develop a similar mindset. AI practitioners require technologies that automate the cumbersome processes involved in designing a deep learning model. Similar to how Advanced Driver Assistance Systems (ADAS), such as automatic braking and adaptive cruise control, are paving the way towards greater autonomy in the automotive industry, the AI industry needs its own technology to do the same. And it's AI that holds the key to help us get there.

AI building better AI

Encouragingly, AI is already being leveraged to simplify other tech-related tasks, like writing and reviewing code (with tools that are themselves built with AI). The next phase of the deep learning revolution will involve similar complementary tools. Over the next five years, expect to see such capabilities slowly become available commercially to the public.

So far, research on how to develop these superior AI capabilities has remained constrained to advanced academic institutes and, unsurprisingly, the largest names in tech. Google's pioneering work on neural architecture search (NAS) is a key example. Described by Google CEO Sundar Pichai as a way for “neural nets to design neural nets,” NAS — an approach that began attracting notice in 2017 — involves algorithms searching among thousands of available models, a process that culminates in an algorithm suited to the particular problem at hand.

For now, NAS is a new technology that hasn't been widely introduced commercially. Since its inception, researchers have been able to shorten runtimes and decrease the amount of compute resources needed to run NAS algorithms. But these algorithms are still not generalizable among different problems and datasets — let alone ready for commercial use — because the architecture space must be manually tweaked for each individual use case, an approach that is far from scalable.

Most research in the field has been carried out by tech giants like Google and Facebook, as well as academic institutes like Stanford, where researchers have hailed emerging autonomous methods as a “promising avenue” for driving AI progress. But with innovative AI developers building on the work that's already been done in this field, the exclusivity of technology like NAS is set to give way to greater accessibility as the concept becomes more scalable and affordable in the coming years. The result?
AI that builds AI, thus unleashing its true potential to solve the world's most complex problems. As the world looks toward 2021, this is an area ripe for innovation – and that innovation will only beget further innovation.

Yonatan Geifman is CEO and co-founder at Deci."
15,382
2,021
"What you need to know about spotting deepfakes | VentureBeat"
"https://venturebeat.com/2021/08/12/spotting-deepfakes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What you need to know about spotting deepfakes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This post was written by Rajesh Ganesan, Vice President at ManageEngine. New technologies are frequently met with unwarranted hysteria. However, if the FBI’s recent private industry notification is any indication, AI-generated synthetic media may actually be cause for concern. The FBI believes that deepfakes will be used by bad actors to further spear phishing and social engineering campaigns. According to deepfake expert Nina Schick, AI-based synthetic media — hyper realistic images, videos, and audio files — are expected to become ubiquitous in the near future, and we should ensure we get better at spotting deepfakes. The consumerization of deepfake technologies is already upon us, with applications such as FaceApp, FaceSwap, Avatarify, and Zao rising in popularity. This content is protected under the First Amendment until it is used to further illegal efforts, which of course, we’ve already started to see. According to a UCL report published in Crime Science, deepfakes pose the most serious artificial intelligence-based crime threat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Your IP depends on spotting deepfakes We’ve already seen effective deepfake attacks on politicians, civilians, and organizations. In March 2019, cybercriminals successfully conducted a deepfake audio attack, tricking the CEO of a U.K.-based energy firm into transferring $243,000 to a Hungarian supplier. Last year, a lawyer in Philadelphia was targeted by an audio-spoofing attack , and this year, Russian pranksters duped European politicians in an attack initially thought to be deepfake video. Even if the Russians did not use deepfake technology in their attacks, the subsequent news coverage speaks to how the existence of deepfakes is sowing distrust of media content across the board. As synthetic media becomes more proliferate — and more convincing — it will become increasingly difficult for us to know which content to trust. The long-term effect of the proliferation of deepfakes could lead to a distrust of audio and video in general, which would be an inherent societal harm. Deepfakes facilitate the “liar’s dividend” As synthetic media populates the Internet, viewers may come to engage in “disbelief by default,” where we become skeptical of all media. 
This would certainly benefit dishonest politicians, corporate leaders, and spreaders of disinformation. In an environment polluted by distrust and misinformation, those in the public eye can deflect damaging information about themselves by claiming the video or audio in question is fake; Robert Chesney and Danielle Citron have described this effect as the “liar's dividend.” As a quick example, after Donald Trump learned about the existence of deepfake audio, he rescinded his previous admission and asserted that he might not have been on the 2005 Access Hollywood tape after all. Trump aside, a “disbelief by default” environment would certainly be harmful for those on both sides of the aisle.

Deepfake detection efforts are ramping up

In addition to initiatives from Big Tech, namely Microsoft's video authentication tool, Facebook's deepfake detection challenge, and Adobe's content authenticity initiative, we have seen particularly promising work out of academia. In 2019, USC scholar Hao Li and others were able to identify deepfakes via correlations between head movements and facial expressions, researchers from Stanford and UC Berkeley subsequently focused on mouth shapes, and most recently, Intel and SUNY Binghamton scholars have attempted to identify the specific generative models behind fake videos. It's quite a game of cat and mouse, as the bad actors and altruistic detectors use generative adversarial networks (GANs) in an attempt to outwit one another. This past February, UC San Diego researchers admitted that it's hard to stay ahead of the bad actors, as criminals have adapted enough to trick the deepfake detection systems.

The private sector is working on deepfake detection as well. The SemaFor project, Sensity, Truepic, AmberVideo, Estonia-based Sentinel, and Tel Aviv-based Cyabra all have initiatives in the works. Additionally, blockchain technologies could help to identify media's provenance. By creating a cryptographic hash from any given audio, video, or text source and placing it on the blockchain, one could ensure that the media in question has not been altered.

Nevertheless, given that the FBI is already seeing bad actors use AI-generated synthetic media in spear phishing and social engineering efforts, it is vital that all employees remain vigilant in their own personal deepfake detection.

Spotting deepfakes 101

According to the FBI, deepfakes can be identified by distortions around a subject's pupils and earlobes. Additionally, it is wise to look for jarring head and torso movements, as well as syncing issues between lip movements and the associated audio. Another common tell is distortion in the background, or a background that is blurry or indistinct in general. Lastly, be on the lookout for social media profiles and other images with consistent eye spacing across a large group of images.

As a caveat, the deepfake tells are constantly changing. When deepfake video first circulated, odd breathing patterns and unnatural blinking were the most common signs; however, the technology quickly improved, making these particular tells obsolete. Aside from looking for tells and relying on third-party tools to authenticate the veracity of media, there are certain basic practices that can help employees with spotting deepfakes. If an image or video appears to be dubious in nature, one can check the metadata to ensure that the creation time and creator ID make sense. One can ascertain a great deal by learning when, where, and on what device an image was created.
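The hash-based provenance idea described above can be illustrated in a few lines. The sketch below is a simplified stand-in that uses only the Python standard library and no blockchain: it fingerprints a media file with SHA-256 at registration time and shows that any later edit breaks verification.

```python
import hashlib, json, os, time

def fingerprint(path: str) -> dict:
    """Hash the file and record basic metadata at registration time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return {
        "sha256": h.hexdigest(),
        "size_bytes": os.path.getsize(path),
        "registered_at": int(time.time()),
    }

def verify(path: str, record: dict) -> bool:
    """Re-hash the file; any edit after registration changes the digest."""
    return fingerprint(path)["sha256"] == record["sha256"]

# Example usage with a stand-in file.
with open("clip.bin", "wb") as f:
    f.write(b"original video bytes")
record = fingerprint("clip.bin")          # in practice, anchor this record on a ledger
print(json.dumps(record, indent=2))
print("unaltered:", verify("clip.bin", record))   # True
with open("clip.bin", "ab") as f:
    f.write(b" tampered")
print("after edit:", verify("clip.bin", record))  # False
```

The hard problems in real provenance systems are capturing the hash at the moment of creation and distributing the trusted record, which is what efforts like the Content Authenticity Initiative aim to standardize.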
At this point in time, a healthy skepticism of media from unknown origins is warranted. It's important to train employees on media literacy tactics, including watching out for unsolicited phone calls and requests that don't sound quite right. Whether a request comes through an email or a call, employees should be sure to confirm the request through secondary channels — particularly if the request is for sensitive information. Also, employees who manage corporate social media accounts should always use two-factor authentication.

Companies that deploy a continuous learning model when it comes to security risks, such as deepfakes, should teach all employees to maintain some level of skepticism when it comes to any shared media content. If synthetic media proliferates as quickly as Nina Schick and other deepfake experts expect, it will be vital to maintain this skepticism. Also, through the use of anti-spam and anti-malware software, employees can be alerted to any unusual or anomalous activity, as the software filters and checks all emails that come through. As with any technology, though, employees should still do gut checks as an added layer of protection.

Make deepfake awareness part of your cybersecurity plan

Deepfakes pose serious risks to society, including sowing mistrust of media in general, which has its own devastating repercussions. Given the potential for stock market manipulation, risks to business and personal reputations, and the ability to disrupt elections and create geopolitical conflict, the potential negative effects of deepfakes are vast. That said, there are some potential positive effects as well. Synthetic voices can help people with amyotrophic lateral sclerosis (ALS); the Project Rejoice initiative used deepfake technology to give Pat Quinn, co-founder of the ALS Ice Bucket Challenge, his voice back. The technology can also serve educational ends, as when David Beckham delivered an anti-malaria message in nine languages as part of Malaria No More's campaign against the deadly disease.

Nevertheless, it's vital that employees make spotting deepfakes a part of their media literacy and Zero Trust mindset, as synthetic media will get more convincing and more prolific in the near future.

Rajesh Ganesan is Vice President at ManageEngine, the IT management division of Zoho Corporation. Rajesh has been with Zoho Corp. for over 20 years, developing software products in various verticals including telecommunications, network management, and IT security. He has built many successful products at ManageEngine, and currently focuses on delivering enterprise IT management solutions as SaaS."
15,383
2,021
"The business value of clustering algorithms | VentureBeat"
"https://venturebeat.com/2021/08/20/the-business-value-of-clustering-algorithms"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The business value of clustering algorithms Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A single type of machine learning algorithm can be used to identify fake news, filter spam, and personalize marketing materials. Known as clustering algorithms, or “clustering” for short, they can automatically discover natural groupings of events, people, and objects in large datasets. Operating on the theory that data points in groups should have similar features, clustering algorithms have been adopted widely across enterprises to detect fraud, recommend content to users, and more. But they come with challenges that can be difficult for businesses to overcome without the right approaches in place. For example, before a clustering algorithm can be used, data has to be in a standardized format. And the number of clusters sometimes must be decided ahead of deployment, because too many clusters could lead to process inefficiencies while too few could sacrifice accuracy. Clustering algorithms Clustering algorithms are a form of unsupervised learning algorithm. With unsupervised learning, an algorithm is subjected to “unknown” data for which no previously defined categories or labels exist. The machine learning system must teach itself to classify the data, processing the unlabeled data to learn from its inherent structure. This means that clustering algorithms can be used to automatically identify patterns and structures in data. A grocer could employ clustering to segment its loyalty card customers into different groups based on their buying behavior, for example, while an email provider could apply clustering for spam filtering by looking at the different sections of the email (e.g., the header and sender) and grouping together similar messages. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Another example of clustering algorithms in use is recommender systems, which group together users with similar viewing, browsing, or shopping patterns to recommend similar content. Clustering enables anomaly detection in manufacturing, helping to spot defective parts. And in the life sciences, clustering has been applied to analyzing evolutionary biology to surface patterns in DNA. Choosing a clustering algorithm A key step in deploying clustering is deciding which algorithm to use. 
One of the most common is k-means, which works by computing the “distances” (i.e., similarity) between data points and “group centers” (commonalities). But there's also mean-shift clustering, which attempts to find dense areas of data points; density-based spatial clustering of applications with noise (DBSCAN); and agglomerative hierarchical clustering, to name a few algorithms.

K-means has the advantage of speed, but it requires that someone select the number of groups up front, and it starts from a random initialization of the group centers. Because of this, k-means clustering can yield different results on different runs of the algorithm — which isn't ideal in mission-critical domains like finance. By contrast, mean-shift clustering doesn't need a person to select the number of groups — it automatically discovers this in-process. DBSCAN doesn't require a preset number of groups, either, and helpfully identifies outliers as noise. But both processes can be slow. As for hierarchical clustering, it's useful when the underlying data has a hierarchical structure, as it can often recover that hierarchy. However, it's less efficient than k-means clustering.

Using clustering

Despite its potential, clustering isn't appropriate for every business scenario. It's best applied when starting from a large, unstructured dataset divided into an unknown number of classes, which would be too labor-intensive to segment manually. As the engineering team at data science platform Explorium wrote in a recent blog, clustering should be deployed where and when it'll give the greatest impact and insights. In some cases, clustering might serve as a starting point rather than an end-to-end solution, shedding light on important features in a dataset that can be elucidated with deeper — and richer — analyses.

“Much like with other useful algorithms and data science models, you'll get the most out of clustering when you deploy it not as a standalone, but as part of a broader data discovery strategy,” the team wrote. “Cluster analysis can help you segment your customers, classify your data better, and generally structure your datasets, but it won't do much more if you don't give your data a broader context.”

The road to implementation can be tricky, but successful clustering projects can yield sizeable returns on investment. As McKinsey wrote in a 2020 report, it's possible for any company to get a good amount of value from AI — including clustering algorithms — if it's applied effectively in a repeatable way."
15,384
2,020
"Qualcomm's Snapdragon 888 is an AI and computer vision powerhouse | VentureBeat"
"https://venturebeat.com/2020/12/02/qualcomms-snapdragon-888-is-an-ai-and-computer-vision-powerhouse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Qualcomm’s Snapdragon 888 is an AI and computer vision powerhouse Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Although Apple’s latest A14 Bionic chip enabled the iPhone 12 family and iPad Air tablets to deliver impressive performance improvements, Qualcomm is making clear that the next generation of Android devices will rely heavily on advanced AI and computer vision processors to retake the performance lead. Teased yesterday at Qualcomm’s virtual Tech Summit, the Snapdragon 888 is getting a full reveal today, and the year-over-year gains are impressive, notably including the largest jump in AI performance in Snapdragon history. The Snapdragon 888’s debut is significant for technical decision makers because the chip will power most if not all of 2021’s flagship Android phones, which collectively represent a large share of the over 2 billion computers sold globally each year. Moreover, the 888’s increasing reliance on AI processing demonstrates how machine learning’s role is now critical in advancing all areas of computing, ranging from how devices work when they’re fully on to what they’re quietly doing when not in active use. From a high-level perspective, the Snapdragon 888 is a sequel to last year’s flagship 865 chips, leveraging 5-nanometer process technology and tighter integration with 5G and AI chips to deliver performance and power efficiency gains. The 888 will be Qualcomm’s first chip with an integrated Snapdragon X60 modem , including third-generation millimeter wave 5G with the promise of 7.5Gbps downloads and 3Gbps uploads, and notably won’t be offered without that modem or 5G functionality. It will also be the first to support Wi-Fi 6E networks and dual-antenna Bluetooth 5.2 for non-cellular wireless connectivity. One of the largest changes in the Snapdragon 888 is its shift to a unified AI architecture, including a Hexagon 780 processor with fused rather than separated AI accelerators. Promising 26 TOPS of performance — substantially higher than the A14 Bionic’s 11 TOPS and the 15 TOPS in last year’s Snapdragon 865 — the sixth-generation AI system includes 16 times more dedicated memory and twice the tensor accelerator compute capacity. Thanks to the new integrated design, it delivers up to three times the performance per watt and 1,000 times faster hand-off times than before. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. 
All of this power, Qualcomm notes, is needed to empower a series of tripling improvements in photo and video processing. Instead of asking all of a device’s cameras to share one image signal processor, the Spectra 580 computer vision processor now contains three ISPs, so you can snap three 28-megapixel still images at 30 frames per second without lag or record three 4K HDR videos at the same time, in either case with separate AI workloads for each camera. As crazy as that sounds, the premise is that new AI-powered cameras will automatically monitor three lenses and maintain optimal video zoom at all times, use data from one lens to assist with person or object removal from another, and deliver video composited from multiple HDR image sensors in real time. The latter technology, Qualcomm notes, is for the first time being brought over from the automotive and security camera markets to phones and tablets, enabling Computational HDR Video Capture: using staggered HDR sensors to record videos with “extreme dynamic range” spanning simultaneous long, medium, and short exposures, yet with low ghosting. Photographs can be captured in extremely low light — 0.1 Lux — and with 10-bit HDR support for over 1 billion colors. Spectra 580 also benefits from a 35% speed increase and 2.7-gigapixel-per-second processing, enough throughput to enable burst captures of 120 full-resolution photos per second for extreme sports and action photography. Another sign of machine learning’s criticality to image processing is found in the new chip’s 10th-generation “3A” AI system, which handles autofocus, autoexposure, and auto white balance. To train the new 3A system, Qualcomm equipped photo analysts with eye-tracking VR headsets and recorded how their eyes perceived images with different lighting and focus conditions. Now the system uses human perception-based guidance to focus and expose images, rather than merely optimizing captures by what computers might consider ideal standards. If all of those improvements weren’t significant enough, the Snapdragon 888’s smaller AI core — the Sensing Hub — is also becoming significantly more capable in its second generation. While it runs TensorFlow Micro rather than TensorFlow Lite , the Hub’s performance has increased by a factor of five, and it can now offload 80% of the tasks that were previously handled by the Hexagon AI system. It will continue to enable always-on lift detection, screen waking, and ambient audio detection to fire up AI assistants, as well as supporting detection of car crashes, earthquakes, and specific activities, and performing low-power monitoring of 5G, Wi-Fi, Bluetooth, and location data streams. Snapdragon 888 is also the first chip to natively support both Truepic and the Content Authenticity Initiative , a cross-industry collaboration to ensure the veracity and authenticity of pictures. The chip is capable of placing cryptographic seals on pictures, enabling independent verifications that images weren’t modified after being snapped. This year’s CPU and GPU improvements aren’t trivial, but compared to the AI and camera improvements, they’re more straightforward evolutions of what came before. Built with one ARM Cortex X-1 core at 2.84GHz, three Cortex-A78s with 2.4GHz speeds, and four Cortex-A55s at 1.8GHz, the 5-nanometer Kryo 680 CPU benefits from 25% better performance and 25% better power efficiency. 
On the GPU side, the Adreno 660 claims the biggest year-on-year gains in Snapdragon graphics history, with 35% faster graphics rendering and 20% improved power efficiency, notably without the use of an ARM Mali core — the Adrenos are built with Qualcomm’s own internally developed graphics IP. Additionally, the 660 offers variable rate shading, frame rates of up to 144fps, and touch responsiveness improvements in the 10% to 20% range. The Snapdragon 888 is sampling to OEMs now and should begin appearing in smartphones starting in the first quarter of 2021. "
15,385
2,021
"NICE and Google Cloud team up to improve customer interactions | VentureBeat"
"https://venturebeat.com/2021/11/24/nice-and-google-cloud-team-up-to-improve-customer-interactions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages NICE and Google Cloud team up to improve customer interactions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. NICE CXone, one of the more pleasantly named IT software providers around, is hooking up with Google Cloud, aiming to create more effective customer self-service systems that integrate with traditional contact centers. Hoboken, New Jersey-based NICE (which stands for Neptune Intelligence Computer Engineering) announced last week that it is connecting its cloud-native, AI-powered CXone customer experience platform with Google Cloud Contact Center Artificial Intelligence (CCAI), a group of APIs that engage Google AI for contact-center use cases. The combination is designed to provide businesses with more efficient ways to engage and help customers navigate digital and voice touchpoints. CX (customer experience) and UX (user experience) are all about reducing online friction so customers and potential customers don’t consider switching to another vendor. NICE CXone and Google AI endeavor to enhance CX and UX behind the scenes. What sets NICE CXone apart “We connect the dots from the end-to-end self-service experience to the more traditional agent-assisted service support and sales,” Chris Bauserman, NICE CXone’s VP of marketing, told VentureBeat. “When you think of bridging chatbots, voice bots, and other digital self-service experiences with the contact center, or a call center — either online or voice channels — what that does is provide all-in-one ease of management and visibility for organizations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bauserman said that CXone “allows us to connect the customer query to the customer experience. Most queries start with a Google search or a mobile app, and they may involve different attempts to self-serve. They may involve agents multiple times. And so by putting it all together in one orchestration layer, one set of options, and being able to move between them without the customer ever having to repeat themselves or start over, is really powerful for reducing friction and keeping CX loyalty.” Faster answers to queries NICE CXone claims to provide faster answers at the beginning of customer queries, thanks to its AI engine — sometimes even before a conversation with an agent starts, according to Bauserman. 
He also claims that customized next-best-action guidance capabilities and up-to-date FAQs allow agents to be better equipped to deliver improved service quality. The NICE platform provides no-code/low-code integration and consolidated software orchestration with Google Cloud CCAI to enable intelligent natural-language capabilities through various stages of customer interactions. CXone’s Virtual Agent Hub, included in CXone, enables businesses to use conversational bots for voice and chat, leveraging Google Cloud’s Contact Center AI. Businesses can integrate Google Cloud Dialogflow self-service bots without any coding while retaining control of the customer experience, Bauserman said. Deployed in combination with CXone Agent Assist Hub, companies can use Google Cloud Agent Assist to empower their customer service reps with real-time, automated knowledge support during live chat interactions. Google Cloud reports that contact centers using Agent Assist have seen their agents respond up to 15% faster to chats, reducing chat abandonment rates and solving more customer problems. The competitive landscape Five9 and Genesys Cloud CX stand out as NICE CXone’s top competitors, based on similarity, popularity, and user reviews, according to Getapp.com. When comparing NICE CXone to its top 100 alternatives, Salesforce Sales Cloud has the highest rating, with Freshdesk as the runner-up, and NICE CXone ranking in 12th place. How NICE implements AI Andy Traba, NICE’s director of product marketing, offered VentureBeat readers some insight into how NICE CXone uses AI in its implementation: VentureBeat: What AI and ML tools are you using specifically? Andy Traba: NICE leverages an extensive AI, ML, and data analysis toolbox, but our go-to platforms and applications include TensorFlow, PyTorch, Python, and R. VentureBeat: Are you using models and algorithms out of a box — for example, from DataRobot or other sources? Traba: Yes, we draw from the above, but most of our models and algorithms are customized and purpose-built for our customer experience use-cases by our team of data scientists and ML engineers. VentureBeat: What cloud service are you using mainly? Traba: Amazon Web Services mostly, along with some Azure options. VentureBeat: Are you using a lot of the AI workflow tools that come with that cloud? Traba: We customize our workflows using the platform’s core capabilities, but we don’t rely much on the “with a few clicks”-type offerings. VentureBeat: How much do you do yourselves? Traba: We do 100% [of] everything ourselves. VentureBeat: How are you labeling data for the ML and AI workflows? Traba: Our key differentiator for labeling data is applying the robust speech and text analytics from the NICE acquisitions of Nexidia and Mattersight, where we also continue to innovate to ensure we have the best ML training datasets. Also, outcomes and metadata from CXone applications like Omnichannel Routing or Workforce Management enrich our datasets. VentureBeat: Can you give us a ballpark estimate on how much data you are processing? Traba: NICE analyzes billions of conversations across a wide range of channels for various industries that serve as the foundation for all our AI and ML work. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
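As a concrete footnote to the Dialogflow hand-off described above, here is a minimal sketch of a detect-intent call using Google's Python client (google-cloud-dialogflow). The project ID, session ID, and sample utterance are placeholders, and CXone's Virtual Agent Hub wraps this kind of call behind its no-code configuration rather than requiring it.

# Minimal Dialogflow ES detect-intent call; "my-gcp-project" and
# "support-session-123" are placeholder values, not NICE's setup.
from google.cloud import dialogflow

def detect_intent(project_id, session_id, text, language_code="en-US"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    # The matched intent and fulfillment text drive routing: answer directly,
    # or escalate the conversation to a live agent.
    return result.intent.display_name, result.fulfillment_text

if __name__ == "__main__":
    intent, reply = detect_intent("my-gcp-project", "support-session-123", "Where is my order?")
    print(intent, reply)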
"
15,386
2,021
"What is explainable AI? Building trust in AI models | VentureBeat"
"https://venturebeat.com/2021/11/26/what-is-explainable-ai-building-trust-in-ai-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is explainable AI? Building trust in AI models Share on Facebook Share on X Share on LinkedIn Man and 2 laptop screen with program code. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As AI-powered technologies proliferate in the enterprise, the term “ explainable AI ” (XAI) has entered mainstream vernacular. XAI is a set of tools, techniques, and frameworks intended to help users and designers of AI systems understand their predictions, including how and why the systems arrived at them. A June 2020 IDC report found that business decision-makers believe explainability is a “critical requirement” in AI. To this end, explainability has been referenced as a guiding principle for AI development at DARPA, the European Commission’s High-level Expert Group on AI, and the National Institute of Standards and Technology. Startups are emerging to deliver “explainability as a service,” like Truera , and tech giants such as IBM , Google , and Microsoft have open-sourced both XAI toolkits and methods. But while XAI is almost always more desirable than black-box AI, where a system’s operations aren’t exposed, the mathematics of the algorithms can make it difficult to attain. Technical hurdles aside, companies sometimes struggle to define “explainability” for a given application. A FICO report found that 65% of employees can’t interpret how AI model decisions or predictions are made — exacerbating the challenge. What is explainable AI (XAI)? Generally speaking, there are three types of explanations in XAI: Global, local, and social influence. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Global explanations shed light on what a system is doing as a whole as opposed to the processes that lead to a prediction or decision. They often include summaries of how a system uses a feature to make a prediction and “metainformation,” like the type of data used to train the system. Local explanations provide a detailed description of how the model came up with a specific prediction. These might include information about how a model uses features to generate an output or how flaws in input data will influence the output. Social influence explanations relate to the way that “socially relevant” others — i.e., users — behave in response to a system’s predictions. 
A system using this sort of explanation may show a report on model adoption statistics, or the ranking of the system by users with similar characteristics (e.g., people above a certain age). As the coauthors of a recent Intuit and Holon Institute of Technology research paper note, global explanations are often less costly and difficult to implement in real-world systems, making them appealing in practice. Local explanations, while more granular, tend to be expensive because they have to be computed case-by-case. Presentation matters in XAI Explanations, regardless of type, can be framed in different ways. Presentation matters — the amount of information provided, as well as the wording, phrasing, and visualizations (e.g., charts and tables), could all affect what people perceive about a system. Studies have shown that the power of AI explanations lies as much in the eye of the beholder as in the minds of the designer; explanatory intent and heuristics matter as much as the intended goal. As the Brookings Institute writes : “Consider, for example, the different needs of developers and users in making an AI system explainable. A developer might use Google’s What-If Tool to review complex dashboards that provide visualizations of a model’s performance in different hypothetical situations, analyze the importance of different data features, and test different conceptions of fairness. Users, on the other hand, may prefer something more targeted. In a credit scoring system, it might be as simple as informing a user which factors, such as a late payment, led to a deduction of points. Different users and scenarios will call for different outputs.” A study accepted at the 2020 ACM on Human-Computer Interaction discovered that explanations, written a certain way, could create a false sense of security and over-trust in AI. In several related papers , researchers find that data scientists and analysts perceive a system’s accuracy differently, with analysts inaccurately viewing certain metrics as a measure of performance even when they don’t understand how the metrics were calculated. The choice in explanation type — and presentation — isn’t universal. The coauthors of the Intuit and Holon Institute of Technology layout factors to consider in making XAI design decisions, including the following: Transparency: the level of detail provided Scrutability: the extent to which users can give feedback to alter the AI system when it’s wrong Trust: the level of confidence in the system Persuasiveness: the degree to which the system itself is convincing in making users buy or try recommendations given by it Satisfaction: the level to which the system is enjoyable to use User understanding: the extent a user understands the nature of the AI service offered Model cards, data labels, and fact sheets Model cards provide information on the contents and behavior of a system. First described by AI ethicist Timnit Gebru, cards enable developers to quickly understand aspects like training data, identified biases, benchmark and testing results, and gaps in ethical considerations. Model cards vary by organization and developer, but they typically include technical details and data charts that show the breakdown of class imbalance or data skew for sensitive fields like gender. Several card-generating toolkits exist, but one of the most recent is from Google, which reports on model provenance, usage, and “ethics-informed” evaluations. 
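As a rough illustration of what such a card captures, here is a minimal, library-free sketch. The field names and values below are invented placeholders that mirror the elements described above (training data, evaluation by subgroup, intended use, ethical considerations) rather than any particular toolkit's schema.

# A model card expressed as a plain Python dict; values are illustrative
# placeholders, not real results.
import json

model_card = {
    "model_details": {"name": "credit-risk-classifier", "version": "1.2.0", "owners": ["risk-ml-team"]},
    "training_data": {"source": "internal loan applications, 2015-2020", "rows": 1_250_000},
    "evaluation": {
        "overall": {"auc": 0.87},
        # breakdown by sensitive group, the class-imbalance / skew check cards typically surface
        "by_group": {"age_under_30": {"auc": 0.84}, "age_30_plus": {"auc": 0.88}},
    },
    "intended_use": "pre-screening only; final decisions require human review",
    "ethical_considerations": ["proxy features for protected attributes were removed"],
}

print(json.dumps(model_card, indent=2))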
Data labels and factsheets Proposed by the Assembly Fellowship , data labels take inspiration from nutritional labels on food, aiming to highlight the key ingredients in a dataset such as metadata, populations, and anomalous features regarding distributions. Data labels also provide targeted information about a dataset based on its intended use case, including alerts and flags pertinent to that particular use. Along the same vein, IBM created “ factsheets ” for systems that provide information about the systems’ key characteristics. Factsheets answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. For natural language systems specifically, like OpenAI’s GPT-3 , factsheets include data statements that show how an algorithm might be generalized, how it might be deployed, and what biases it might contain. Technical approaches and toolkits There’s a growing number of methods, libraries, and tools for XAI. For example, “layerwise relevance propagation” helps to determine which features contribute most strongly to a model’s predictions. Other techniques produce saliency maps where each of the features of the input data are scored based on their contribution to the final output. For example, in an image classifier, a saliency map will rate the pixels based on the contributions they make to the machine learning model’s output. So-called glassbox systems, or simplified versions of systems, make it easier to track how different pieces of data affect a system. While they do not perform well across domains, simple glassbox systems work on types of structured data like statistics tables. They can also be used as a debugging step to uncover potential errors in more complex, black-box systems. Introduced three years ago, Facebook’s Captum uses imagery to elucidate feature importance or perform a deep dive on models to show how their components contribute to predictions. In March 2019, OpenAI and Google released the activation atlases technique for visualizing decisions made by machine learning algorithms. In a blog post, OpenAI demonstrated how activation atlases can be used to audit why a computer vision model classifies objects a certain way — for example, mistakenly associating the label “steam locomotive” with scuba divers’ air tanks. IBM’s explainable AI toolkit , which launched in August 2019, draws on a number of different ways to explain outcomes, such as an algorithm that attempts to spotlight important missing information in datasets. In addition, Red Hat recently open-sourced a package, TrustyAI , for auditing AI decision systems. TrustyAI can introspect models to describe predictions and outcomes by looking at a “feature importance” chart that orders a model’s inputs by the most important ones for the decision-making process. Transparency and XAI shortcomings A policy briefing on XAI by the Royal Society provides an example of the goals it should achieve. Among others, XAI should give users confidence that a system is an effective tool for the purpose and meet society’s expectations about how people are afforded agency in the decision-making process. But in reality, XAI often falls short , increasing the power differentials between those creating systems and those impacted by them. 
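Before looking at how XAI holds up in practice, here is a minimal sketch of the feature-attribution idea behind tools like Captum, using Captum's integrated-gradients implementation on a toy PyTorch model. The two-layer network and random input are placeholders, not a production explainability setup.

# Toy feature attribution with Captum's IntegratedGradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

inputs = torch.rand(1, 4)  # one record with 4 features
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)

# Higher absolute attribution means a larger contribution to the class-1 score,
# the same information a saliency map conveys pixel-by-pixel for images.
print(attributions)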
A 2020 survey by researchers at The Alan Turing Institute, the Partnership on AI, and others revealed that the majority of XAI deployments are used internally to support engineering efforts rather than reinforcing trust or transparency with users. Study participants said that it was difficult to provide explanations to users because of privacy risks and technological challenges and that they struggled to implement explainability because they lacked clarity about its objectives. Another 2020 study, focusing on user interface and design practitioners at IBM working on XAI, described current XAI techniques as “fail[ing] to live up to expectations” and being at odds with organizational goals like protecting proprietary data. Brookings writes: “[W]hile there are numerous different explainability methods currently in operation, they primarily map onto a small subset of the objectives outlined above. Two of the engineering objectives — ensuring efficacy and improving performance — appear to be the best represented. Other objectives, including supporting user understanding and insight about broader societal impacts, are currently neglected.” Forthcoming legislation like the European Union’s AI Act, which focuses on ethics, could prompt companies to implement XAI more comprehensively. So, too, could shifting public opinion on AI transparency. In a 2021 report by CognitiveScale, 34% of C-level decision-makers said that the most important AI capability is “explainable and trusted.” And 87% of executives told Juniper in a recent survey that they believe organizations have a responsibility to adopt policies that minimize the negative impacts of AI. Beyond ethics, there’s a business motivation to invest in XAI technologies. A study by Capgemini found that customers will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and punish those that don’t. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
15,387
2,020
"Amazon launches new AI services for DevOps and business intelligence applications | VentureBeat"
"https://venturebeat.com/2020/12/01/amazon-launches-new-ai-services-for-devops-and-business-intelligence-applications"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches new AI services for DevOps and business intelligence applications Share on Facebook Share on X Share on LinkedIn AWS CEO Andy Jassy speaks at the company's re:Invent customer conference in Las Vegas on November 29, 2017. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Amazon today launched SageMaker Data Wrangler, a new AWS service designed to speed up data prep for machine learning and AI applications. Alongside it, the company took the wraps off of SageMaker Feature Store, a purpose-built product for naming, organizing, finding, and sharing features, or the individual independent variables that act as inputs in a machine learning system. Beyond this, Amazon unveiled SageMaker Pipelines, which CEO Andy Jassy described as a CI/CD service for AI. And the company detailed DevOps Guru and QuickSight Q, offerings that uses machine learning to identify operational issues, provide business intelligence, and find answers to questions in knowledge stores, as well as new products on the contact center and industrial sides of Amazon’s business. During a keynote at Amazon’s re:Invent conference, Jassy said that Data Wrangler has over 300 built-in conversion transformation types. The service recommends transformations based on data in a target dataset and applies these transformations to features, providing a preview of the transformations in real time. Data Wrangler also checks to ensure that the data is “valid and balanced.” As for SageMaker Feature Store, Jassy said that the service, which is accessible from SageMaker Studio, acts as a storage component for features and can access features in either batches or subsets. SageMaker Pipelines, meanwhile, allows users to define, share, and reuse each step of an end-to-end machine learning workflow with preconfigured customizable workflow templates while logging each step in SageMaker Experiments. DevOps Guru is a different beast altogether. Amazon says that when it’s deployed in a cloud environment, it can identify missing or misconfigured alarms to warn of approaching resource limits and code and config changes that might cause outages. In addition, DevOps Guru spotlights things like under-provisioned compute capacity, database I/O overutilization, and memory leaks while recommending remediating actions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Amazon QuickSight, which was already generally available, aims to provide scalable, embeddable business intelligence solutions tailored for the cloud. To that end, Amazon says it can scale to tens of thousands of users without any infrastructure management or capacity planning. QuickSight can be embedded into applications with dashboards and is available with pay-per-session pricing, automatically generating summaries of dashboards in plain language. A new complementary service called QuickSight Q answers questions in natural language, drawing on available resources and using natural language processing to understand domain-specific business language and generate responses that reflect industry jargon. Amazon didn’t miss the opportunity this morning to roll out updates across Amazon Connect, its omnichannel cloud contact center offering. New as of today is Real-Time Contact Lens, which identifies issues in real time so they can be addressed during calls. Amazon Connect Voice ID, which also works in real time, performs authentication using machine learning-powered voice analysis “without disrupting natural conversation.” And Connect Tasks ostensibly makes follow-up tasks easier for agents by enabling managers to automate some tasks entirely. Amazon also launched Amazon Monitron, an end-to-end equipment monitoring system to enable predictive maintenance with sensors, a gateway, an AWS cloud instance, and a mobile app. An adjacent service — Amazon Lookout for Equipment — sends sensor data to AWS to build a machine learning model, pulling data from machine operations systems such as OSIsoft to learn normal patterns and using real-time data to identify early warning signs that could lead to machine failures. For industrial companies looking for a more holistic, computer vision-centric analytics solution, there’s the AWS Panorama Appliance, a new plug-in appliance from Amazon that connects to a network and identifies video streams from existing cameras. The Panorama Appliance ships with computer vision models for manufacturing, retail, construction, and other industries, supporting models built in SageMaker and integrating with AWS IoT services including SiteWise to send data for broader analysis. Shipping alongside the Panorama Appliance is the AWS Panorama SDK, which enables hardware vendors to build new cameras that run computer vision at the edge. It works with chips designed for computer vision and deep learning from Nvidia and Ambarella, and Amazon says that Panorama-compatible cameras will work out of the box with AWS machine learning services. Customers can build and train models in SageMaker and deploy to cameras with a single click. The slew of announcements comes after Amazon debuted AWS Trainium , a chip custom-designed to deliver what the company describes as cost-effective machine learning model training in the cloud. Amazon claims that when Trainium becomes available in the second half of 2021, it will offer the most teraflops of any machine learning instance in the cloud, where a teraflop translates to a chip being able to process 1 trillion calculations a second. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,388
2,021
"Amazon debuts IoT TwinMaker and FleetWise | VentureBeat"
"https://venturebeat.com/2021/11/30/amazon-debuts-iot-twinmaker-and-fleetwise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon debuts IoT TwinMaker and FleetWise Share on Facebook Share on X Share on LinkedIn AWS CTO Werner Vogels onstage November 29, 2018 at re:Invent. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today announced the new Amazon Web Services (AWS) IoT TwinMaker, a service designed to make it easier for developers to create digital twins of real-time systems like buildings, factories, industrial equipment, and product lines. Alongside this, the company debuted AWS IoT FleetWise, an offering that makes it ostensibly easier and more cost-effective for automakers to collect, transform, and transfer vehicle data in the cloud in near-real-time. “Digital twin” approaches to simulation have gained currency in other domains. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations for projects they’re working on. GE offers technology that allows companies to model digital twins of actual machines and closely track performance. And Microsoft provides Azure Digital Twins and Project Bonsai, which model the relationships and interactions between people, places, and devices in simulated environments. With IoT TwinMaker, Amazon says that customers can leverage prebuilt connectors to data sources like equipment, sensors, video feeds, and business applications to automatically build knowledge graphs and 3D visualizations. IoT TwinMaker supplies dashboards to help visualize operational states and updates in real time, mapping out the relationships between data sources. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To help developers create web-based apps for end-users, the IoT TwinMaker comes with a plugin for Amazon Managed Grafana, Amazon’s fully managed service for the visualization platform from Grafana Labs. Grafana’s apps can enable users to observe and interact with digital twins created using IoT TwinMaker. IoT FleetWise As for IoT FleetWise, it enables AWS customers to collect and standardize data across fleets of upwards of millions of vehicles. IoT FleetWise can apply intelligent filtering to extract only what’s needed from connected vehicles to reduce the volume of data being transferred. Moreover, it features tools that allow automakers to perform remote diagnostics, analyze fleet health, prevent safety issues, and improve autonomous driving systems. 
As Amazon explains in a press release: “Automakers start in the AWS management console by defining and modeling vehicle attributes (e.g., a two-door coupe) and the sensors associated with the car’s model, trim, and options (e.g., engine temperature, front-impact warning, etc.) for individual vehicle types or multiple vehicle types across their entire fleet. After vehicle modeling, automakers install the IoT FleetWise application on the vehicle gateway (an in-vehicle communications hub that monitors and collects data), so it can read, decode, and transmit information to and from AWS.” “The cloud is fundamentally changing [the automobile] industry, including how vehicles are designed and manufactured, the features they offer, how we drive,” AWS CEO Adam Selipsky said onstage at Amazon’s re:Invent 2021 conference. “[Automakers] are designing vehicles that are fused with software connected by sensors, and systems generating on [enormous] amounts of data.” Vehicle telematics — a method of monitoring and harvesting data from any moving asset, including cars and trucks — could be a boon for automakers in the coming years, not to mention service providers like Amazon. Monetizing onboard services could create $1.5 trillion, or 30% more, in additional revenue potential by 2030, according to McKinsey. One analysis found that even during the height of the pandemic, the demand for fleet management and telematics software has continued to grow at a rate of 10.6% and 9.9%, respectively. As Sudip Saha noted in Automotive World, the current health crisis has proven to be an opportunity to showcase the benefits of effective fleet management systems — especially in the context of the ecommerce boom. Businesses that delivered better when contactless and remote tracking of consignments was the need of the hour have largely fared better than their competitors. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
15,389
2,020
"Why cloud vendors are investing in new sources of compute power | VentureBeat"
"https://venturebeat.com/2020/12/16/why-cloud-vendors-are-investing-in-new-sources-of-compute-power"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why cloud vendors are investing in new sources of compute power Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2014, data was declared the “oil of the digital economy,” and the analogy remained accurate until recently. In 2020, however, data reflected oil only in the parallels to the 2020 oil glut — too much production, not enough consumption, and the wholesale commoditization of storage. Today, the overriding demand is for data’s refined end product — business insight. And the most crucial link in the data insights supply chain is compute power. This makes the infrastructure of CPU cycles that enables distillation of value from mountains of data the new oil of the digital economy. And it’s driving some dramatic changes in the computing hardware ecosystem. Here’s what I mean: Processing power doesn’t just belong to Intel anymore Cloud vendors like AWS came to understand that the core differentiation of their offerings had little to do with data itself and everything to do with what customers can get from their data. Yet deriving value from massive datasets spread across multiple cloud storage instances, and leveraging advanced AI and ML-powered graph analytics and other analytics, takes a lot of processing juice. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The exponential growth in demand for processing capacity (and the costs associated with it) was what initially drove organizations to move to the cloud. Yet once the move to the cloud was a fait accompli, cloud vendors could take a long, hard look at their own processing capabilities. What they saw was that processing was the hands-down biggest variable cost in the cloud environment. And they realized that buy versus build priorities had flipped. Just as Amazon had verticalized deliveries — lowering costs and competing with UPS and FedEx — cloud vendors could verticalize chipmaking, or outsource to competitors other than Intel and AMD. So they did. AWS dipped its toes in the silicon waters in 2018 , when it began offering services over its first gen Graviton chips, which were designed with technology licensed from Arm (which NVIDIA is in the process of acquiring). This year, AWS dove headfirst into the chip pool, launching services based on Graviton2 – which are touted as massively faster and cheaper than its Intel-based offerings. AWS also announced a new ARM-based supercomputing service two weeks ago. 
In 2017 , Microsoft announced it was committing to use chips based on Arm-based technology for cloud purposes. It was among the first to test the Altra processor from Arm server CPU start-up Ampere in March , actively evaluating the chip’s capacities in their labs to help bolster Microsoft’s hyperscale data centers. Two years ago, Google launched its Tensor Processing Unit (TPU) 3.0 , a custom application specific processor to accelerate machine learning and model training. Meanwhile, Apple announced in June that it would gradually transition away from Intel-based chipsets in its personal computers, and more recently stated it was going to produce its own cellular modem chips too. What comes next? What we’re seeing is the decoupling of processing power from its traditional members-only club. Like oil, compute power is moving the direction of storage and other commodity services. And just like airlines care deeply about oil prices, inasmuch as oil’s derivatives are a pillar of their service offering, enterprises will look at computing power as a means to an end. Cloud vendors will relentlessly pursue ever-cheaper processing power. The entire compute layer will be commoditized, and we’ll see apps routinely running across tens of thousands of CPUs in parallel. Companies that embrace multicloud will be able to split processing intensive tasks between providers, based on highly-competitive and micro-segmented incremental pricing. Computing power will become a commodity in the full and traditional sense of the word, too. It will be traded on markets like any metal, energy, livestock, or agricultural commodity. Traders will be able to arbitrage processing cycles and hedge with processing futures. This shift will force cloud vendors to rethink themselves. Differentiation will be based on computing cycle availability and the quality of the algorithms used for AI/ML analysis. What does all this mean for Intel and AMD? Unless they make some radical changes, I think the expression “ old soldiers never die, they just fade away ” may be apt. Consider high street retail, whose demise began with the advent of widespread e-retail and accelerated during the pandemic. With the shift to cloud computing, the demand for CPU power on the desktop and in the data center will continue to shrink. And if cloud vendors make their own processing power, we could see traditional chipmakers go the way of Sears. The bottom line The burgeoning demand for insights from the petabytes of data that continues to flood into enterprise cloud storage is completely reshaping the computing ecosystem. As cloud vendors step into new verticals to take control of their computing supply chain, the old order of processors stands before a time of dramatic and fundamental change. David Richards is co-founder and CEO of WANdisco. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,390
2,021
"AI Weekly: AI adoption is driving cloud growth | VentureBeat"
"https://venturebeat.com/2021/07/30/ai-weekly-ai-adoption-is-driving-cloud-growth"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: AI adoption is driving cloud growth Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The adoption of cloud technologies continues to accelerate. According to the newest report from Canalys, in Q2 2021, companies spent $5 billion more on cloud infrastructure services compared to the previous quarter. While a number of factors are responsible, including an increased focus on business resiliency planning, the uptick illustrates the effect AI’s embracement has had — and continues to have — on enterprise IT budgets. In a recent survey , 80% of U.S. enterprises said they accelerated their AI adoption over the past two years. A majority consider AI to be important in their digital transformation efforts and intend to set aside between $500,000 to $5 million per year for deployment efforts. Organizations were projected to invest more than $50 billion in AI systems globally in 2020, according to IDC, up from $37.5 billion in 2019. And by 2024, investment is expected to reach $110 billion. The cloud is playing a role in this due to its potential to improve AI training and inferencing performance, lowering costs and in some cases providing enhanced protection against attacks. Most companies lack the infrastructure and expertise to implement AI applications themselves. As TierPoint highlights , outside of corporate datacenters, only public cloud infrastructure can support massive data storage as well as the scalable computing capability needed to crunch large amounts of data and AI algorithms. Even companies that have private datacenters often opt to avoid ramping up the hardware, networking, and data storage required to host big data and AI applications. According to Accenture global lead of applied intelligence Sanjeev Vohra, who spoke during VentureBeat’s Transform 2021 conference, the cloud and data have come together to give companies a higher level of compute, power, and flexibility. Cloud vendor boost Meanwhile, cloud vendors are further stoking the demand for AI by offering a number of tools and services that make it easier to develop, test, enhance, and operate AI systems without big upfront investments. These include hardware optimized for machine learning, APIs that automate speech recognition and text analysis, productivity-boosting automated machine learning modeling systems, and AI development workflow platforms. 
In a 2019 whitepaper, Deloitte analysts gave the example of Walgreens, which sought to use Microsoft’s Azure AI platform to develop new health care delivery models. One of the world’s largest shipbuilders is using Amazon Web Services to develop and manage autonomous cargo vessels, the analysts also noted. And the American Cancer Society uses Google’s machine learning cloud services for automated tissue image analysis. “The symbiosis between cloud and AI is accelerating the adoption of both,” the analysts wrote. “Indeed, Gartner predicts that through 2023, AI will be one of the top workloads that drive IT infrastructure decisions. Technology market research firm Tractica forecasts that AI will account for as much as 50% of total public cloud services revenue by 2025: AI adoption means that, ‘essentially, another public cloud services market will be added on top of the current market.'” With the global public cloud computing market set to exceed $362 billion in 2022 and the average cloud budget reaching $2.2 million today, it appears clear that investments in the cloud aren’t about to slow down anytime soon. As long as AI’s trajectory remains bright — and it should — the cloud industry will have an enormous boom from which to benefit. For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
15,391
2,021
"Nvidia's latest AI tech translates text into landscape images | VentureBeat"
"https://venturebeat.com/2021/11/22/nvidias-latest-ai-tech-translates-text-into-landscape-images"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia’s latest AI tech translates text into landscape images Share on Facebook Share on X Share on LinkedIn Nvidia logo is seen on an android mobile phone. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don’t exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings. “Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher-quality of images,” Isha Salian, a member of Nvidia’s corporate communications team, wrote in a blog post. “Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.” Generated images from text GauGAN2, whose namesake is post-Impressionist painter Paul Gauguin, improves upon Nvidia’s GauGAN system from 2019, which was trained on more than a million public Flickr images. Like GauGAN, GauGAN2 has an understanding of the relationships among objects like snow, trees, water, flowers, bushes, hills, and mountains, such as the fact that the type of precipitation changes depending on the season. GauGAN and GauGAN2 are a type of system known as a generative adversarial network (GAN), which consists of a generator and discriminator. The generator takes samples — e.g., images paired with text — and predicts which data (words) correspond to other data (elements of a landscape picture). The generator is trained by trying to fool the discriminator, which assesses whether the predictions seem realistic. While the GAN’s transitions are initially poor in quality, they improve with the feedback of the discriminator. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unlike GauGAN, GauGAN2 — which was trained on 10 million images — can translate natural language descriptions into landscape images. 
Typing a phrase like “sunset at a beach” generates the scene, while adding adjectives like “sunset at a rocky beach” or swapping “sunset” for “afternoon” or “rainy day” instantly modifies the picture. With GauGAN2, users can generate a segmentation map — a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like “sky,” “tree,” “rock,” and “river” and allowing the tool’s paintbrush to incorporate the doodles into images. AI-driven brainstorming GauGAN2 isn’t unlike OpenAI’s DALL-E, which can similarly generate images to match a text prompt. Systems like GauGAN2 and DALL-E are essentially visual idea generators, with potential applications in film, software, video games, product, fashion, and interior design. Nvidia claims that the first version of GauGAN has already been used to create concept art for films and video games. As with the original, Nvidia plans to make the code for GauGAN2 available on GitHub alongside an interactive demo on Playground, the web hub for Nvidia’s AI and deep learning research. One shortcoming of generative models like GauGAN2 is the potential for bias. In the case of DALL-E, OpenAI used a special model — CLIP — to improve image quality by surfacing the top samples among the hundreds per prompt generated by DALL-E. But a study found that CLIP misclassified photos of Black individuals at a higher rate and associated women with stereotypical occupations like “nanny” and “housekeeper.” In its press materials, Nvidia declined to say how — or whether — it audited GauGAN2 for bias. “The model has over 100 million parameters and took under a month to train, with training images from a proprietary dataset of landscape images. This particular model is solely focused on landscapes, and we audited to ensure no people were in the training images … GauGAN2 is just a research demo,” an Nvidia spokesperson explained via email. GauGAN is one of the newest reality-bending AI tools from Nvidia, creator of deepfake tech like StyleGAN, which can generate lifelike images of people who never existed. In September 2018, researchers at the company described in an academic paper a system that can craft synthetic scans of brain cancer. That same year, Nvidia detailed a generative model that’s capable of creating virtual environments using real-world videos. GauGAN’s initial debut preceded GAN Paint Studio, a publicly available AI tool that lets users upload any photograph and edit the appearance of depicted buildings, flora, and fixtures. Elsewhere, generative machine learning models have been used to produce realistic videos by watching YouTube clips, creating images and storyboards from natural language captions, and animating and syncing facial movements with audio clips containing human speech. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
15392
2021
"Zendesk acquires customer service automation startup Cleverly.ai | VentureBeat"
"https://venturebeat.com/2021/08/26/zendesk-acquires-customer-service-automation-startup-cleverly-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zendesk acquires customer service automation startup Cleverly.ai Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Zendesk today announced it has acquired Cleverly.ai, a Lisbon, Portugal-based platform that finds answers to customer’s questions by creating a knowledge layer on top of apps. Zendesk says it will integrate Cleverly’s technology across its existing products, enabling teams to automate more processes while keeping up with customer demand. With conversation volume increasing by more than 20% year-over-year , customer support teams are struggling to keep up. As a result, businesses are increasingly turning to AI to provide faster and more reliable service. A 2020 IDC survey found that automated customer service agents are a top priority for companies with over 5,000 employees. Indeed, technologies like chatbots are expected to save customers and enterprises over 2.5 billion hours by 2023. Cleverly, whose customers include Vodafone, Dashlane, and Decathlon, taps machine learning to classify, prioritize, and route customer support tickets based on intents. The platform can classify content in over a dozen languages, integrating with help desk, FAQ, and customer relationship management software to identify knowledge gaps and automate replies for common customer queries — on either the agent or self-service side. “Cleverly and Zendesk share a vision of democratizing AI, as well as a passion for creating practical applications that make it possible for businesses to get started with AI right out of the box — without a team of data scientists required,” Zendesk EVP Shawna Wolverton wrote in a blog post. “With Cleverly, we will deliver a range of capabilities that automate key insights, further reduce manual tasks and improve workflows, and overall lead to happier, more productive support teams.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Zendesk’s purchase of Cleverly comes after the company snatched up ecommerce customer service startup Smooch in 2019 for an undisclosed amount. Prior to that, Zendesk bought San Francisco-based Base, which develops software for analyzing large volumes of sales data. New features Alongside the acquisition, Zendesk added new features to its software-as-a-service ticketing system, including workflow automation on social media customer service channels. 
Now Zendesk can report on the performance of a brand’s automation strategy, like the number of conversations engaged with bots and interactions escalated to an agent. It also suggests macros that leverage machine learning to recommend a response based on ticket context, employing prebuilt integrations with Zoom, Microsoft Teams, and Monday.com to keep teams connected. “While Zendesk has invested in AI to help our customers achieve better, faster, and more reliable customer service, we believe there is still so much untapped potential. Today, our AI-enabled capabilities help businesses automate the conversations they have with customers, boost agent productivity, and increase operational efficiency,” Wolverton added. Zendesk, which was founded in Copenhagen, Denmark in 2007, went public in 2014 after raising about $86 million in venture capital investments. In recent years, it has leaned heavily into automation, introducing a chatbot that has conversations with customers and attempts to help them find useful information, with algorithms that better predict answers. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15393
2021
"AI-powered contract management platform Malbek lands $15.3M | VentureBeat"
"https://venturebeat.com/2021/09/28/ai-powered-contract-management-platform-malbek-lands-15-3m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-powered contract management platform Malbek lands $15.3M Share on Facebook Share on X Share on LinkedIn (GERMANY OUT) Behinderte, Mann im Rollstuhl arbeitet am Computer in einem Buero, handicapped person in a wheel chair working in the office (Photo by Wodicka/ullstein bild via Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Contract lifecycle management startup Malbek today announced it has raised $15.3 million in a series A funding round led by Noro-Moseley Partners, with participation from TDF Ventures and Osage Venture Partners. The funding, which brings the company’s total raised to over $20 million, will be used to support product development and expansion, according to CEO Hemanth Puttaswamy. The market for contract management systems — which was worth $1.5 billion in 2019 — continues to grow as companies realize their value. Goldman Sachs estimates companies that don’t adopt these systems risk spending almost 5% of their revenue tracking agreements after signing a contract. Indeed, according to PricewaterhouseCoopers, enterprises stand to save 2% of their annual costs by implementing automated contract management systems to improve accuracy and compliance. Somerset, New Jersey-based Malbek was founded in 2017 by Brian Madocks, Puttaswamy, Madhusudan Poolu, and Matt Patel. The company employs AI to automate contract workflows across sales, finance, procurement, and other business units. The platform provides connectors for software from Salesforce, Workday, Slack, Microsoft, and others, allowing contract data to flow between first- and third-party systems. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “This investment unlocks our next stage as a company and enables us to empower more business users to get deeper, more actionable contract data insights that will ultimately save organizations valuable time, reduce risk, and accelerate topline revenue,” Puttaswamy said in a press release. “Malbek is the proven, next-generation contract lifecycle management made for everyone. 
Our modern solution is trusted by Fortune 500 customers and other large enterprise teams, as well as many small to mid-sized high-growth organizations, to unite [different] teams to take the hassle out of the entire contract process, from pre- to post-signature and every step in between.” Managing contracts Malbek offers a number of products to help businesses avoid contract management pitfalls, which can erode an average of 9.2% of revenue annually, Puttaswamy says. For example, Malbek’s Konnect Integration Marketplace allows customers to integrate Malbek with business apps using no-code, drag-and-drop software integrations. As for Lifecycle AI, it provides recommendations for contract authoring, review, negotiation, approval, and milestone management. “At Momentive, we use Malbek’s contract lifecycle management processes to increase efficiencies … [and free] up resources to focus on more strategic business initiatives,” Momentive (formerly SurveyMonkey) legal operations head Ewa Hugh said in a statement. While Malbek competes with startups like Lexicon , LinkSquares , Evisort , Contractbook , and Concord , it has managed to increase sales nearly 500% year-over-year with brands including Tibco Software, EDF Renewables, Pantheon, and Rothman Orthopaedic Institute. “Malbek leads the way in the customer lifecycle management space, as evidenced by the company’s sales win rate versus alternative solutions,” Noro-Moseley Partners’ John Ale said in press release. “Malbek modernizes real-world contract management at scale while providing more insights into contract data that could be easily missed during review.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15394
2019
"Nasty online rhetoric hurts brands and business, not just our sense of niceness (VB Live) | VentureBeat"
"https://venturebeat.com/2019/05/08/nasty-online-rhetoric-hurts-brands-and-business-not-just-our-sense-of-niceness-vb-live"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Live Nasty online rhetoric hurts brands and business, not just our sense of niceness (VB Live) Share on Facebook Share on X Share on LinkedIn Presented by Two Hat Security Efficient moderation and positive reinforcement boosts online community retention and growth. Catch up on this talk featuring analyst and author Brian Solis, along with Two Hat Security CEO Chris Priebe, about the changing landscape of online conversations, and how artificial intelligence paired with human interaction is solving current content moderation challenges. Access on demand for free. “I want to quote the great philosopher Ice-T, who said recently, social media has made too many of us comfortable with disrespecting people and not getting punched in the mouth for it,” says Brian Solis, principal digital analyst at Altimeter and the author of Life Scale. “Somehow this behavior has just become the new normal.” It seems like hate speech, abuse, and extremism is the cost of being online today, but it came out swinging back at the dawn of the internet, says Chris Priebe, CEO and founder at Two Hat Security. Anyone can add content to the internet, and what that was supposed to offer the world was cool things like Wikipedia — everyone contributing their thoughts in this great knowledge share that makes us strong. But that’s not what we got. “Instead we ended up learning, don’t read the comments,” Priebe says. “The dream of what we could do didn’t become reality. We just came to accept in the 90s that this is the cost of being online. It’s something that happens as a side effect of the benefits of the internet.” And from the beginning, it’s been building on itself, Solis says, as social media and other online communities have given more people more places to interact online, and more people emboldened to say and do things they would never do in the real world. “It’s also being subsidized by some of the most popular brands and advertisers out there, without necessarily realizing that this is what they’re subsidizing,” he adds. “We’re creating this online society, these online norms and behaviors, that are being reinforced in the worst possible way without any kind of consequences or regulation or management. 
I think it’s just gone on way too long, without having this conversation.” Common sense used to tell us to be the best person online that you are in the real world, he continues, but something happened along the way where this just became the new normal, where people don’t even care about the consequence of losing friendships and family members, or destroying relationships, because they feel that the need to express whatever’s on their mind, whatever they feel, is more important than anything else. “That’s the effect of having platforms with zero guidelines or consequences or policies that reinforce positive behavior and punish negative behavior,” Solis says. “We wanted that freedom of speech. We wanted that ability to say and do anything. These platforms needed us to talk and interact with one another, because that’s how they monetize those platforms. But at the end of the day, this conversation is important.” “We reward people for the most outrageous content,” Priebe agrees. “You want to get more views, more likes, those kinds of things. If you can write the most incredible insult to someone, and really burn them, that kind of thing can get more eyeballs. Unfortunately, the products are designed in a way where if they get more eyeballs, they get more advertising dollars.” Moderation isn’t about whitewashing the internet — it’s about allowing real, meaningful conversations to actually happen without constant derailment. “We don’t actually have free speech on the internet right now,” says Priebe. “The people who are destroying it are all these toxic trolls. They’re not allowing us to share our true thoughts. We’re not getting the engagement that we really need from the internet.” Two Hat studies have found that people who have a positive social experience are three times more likely to come back on day two, and then three times more likely to come back on day seven. People stay longer if they find community and a sense of belonging. Other studies have shown that if users run into a bunch of toxic and hateful content, they’re 320 percent more likely to leave, as well. “We have to stop trading short-term wins,” Priebe adds. “When someone adds content, just because a whole bunch of people engage with it because it’s hateful and creates a bunch of ‘I can’t believe this is happening’ responses, that’s not actually good eyeballs or good advertising spend. We have to find the content that causes people to engage deeper.” “The communities themselves have to be accountable for the type of interaction and the content that is shared on those networks, to bring out the best in society,” Solis says.” “It has to come down to the platforms to say, what kind of community do we want to have? And advertisers to say, what kind of communities do we want to support? That’s a good place to start, at least.” There are three lines of defense for online communities: applying a filter, backed by known libraries of specifically damaging content keywords. The second line of defense helps the filter narrow down on abusive language, by using the reputation of your users — by making the filter more restrictive for known harassers. The third line of defense is asking users to report content, which is actually becoming required across multiple jurisdictions, and community owners are being required to deal with those reports. “The way I would tackle it or add to it would be on the human side of it,” Solis adds. “We have to reward the type of behaviors that we want, the type of engagement that we want. 
The value to users has to take incredible priority, but also to the right users. What kind of users do you want? You can’t just go after the market for everyone anymore. I don’t think that’s good enough. Also, bringing quality engagement and understanding that the numbers might be lower, but they’re more valuable to advertisers, so that advertisers want to reinforce that type of engagement. It really starts with having an introspective conversation about the community itself, and then taking the steps to reinforce that behavior.” To learn more about the role that AI and machine learning is playing in accurate, effective content moderating, the challenges platforms from Facebook to YouTube to LinkedIn are having on- and offline, and the ROI of safe communities, catch up now on this VB Live event. Don’t miss out! Access this free event on demand now. You’ll learn: How to start a dialogue in your organization around protecting your audience without imposing on free speech The business benefits of joining the growing movement to “raise the bar” Practical tips and content moderation strategies from industry veterans Why Two Hat’s blend of AI+HI (artificial intelligence + human interaction) is the first step towards solving today’s content moderation challenges Speakers: Brian Solis, Principal Digital Analyst at Altimeter, author of “Lifescale” Chris Priebe, CEO & founder of Two Hat Security Stewart Rogers, VentureBeat The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15395
2020
"AI still struggles to recognize hateful memes, but it's slowly improving | VentureBeat"
"https://venturebeat.com/2020/12/01/ai-still-struggles-to-recognize-hateful-memes-but-its-slowly-improving"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI still struggles to recognize hateful memes, but it’s slowly improving Share on Facebook Share on X Share on LinkedIn A woman looks at the Facebook logo on an iPad in this photo illustration. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Facebook in May launched the Hateful Memes Challenge , a $100,000 competition aimed at spurring researchers to develop systems that can identify memes intended to hurt people. The first phase of the one-year contest recently crossed the halfway mark with over 3,000 entries from hundreds of teams around the world. But while progress has been encouraging, the leaderboard shows even the top-performing systems struggle to outdo humans when it comes to identifying hateful memes. Detecting hateful memes is a multimodal problem requiring a holistic understanding of photos, words in photos, and the context around the two. Unlike most machine learning systems, humans intrinsically understand the combined meaning of captions and pictures in memes. For example, given text and an image that seem innocuous when considered apart (e.g., “Look how many people love you” and a picture of a barren desert), people recognize that these elements can take on potentially hurtful meanings when they’re paired or juxtaposed. Using a labeled dataset of 10,000 images Facebook provided for the competition, a group of humans trained to recognize hate speech managed to accurately identify hateful memes 84.70% of the time. As of this week, the top three algorithms on the public leaderboard attained accuracies of 83.4%, 85.6%, and 85.8%. While those numbers best the 64.7% accuracy the baseline Visual BERT COCO model achieved in May, they’re only marginally better than human performance on the absolute highest end. Given 1 million memes, the AI system with 85.8% accuracy would misclassify 142,000 of them. If it were deployed on Facebook, untold numbers of users could be exposed to hateful memes. The challenges of multimodal learning Why does classifying hateful memes continue to pose a challenge for AI systems? Perhaps because even human experts sometimes wrestle with the task. The annotators who attained 84.70% accuracy on the Hateful Memes benchmark weren’t inexperienced; they received four hours of training in recognizing hate speech and completed three pilot runs in which they were tasked with categorizing memes and given feedback to improve their performance. 
Despite the prep, each annotator took an average of 27 minutes to figure out whether a meme was “hateful.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Understanding why the classification problem is more acute within the realm of AI requires knowledge of how multimodal systems work. In any given multimodal system, computer vision and natural language processing models are typically trained on a dataset together to learn a combined embedding space, or a space occupied by variables representing specific features of the images and text. To build a classifier that can detect hateful memes, researchers need to model the correlation between images and text, which helps the system find an alignment between the two modalities. This alignment informs the system’s predictions about whether a meme is hateful. Above: Sample memes from Facebook’s hateful memes dataset. Some multimodal systems leverage a “two-stream” architecture that processes visual and language information before fusing them together. Others adopt a “single-stream” architecture that directly combines the two modalities in an earlier stage, passing images and text independently through encoders to extract features that can be fused to perform classification. Regardless of architecture, state-of-the-art systems employ a method called “attention” to model the relationships between image regions and words according to their semantic meaning, increasingly concentrating on only the most relevant regions in the various images. Many of the Hateful Memes Challenge contestants have yet to detail their work, but in a new paper , IBM and University of Maryland scientists explain how they incorporated an image captioning workflow into the meme detection process to nab 13th place on the leaderboard. Consisting of three components — an object detector, image captioner, and “triplet-relation network” — the system learns to distinguish hateful memes through image captioning and multimodal features. An image captioning model trains on pairs of images and corresponding captions from a dataset, while a separate module predicts whether memes are hateful by drawing on image features, image caption features, and features from image text processed by an optical character recognition model. The researchers believe their triplet-relation network could be extended to other frameworks that require “strong attention” from multimodal signals. “The performance boost brought by image captioning further indicates that, due to the rich effective and societal content in memes, a practical solution should also consider some additional information related to the meme,” they wrote in a paper describing their work. Other top-ranking teams, each of which had to agree to terms of use specific to Facebook’s hateful memes dataset in order to access it, are expected to present their work during the NeurIPS 2020 machine learning conference next week. Fundamental shortcomings Skills like natural language understanding, which humans acquire early on and practice in some cases subconsciously, present roadblocks for even top-performing models, particularly in areas like bias. In a study accepted to last year’s annual meeting of the Association for Computational Linguistics, researchers from the Allen Institute for AI found that annotators’ insensitivity to differences in dialect could lead to racial bias in automatic hate speech detection models. A separate work came to the same conclusion. 
And according to an investigation by NBC, Black Instagram users in the U.S. were about 50% more likely to have their accounts disabled by automated hate speech moderation systems than those whose activity indicated they were white. These types of prejudices can become encoded in computer vision models, which are the components multimodal systems use to classify images. Back in 2015, a software engineer discovered that the image recognition algorithms deployed in Google Photos, Google’s photo storage service, were labeling Black people as “gorillas.” A University of Washington study found women were significantly underrepresented in Google Image searches for professions like “CEO.” Google’s Cloud Vision API recently mislabeled thermometers held by people with darker skin as guns. And countless experiments have shown that image-classifying models trained on ImageNet , a popular (but problematic ) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more. Audits of multimodal systems like visual question answering (VQA) models, which incorporate two data types (e.g., text and images) to answer questions, demonstrate that these biases and others negatively impact classification performance. VQA systems frequently lean on statistical relationships between words to answer questions irrespective of images. Most struggle when fed a question like “What time is it?” — which requires the skill of being able to read the time on a clockface — but manage to answer questions like “What color is the grass?” because grass is frequently green in the dataset used for training. Bias isn’t the only problem multimodal systems have to contend with. A growing body of work suggests natural language models in particular struggle to understand the nuances of human expression. A paper published by researchers affiliated with Facebook and Tel Aviv University discovered that on a benchmark designed to measure the extent to which an AI system can follow instructions, a popular language model performed dismally across all tasks. Benchmarks commonly used in the AI and machine learning research community, such as XTREME, have been found to poorly measure models’ knowledge. Facebook might disagree with this finding. In its latest Community Standards Enforcement Report, the company said it now proactively detects 94.7% of the hate speech it ultimately removes, which amounted to 22.1 million text, image, and video posts in Q3 2019. But critics take issue with these claims. A New York University study published in July estimated that Facebook’s AI systems make about 300,000 content moderation mistakes per day, and problematic posts continue to slip through Facebook’s filters. Multimodal classifiers are also vulnerable to threats in which attackers attempt to circumvent them by modifying the appearance of images and text. In a Facebook paper published earlier this year, which treated the Hateful Memes Challenge as a case study, researchers managed to trip up classifiers 73% of the time by manipulating both images and text and between 30% and 40% of the time by modifying either images or text alone. In one example involving a hateful meme referencing body odor, formatting the caption “Love the way you smell today” as “LOve the wa y you smell today” caused a system to classify the meme as not hateful. 
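The caption manipulation described above is easy to reproduce. The toy sketch below is not Facebook’s attack code and does not target any real classifier; the function name and perturbation probabilities are invented. It simply shows how a few case flips and inserted spaces leave a caption readable to a person while scrambling the word tokens a simple text model would rely on.

```python
# Toy demonstration of character-level caption manipulation. It does not
# attack any real classifier; it only shows how small, human-readable edits
# change the token sequence a text model would receive.
import random

random.seed(7)

def perturb(caption, p_space=0.15, p_case=0.15):
    """Randomly flip letter case and inject spaces inside words."""
    out = []
    for ch in caption:
        if ch.isalpha() and random.random() < p_case:
            ch = ch.swapcase()
        out.append(ch)
        if ch.isalpha() and random.random() < p_space:
            out.append(" ")
    return "".join(out)

original = "Love the way you smell today"
attacked = perturb(original)

print("original tokens:", original.lower().split())
print("attacked tokens:", attacked.lower().split())
# A person still reads roughly the same sentence, but a model keyed to exact
# word tokens now sees unfamiliar fragments it never encountered in training.
```

As the Facebook paper notes, combining text edits like these with image manipulations makes evasion even more effective than modifying either modality alone.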
Above: Examples of hateful and non-hateful memes in the hateful memes dataset and adversarial image and text inputs like the ones the Facebook researchers generated. A tough road ahead Despite the barriers standing in the way of developing superhuman hateful meme classifiers, researchers are forging ahead with techniques that promise to improve accuracy. Facebook attempted to mitigate biases in its hateful memes dataset through the use of confounders, or memes whose effect is the opposite of the offending meme. By taking an originally mean-spirited meme and turning it into something appreciative or complimentary, the team hoped to upset whatever prejudices might allow a multimodal classifier to easily gauge the mean quality of memes. Separately, in a paper last year , Facebook researchers pioneered a new learning strategy to reduce the importance of the most biased examples in VQA model training datasets, implicitly forcing models to use both images and text. And Facebook and others have open-sourced libraries and frameworks, like Pythia, to bolster vision and language multimodal research. But hateful memes are a moving target because “hateful” is a nebulous category. The act of endorsing hateful memes could be considered hateful, and memes can be indirect or subtle in their perpetration of rumors, fake news, extremist views, and propaganda, in addition to hate speech. Facebook considers “attacks” in memes to be violent or dehumanizing speech; statements of inferiority; and calls for exclusion or segregation based on characteristics like ethnicity, race, nationality, immigration status, religion, caste, sex, gender identity, sexual orientation, and disability or disease, as well as mocking hate crime. But despite its broad reach, this definition is likely too narrow to cover all types of hateful memes. Emerging trends in hateful memes, like writing text on colored background images, also threaten to stymie multimodal classifiers. Beyond that, most experts believe further research will be required to better understand the relationship between images and text. This might require larger and more diverse datasets than Facebook’s hateful memes collection, which draws from 1 million Facebook posts but discards memes for which replacement images from Getty Images can’t be found to avoid copyright issues. Whether AI ever surpasses human performance on hateful meme classification by very much may be immaterial, given the unreliability of such systems at a scale as vast as, say, Facebook’s. But if that comes to pass, the techniques could be applied to other challenges in AI and machine learning. Research firm OpenAI is reportedly developing a system trained on images, text, and other data using massive computational resources. The company’s leadership believes this is the most promising path toward artificial general intelligence, or AI that can learn any task a human can. In the near term, novel multimodal approaches could lead to stronger performance in tasks from image captioning to visual dialogue. “Hate speech is an important societal problem, and addressing it requires improvements in the capabilities of modern machine learning systems,” the coauthors of Facebook’s original paper write in describing the Hateful Memes Challenge. “We found that results on the task reflected a concrete hierarchy in multimodal sophistication, with more advanced fusion models performing better. 
Still, current state-of-the-art multimodal models perform relatively poorly on this dataset, with a large gap to human performance, highlighting the challenge’s promise as a benchmark to the community.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15396
2021
"Intel exec Huma Abidi on the urgent need for diversity and inclusion in AI | VentureBeat"
"https://venturebeat.com/2021/07/08/intels-huma-abidi-on-the-urgent-need-for-diversity-and-inclusion-initiatives-in-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Intel exec Huma Abidi on the urgent need for diversity and inclusion in AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As part of the lead-up to Transform 2021 coming up July 12-16 , we’re excited to put a spotlight on some of our conference speakers who are leading impactful diversity, equity, and inclusion initiatives in AI and data. We were lucky to land a conversation with Huma Abidi, senior director of AI software products and engineering at Intel. She spoke about her DE&I work in her private life, including her support for STEM education for girls in the U.S. and all over the world, founding the Women in Machine Learning group at Intel, and more. VB: Could you tell us about your background, and your current role at your company? HA: This one is easy. As a senior director of AI software products and engineering at Intel, I’m responsible for strategy, roadmaps, requirements, validation and benchmarking of deep learning, machine learning and analytics software products. I lead a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. VB: Any woman and person of color in the tech industry, or adjacent to it, is already forced to think about DE&I just by virtue of being “a woman and person of color in tech” — how has that influenced your career? HA: That is very true. Being a woman, and especially a woman of color, you are constantly aware that you are under-represented in the tech industry. When I joined the tech workforce over two decades ago, I was often the only woman in the room and in meetings and it was very obvious to me that there was something wrong with that picture. I decided to do my part to change that and I also proactively sought leaders who would help me progress in my career as a technical leader as well as support my DE&I efforts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From early on in my career, I volunteered to be part of Intel’s initiatives working on creating a diverse and inclusive workforce. I participated in hiring events which were focused on hiring women and other under-represented minorities (URM) for tech jobs. To help with the onboarding of new URM hires, I led cohorts to offer support, and help make connections and build their networks. To ensure retention, I mentored (and still do!) 
women and URMs at various career stages, and also helped match mentors and mentees. I am especially proud to have founded the Women in Machine Learning group at Intel where we discuss exciting technical topics in AI, while also bringing in experts in other areas such as mindfulness. During the pandemic it has been particularly challenging for parents with small children, and we continue to provide support and coaching to help with regards to work-life balance. After meeting the 2020 goal of achieving full representation of women and URMs at every level (at market availability) in the U.S., Intel’s goal is to increase the number of women in technical roles to 40% by 2030 and to double the number of women and URM in senior leadership. I am very proud to be part of Intel’s RISE initiative. VB: Can you tell us about the diversity initiatives you’ve been involved in, especially in your community? HA: I am very passionate about technology and equally about diversity and inclusion. As mentioned above I am involved in many initiatives at Intel related to DE&I. Just last week at the launch event of our AI for Youth program, I met with 18 young cadets –mostly Black and Hispanic youth — who are committed to military service as part of a Junior ROTC program. We had a great discussion about technology, artificial intelligence, and the challenges of being a minority, URM, and women in tech. I support several organizations around the world for the cause of women’s education particularly in STEM, including Girl Geek X , Girls innovate, and I am on the board for “ Led by ,” an organization that provides mentorship to minority women. According to the United Nations Educational, Scientific and Cultural Organization ( UNESDOC ) girls lose interest in science after fourth grade. I believe that before young girls start developing negative perceptions about STEM, there needs to be role models who can show them that it is cool to be an engineer or a scientist. I enjoy talking to high school and college students both in the U.S. and other countries to influence them in considering a career in engineering and AI. Recently, I was invited to talk to 400 students in India , mostly girls, to share with them what it is to be a woman in the tech industry, working in the field of AI. VB: How do you see the industry changing in response to the work that women, especially Black and BIPOC women, are doing on the ground? What will the industry look like for the next generation? HA: Women make up nearly half the world’s population and yet there is a large gap when it comes to technical roles and even more so for BIPOC. There have been several hopeful signs recently. In recent years, there has been an increasing number of high-profile women in technology as well as in leadership roles in tech companies, academia as well as startups. This includes Susan Wojcicki, CEO of YouTube; Aicha Evans, CEO of Zoox; Fei Fei Li leading human centered AI at Stanford; and Meredith Whittaker working on social implications of AI at NYU AI Now Institute, to name a few. Media and publications are also helping highlight these issues and recognizing women who are making a difference in this area. In the past few years I have participated in a few VentureBeat events and a panel to discuss and bring forward issues like Bias in AI, DE&I, and gender and race gaps in tech industry. 
I am grateful to be recognized as a 2021 “woman of influence” by the Silicon Valley Business Journal and 2021 “Tribute to Women” by YWCA Golden Gate Silicon Valley for the work I have done in this area. All tech companies are tackling with lack of gender parity issues and it is well understood that unless we build a pipeline of women in technology, the gender gap will not be narrowed or closed. When they put measures into place around achieving more gender diversity, there should be an explicit focus on race as well as gender. It’s especially important to get more women and underrepresented minorities in AI (an area that I am working on), because of potential biases that a lack of representation can cause when creating AI solutions. Focused efforts need to be made to provide women, especially BIPOC, leadership opportunities. This is possible only if they have advocates, mentors, and sponsors. These issues are common to all tech companies and the best way we can make real progress is by joining forces, to make collective investment in fixing these issues, particularly for the underserved communities and partnering with established non-profits. Earlier this year, Intel announced a new industry coalition with 5 major companies to develop shared diversity and inclusion goals and metrics. The coalition’s inclusion index serves as a benchmark to track diversity and inclusion improvements, shares current best practices, and highlights opportunities to improve outcomes across industries. The coalition is focusing on four critical areas: 1) leadership representation 2) inclusive language 3) inclusive product development and 4) STEM readiness in underserved communities. These are examples of great steps in the right direction to close diversity, gender, and race gaps in the tech industry going forward. [Abidi’s talk is just one of many conversations around D,E&I at Transform 2021 next week (July 12-16). On Monday, we’ll kick off with our third Women in AI breakfast gathering. On Wednesday, we will have a session on BIPOC in AI. On Friday, we’ll host the Women in AI awards. Throughout the agenda, we’ll have numerous other talks on inclusion and bias, including with Margaret Mitchell, a leading AI researcher on responsible AI, as well as with executives from Pinterest, Redfin, and more.] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15397
2021
"Enterprise tech adoption fuels cyber risks | VentureBeat"
"https://venturebeat.com/2021/09/22/tech-adoption-in-the-enterprise-is-increasing-cyber-risk"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise tech adoption fuels cyber risks Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Seventy-four percent of companies attribute recent cyberattacks to vulnerabilities in technology put in place during the pandemic. That’s according to a new report from Forrester (commissioned by Tenable), which surveyed security leaders, executives, and remote employees to explore shifts in cybersecurity strategies at enterprises in response to the pandemic. From cloud services and apps to personal devices and remote access tools, the number of corporate attack surfaces has dramatically increased. Worldwide IT spending is projected to total $3.9 trillion in 2021 — an increase of 6.2% from 2020, according to Gartner. And now, new research suggests that difficulty managing technologies has made enterprises more vulnerable to cyberattacks. The Forrester and Tenable survey shows that 80% of security and business leaders believe their organizations are more exposed to risk as a result of remote work. Over half of remote workers access customer data using a personal device, yet 71% of security leaders lack high or complete visibility into remote employee home networks, the respondents said. Unfortunately, this gap is well-understood by bad actors, as reflected in the fact that 67% of business-impacting cyberattacks targeted remote employees. The findings agree with a Snow Software report that revealed that hybrid employees are expected to become a bigger burden on IT staff. The new work model, the whitepaper said, will change employees’ technology needs and increase their use of IT resources. Another concern is “shadow IT,” which refers to department-led technology purchases that can disrupt systems and workflows. Twenty-six percent of those surveyed by Snow cited shadow IT as the biggest hurdle posed by hybrid work. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! IT departments also face pushback from employees adapting to hybrid and remote work arrangements. An HP Wolf Security and YouGov poll found that almost half of younger office workers surveyed view security tools as a hindrance, leading to nearly a third trying to bypass corporate security policies to get their work done. Furthermore, HP reported that 83% of IT teams believe that the increase in home workers has created a “ticking time bomb” for a corporate network breach. 
Cloud security According to Forrester and Tenable, expanding the software supply chain and migrating to the cloud are two other major sources of cyber vulnerability enterprises are facing. Sixty-five percent of security and business leaders attribute recent cyberattacks to a third-party software compromise, while 80% of security and business leaders believe moving business-critical functions to the cloud elevated their risk. Moreover, 62% of organizations report having suffered business-impacting attacks involving cloud assets. A recent study from Dimensional Research for Tripwire similarly identified cloud security as a top concern among enterprises. Almost all security professionals surveyed told Dimensional that relying on multiple cloud providers creates security challenges and that providers’ efforts to ensure security are “just barely” adequate. They cited a lack of consistent security frameworks and expedience to communicate security problems, among other issues. To address the challenges, two thirds or more of security leaders told Forrester and Tenable that they plan to increase their cybersecurity investments over the next 12 to 24 months. What’s more, 64% of leaders lacking security staff plan to increase their headcount over the next 12 months. “Remote and hybrid work strategies are here to stay, and so will the risks they introduce unless organizations get a handle on what their new attack surface looks like,” Tenable CEO Amit Yoran said in a press release. “This study reveals two paths forward — one riddled with unmanaged risk and unrelenting cyberattack and another that accelerates business productivity and operations in a secure way. [Executives] have the opportunity and responsibility to securely harness the power of technology and manage cyber risk for the new world of work.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15398
2021
"Executives and teams disagree on who is responsible for software security | VentureBeat"
"https://venturebeat.com/2021/09/25/executives-and-teams-disagree-on-who-is-responsible-for-software-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Executives and teams disagree on who is responsible for software security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Executives from the boardroom and the C-suite are realizing the damaging effect software supply chain attacks can have on their organizations, but they aren’t taking action. According to a recent report from Venafi, senior IT executives agree (97%) that software build processes are not secure enough , yet there is a disconnect when it comes to which team is responsible for driving security changes… 61% of executives said IT security teams should be responsible for software security, while 31% said development teams should be. This lack of consensus is hindering efforts to improve the security of software build and distribution environments and exposing every company that buys commercial software to SolarWinds-style supply chain attacks. At the same time, security teams, who are strapped for budget and resources, rarely have visibility or control into the security controls in software development environments. To make matters worse, there is no standard framework that would help them evaluate the security of the software they use. The survey also found that 94% of executives believe there should be clear consequences for software vendors that fail to protect the integrity of their software build pipelines. These consequences could be penalties such as fines and greater legal liability for companies proven to be negligent. It might seem surprising that executives are encouraging such a practice, but they understand that clear consequences will force software vendors to shift away from the ‘build fast, fix security later’ mentality that leaves their customers and partners at risk. Venafi’s survey evaluated the opinions of more than 1,000 IT and development professionals, including 193 executives with responsibility for both security and software development, and revealed a glaring disconnect between executive concern about software supply chain security and executive action. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Read the full report by Venafi. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15399
2021
"Your cybersecurity team will face burnout, and you need to help | VentureBeat"
"https://venturebeat.com/2021/10/09/your-cybersecurity-team-will-face-burnout-and-you-need-to-help"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Your cybersecurity team will face burnout, and you need to help Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Regardless of the industry we work in or our title and role, the concept of burnout is not a new one. However, it can reach new heights in “high-adversity” industries such as cybersecurity , where individuals are prone to always being on high alert. The phrase “attackers never sleep” rings loudest for security teams constantly wondering when the next cyberattack will strike. Speaking from my own experience, this state of being “always-on” can impact more than just our mental health. Burnout can permeate all facets of our lives, and it can be challenging to address without the right resources and support in place. Managers need to be able to spot signs of burnout, offer help and resources, and let their teams know it’s okay to not always be okay. Addressing the skills gap to combat burnout The business impact of burnout should not be ignored or underestimated. A recent VMware survey of incident responders and security professionals indicates that of the 51% who experienced extreme stress or burnout during the past 12 months, 65% said they have considered leaving their job because of it. With security teams already spread thin, we can’t afford for more defenders to leave the industry. There’s a looming skills gap of almost 500,000 open security jobs in the U.S. alone, and nearly 60 % of organizations note being impacted by the cybersecurity skills shortage. With most teams finding themselves understaffed, there’s little time allocated for time off duty even if it’s on the heels of mitigating a major attack. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Talking about burnout in the cybersecurity industry without addressing the need to fill these critical jobs would miss a major piece of the puzzle. It’s such an important topic that the White House recently held a meeting with the private sector and requested companies better address the dire talent shortage. Burnout is common, and ok to talk about Security leaders must recognize that burnout is a serious issue — not a personal failure — and appropriately address it. To start, learn to recognize the early signs of burnout, like disengagement and cynicism, because burnout is not something that happens all at once. It starts off small and then gradually builds. 
Ask yourself, is a once attention-prone employee now making careless errors and mistakes? Because our emotional and physical states are completely intertwined, frequent sick days could be another sign someone on the security team is feeling burnt out. It’s important that managers recognize burnout as a hazard that comes with the job, not a personal fault or weakness. The responsibility is on a company’s leadership to create a space where employees feel safe to express concerns and ask for help. Look for opportunities to invest in your security team’s wellbeing. Resilience training workshops are a great option, as are inclusive social events. Many organizations also now offer wellbeing and coaching programs that can serve as another resource to help manage burnout. Leading by example sends the message to security teams that despite the high frequency of attacks, it’s ok to slow down and unplug. Empowering security teams Self-care and empathy are incredibly important, but the third leg of the stool to prevent burnout is empowerment. The only way the security industry can retain its existing workforce and attract future talent is to better empower security teams to take charge, work smarter, and achieve a feeling of accomplishment. This comes with improving processes, automation, and baselining the environment. For example, a cloud-first strategy is only as good as the training that’s provided to systems engineers, operations staff, and the end users who will be leveraging it. Organizations should pace the implementation of innovative technology to match the available talent. There’s also an opportunity for security leaders to use the security operations center (SOC) as a learning assignment to teach those working in the SOC how to better manage stress when responding to security incidents. In addition to a “post-mortem” assessment following an intense security incident, arrange for a stress assessment so that the team can improve their awareness. As the industry works to close the security skills gap, we must ensure that today’s defenders have the resources and support they need to actively prevent burnout. This begins with changing the stigma associated with it, valuing wellbeing, and empowering security teams to better protect their own health and the health of their companies. Karen Worstell is a Senior Cybersecurity Strategist at VMware , where she advises customers, partners, and the security industry at large based on her more than 25 years of technology thought leadership. She has previously worked as a CISO for brands such as Russell Investments, Microsoft, and AT&T Wireless, and has served in roles at NIST, Aerospace Industries Association, US Department of Commerce Computer Systems Security and Privacy Advisory Board, and other organizations. She is passionate about improving representation and equity for women in the tech workforce and has spoken internationally about how organizations can retain their female brain trust. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
"
15,400
2,021
"Endpoint security is a double-edge sword: protected systems can still be breached | VentureBeat"
"https://venturebeat.com/2021/06/12/protected-endpoints-can-still-get-breached"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Endpoint security is a double-edge sword: protected systems can still be breached Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Endpoint protection can be a double-edged sword. That is because overloading endpoints with too many clients, not keeping OS patches current, and lacking reliable visibility to endpoints all combine to increase, rather than reduce, the risk of a breach. In fact, conflicting layers of security on an endpoint is proving to be just as risky as none at all. That’s based on a new study that finds that the greater the endpoint complexity, the more unmanageable an entire network becomes in terms of lack of insights, control, and reliable protection. One of the most valuable insights from Absolute Software’s 2021 Endpoint Risk Report is that the most over-configured endpoint devices often can’t identify or manage risks and breaches. Absolute used anonymized data from nearly five million Absolute-enabled endpoint devices active across 13,000 customer organizations in North America and Europe to gain new insights into endpoint risks and manage them. Endpoints comprise high-priority attack vector Well-managed endpoints gain increasing importance as bad actors become increasingly skilled at finding security gaps in endpoints and capitalizing on them for financial gain. They’re searching for vulnerable corporate networks containing marketable data that can quickly be exfiltrated and sold on the Dark Web. Absolute’s study shows how overly complex endpoint controls and out-of-date OS patches put an organization’s most sensitive data at risk. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The pandemic quickly created a surge in endpoint device demand. This trend continues to affect organizations today, as 76% of IT security decision-makers responding to the survey say their organizations’ use of endpoint devices increased since the beginning of the COVID-19 pandemic. Moreover, 82% of IT security decision-makers had to re-evaluate their security policies in response to work-from-home requirements. All this occurs as a decades-long reliance on server-based domain controllers to define the interdomain trust perimeter has proved hackable by bad actors. Once a domain controller is breached, bad actors can move laterally across any system, resource, or endpoint on the network. 
Organizations that stand the best chance of prioritizing endpoint security and surviving a breach are the same ones that apply urgency and reliability standards to ensuring dial tones on their employees’ cell phones are always on. Sensitive data for sale End-points attract special attention as they contain key data, such as Protected Health Information (PHI). Such data is selling for up to $1,000 a record on the Dark Web today, according to Experian. Bad actors concentrate their efforts on endpoint devices containing PHI and Personally Identifiable Information (PII) because it’s among the most challenging types of data to track and the easiest to sell. Absolute’s survey found that, on average, 73% of all endpoint devices contain sensitive data, with Financial Services and Professional Services data leading all industries in this regard, residing on 81% of all endpoint devices containing sensitive data. For purposes of the survey, sensitive data is defined as any information that could create a data breach notification (e.g., credit card data, protected health information [PHI], personally identifiable information [PII]). Above: Sensitive data resides in vulnerable endpoints. Financial service apps are a major target. Sensitive data is running rampant across endpoints today, made more vulnerable by organizations relying on dated technologies, including the interdomain controllers mentioned earlier. It’s not surprising that Absolute finds nearly one in four, or 23%, of all endpoints have the unfortunate combination of highly sensitive data residing on endpoints that lack sufficient security (a further one in four, or 25%, aren’t entirely protected either). Software conflicts compromise endpoints Adding too many conflicting software clients to each endpoint weakens an entire network. That’s because the software conflicts between each client create gaps and lapses in endpoint perimeters. Bad actors using advanced scanning techniques can find and capitalize on them. What does this vulnerable endpoint clutter look like? There are an average of 96 unique applications per device, including 13 mission-critical applications on the average endpoint device today. Software client sprawl on endpoints is increasing, growing to an average of 11.7 software clients or security controls per endpoint device in 2021. Nearly two-thirds of endpoint devices, 66%, also have two or more encryption apps installed. Endpoint devices’ software configurations are becoming so overbuilt that it’s common to find multiple endpoint software clients for the same task. Evidence discloses that 60% of devices have two or more encryption apps installed, and 52% have three or more endpoint management tools installed today, while 11% have two or more identity access management (IAM) clients installed. Above: Endpoints today are overbuilt with a confusing mix of software clients. Patch procrastinating increases breach risk Putting off patch updates on endpoint devices is like leaving the front door of your home wide open when you go on vacation. Bad actors know the OS versions that are the easiest to hack and look for organizations standardizing on them. For example, knowing an entire corporate networks’ endpoints are running Windows 10, version 1909, is invaluable to bad actors devising a breach attack strategy. This is a version estimated to have over 1,000 known vulnerabilities. 
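As a practical illustration of the visibility gap described here, the sketch below shows the kind of check an IT team might script against a single machine. It is not Absolute's tooling: the build-to-version map is deliberately partial, and the "outdated" set is a placeholder policy rather than official end-of-servicing data. A fleet-wide audit would query an endpoint-management inventory rather than run on each device by hand.

```python
# Minimal, illustrative patch-level check for one Windows host.
# The build map is partial and OUTDATED_VERSIONS is a placeholder policy,
# not an official end-of-servicing list.
import platform

BUILD_TO_VERSION = {  # a few Windows 10 feature updates, keyed by build number
    18362: "1903",
    18363: "1909",
    19041: "2004",
    19042: "20H2",
    19043: "21H1",
}
OUTDATED_VERSIONS = {"1903", "1909", "2004"}  # example policy threshold

def audit_local_build() -> str:
    """Report the local Windows build and whether it looks overdue for an upgrade."""
    if platform.system() != "Windows":
        return f"Not a Windows host ({platform.system()}); nothing to check."
    build = int(platform.version().split(".")[-1])  # e.g. '10.0.18363' -> 18363
    version = BUILD_TO_VERSION.get(build, "unknown")
    status = "OUTDATED" if version in OUTDATED_VERSIONS else "ok or unknown"
    return f"Build {build} (Windows 10, version {version}): {status}"

print(audit_local_build())
```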
Absolute’s survey found over 40% of Windows 10 devices analyzed were running version 1909, with the average Windows 10 enterprise device 80 days behind in applying the latest OS patches. Despite the FBI’s warnings of an increase in successful cyberattacks in health care when operating systems reach end-of-life, this industry has the highest proportion of endpoints running Windows 7, at 10%, and the lowest running Windows 10, at 89%. Financial services shows the most extended lag to upgrade, with 91% of devices two or more OS versions behind. Above: Endpoint patching is seldom up to date. Upgrades lag. Formulating an endpoint protection strategy Any business can take steps to get started protecting their endpoints. Contrary to what many cybersecurity vendors would have you believe, you don’t have to go all-in on an entire platform or a prolonged infrastructure project to protect endpoints. There are several actions you can take today. They include: Turn on multi-factor authentication (MFA) for all devices and applications now — and get away from relying solely on passwords. As a first step to protecting every endpoint from a potential breach, make MFA a requirement for accessing every endpoint now. Even if you have Okta or another single sign-on platform installed, still get MFA configured. Passwords are one of the most significant weaknesses of any endpoint. Devise a long-term strategy to get away from using them and concentrate on passwordless authentication for the future. Evidence shows 80% of breaches start with a password being compromised or privileged access credentials being stolen. Adopt tools that can provide real-time monitoring of endpoint device health , scale up, and provide an inventory of the software agents on each endpoint. There are endpoint tools available that deliver real-time device health data, which is invaluable in determining if a given device has configuration problems that could lead to it being compromised. The end goal of adopting real-time monitoring tools is to capture both IT asset management and security risk assessment data by device. Do an audit of any email security suites already installed to see how they’re configured and if they need updates. It’s common to find organizations with email security suites purchased years ago and a year or more behind on patch updates. Doing a quick audit of email security suites often finds they were configured with default settings, making them easier to bypass by bad actors who’ve long since figured out how to breach default configurations. Get all the email security suites updated immediately, change default configurations, and periodically audit how effective they are against malware, phishing, and other attacks. Increase the frequency and depth of vulnerability scans across your network and endpoints to gain greater visibility and early warning of potential incidents. Many network monitoring applications can be configured to provide vulnerability scans on a periodic basis. If vulnerability scans are done manually, get them automated as soon as possible, along with reporting that can find anomalies in the data and send alerts. Have your employees take more cybersecurity training programs, including those offered from LinkedIn, to stay current on the latest cybersecurity techniques. LinkedIn Learning has 752 cybersecurity courses available today, 108 of which are on practical cybersecurity. 
Given how advanced social engineering-based attacks are becoming, it's a good idea to keep your organization current with training on the latest threats and how to counter them. Better threat detection starts at the endpoints For endpoint security to improve, CIOs and IT teams must re-evaluate how many software clients they have per endpoint device and consolidate them down to a more manageable number. Today there are so many clients per endpoint that they cause software conflicts, which accidentally create the security gaps bad actors look to exploit. Another area that needs to improve is how often endpoint devices have their OS patches applied. Ignoring software patch availability dates is unacceptable. Organizations that procrastinate on patching are practically inviting a breach, especially if they are running Windows 10, version 1909. The Absolute 2021 Endpoint Risk Report clearly shows why endpoints also need greater visibility and control through better real-time monitoring. The cybersecurity industry needs to step up its innovation efforts and provide asset management down to the configuration level, with more prescriptive threat detection and incident response. And while there is a significant amount of hype swirling around self-healing endpoints, the industry needs to double down on that part of its product strategy and deliver, because organizations will need more self-regenerating endpoints as attack sophistication increases. "
15,401
2,021
"2 million malicious emails bypassed secure email defenses over 12 months | VentureBeat"
"https://venturebeat.com/2021/09/21/2-million-malicious-emails-bypassed-secure-email-defenses-over-12-months"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 2 million malicious emails bypassed secure email defenses over 12 months Share on Facebook Share on X Share on LinkedIn Tessian prevents misaddressed emails Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Two million malicious emails slipped past traditional email defenses, like secure email gateways, between July 2020-July 2021, according to a new report from human layer security company, Tessian. These emails were detected by Tessian’s platform and analyzed by the company’s researchers to reveal the tactics cybercriminals use to make advanced spear phishing attacks bypass detection and deceive their victims. Cybercriminals predominantly set their sights on the retail industry during this time, with the average employee in this sector receiving 49 malicious emails over the year. This was 3x more than the average 14 malicious emails that were received per user, per year, across all industries. To evade detection, attackers used impersonation tactics. The most common was display name spoofing, where the attacker changes the sender’s name and disguises themselves as someone the target recognizes. This was used in 19% of malicious emails detected while domain impersonation, whereby the attacker sets up an email address that looks like a legitimate one, was used in 11%. The brands most likely to be impersonated were Microsoft, ADP, Amazon, Adobe Sign, and Zoom. Account takeover attacks were also identified as a major threat, with employees in the legal and financial services industries receiving this type of attack most frequently. In this instance, the malicious emails come from a trusted vendor or supplier’s legitimate email address. They likely won’t be flagged by a secure email gateway as suspicious and to the person receiving the email, it would look like the real deal. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Interestingly, less than one quarter (24%) of the emails analyzed in the report contained an attachment, while 12% contained neither a URL nor file — the typical indicators of a phishing attack. Evidently, attackers are evolving their techniques in order to evade detection, trick employees and, in some cases, build trust with their targets before delivering a payload. According to Josh Yavor, Tessian’s Chief Information Security Officer, this report highlights why it’s unreasonable to rely on employees to identify every phishing attack they receive and not fall for the deception. 
Attacks come in too many varieties and are getting harder to detect, Yavor says. Read the full report by Tessian. "
15,402
2,021
"Trend Micro: 80% of global orgs anticipate customer data breach in the next year | VentureBeat"
"https://venturebeat.com/2021/08/06/trend-micro-80-percent-global-orgs-anticipate-customer-data-breach-in-next-year"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Trend Micro: 80% of global orgs anticipate customer data breach in the next year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A staggering 86% of global organizations believe they will suffer serious cyber attacks in the next year and 80% reported they are likely to experience a data breach, according to a new report by Trend Micro and the Ponemon Institute. The greatest risk was found in North America. Based on the global survey results, the greatest areas of concern for businesses are centered around three areas. Many organizations said they aren’t prepared enough to manage new attacks across employees, executives, and board members. Others stressed the lack of sufficient processes to combat attacks, ranging from patching to threat sharing. Finally, organizations have a strong need to evaluate their existing security tools and ensure they’re using the latest advanced detections technologies across their networks. Taking the current threat landscape into consideration and based on the CRI findings, global businesses can still greatly minimize their risks by implementing security best practices. It’s important to build security around critical data by focusing on risk management and the threats that could target the data. Organizations should look to minimize infrastructure complexity and improve alignment across the whole security stack and review existing security solutions with the latest technologies. There must also be a focus on people. Senior leadership needs to view security as a competitive advantage and priority and organizations need to invest in both new and existing talent to help them keep up with the rapidly evolving threat landscape. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Trend Micro and the Ponemon Institute teamed up to investigate the level of cyber risk across organizations and create a Cyber Risk Index (CRI), a comprehensive measure of the gap between and organization’s current security posture and its likelihood of being attacked. A total of 3,677 respondents were surveyed across North America, Europe, Asia-Pacific and Latin America. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,403
2,021
"When it comes to security, all software is 'critical' software | VentureBeat"
"https://venturebeat.com/2021/08/07/when-it-comes-to-security-all-software-is-critical-software"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest When it comes to security, all software is ‘critical’ software Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Defining critical software has become a more complex task in recent years, as both tech professionals and government officials aim to contain or diminish the impact of cybersecurity breaches that are more difficult to label. The lines of definition have blurred beyond recognition, as all software platforms are hackable and threat actors are motivated by a slew of financial, geopolitical, or ideological agendas. Cyberattacks are undoubtedly part of the national security conversation, as more potent threats emanate from nations unfriendly to the United States. As a partial response to this growing number of attacks, the White House released an executive order on May 12, 2021 to help improve the United States’ posture on cybersecurity. The order mandated that the National Institute of Standards and Technology (NIST) provide a definition for what should be considered “critical software.” On June 24, NIST released its definition, which will help both government and industry better understand where to focus and how to ramp up their efforts in securing software. According to NIST, “ EO-critical software is defined as any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes: is designed to run with elevated privilege or manage privileges; has direct or privileged access to networking or computing resources; is designed to control access to data or operational technology; performs a function critical to trust ; or, operates outside of normal trust boundaries with privileged access.” The NIST announcement goes on to provide examples of software that fits the definition: identity, credential and access management (ICAM); operating systems; hypervisors; container environments; web browsers; endpoint security; network control; network protection; network monitoring and configuration; operational monitoring and analysis; remote scanning; remote access and configuration management; backup/recovery and remote storage. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This rather wide definition confirms what many cybersecurity experts know but perhaps fail to execute on: All software, regardless of its ingenuity or seeming insignificance, is absolutely critical. 
The NIST definition should not be considered too broad; rather, software today is sprawling and touches almost all aspects of our lives. Hybrid work, social media, and a greater dependence on mobile devices have made software an irreplaceable piece of modern society. An example of software’s boundless reach can be found in web applications that are designed to control access to data. Many mobile applications require access to operational technologies of the mobile device in order to carry out their most basic functionalities. Software can give access to the underlying operating system or other software, which can then lead to a privilege escalation that enables takeover of a system. All software can enable access. Even if software was not intended to be used in a “critical” way, it could potentially be used that way or repurposed into a “direct software dependency.” Threat actors fully understand this, coming up with unconventional workarounds to manipulate software that otherwise wouldn’t be considered critical. Analyzing these incidents and taking stock of their frequency should be part of the software security process. One specific, well known example highlighting unexpected software criticality comes from a fish tank thermometer in a Las Vegas casino. Attackers were able to get into the casino’s network via the fish tank thermometer which was connected to the Internet. The attackers then accessed data on the network and sent it to a server in Finland. Most people would not think of a fish tank thermometer as critical software, but it may very well meet the definition of EO-critical software, as it did enable access to a PC over a network — and ultimately, sensitive data. This example highlights that the criticality of software is not based solely on a particular application, but the application’s context in a larger cyber ecosystem. If one component of that ecosystem is exploited, the entire operation is at risk, exposing private information, access to financial resources, or granting the ability to control the ecosystem itself. When dealing with limited resources, the security focus should begin with what is “most critical,” but that security focus should not end once the items initially marked as “most critical” are secured. A long-term, comprehensive security outlook is necessary when working with a limited budget, allowing for later security coverage for software that may not be used as frequently but could still be considered an access point. Prioritization of what is critical is very dependent on the system, the purpose, the data it is connected to, and context. Coming up with a taxonomy for prioritization is difficult and should be done on a case-by-case basis with trusted security partners. Companies both small and large are targets for threat actors. Dismissing the critical nature of an organization’s software due to the operation’s size would be unwise; the recent Kaseya supply chain attack clearly demonstrates this. Threat actors are demonstrating a lot of creativity, and the NIST definition of critical software underscores this point. Finding a long-term security strategy that adequately addresses all software components of an ecosystem is non-negotiable, as all software should be considered critical. Jared Ablon is CEO of HackEDU. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,404
2,021
"It's Cybersecurity Awareness Month. Does your business have a viable plan yet? | VentureBeat"
"https://venturebeat.com/2021/10/05/its-cybersecurity-awareness-month-does-your-business-have-a-viable-plan-yet"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest It’s Cybersecurity Awareness Month. Does your business have a viable plan yet? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The cybersecurity world is evolving rapidly — perhaps more quickly than at any other time in its history. It would be easy to attribute the cyber hiccups that many businesses face to the fact that they are simply unable to keep up with bad actors. The facts are more complicated. While it’s true that new threats are emerging every day, more often than not, breaches result from long-standing organizational issues, not a sudden upturn in the ingenuity of cybercriminals. For example, phishing has been around since the mid-’90s. Furthermore, its tactics and strategies are largely unchanged over the last 25 years — save for slightly improved graphics and copyediting. Yet, 75% of organizations experienced a phishing attack in 2020 — and 74% of attacks targeting US companies were successful. How can this be? The answer is frustratingly simple: IT Security departments are still unable to get out of their own way when it comes to developing, implementing and running cybersecurity engagement, training and preparedness campaigns. I’ve seen far too many brilliant engaging campaigns get squashed by the group-think that occurs when content goes through round after round of reviews with multiple stakeholders. The process frequently drains every last compelling drop out of content that started as a really good idea. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Human error is a significant contributing factor in over 90% of cyber breaches , but too many organizations aren’t using training and awareness content designed for most humans. Humans have short attention spans, are easily bored, like to laugh (cat videos, anyone?), and like things to be easy. And honestly, once you really get into it, cybersecurity is fascinating, so there’s no excuse to be boring. Here are a few areas that undermine business’s ability to build the strong security training and awareness programs needed for today’s threat environment. Missing on messaging Day-to-day backend cybersecurity execution may be technical, but getting people to buy into cybersecurity best practices is not. In a world where most marketing content strategy and activation tactics have become more sophisticated and creative, the same cannot be said for cybersecurity. 
There are an astounding number of cybersecurity “engagement” strategies today that look like technical manuals. They may work within IT departments where efficient guidance is paramount. But unfortunately, they don’t work well outside the IT sector. Simply saying, “do this, because I said so” is not the way to get everyday people to act. Instead, we need customized strategies to drive engagement much as a sales funnel operates — nurturing employees along the way to conversion. Successful campaigns like this do not exist at many organizations, which is largely why cybersecurity engagement remains a challenge. Internal politics and disorganization Two characteristics of high-functioning organizations are established departmental boundaries and strong interdepartmental collaboration. Yet frequently neither is evident in the typical business approach to cybersecurity with departments competing with one another. This can be true for training and awareness programs when it comes to the relationship between HR, corporate communications and Security. For example, it is common for corporations to run phishing exercises to test how well employees can identify phishing threats and identify those who may need extra training. If the same people fail subsequent tests, security teams often demand harsh sanctions. The problem is, these types of decisions are not the job of the security team; they more properly reside with Human Resources. On the flipside, security departments have a clear understanding of present threats and what best practices should be in place. However, corporate communications teams often get accused of overstepping the mark and overediting guidance from security, thus making it less effective and unclear, or even worse, less compelling. The way to build cybersecurity defenses is through cohesive and collaborative messaging and tactics. Of course, it can be frustrating when employees fall for phishing emails, but Security departments should provide information on repeat clickers to HR and work on an escalation plan that ultimately HR and the business will own. This will foster mutual respect and lay the groundwork for collaborative progress toward a more secure workplace. Drab training and awareness curriculum There is a common misperception in regards to cyber education and awareness training: training materials and sessions are boring, uneventful and easily forgettable. The truth is, cyber education and awareness training is only as drab and forgettable as you make it. The cybersecurity education and awareness category is light years ahead of where it was even a couple of years ago. With new engagement methods ranging from scavenger hunts and games to live action content, there is no shortage of tools and assets available to businesses looking to bring their preparedness training to the next-level. Unfortunately, businesses continue to struggle to integrate many of these “new age” tools into their cyber education protocols. Delivering effective cybersecurity awareness education and training is an end-to-end proposition. So while delivering compelling content is a great first step, to truly maximize content strategies they need to be paired with engaging training tools. If not, businesses are depriving employees of the valuable experience that they need on a day-to-day basis. Cybersecurity hygiene is not easy. But by continuing to focus on external challenges rather than internal missed marks, businesses are set for a long, hard road. 
The good news is that IT teams are as innovative as ever, and there has never been more interest in cybersecurity among the business community. Those two elements alone provide a strong starting point. If we can build on them by removing the internal barriers described above, the future of business cybersecurity can be far more stable and secure. Lisa Plaggemier is Interim Executive Director of the National Cybersecurity Alliance. "
15,405
2,021
"Facebook unveils Horizon Workrooms for remote coworking in VR | VentureBeat"
"https://venturebeat.com/2021/08/19/facebook-unveils-horizon-workrooms-for-remote-co-working-in-vr"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Preview Facebook unveils Horizon Workrooms for remote coworking in VR Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook is launching its open beta for Horizon Workrooms , a way for people to work together remotely in groups with Oculus Quest 2 virtual reality headsets or other ways of connecting. Workers can use the avatar creation system (built for the Facebook Horizon virtual play platform) to create cartoon-like characters in 3D-animated work spaces and communicate with coworkers in virtual meetings. It’s not quite the metaverse , the universe of virtual worlds that are all interconnected. But one of these days it probably will be, if Facebook CEO Mark Zuckerberg follows through on his pledge that Facebook is a metaverse company. And it will help VR find its footing as it endeavors to become the next universal computing platform, as Facebook’s Oculus team certainly hopes it will be. Horizon Workrooms available for free to download on Oculus Quest 2 in countries where Quest 2 is supported. You can get the full benefits of the free platform by wearing an Oculus Quest 2 VR headset while you’re working. But other people can join via smartphones, desktops, or laptops and participate with varying levels of interaction. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “You are actually in the demo,” said Andrew “Boz” Bosworth, the vice president of Facebook Reality Labs, in a press briefing. “We’ve been talking about the future of work for a long time. It’s why we’re in AR/VR. The pandemic hitting in the last 18 months has given us greater confidence in this technology.” He noted we’ve spent that time doing our jobs on video conferencing. A lot of people want to move on to something else, he said. Mike LeBeau, the director of FRL Work Experiences at Facebook, added, “We have been looking toward our ambitions of this as a metaverse and the future of computing platforms. We think this is an inflection point for VR. This is the moment where we are adapting the medium for that end. We are evolving beyond a games thing to something where you can be productive for work. To make it work, we have brought together a lot of technologies that are needed for work. 
We are designing the paradigms that are needed for that new computing platform.” Changing work Facebook said it designed Workrooms with an acknowledgment that the way we work is changing. More people are working remotely, want flexible work options, and are rethinking what it means to be in an office. But without the right connectivity tools, remote work can be really challenging and isolating. Brainstorming with other people just doesn’t feel the same if you’re not in the same room, advocates of working in the office maintain. Workrooms lets people come together to work in the same virtual room, regardless of physical distance. I tried a brief demo of it this week. It took me a while to download the app and log in, but once I did, it was easy to join a room that I had been invited to attend. It works in both virtual reality and the web and is designed to improve your team’s capability to collaborate, communicate, and connect remotely. You can sit at a table and watch your colleagues’ virtual avatars engage in physical gestures while talking. You can watch them type, or get up and walk to a whiteboard where they can use their (virtual) hands to draw or mark up a document. You can see who is listening to you as you look around the room, or see who happens to be away from the room at the moment, though their avatar is still there. The sound quality was good, and I could hear things happening in the room and see from body gestures when someone was getting ready to talk. So I could see it was easier to avoid talking over each other, as happens in audio-only apps. The gestures were expressive and they didn’t look random, and so that part of the experience felt good. The tech Above: Facebook Workroom lets you design semi-realistic cartoon avatars in 3D. The technology is pretty impressive when you’re using a Quest 2. You can join with hand controllers or just use your hands to navigate through the features. You can raise a hand and pinch in the air to push a button. You can create a “mixed-reality desk,” where you sit at a physical desk, scan it into the room, and then enable other people to see you are sitting at a desk. They can grab virtual papers and drop them on your virtual desk. And since you can sit down at a real desk and use your computer, you don’t have to leave your regular work tools behind to be in VR. You can also pair your laptop and bring it into the room, enabling you to share things on your computer with other people. So you can share slide decks or documents. Workrooms has video conferencing integration. It has spatial audio, so you can detect from what direction a sound is coming and so more easily figure out who is talking (this is helpful for when 16 people are in the virtual room). “This is the software that pushes headsets the hardest,” said Bosworth. “It’s a credit to the team. The question was what was needed for the experience to convince people to put a headset on.” The new Oculus Avatars were launched earlier this year and give you lots of customization options so that you can get your look right. The spatial audio enables low-latency conversations. When I was in the meeting, I could hear people without audio glitches, though one person dropped out and had to rejoin. That’s something many of us are used to from things like Clubhouse and Zoom. “You can change your outfit very quickly,” said Saf Samms, a technical program manager at Facebook, in our press briefing. 
“If I move far away, you can hear that my voice will get quieter.” On top of that, it can filter out a lot of noises, like typing. You can actually hear some typing, but it’s not overwhelming as it can be in some other kinds of online events. If a baby is crying in the background, people may not hear it, Samms said. The system uses an Oculus Remote Desktop companion app for Mac and Windows to give you one-click access to your entire computer from VR. You can take notes during the meeting, bring your files into VR, pin images from your computer on the whiteboard, and even share your screen with colleagues if you choose. When you are done, you can export the whiteboard out of VR to share as an image on your computer. You can also sync your Outlook or Google Calendar to make it easier to schedule meetings and send invites. As I noted, I simply joined Workrooms and tapped on a calendar invite to join a meeting that I was late for. You can also change the environment and configure the virtual room’s layout to match what you need. We sat at a round table with an open part where we could see people tuning in via video conferencing or smartphones on a virtual screen. A total of 16 people can participate together in VR, while up to 50 can fit on a call, including video participants. Getting started If you’re the first of your colleagues to try Workrooms, you can sign up to create a new Workrooms team at workrooms.com. And if your colleagues are already using Workrooms, they can send you an email invite to join their existing Workrooms team. You’ll need to agree to the terms, confirm that you’re 18 years or older, and choose a name to display in Workrooms. Once you’ve created an account, you can download and install Horizon Workrooms from the Oculus Store on your Quest 2, then follow the instructions in the app to pair your headset to your account and get started. I happened to be in such a rush that I was standing up in my living room when I created my “virtual desk” using my hand controllers. I basically had to draw the safe space first. Then I had to indicate where the desk was and how high it was. I did this part quickly, and so when I showed up at the meeting, people noticed I was sitting inside a chair, rather than sitting at a virtual table like everyone else. Lebeau had to explain why I looked the way I did to everyone else. I felt a bit sheepish. Safety and privacy in Workrooms Above: Working in a Facebook Horizon Workroom. These were actually real people talking to me. When you choose to collaborate with your coworkers in Workrooms, you should feel in control of your experience, Facebook said. Workrooms will not use your work conversations and materials to inform ads on Facebook. Additionally, Passthrough processes images and videos of your physical environment from the device sensors locally. Facebook and third-party apps do not access, view, or use these images or videos to target ads. Finally, other people are not able to see your computer screen in Workrooms unless you choose to share it, and the permissions you grant for the Oculus Remote Desktop app are only used for the purposes of allowing streaming from your computer to your headset. Anyone who signs up for Workrooms must agree to follow Facebook Community Standards and Conduct in VR Policy. If other members or content in the workroom violate these policies, users can contact the team admin who can take action such as removing someone from the Workrooms team. 
You can also report an entire Workrooms team if you think it's not following policies. And if you're in VR with people who are bothering you, you can report them using the Oculus reporting tool and include evidence for Facebook to review. Using Workrooms requires a Workrooms account, which is separate from your Oculus or Facebook accounts, although your Oculus username may be visible to other users in some cases, for example if someone reports you for violating policies and your username appears in the tool. And to experience Workrooms in VR, you'll need to access the app on Quest 2, which requires a Facebook login. Your use of Workrooms will not make any updates to your Facebook profile or timeline unless you choose to do so. Summing it up Samms said one notable thing is that you tend to remember meetings that took place in Workrooms. I don't know if that is because the medium is so novel, but it would be great for things like education if we really do retain more from interactions that are visual as well as spoken. Bosworth said he has been in some meetings and felt comfortable for a half-hour. But a boring meeting is still a boring meeting, he said. "This isn't the metaverse, but it is a step in the direction of the metaverse," Bosworth said. I think that logging into VR is still somewhat clunky compared to using your smartphone. But this is a very good step in the "work" part of the quest to enable us to "live, work, and play" in the metaverse. "
15,406
2,021
"Sky Mavis raises $152M at nearly $3B valuation for Axie Infinity play-to-earn NFT game | VentureBeat"
"https://venturebeat.com/2021/10/06/sky-mavis-raises-152m-at-nearly-3b-valuation-for-axie-infinity-play-to-earn-nft-game"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sky Mavis raises $152M at nearly $3B valuation for Axie Infinity play-to-earn NFT game Share on Facebook Share on X Share on LinkedIn Axie Infinity lets players battle with NFT Axie characters. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Sky Mavis has raised $152 million at a nearly $3 billion valuation to help grow its Axie Infinity “play-to-earn” game that monetized via nonfungible tokens (NFTs). And this funding bankrolled by the likes of Andreessen Horowitz and Mark Cuban is aimed at stirring a developer and player revolt against the establishment of gaming. The game uses NFTs to uniquely identify cute characters. Players spend real money to acquire those characters and engage in battles with other players. They can level up the characters and sell them to other players, and that generates income for the players. The capability to earn money in games is called “play-to-earn,” and it has taken off in a variety of ways. Blockchain sales measurement firm DappRadar has been tracking the space and it said that Axie Infinity has hit No. 1 in NFT collectibles, even though it isn’t in any of the popular iOS or Google Android app stores. It is distributed on PCs and Macs, and can be played on mobile devices. More than 615,000 traders have bought or sold Axie Infinity NFTs in 4.88 million transactions, according to DappRadar. This means that an average transaction for an Axie Infinity NFT is worth about $420. The company and secondary sales of the NFT characters have now reached $33 million a day. All of this is pretty amazing for a company that had $100,000 in sales in January. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Cofounder Jeffrey “Jiho” Zirlin (who is speaking at our GamesBeat Summit Next event on November 9-10) said in an interview with GamesBeat that the number of daily active users in Axie Infinity has now grown to two million — meaning that is how many people play the game each day. All told, the players and the company have generated more than $2 billion in transactions. “Because we started so long ago, in 2018 with the game launch, we have this product and we have this community,” Zirlin said. “It’s not something that’s speculative or theoretical. That’s really something in the market.” Back in April, Axie Infinity had 3,000 daily active users. The NFT explosion Above: Axie Infinity has two million daily users. What happened? 
The explosion of excitement around NFTs is what happened. Driven by Bitcoin hype and NFT art sales, everything related to NFTs has seen a surge this year. The market for NFTs surged to new highs in the second quarter of 2021, with $2.5 billion in sales in the first half of the year, up from just $13.7 million in the first half of 2020. NFTs have exploded in other applications such as art, sports collectibles, and music. NBA Top Shot (a digital take on collectible basketball cards) is one example. Published by Dapper Labs, NBA Top Shot has surpassed $780 million in sales in just a year. And an NFT digital collage by the artist Beeple sold at Christie’s for $69.3 million. Investors are pouring money into NFTs, and some of those investors are game fans. SoftBank, one of the world’s biggest investors, recently invested $680 million into Sorare, a Paris-based maker of an NFT-based fantasy soccer game built by 30 employees. Dapper Labs also raised $250 million two weeks ago. The weekly revenues for NFTs peaked in May and then crashed. NFT sales hit a new peak in August, crashed again in September, and have been rising again in October, based on sales numbers from Nonfungible.com. But these systems have drawbacks too, including numerous scams in which people steal art and sell it as their own NFTs. Play-to-earn The graphics may not look that impressive. But Axie Infinity’s primary function is as a game, not an NFT collection. Why did it take off? Zirlin thinks it has to do with the pet angle, as people love collecting and trading pets, as franchises like Pokemon have shown. “Our stance is that, first of all, Axie Infinity is an incredibly fun game. And there are people who come from the Magic: The Gathering pro circuit who say that Axie is the game that they’ve always wanted to play,” Zirlin said. “We believe that a simple, fun, accessible game that centers around collecting pets is the most accessible and most scalable type of game of all.” Sky Mavis was founded in 2017 to make Axie Infinity, a game where you create cute characters, akin to Pokemon or Tamagotchi-style pet breeding. Those characters are “minted” as NFTs, which use the power of peer-to-peer verification on the blockchain (the transparent and secure digital ledger) to authenticate the uniqueness of digital items. These are one-of-a-kind NFTs attached to one-of-a-kind characters. The players breed these characters, called Axies, level them up, and fight with them. And the NFTs give the players ownership of the characters, in contrast to other games where the game publisher owns the characters. The players can also sell these characters on a marketplace to other players or even investors. The key concept here was play-to-earn (P2E), which has given many players — starting with poor players in the rural Philippines and now spreading to many other emerging markets — the ability to earn money, even a living, playing in the virtual world of the game. I call this the Leisure Economy, where we all get paid to play games. Zirlin believes that play-to-earn has transformative economic power for communities, particularly in developing nations that were hit hard by COVID-19 and where jobs are in short supply. It could even be a model for more developed nations, as the coming of AI will likely wipe out a lot of jobs, he said. In recognition of the opportunity, Yield Guild Games started a guild with thousands of players to help them earn money more easily and then invest some of the proceeds in blockchain games, including Axie Infinity.
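As a back-of-the-envelope check on the DappRadar figures cited earlier (roughly $2 billion in total volume across 4.88 million transactions by more than 615,000 traders), the short sketch below reproduces the roughly $420 average transaction value. The $2.05 billion total is an assumed round number consistent with the article's "more than $2 billion"; DappRadar's exact figure may differ.

```python
# Back-of-the-envelope check of the DappRadar figures quoted above.
# The $2.05B total volume is an assumed round number consistent with the
# article's "more than $2 billion"; DappRadar's exact figure may differ.

total_volume_usd = 2.05e9      # assumed total traded volume (USD)
num_transactions = 4.88e6      # transactions reported by DappRadar
num_traders = 615_000          # traders reported by DappRadar

avg_per_transaction = total_volume_usd / num_transactions
avg_per_trader = total_volume_usd / num_traders

print(f"Average value per NFT transaction: ${avg_per_transaction:,.0f}")  # ~ $420
print(f"Average volume per trader:         ${avg_per_trader:,.0f}")       # ~ $3,333
```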
In the case of Axie Infinity, players pay an upfront fee to purchase their Axie characters. They need three of them to fight matches, and so the cost of starting the game is above $400. That’s very expensive for people in developing nations. But Yield Guild Games pioneered “scholarships” for players, where it paid the upfront fees through pooled guild money. Those players can then use the funds to buy Smooth Love Potion (SLP) tokens to generate more Axies, and they will keep a large percentage of the tokens earned that way, and the rest goes to the guild. The guilds can also acquire land (sometimes in other games too), and its players can build out their plots and then the whole guild benefits from the resources generated on that land. This brought the new players into the game and the guild. “We’ve created hundreds of thousands of jobs in the Philippines,” Zirlin said. Now many such guilds are doing that, helping to grow the number of Axie Infinity players at a tremendous rate. Yield Guild Games helped the Axie Infinity community get noticed by funding Emfarsis to make the documentary, Play-to-earn , about the roots of the game in the Philippines. In this way, more than one company is benefiting from the Axie Infinity ecosystem. Besides the Philippines, where the game originally took off, Axie Infinity is now growing in emerging markets such as Venezuela, Brazil, Indonesia, Malaysia, Thailand, Nigeria, Ghana, and Turkey. “I think these markets are just a little bit behind the Philippines,” he said. “We’re really seeing this missionary model emerge. I think that we’re poised for huge growth. I believe that most of the world survives on less than $10 a day. The math shows that there’s a huge potential market out there.” While it is disruptive, Sky Mavis is trying to play by all the rules in the world. In the Philippines, the company said to players that if the local government requires them to pay taxes on earnings in a game, then they should do that. Zirlin is excited because of the income generation among underserved people around the world. About 25% of the players are “unbanked,” meaning they have no bank accounts. And 50% have not previously used cryptocurrencies, while 75% are new to NFTs. “It’s beautiful that we have been able to introduce crypto to people that have traditionally not been using it,” Zirlin said. “We’re getting this technology in the hands of the people that really need it. And that is super rare.” Decentralized community ownership Above: Sky Mavis/Axie Infinity cofounders. The team was started by a band of missionaries (mostly based in Vietnam) who believed that NFTs would enable new types of games. Just as mobile gaming unlocked new design spaces and player archetypes, so too would NFT games. These games won’t look like the games of the past and will require an entirely new perspective and skillset to build, they believed. But when they launched Axie Infinity three years ago, almost nobody cared or understood what an NFT was. ( I actually knew! ). Sky Mavis’ leaders include CEO Trung Thanh Nguyen, chief operating officer Aleksander Leonard Larsen, art director and game designer Tu Doan, chief technology officer Andy Ho, and growth lead Zirlin. But these people don’t completely control the company. Sky Mavis and Axie Infinity is really more like a coalition of the company, investors, and players. They believed property rights incentivized players to act more like founders and employees rather than users. 
These rights include being able to sell your game assets to anyone in the world, earning liquid tokens for playing and contributing, and being able to own a piece of the game you’re playing. If you spend money in a game like Axie Infinity, it is more like making an investment, as you might make money on that investment later by selling what you bought. They also believed that play-to-earn unlocks new types of work around digital metaverse economies, just as Uber, Airbnb, and DoorDash created new types of jobs and professions. “It’s a great example of incentive alignment, or co-ownership, between the game developer and community,” Zirlin said. “And it’s a model that we think is going to transform the way that builders and users interact, going forward. And we think that it’s going to transform the way that the internet works, where it’s going. I think it will create a more open and more fair, a more empowering version of the internet, a little bit more in line with actually what the original creators of the internet envisioned.” The decentralized nature of the company doesn’t end with the decentralized blockchain transactions. In addition, players can be rewarded with SLP (Smooth Love Potion) utility tokens through gameplay. SLP, currently worth 8.185 cents per token, is what players need to create new Axies. The pricing has been fairly rocky in recent weeks, and SLP is well off the peaks it saw in the summer. Or they can purchase AXS governance tokens. The market values of these tokens have skyrocketed. The AXS token, for instance, has a price of $122.63 per token now. It would be worrisome if all the players were speculators who bailed out when they saw the price decline. “We don’t necessarily think too much about a target price for the tokens as this is kind of a free market,” Zirlin said. “There are many market forces at work. We do think a lot about what are the sources of demand for tokens within the economy, and how the tokens are created or destroyed and how that has to be balanced.” Retention of day-one users — those who come back after one day — has held at 65% regardless of the token price, Zirlin said. What’s amazing about Sky Mavis’ valuation at nearly $3 billion is that it only owns 20% of Axie Infinity. The rest is owned by a combination of the players and investors (or speculators) who have purchased the tokens. Those citizen owners will eventually get a say in the ownership decisions of the company and its protocol, though Sky Mavis is acting as the steward right now. When it comes to handling payments, the protocol earns fees, and these are stored in a community treasury. In the future, the owners of the Axie tokens will have the chance to figure out what that treasury is used for. The treasury has 36,300 Eth (the Ethereum cryptocurrency) and 18.3 million Axie tokens in it, Zirlin said. It is worth $7.48 billion, according to measurement firm CoinGecko. “That’s an indication of the power of games that are owned and driven forward by the communities that play them,” Zirlin said. A revolt Above: Axie Infinity has generated $2 billion in sales and resales. In traditional gaming, the publishers and the distribution platforms (app stores) have all the power. If characters are created, the developer and publisher do that work, and they permanently own them. Players merely “rent” those characters, Zirlin said. Zirlin believes Axie’s model is a far fairer relationship between a game developer and its community.
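Returning to the guild scholarship model described earlier, here is a minimal, purely illustrative sketch of how a scholar's SLP earnings might be split with a guild. The article says only that scholars keep "a large percentage," so the 70/30 split and the monthly SLP total below are hypothetical assumptions; the 8.185-cent SLP price is the one quoted above.

```python
# Hypothetical sketch of the guild "scholarship" split described earlier.
# The 70/30 player/guild split and the monthly SLP total are illustrative
# assumptions; real guilds set their own terms and SLP's price fluctuates.

def scholarship_payout(slp_earned: int, slp_price_usd: float = 0.08185,
                       player_share: float = 0.70) -> dict:
    """Split a scholar's SLP earnings between the player and the guild."""
    gross_usd = slp_earned * slp_price_usd
    player_usd = gross_usd * player_share
    guild_usd = gross_usd - player_usd
    return {"gross": round(gross_usd, 2),
            "player": round(player_usd, 2),
            "guild": round(guild_usd, 2)}

# Example: a scholar earning 4,500 SLP in a month at the price quoted above
# -> roughly $368 gross; ~$258 to the player, ~$110 to the guild.
print(scholarship_payout(4500))
```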
Sky Mavis doesn’t have to share 30% of its revenues with the likes of Apple or Google, as it isn’t on the app stores. It is still getting noticed on its own, even after cutting out the middleman, and it can afford to share more of the wealth generated by the game with the players themselves. Axie takes just 4.25% of each transaction. In that way, Axie Infinity is part of a social movement. This level of ownership and accessibility within the Axie economy is upending society and creating an economically viable digital nation, Zirlin believes. Axie has grown to be the largest NFT gaming ecosystem and has amassed players around the world, with more than two million daily active users logging into the platform in August. Axie Infinity has already achieved $33 million in daily transactions, for a total volume of over $2 billion. Zirlin also points out that, without being on the app stores, Axie Infinity is one of the top 60 games being streamed on Twitch. This economy has challenges: Sky Mavis has bumped up the price of minting Axies several times, and sometimes the price has fallen, as in recent weeks. If the price really slides, then the amount people can earn will also slide. “We put the infrastructure into the game to grease the wheels of the economy, and NFTs are permeating mainstream culture,” Zirlin said. “And money has gotten more abstract over time, from shells to gold to paper money, to fractional reserve banking, and now to full digital currencies. Historically, whoever has been in charge of this abstraction of money has been able to define the future of culture and entertainment. I think that’s happening right now.” The funding Above: Art Art made money from Axie Infinity while he couldn’t operate his business in the Philippines. Andreessen Horowitz, one of the most active game VC investors with 32 deals in the first nine months of the year, led the round. Other participating investors included Accel and Paradigm, along with Libertus and Dallas Mavericks owner Mark Cuban. The company now has 60 employees (mostly in Ho Chi Minh City in Vietnam), and it will use the money to build out a global team, evaluate future projects, scale its infrastructure, and build its own distribution platform to support game developers making blockchain-enabled games. “Sky Mavis’ marquee game, Axie Infinity, has introduced a new way for anyone to turn their time into money through play-to-earn, a new mechanic that allows gamers to transform their skills and time into earnings and distribution rights for tokenized in-game items,” said Arianna Simpson, general partner at Andreessen Horowitz, in a statement. “The Axie team has unlocked a new way to build and play games that is already completely redefining this category. The game’s growth is a phenomenal testament to how deeply this model is resonating with people around the world. The Axie team has triggered an earthquake in gaming and the industry is now forever changed.” To date, Sky Mavis has raised $161 million. Back in May, the company had a relatively small round, with $7.5 million raised. But things have changed dramatically in the past four months, Zirlin said. While Yield Guild Games drew a lot of attention by creating a guild of players — many of them in the Philippines — who play the game for a living in what is known as “play-to-earn,” now hundreds of such guilds have popped up.
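To make the platform economics above concrete — Axie's 4.25% marketplace fee versus the roughly 30% cut it would owe on the major app stores — here is a rough comparison sketch; the $1 million gross figure is an arbitrary example amount, not a number from the article.

```python
# Rough comparison of the take rates mentioned above: Axie's 4.25%
# marketplace fee versus the ~30% cut typical of mobile app stores.
# The $1M gross figure is an arbitrary example amount.

def creator_proceeds(gross_usd: float, take_rate: float) -> float:
    """Amount left for the seller/community after the platform's cut."""
    return gross_usd * (1.0 - take_rate)

gross = 1_000_000.0
for name, rate in [("Axie marketplace (4.25%)", 0.0425),
                   ("Typical app store (30%)", 0.30)]:
    print(f"{name}: ${creator_proceeds(gross, rate):,.0f} retained per $1M")
# -> $957,500 retained vs. $700,000 retained
```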
Zirlin noted the company has already created Ronin blockchain protocol, which enables the company to bypass the slow parts of the Ethereum blockchain platform and use its own sidechain to speed up transactions, get rid of the expensive “gas” fees (associated with environment and computing costs for blockchain transactions), and use Sky Mavis’ own security. The Ronin blockchain Above: Axie Infinity lets you convert game rewards to real money. The Ronin blockchain protocol is a “sidechain” that does what I described above, but the company could also use it to easily host games created by other developers. Since Sky Mavis did the work of building a bridge to Ethereum (so people could transact using the popular Eth cryptocurrency) and created other things like a storefront for transactions, it hopes to host other developers with new NFT games on the platform. That could generate more revenue for Sky Mavis. It also built The Mavis Hub as a way to distribute games on PCs and Macs. The Ronin blockchain currently secures more than $1.5 billion in assets. “With the launch of Ronin in April, that was like the last piece that gave us scalable infrastructure,” Zirlin said. “Once we added those together, we became a vertically integrated NFT gaming project where we had all the right pieces fitting together at the right time.” With blockchain, the peer-to-peer network of computers can be used to verify a digital thing, like an NFT. If all of the computers are basically in agreement, then the authenticity is verified. If someone compromises one computer, that’s not a big deal if all the other ones are not compromised. The verification still works. Only if more than half the computers are compromised in a large-scale attack, then the verification can be compromised. That’s very hard to do with Ethereum since so many computers are part of its blockchain protocol. Sky Mavis takes the security onto its own shoulders with the Ronin blockchain. But it uses partners to verify authenticity. And even if Sky Mavis is hacked, and its partner “validators” are not, then the security holds up. If you make money in the game, you can cash out your SLP for Eth. And you can convert that to dollars if you wish, or grow your stockpile of cryptocurrency. If players worry about the security, storing their earnings in Eth could be considered more secure. “Our plan is to decentralize more in the future,” Zirlin said. “It’s a step-by-step process, and so far it’s not something the players are asking for. They’re asking for things like more gameplay, a decentralized exchange, and fast transactions.” So far, this system has worked well, and it doesn’t use nearly as much computing power and wastes fewer environmental resources, resulting in lower “gas fees” per transaction, Zirlin said. You’ll have to pay small fees if you convert Eth into the Ronin system. Since this investment into Ronin has broader implications, the company could become something like an indie game publisher. “For now, we have a lot of work to do as stewards of the Axie ecosystem,” Zirlin said. “There’s so much to build on it still. This is an amazingly inspirational story that feels good. It’s one of the rare stories that has shown a glimmer of hope in these really difficult times of the pandemic. It makes us feel really happy that we’re able to help people, but at the same time, it’s this huge responsibility that we take really seriously.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? 
We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. "
15,407
2,021
"Facebook unveils Horizon Home social VR, Messenger VR calls, and fitness VR on road to metaverse | VentureBeat"
"https://venturebeat.com/2021/10/28/facebook-unveils-horizon-home-social-vr-messenger-vr-calls-and-fitness-vr-on-road-to-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook unveils Horizon Home social VR, Messenger VR calls, and fitness VR on road to metaverse Share on Facebook Share on X Share on LinkedIn Mark Zuckerberg speaks at Facebook Connect. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook has turned its social media battleship toward the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And that was evident at its Facebook Connect online event today, where CEO Mark Zuckerberg introduced new stepping stones on its virtual reality and augmented reality journey toward the metaverse. Zuckerberg is so serious about this journey that he changed the company’s name to Meta today. Yep. As part of a broad effort that involves billions of dollars of research, Zuckerberg introduced social virtual reality with Horizon Home, which uses an Oculus Quest 2 VR headset to entertain people in connected VR spaces. You will also be able to make Messenger calls within VR. And he teased new augmented reality glasses (Nazare) and a high-end VR headset (Project Cambria) that are coming soon. “We see view this progression of technology as we’re constantly getting more natural ways to connect and communicate with each other,” Zuckerberg said in a press briefing regarding the metaverse focus. “Through Facebook’s lifetime, we started off typing text into websites, and we got phones with cameras. So the internet became more visual and mobile. And then as connections got better, we now have a rich video which is more immersive as the main way that we share experiences.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Of course, Facebook faces a lot of issues now. Antitrust regulators are investigating it around the world, it faced some bad press for weeks from whistleblower leaks about putting profits ahead of the welfare of its users, and its financial performance slowed after Apple prioritized privacy over targeted advertising. Its social media business has entangled it with many issues, but it has also given the company an unprecedented financial engine with $29 billion in revenues in the September 30 quarter. And that has given it the war chest to be perhaps the most ambitious company when it comes to ushering in the metaverse, which Zuckerberg said is the next generation of the internet. Above: Facebook said the metaverse will make you feel presence. 
Given the controversies, Zuckerberg said, “I just want to acknowledge that I get that probably there are a bunch of people will say that this isn’t a time to focus on the future. And I want to acknowledge that there are clearly important issues to work on in the present. And we’re committed to continuing to do that and continuing the massive industry-leading effort that we have on that. At the same time, I think there will always be issues in the present. So I think it’s still important to push forward and continue trying to create what the future is.” Zuckerberg continues to invest in VR, noting how his Facebook Reality Lab division — which includes Oculus — will invest more than $10 billion this year in metaverse efforts that include both VR and AR. Facebook also plans to hire more than 10,000 people in Europe for the division. As for defining the metaverse, Zuckerberg said it isn’t like being on a Zoom call, as we have all done during the pandemic. Zuckerberg said the metaverse should feel like you’re embedded in a place, with a sense of presence, the feeling that you are transported to some place. He said that gaming will be the way that people step into the metaverse for the first time, as gaming has the infrastructure for economies through virtual goods and engagement with fans. The metaverse Above: Facebook’s demo of the metaverse. In the press briefing ahead of the event as well as in his speech itself, Zuckerberg talked extensively about how the company views the metaverse. “We basically think of the metaverse is the successor of the mobile internet, in the sense that the mobile internet didn’t completely replace everything that came before it,” Zuckerberg said. “It’s not that the metaverse is going to completely replace something that comes before it. But it’s the next platform. In that sense, it’s not a thing that a company builds. It is a broader platform that I think we’re all going to contribute towards building in a way that is open and interoperable.” He added, “We think about the metaverse as, it’s an embodied internet. So instead of looking at the internet, or at documents, which I think is sort of the experience of what we have today, it’s an internet and you’re going to be in and a part of something that’s going to feel qualitatively different. And the defining characteristic that we think is going to exist here in the metaverse is this feeling of presence, like you’re right there with another person or in another place.” Above: Mark Zuckerberg said the metaverse will let you teleport to different worlds. Andrew Bosworth, head of Facebook Reality Lab, said in a press briefing, “We really want to dispel the idea that the metaverse is only accessible through virtual reality. It’s not the vision that we have imagined it being, even though it’s 3D, even though it’s immersive. There are lots of 3D immersive things that we access today through screens and through our phones, via Fortnite and video games or more social experiences.” But Bosworth said that the experience of the metaverse will be better in virtual reality and augmented reality. “And so it just depends on the context. If I’m on the go, I’m very likely to be on my mobile phone,” he said. “If I’m in this room by myself, virtual reality might be a better way to go. If I was in this room with other people physically here, augmented reality might be the way to approach it. 
And so there are many ways that people will interface with the metaverse.” Zuckerberg said that privacy and safety have to be built into the metaverse from the start. In his vision, he said you can take metaverse items and project them into the physical world as holograms. You can gesture with your hands rather than tap or type. Your devices won’t be the focal point of your attention. He said it has to be open, interoperable, and not built by one company. He said the metaverse will unlock a massively bigger creator economy. Games Above: Mark Zuckerberg said games will be popular in the metaverse. Facebook is partnering with VR game studio Vertigo Games on five upcoming titles. Zuckerberg said he loves Beat Saber, which just passed $100 million on Quest alone. Population: One, a battle royale shooter on Quest, has become the most popular VR multiplayer shooter game on Quest since it launched last year. It’s also getting regular updates. The game has 24 people in a match. Blade & Sorcery: Nomad is launching on Quest later this year. And Grand Theft Auto: San Andreas is coming to VR as well, following the launch of Resident Evil 4 on the Quest this month. Zuckerberg said it was a new version of “one of the greatest games ever made.” Zuckerberg showed off a lot of visions, like a simple game of chess played with a friend via augmented reality, or fencing with someone who is remote, or even playing basketball against friends over VR. To talk more about the coming games, Oculus will have a gaming showcase in 2022. Social VR Above: Facebook is headed toward the metaverse. Facebook has unveiled Horizon Home. Soon, when you join an Oculus Party in VR, you’ll be able to invite your friends into a new social version of your Home where they’ll be embodied as their avatars. You’ll be able to spend time together with friends, co-watch videos together, and launch games and apps together On top of that, you will also be able to communicate with your friends across all your apps and devices — including Portal — with Messenger calling in VR coming later this year. From anywhere in VR, you’ll be able to invite your Facebook friends to join a Messenger call and eventually spend time together or travel to VR destinations. “In terms of the software aspects, it’s important that this can be continuous across different devices,” Zuckerberg said. “So we’ll talk a bit about AR glasses. We’ll talk about the VR devices for having the most immersive experiences. But of course, it’s going to be really important that you can jump into the metaverse through phones and computers, and including the social media apps that we build where people are connecting all day already.” Above: Mark Zuckerberg’s vision for the metaverse. Horizon Home, Horizon Workroom , and Horizon Worlds are all part of the company’s effort to create VR spaces for the home, work, and other spaces. “It’s our collaboration experience where people come together to work in a virtual room and feel that sense of presence together,” Bosworth said. “It helps them communicate, collaborate, and connect while remote.” He said the Horizon Worlds VR social space is still in testing and growing daily. “We’ve been really amazed at the imagination, the collaboration that we’re seeing from the creator community there,” Bosworth said. He said the company has added a $10 million creator fund to encourage even more investment in Horizon Worlds. Fitness XR Above: Fitness in VR will get serious. 
The new fitness offerings on Oculus include Supernatural boxing and new FitXR fitness studios. Player 22 by Rezzil, which is currently used by pro athletes, is adding guided and hand-tracked bodyweight exercises. Next year, Facebook will also release Active Pack for Quest 2. The company is making a fitness accessories pack that makes Quest 2 more comfortable, with controller grips for when things get intense and a facial interface that you can wipe the sweat off, making your sessions more comfortable. Above: Fitness in VR could include fencing. Fitness on Quest is like a Peloton without the bike, Zuckerberg said. The vision for what comes next looks pretty cool. You’ll be able to work out in new worlds against an AI, like the photos in the top of this section shows. You’ll be able to play fitness apps in groups, like three-on-three basketball, Zuckerberg said. Your Facebook cycling group could do an AR charity ride, he suggested. You could fence with someone on the other side of the world, and so on. VR for Work Above: Working in a Facebook Horizon Workroom. These were actually real people talking to me. Facebook also unveiled Quest for Business, including Work Accounts support on Quest 2. The new business offering will bring work capabilities into consumer Quest devices, including the ability to log into Quest 2 with a Work Account instead of your personal Facebook account. It will also bring businesses the tools they need, like account management, IDP & SSO integration, mobile device management, and more. It will begin testing this year, move to open beta in 2022, and will be fully available in 2023. There are also 2D apps coming to Quest in Horizon Home. Facebook will announce that services like Slack, Dropbox, Facebook, Instagram, and many more will soon work in VR as 2D panel apps in Horizon Home — so you can multitask, cross things off your to-do list between gaming sessions, and stay connected while in VR. This starts bringing some of your favorite 2D internet services into the metaverse. Above: Facebook wants you to work in VR. The first 2D apps are available in the Oculus Store today, including Facebook, Instagram, Smartsheet, and Spike. More apps will follow soon, like Dropbox, Monday.com, Mural, My5 (UK), PlutoTV, and Slack — all built using the Progressive Web App industry standard. Facebook has also added a new personal workspace environment in Horizon Home. This is a place to focus and work using the new suite of 2D panel apps, or just check a few things off your to-do list. Above: You can log in on a work account in Facebook’s idea of VR work. Facebook earlier announced its Horizon Workrooms VR experience for office meetings. Now it is adding Workrooms customization. For Horizon Workrooms, it will add the capability to customize your Workroom with your company logo, posters, or designs. “I am generally optimistic about work in the metaverse,” Zuckerberg said. He said that working in the metaverse, with things like teleportation, could make a huge difference for the environment if you wind up taking one less business trip per year. Oculus developers Above: The Oculus Quest 2 Facebook is unveiling its Presence Platform, a broad range of machine perception and AI capabilities that will enable developers to build mixed reality experiences on the Quest platform. 
A realistic sense of presence will be key to feeling connected in the metaverse, and Presence Platform’s capabilities deliver on this promise with things like environmental understanding, content placement and persistence, voice interaction, and standardized hand interactions. “One of the lessons that we’ve learned over the last five years or so is around trying to take into account some of the principles are what we want to build up front and be clear about where we’re going and building something that is not just a great product for consumers, but can build a great creative economy for creators and developers as well to participate and be a part of the upside of what gets created,” Zuckerberg said. Presence Platform consists of three offerings: Insight software development kit (SDK) for developing mixed reality experiences; Interaction SDK to make it easier to add hand interactions to apps; and Voice SDK to make voice input a part of the experiences they build. “There’s something magical about that sense of presence. And I think that being able to deliver that is is the ultimate dream of building these social experiences,” Zuckerberg said. And it is rolling out the tools to allow any developer to start creating and testing progressive web apps (PWA) apps on Quest devices. In the near future, developers will be able to ship their PWAs to App Lab. PWA developers will be able to submit app packages to Oculus, and their apps will show up in either the Oculus Store or App Lab. PWA apps stay up-to-date without requiring app package updates since they display live content from the developers’ site. This will allow developers to transform the 2D experience of their websites into an app on Oculus. PWAs in App Lab can also use WebXR. The Avatars 2.0 SDK, which overhauls Avatars in VR, will be available in December. And Facebook is launching a new cloud backup system later this year, allowing users to back up their device’s app data, like game progress or settings, so they can easily pick up where they left off in a game. It works at the filesystem level, with no coding required. Facebook said it is exploring new ways of viewing ownership through technologies such as nonfungible tokens (NFTs). Multiplayer gaming is also getting an upgrade. That includes direct invite application programming interfaces (APIs) that let you send invites directly from your own UX. It will also intro a new channel into an app from discovery surfaces called Ask to Join (existing integrated apps get this for free), and it will have ways to more easily friend other users and discovery opportunities in VR and 2D. Facebook also built a new multiplayer sample called SharedSpaces to help developers get started with the new social platform APIs. It’s available for Unity and Unreal 4. AR news Above: Augmented reality effects on your phone. As VR hits an inflection point, Facebook is investing in the core technology and work needed to bring fully featured AR glasses to market. The company says it has packed in as much technology as it could into good-looking glasses today with Ray-Ban Stories ; it is also working toward fully featured AR glasses. The company also claims to be cultivating the content, capabilities, and communities that can enrich Facebook experiences today and illuminate the path to AR glasses ahead with the Spark AR platform. With Spark AR, Facebook’s AR platform for creation and distribution across our apps and devices, Facebook is seeing a lot of people engaging with AR technology today. 
More than 700 million people use AR effects across Facebook apps and devices every month, the company says. New Spark AR capabilities will unlock more sophisticated AR experiences and use cases with location services, virtual objects, and new input models. It has new geo-locked experiences for public spaces. This allows for location-locked effects that link together in a cohesive, long-form experience, using multiple AR activation points. For example, imagine a theme park scavenger hunt or guided tour of monuments in a city center. Spark AR is currently testing with Spark Partner Network and select brands, including Sanrio in Japan, opening to all creators in 2022. Body tracking and hand tracking Above: Mike Abrash of Facebook Reality Lab talked about a dozen things needed for the metaverse. Facebook will have new capabilities to enable what it calls more fantastical, fun, and imaginative self-expression through AR effects. It is doing foundational work to unlock people-centric forms of input and virtual object interaction in AR, coming in November. The company says its upcoming Virtual Objects Pipeline will let people create and place 3D objects in the real world that can include text, characters, GIFs, stickers, and more. To ensure realistic performance, this will also include underlying technical capabilities like depth, occlusion, and improved plane tracking. This will be available in private beta later this year and opening to everyone in 2022. Virtual objects are critical to the future of AR and the cornerstone of continuity in the metaverse. Built on Spark AR, these objects will be versatile and scalable across different surfaces and use cases like commerce and shopping, Facebook says, with virtual try-ons and product previews. For creators, Facebook says it’s making it easier to participate and reach new audiences in the AR ecosystem. Polar is its new, free iOS app that the company promises makes it easy to imagine, create, and share AR effects and filters without needing to code or work in the Spark AR Studio. Creators will be able to extend their personal brands, art, and creative vision in new ways — like a virtual sticker with the creator’s own tagline, or a piece of swag they can share during an AMA. Opening applications to the closed beta program for iOS later this year. Facebook also said its Facebook Reality Labs will invest $150 million in an education program aimed to help create economic opportunity for AR/VR creators and developers, ranging from new training and career development resources to new content and technology partnerships. After over 22,000 creators enrolled in Facebook’s AR Curriculum program in less than a year, the company is expanding the Spark AR Curriculum to include additional AR training courses — including a new “AR Pro” course — as well as a formal Spark AR certification program. Spark AR Certification Above: Facebook’s Spark AR For the first time, Facebook will provide AR creators with a formal pathway and program to demonstrate their knowledge and proficiency of Spark AR, and to earn a Facebook Certified Spark AR Creator credential. The first exam will take place in November, and registration will open soon. Creators who earn the Spark AR Certification will get access to Facebook Certification Career Network. This job-search platform features 60-plus companies looking to hire skilled talent, including agencies like BBDO, Havas Media, GroupM, and more. 
And Facebook is working with game engine developer Unity to help people gain the skills necessary for creating incredible VR content, bundling Unity’s “Create with VR for Educators” tool and training with Quest 2 devices for nonprofits and educational institutions. The company is also partnering with a number of institutions to help bring immersive and collaborative learning experiences to life: VictoryXR and Byju’s FutureSchool; nonprofits like Generation, Urban Arts Partnership, and the Peace Literacy Institute; and learning organizations, including a number of historically Black colleges and universities. As for how far away we are from the metaverse, it’s not clear. But Bosworth said, “We’re not there.” "
15,408
2,020
"How technology and policy can mitigate climate effects in an age of colliding crises | VentureBeat"
"https://venturebeat.com/2020/10/31/how-technology-and-policy-can-mitigate-climate-effects-in-an-age-of-colliding-crises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How technology and policy can mitigate climate effects in an age of colliding crises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, the federal government spends less than $9 billion annually on energy innovation, which is less than a quarter of what it invests in health innovation and less than a tenth of what it invests in defense innovation. As we sit at a crossroads of an unprecedented confluence of challenges — from a public health crisis, to a leadership crisis , to a climate crisis, to a racial equity and social justice crisis — it’s time we look for new solutions to solve some of our most urgent problems. Our leaders must explore the ways that energy resiliency and climate action can help to see us through these critical times and create a new normal where a resilient, reliable, and affordable energy system powers our economy, safeguards our public health, and provides a path to social and economic mobility. One part of the equation requires increased federal support for the growing climate technology sector aimed at creating a resilient energy system to support America’s people and economy. The Department of Energy estimates that weather-related power outages alone cost the U.S. economy $18 billion to $33 billion per year. (These estimates were made before the recent years of wildfires and public safety power shut-offs in California.) It’s not just the “coastal elites” who are suffering: Extreme weather is a threat to life, livelihoods, and the consistent supply of electricity in the Midwest and Rust Belt as well. To make energy resilience the centerpiece for our national recovery, we should push legislation through Congress that focuses on the following four areas: Creating a modern, self-healing smart grid Protecting the grid from cyber attack Fostering a series of microgrids to create local energy resilience Incentivizing restorative behaviors from large electricity consumers The first aspect of this legislation would be to create a federal energy resilience grant program that covers different aspects of resilience throughout the energy system. Such a program, an expanded successor to the Smart Grid Investment Grant from the American Reinvestment and Recovery Act, would help fund transformational efforts in each of those four major areas, with prioritization on projects that impact multiple areas. Federal grants could be awarded to state energy regulatory agencies or directly to utilities. 
The second element would be creating an energy resilience data hub. This hub could be hosted by the Energy Information Administration and the Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and would collect and organize information from around the country that could foster a better understanding of energy threats, responses, and best practices. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Finally, the legislation should establish a Presidential Award in each of these four areas to be awarded annually. This award (coordinated between the White House and CESER) would highlight the year’s best energy resilience efforts as a means of raising awareness of the issue and encouraging ambitious action from the research, development, and tech communities. A resilient, low-carbon economy must also be built upon the foundation of justice and equality. According to the Asthma and Allergy Foundation of America, African Americans are almost three times more likely to die from asthma-related causes than their white counterparts. And nearly one in two Latinos in the U.S. live in counties where the air doesn’t meet EPA public health standards for smog. This type of environmental injustice is apparent across the U.S. and has only been exacerbated by the COVID-19 pandemic. It’s clear that more stringent environmental regulations are needed to put an end to polluting industries’ disproportionate effects in poor and minority communities. Building on Congressman Raul Grijalva’s (D-Ariz.) Environmental Justice for All Act is a start. We should develop a formal scoring system that prioritizes environmental justice and frontline engagement over dollar-and-cents cost-benefit analysis, as proposed in the Climate Equity Act , which would use data to inform planning and balance the scales. Investment in microgrids and energy storage will also help to reduce the need to operate “peaker plants” in times of highest demand. These plants produce high levels of particulate emissions and other pollutants that exacerbate already-poor air quality and are disproportionately located near low-income communities and communities of color. Along with new regulations, we need to merge the minds of community organizers, energy companies, renewable energy developers, and environmental organizations who are on the ground in these communities across the country. By working closely with those on the front line of these issues and leveraging their ideas and insights, we can effect policy with real, lasting change. Progress can be made without congressional approval, massive investments, or new laws. Finally, to help develop, scale, and fund a path to net zero, Wall Street, venture capitalists, and Big Tech need to be deeply engaged and committed. Old industries, which have been too slow to change, need new tools to tackle climate change. (High-temperature processes like steel and concrete manufacturing are one of the most difficult areas to decarbonize , yet the U.S. DOE spends only 6% of its R&D budget on “Industry.”) It’s imperative we increase both “technology push” policies that fund academic research and “market pull” policies that create a path for impact at scale. What this requires is early-stage investors to help mitigate risk, and a concerted effort from Wall Street and Big Tech, to support new ideas, technologies, and companies that are solving some of our toughest climate challenges. 
Real capital, real commitments, real culture change. Startups aren’t waiting for federal action. Founders continue to develop new solutions across transportation, energy generation, and industry (which collectively make up ~80% of U.S. emissions). Proterra is designing and manufacturing electric buses that operate at a lower overall cost than diesel, hybrid, or natural gas vehicles. Roadbotics (an URBAN-X portfolio company) helps governments better administer their public infrastructure assets by unifying their data on a single cloud platform. Innovation precedes deployment; we need policy that links the two and provides sufficient funding for new solutions to reduce emissions in a major way. Research by PwC indicates that approximately 6% of total capital invested in 2019 is focused on climate tech, reflecting an increase from $418 million in 2013 to $16.3 billion in 2019. Major corporations, from BlackRock to Amazon to Softbank , also have the power to effect change, both through investment and deployment of forward-thinking climate technologies and by asserting a benevolent influence on Capitol Hill that demands transparent, long-term, and clear policies for an equitable climate agenda. And yes, large companies like these are increasingly committing to ambitious climate goals. However, it’s our role as citizens, investors, entrepreneurs, and shareholders to demand accountability that they live up to their word. Today, in the midst of a hotly contested presidential election, we’ve seen the conversation on climate change grow in prominence across the nation. As part of Joe Biden’s $5 trillion “Build Back Better” plan, he calls for a $2 trillion investment and strong push for energy innovation to drive a low-carbon future. This type of attention to and investment in climate tech is critical. With it, we may finally be able to act on the promise our country holds to take the lead on climate action be at the forefront of the industries that will define the next century. We can create a robust pipeline for jobs in a low-carbon economy, rather than one pegged to oil and gas. We can have the tools to bridge the yawning equity and environmental justice divide that COVID-19 has laid bare. We can build new companies at scale that bring sustainability-forward solutions to age-old industries. But to turn this vision into reality requires real leadership, a belief in science, and a true commitment to answer calls for climate action from across the nation. Without strong federal backing, we can’t possibly hope to meaningfully address the society-scale challenge we face. If the last four years are any indication, a second Trump term would mean more inaction, more uncertainty, and more cities and states left to fend for themselves in the face of unprecedented climate disasters. This election season has been characterized by anxiety, misinformation, and interference from foreign actors. But in my conversations with swing state voters, I’ve also experienced moments of energy, hope, and clarity. The two candidates couldn’t have more oppositional views on the future. Optimism does not come easy, but it’s a choice. And despite the suffering around us and the challenges to come, I’m optimistic that, know it or not, we’ve embarked on a new path that can meet this moment. Micah Kotch is Managing Director of URBAN-X , an accelerator from MINI and Urban Us for startups that are reimagining city life. 
He’s a board member of Green City Force, an AmeriCorps program that engages young adults from New York City Housing Authority (NYCHA) communities in national service related to the environment. "
15,409
2,020
"Why AI can’t move forward without diversity, equity, and inclusion | VentureBeat"
"https://venturebeat.com/2020/11/12/why-ai-cant-move-forward-without-diversity-equity-and-inclusion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Why AI can’t move forward without diversity, equity, and inclusion Share on Facebook Share on X Share on LinkedIn The need to pursue racial justice is more urgent than ever, especially in the technology industry. The far-reaching scope and power of machine learning (ML) and artificial intelligence (AI) means that any gender and racial bias at the source is multiplied to the n th power in businesses and out in the world. The impact those technology biases have on society as a whole can’t be underestimated. When decision-makers in tech companies simply don’t reflect the diversity of the general population, it profoundly affects how AI/ML products are conceived, developed, and implemented. Evolve, presented by VentureBeat on December 8th, is a 90-minute event exploring bias, racism, and the lack of diversity across AI product development and management, and why these issues can’t be ignored. “A lot has been happening in 2020, from working remotely to the Black Lives Matter movement, and that has made everybody realize that diversity, equity, and inclusion is much more important than ever,” says Huma Abidi, senior director of AI software products and engineering at Intel – and one of the speakers at Evolve. “Organizations are engaging in discussions around flexible working, social justice, equity, privilege, and the importance of DEI.” Abidi, in the workforce for over two decades, has long grappled with the issue of gender diversity, and was often the only woman in the room at meetings. Even though the lack of women in tech remains an issue, companies have made an effort to address gender parity and have made some progress there. In 2015, Intel allocated $300 million toward an initiative to increase diversity and inclusion in their ranks, from hiring to onboarding to retention. The company’s 2020 goal is to increase the number of women in technical roles to 40% by 2030 and to double the number of women and underrepresented minorities in senior leadership. “Diversity is not only the right thing to do, but it’s also better for business,” Abidi says. “Studies from researchers, including McKinsey, have shown data that makes it increasingly clear that companies with more diverse workforces perform better financially.” The proliferation of cases in which alarming bias is showing up in AI products and solutions has also made it clear that DEI is a broader and more immediate issue than had previously been assumed. 
“AI is pervasive in our daily lives, being used for everything from recruiting decisions to credit decisions, health care risk predictions to policing, and even judicial sentencing,” says Abidi. “If the data or the algorithms used in these cases have underlying biases, then the results could be disastrous, especially for those who are at the receiving end of the decision.” We’re hearing about cases more and more often, beyond the famous Apple credit check fiasco , and the fact that facial recognition still struggles with dark skin. There’s Amazon’s secret recruiting tool that avoided hiring qualified women because of the data set that was used to train the model. It showed that men were more qualified, because historically that’s been the case for that company. An algorithm used by hospitals was shown to prioritize the care of healthier white patients over sicker Black patients who needed more attention. In Oakland, an AI-powered software piloted to predict areas of high crime turned out to be actually tracking areas with high minority populations , regardless of the crime rate. “Despite great intentions to build technology that works for all and serves all, if the group that’s responsible for creating the technology itself is homogenous, then it will likely only work for that particular specific group,” Abidi says. “Companies need to understand, that if your AI solution is not implemented in a responsible, ethical manner, then the results can cause, at best, embarrassment, but it could also lead to potentially having legal consequences, if you’re not doing it the right way.” This can be addressed with regulation, and the inclusion of AI ethics principles in research and development, around responsible AI, fairness, accountability, transparency, and explainability, she says. “DEI is well established — it makes business sense and it’s the right thing to do,” she says. “But if you don’t have it as a core value in your organization, that’s a huge problem. That needs to be addressed.” And then, especially when it comes to AI, companies have to think about who their target population is, and whether the data is representative of the target population. The people who first notice biases are the users from the specific minority community that the algorithm is ignoring or targeting — therefore, maintaining a diverse AI team can help mitigate unwanted AI biases. And then, she says, companies need to ask if they have the right interdisciplinary team, including personnel such as AI ethicists, including ethics and compliance, law, policy, and corporate responsibility. Finally, you have to have a measurable, actionable de-biasing strategy that contains a portfolio of technical, operational, organizational actions to establish a workplace where these metrics and processes are transparent. “Add DEI to your core mission statement, and make it measurable and actionable — is your solution in line with the mission of ethics and DEI?” she says. “Because AI has the power to change the world, the potential to bring enormous benefit, to uplift humanity if done correctly. Having DEI is one of the key components to make it happen.” The 90-minute Evolve event is divided into two distinct sessions on December 8th: The Why, How & What of DE&I in AI From ‘Say’ to ‘Do’: Unpacking real-world case studies & how to overcome real-world issues of achieving DE&I in AI Register for free right here. 
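As one concrete illustration of the "measurable, actionable de-biasing strategy" Abidi describes, the sketch below (not from Intel or the article) computes a widely used audit metric, the demographic parity gap — the difference in positive-outcome rates between groups — on made-up model decisions. A large gap does not prove harmful bias on its own, but it is the kind of transparent, trackable signal such a strategy can monitor.

```python
# Minimal bias-audit sketch: demographic parity gap between groups.
# The data below is made up for illustration; in practice you would pass in
# your model's decisions alongside each applicant's group label.
from collections import defaultdict

def selection_rates(groups, decisions):
    """Positive-decision rate per group (decision 1 = selected)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]  # hypothetical model output

rates = selection_rates(groups, decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"Demographic parity gap: {gap:.2f}")   # 0.50 -> flag for review
```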
"
15,410
2,021
"DeepMind claims its AI weather forecasting model beats conventional models | VentureBeat"
"https://venturebeat.com/2021/09/29/deepmind-claims-its-ai-weather-forecasting-model-beats-conventional-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind claims its AI weather forecasting model beats conventional models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a paper published in the journal Nature , meteorologists gave an AI model for predicting short-term weather events top rank in terms of accuracy and usefulness in 88% of cases. It marks the first time professional forecasters have expressed a preference for a machine learning-based model over conventional methods, claims DeepMind, which developed the model — paving the way to new weather forecasting approaches that leverage AI. While studies suggest some forms of machine learning contribute significantly to greenhouse gas emissions, the technology has also been proposed as a tool to combat climate change. For example, an IBM project delivers farm cultivation recommendations from digital farm “twins” that simulate the future weather and soil conditions of real-world crops. Other researchers are using AI-generated images to help visualize climate change and estimate corporate carbon emissions, and nonprofits like WattTime are working to reduce households’ carbon footprint by automating when electric vehicles, thermostats, and appliances are active based on where renewable energy is available. “Precipitation ‘nowcasting,’ the high-resolution forecasting of precipitation up to two hours ahead, supports the real-world socioeconomic needs of many sectors reliant on weather-dependent decision-making,” the DeepMind paper reads. “Skilful nowcasting is a longstanding problem of importance for much of weather-dependent decision-making. Our approach using deep generative models directly tackles this important problem, improves on existing solutions and provides the insight needed for real-world decision-makers.” Predicting weather events “Nowcasting” is key to weather-dependent decision making because it informs the operations of emergency services, energy management, retail, flood early-warning systems, air traffic control, marine services, and more. But for nowcasting to be useful, the forecast must provide accurate predictions and account for uncertainty, including events that could greatly impact human life. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Several approaches based on machine learning have been developed in recent years. 
Trained on large datasets of radar observations, they aim to better model heavy precipitation and other hard-to-predict precipitation phenomena. For example, Google partnered with the U.S. National Oceanic and Atmospheric Administration (NOAA) to study and develop machine learning systems that might be infused across NOAA’s enterprise. Microsoft has also funded efforts to identify repeating weather and climate patterns from historical data as a way to improve subseasonal and seasonal forecast models. But DeepMind notes that AI nowcasting models don’t always include small-scale weather patterns or provide forecasts over entire regions. As an alternative, the Alphabet-backed company created a deep generative model (DGM) for forecasting, which learned the probability distributions of data — functions that describe all the possible values a random variable could take — to generate “nowcasts” from its learned distributions. Above: DeepMind’s AI model predicts weather events up to 90 minutes in advance. DeepMind asserts that DGMs can predict weather events events that are inherently difficult to track due to the underlying randomness. Moreover, they can anticipate the location of precipitation as accurately as systems tuned to the task while preserving properties useful for decision-making. DeepMind trained its DGM on a large dataset of precipitation events recorded by radar in the U.K. between 2016 and 2018. Once trained, it could deliver nowcasts in just over a second running on a single NVIDIA V100 GPU. When compared to other popular nowcasting approaches, including other machine learning models, DeepMind’s DGM — judged by a panel of 56 meteorologists — produced more realistic and consistent predictions over regions up to 1,536 kilometers by 1,280 kilometers and with lead times from 5 to 90 minutes ahead. “Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods,” the paper reads. “We show that generative nowcasting can provide probabilistic predictions that improve forecast value and support operational utility, and at resolutions and lead times where alternative methods struggle.” Real-world applications DeepMind’s model and others like it are emerging at a time when climate change is top of mind for the world’s largest companies. As a CDP analysis recently found, 500 of the biggest corporations potentially face roughly $1 trillion in costs related to climate change in the decades ahead unless they take proactive steps to prepare. Previous studies have estimated that the risks of global warming, if left unmanaged, could cost the world’s financial sector between $1.7 trillion to $24.2 trillion. In one stark example, Pacific Gas and Electric, California’s largest electric utility, faced up to $30 billion in January 2019 in fire liabilities alone. Facebook chief AI scientist Yann LeCun and Google Brain cofounder Andrew Ng, among others, have asserted that mitigating climate change and promoting energy efficiency are worthy challenges for AI researchers. “The ability to model complex phenomena, make fast predictions and represent uncertainty makes AI a powerful tool for environmental scientists, including those studying the impacts of climate change,” DeepMind senior staff scientist Shakir Mohamed said in a press release. 
“It’s very early days, but this trial shows that AI could be a powerful tool, enabling forecasters to spend less time trawling through ever growing piles of prediction data and instead focus on better understanding the implications of their forecasts. This will be integral for mitigating the adverse effects of climate change today, supporting adaptation to changing weather patterns and potentially saving lives.” "
15,411
2,021
"How voice biometrics is saving financial services companies millions and eliminating fraud | VentureBeat"
"https://venturebeat.com/2021/07/14/how-voice-biometrics-is-saving-financial-services-companies-millions-and-eliminating-fraud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event How voice biometrics is saving financial services companies millions and eliminating fraud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As part of Five9’s Conversational AI Summit at Transform 2021, two leading experts in the field of voice biometrics, Paul Magee, CEO of Auraya and Daniel Thornbill, SVP Pre and Post Sales at ValidSoft, sat down with Richard Dumas, VP of Marketing at Five9, and explored how banks, credit unions, brokerages, and credit card companies are using the latest advances in voice biometrics to verify identity, authenticate callers, and protect against fraud — saving millions, reducing handle time, and increasing customer satisfaction. “Voice biometrics is just like your fingerprint, iris, or face,” Magee said. “But one of the advantages of using voice biometrics over those other biometrics is that every time you speak, it’s unique. Nobody can ever steal my voice, because they can’t steal what I’m going to say next.” Voice is swiftly replacing the traditional verification methods that use pins, passwords, and knowledge-based authentication, which lack security, privacy, and reliability. These older verifications often result in a poor customer experience that reduces the use of digital self-services, driving up higher cost with agent-assisted phone transactions. ValidSoft specializes in security solutions, and developed its own voice biometric technology, providing both active and passive voice biometrics. Auraya is a global leader in voice biometric technology, deployed by more than 10 million users licensed around the world. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As Magee explains, users create a voice print by depositing a sample of their voice in a vault. When a customer uses their voice to access a system, the model compares that voice print with the one in the vault to identify and authenticate the user. “To add to that, voice biometrics is a multidimensional biometric as well,” Thornbill said. “It’s measuring both behavioral elements in how you speak, but also physical — it’s tying your physiology, your physical makeup, and the distortion you create in sound to your biometric as well. 
Most other biometrics are only single dimension, which makes voice even more secure and versatile.” Implementing biometrics for security Security leaders in financial services especially are applying voice biometrics as part of a multilayered approach. The right one-time passcode with the right voice on the right device starts to build up a multitude of factors, providing security teams with the confidence that this transaction can be handled securely, and with non-refutable evidence required by regulatory requirements. The technology is taking off now because of a combination of factors, including increasing consumer preference for speech interfaces, especially with enterprises and financial services companies. They’re using voice commerce through apps, smart speakers to check their balances, and IVRs for customer self-service. “How do you apply security across all these different ways in a consistent and secure manner? And that’s where voice biometrics comes into play,” Thornbill said. “It’s the only thing that enables a consumer to traverse all these different channels. You need to be able to apply the same level of security, or continuous voice authentication for these services. “We’ve seen amazing growth in the use cases and the organizations,” Magee added. “It used to be a technology that was the preserve of very large organizations that had multi-million-dollar problems, so they could spend millions of dollars on a complicated system. Today, voice biometrics can be deployed by organizations from small to medium to large.” The advent of cloud, increasing customer demand, and the effect of regulatory compliance on consumer privacy have all contributed to the rise of voice biometrics for security. Original voice prints are encrypted and kept behind firewalls, and so they are unlikely to be compromised. And though fraudsters have been looking for ways to crack the security of voice biometrics with deep fakes and replay attacks, platforms like ValidSoft’s have measures in place to detect anomalies in the speech stream to detect those and prevent them. Laying security with passive and active biometrics There are two types of voice biometrics: passive and active voice biometrics. To understand active biometrics in the simplest terms, it’s what a user says. And if you get to an IVA, or you’re using a browser, and you get asked to say a specific phrase, you never want to say something like, ‘My voice is my secure password.’ “I should be saying my phone number, or account number, or the digits displayed on the screen,” Magee said. “That’s an active process — I’m actively taking part in providing a key to open my account.” Passive biometrics is the technology working in the background, for example, when a customer is talking to a call center agent. Their voice is being sampled and the agent is provided with a confirmation that the speaker has been authenticated. “We’re great believers in using all of these techniques to give a smooth and efficient and effective verification process that offers both security and convenience,” Magee said. “Asking a specific question provides a high level of security quickly, allowing self-service in the IVA, and also allowing the agent to start the conversation already knowing who it is.” But a lot of people don’t want to talk to the IVA, so providing a passive verification to the agent provides an additional layer of security. It allows the agent to start a conversation relatively quickly with confirmation of who the person really is. 
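Underneath both the active and passive flows described above is the same basic step: score a fresh voice sample against the enrolled voiceprint and accept or reject against a threshold. The sketch below is a deliberately simplified illustration of that scoring step, with made-up embedding values and a plain cosine-similarity threshold standing in for the proprietary speaker models and decision logic that vendors in this space actually use.

```python
# Minimal sketch of the comparison at the heart of active or passive voice
# verification: score a fresh sample's embedding against the enrolled voiceprint.
# The embedding values and threshold are illustrative stand-ins only.
import math
from typing import Sequence

def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify(enrolled_voiceprint, candidate_embedding, threshold=0.75):
    """Accept the caller if the new sample is close enough to the enrolled print."""
    score = cosine_similarity(enrolled_voiceprint, candidate_embedding)
    return score >= threshold, score

if __name__ == "__main__":
    enrolled = [0.12, 0.80, 0.35, 0.41]      # stored at enrollment (illustrative values)
    fresh_sample = [0.10, 0.78, 0.40, 0.38]  # extracted from the live call
    accepted, score = verify(enrolled, fresh_sample)
    print(f"similarity={score:.3f} accepted={accepted}")
```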
Both active and passive are important elements of a successful solution. Getting consumers on board Getting consumers to enroll their voice is the greatest barrier to a successful voice biometrics deployment, Magee said. “Our solution, after many years of doing far too many deployments that didn’t go as well as they should have, has given us the lessons of history, and that is that not everyone is the same,” he said. For someone who is a traditional contact center user, frequently interacting with the IVR and speaking to live agents, that channel is the best way to enroll them. For the person who uses their app or browser instead, enroll them in their channel of preference. For example, when they use their password to get into the app, present them with the invitation to enroll their voice then and there. "
15,412
2,021
"How voice biometrics can protect your customers from fraud | VentureBeat"
"https://venturebeat.com/2021/07/28/how-voice-biometrics-can-protect-your-customers-from-fraud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How voice biometrics can protect your customers from fraud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Voice identity verification is catching on, especially in finance. Talking is convenient, particularly for users already familiar with voice technologies like Siri and Alexa. Voice identification offers a level of security that PIN codes and passwords can’t, according to experts from two leading companies innovating in the voice biometrics space. In a conversation at VentureBeat’s Transform 2021 virtual conference, Daniel Thornhill, senior VP at cybersecurity solutions company Validsoft , and Paul Magee, president of voice biometrics company Auraya , discussed the emerging field with Richard Dumas, Five9 VP of marketing. Passive vs. active voice biometrics Just like a fingerprint, an iris, or a face, voice biometrics are unique to an individual. To create a voiceprint, a speaker provides a sample of their voice. “When you want to verify your identity, you use another sample of your voice to compare it to that initial sample,” Magee explained. “It’s as simple as that.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What sets it apart from other biometrics is that every time someone speaks when prompted, the voiceprint is unique, Magee said. “Nobody can steal my voice because you can’t steal what I’m going to say next.” When users are prompted to say their phone or account numbers or digits displayed on the screen, that’s active biometrics. “Passive is more in the background,” Magee said. “So while I’m talking with the call center agent, my voice is being sampled and the agent is being provided with a confirmation that it really is me.” Voice identity biometrics security An organization responsible for the voice biometrics can store it with a trusted service provider, Magee said. “The last thing that we advocate is for the voiceprints to be flying around into some unknown place with limited security,” he added. “We think they should be locked up securely behind the clients’ firewall, like [companies] protect the rest of their clients’ information.” Cheating in the voice identification system Thornhill described how the system can be cheated : Someone can record a user and replay that audio, or someone can use a computer to generate synthetic versions of people’s voices, also known as deep fakes. But there are ways to prevent such fraud. 
“You can apply some kind of [live element], so maybe a random element of the phrase, or use passive voice biometrics so the user is continuously speaking,” Thornhill explained. There’s also technology that looks at anomalies in speech. “Does this look like it’s being recorded and replayed? Does it look like it’s been synthetically produced or modified by a machine?” Thornhill said. “So there are ways that fraudsters can potentially try to subvert the system, but we do have measures in place that detect those and prevent them.” Industry-wide voice identification adoption The greatest barrier to a successful biometric deployment is getting people to enroll their voice, Magee said. That’s why companies should avoid a one-size-fits-all approach. If a customer often contacts a call center for their needs, that’s the best way to enroll them, Magee said. If they usually use an app, present them with the invitation there. A great time to enroll in a voiceprint is while customers enter their account details during onboarding. Thornhill agreed. “It’s about understanding your client’s needs, their interactions with their customers, to help them get those enrollments up and help them achieve return on investment,” he said. “They’ll benefit from it, whether it’s from fraud reduction or customer experience.” "
15,413
2,021
"Security.org: 68% of Americans use the same password across accounts | VentureBeat"
"https://venturebeat.com/2021/10/09/security-org-68-of-americans-use-the-same-password-across-accounts"
Security.org: 68% of Americans use the same password across accounts In conjunction with Cybersecurity Awareness Month, a new report by Security.org finds that 68% of Americans use the same password across accounts. This isn’t the only disturbing password statistic: More than one in three Americans (37%) also share passwords with others — up 25% from last year. That rise may be due to increased sharing of streaming services’ login information. Research found about 88 million accounts are “borrowed” by people other than the account holder. While the report also found nearly 40% of more than 1,000 U.S. adults’ passwords have been hacked, and less than half feel very confident in the security of their passwords, there were encouraging security measures taken in the past year. Notably, 85% are now employing two-factor authentication, adding a layer of online security to their passwords. In addition, the use of password generators nearly doubled year-over-year, from 15% to 27%, and password management services or browser vaults increased by 10%. Furthermore, Americans have ditched shorter passwords of fewer than eight characters, with 84% using at least eight. However, more than half use familiar names in their passwords, such as their own name, their children’s names, or their pets’ names. Using familiar names makes hackers’ work easier, as greater portions of users’ personal, professional, and financial lives transpire online. Americans have a lot to learn about managing passwords. After all, the most used password in the U.S. is “123456,” which can be cracked in less than a second. Read the full report by Security.org.
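The report's prescriptions (longer passwords, no familiar names, wider use of password generators) all reduce to drawing characters uniformly from a large alphabet with a cryptographically secure random source. The snippet below is a generic standard-library illustration of that idea, not code from Security.org or any product named in the report.

```python
# Generic illustration of what a password generator does: draw characters from a
# large alphabet using a cryptographically secure source, avoiding names, words,
# and other guessable patterns. Uses only the Python standard library.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    if length < 8:
        raise ValueError("the report's own data suggests at least 8 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def entropy_bits(length: int, alphabet_size: int = len(ALPHABET)) -> float:
    """Rough search-space size in bits for a uniformly random password."""
    return length * math.log2(alphabet_size)

if __name__ == "__main__":
    pw = generate_password(16)
    print(pw, f"~{entropy_bits(16):.0f} bits of entropy")
```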
"
15,414
2,021
"SolarWinds breach exposes hybrid multicloud security weaknesses | VentureBeat"
"https://venturebeat.com/2021/05/16/solarwinds-breach-exposes-hybrid-multi-cloud-security-weaknesses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SolarWinds breach exposes hybrid multicloud security weaknesses Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A hybrid multicloud strategy can capitalize on legacy systems’ valuable data and insights while using the latest cloud-based platforms, apps, and tools. But getting hybrid multicloud security right isn’t easy. Exposing severe security weaknesses in hybrid cloud, authentication, and least privileged access configurations, the high-profile SolarWinds breach laid bare just how vulnerable every business is. Clearly, enterprise leaders must see beyond the much-hyped baseline levels of identity and access management (IAM) and privileged access management (PAM) now offered by cloud providers. In brief, advanced persistent threat (APT) actors penetrated the SolarWinds Orion software supply chain undetected, modified dynamically linked library (.dll) files, and propagated malware across SolarWinds’ customer base while taking special care to mimic legitimate traffic. The bad actors methodically studied how persistence mechanisms worked during intrusions and learned which techniques could avert detection as they moved laterally across cloud and on-premises systems. They also learned how to compromise SAML signing certificates while using the escalated Active Directory privileges they had gained access to. The SolarWinds hack shows what happens when bad actors focus on finding unprotected threat surfaces and exploiting them for data using stolen privileged access credentials. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The incursion is particularly notable because SolarWinds Orion is used for managing and monitoring on-premises and hosted infrastructures in hybrid cloud configurations. That is what makes eradicating the SolarWinds code and malware problematic, as it has infected 18 different Orion platform products. Cloud providers do their part — to a point The SolarWinds hack occurred in an industry that relies considerably on cloud providers for security control. A recent survey by CISO Magazine found 76.36% of security professionals believe their cloud service providers are responsible for securing their cloud instances. 
The State of Cloud Security Concerns, Challenges, and Incidents Study from the Cloud Security Alliance found that use of cloud providers’ additional security controls jumped from 58% in 2019 to 71% in 2021, and 74% of respondents are relying exclusively on cloud providers’ native security controls today. Above: Cloud providers’ security controls are not enough for most organizations, according to the State of Cloud Security Concerns report. Taking the SolarWinds lessons into account, every organization needs to verify the extent of the coverage provided as baseline functionality for IAM and PAM by cloud vendors. While the concept of a shared responsibility model is useful, it’s vital to look beyond cloud platform providers’ promises based on the framework. Amazon’s interpretation of its shared responsibility model is a prime example. It’s clear the company’s approach to IAM, while centralizing identity roles , policies, and configuration rules, does not go far enough to deliver a fully secure, scalable, zero trust-based approach. The Amazon Shared Responsibility Model makes it clear the company takes care of AWS infrastructure, hardware, software, and facilities, while customers are responsible for securing their client-side data, server-side encryption, and network traffic protection — including encryption, operating systems, platforms, and customer data. Like competitors Microsoft Azure and Google Cloud, AWS provides a baseline level of support for IAM optimized for just its environments. Any organization operating a multi-hybrid cloud and building out a hybrid IT architecture will have wide, unsecured gaps between cloud platforms because each platform provider only offers IAM and PAM for their own platforms. Above: The AWS Shared Responsibility Model is a useful framework for defining which areas of cloud deployment are customers’ responsibility. While a useful framework, the Shared Responsibility Model does not come close to providing the security hybrid cloud configurations need. It is also deficient in addressing machine-to-machine authentication and security, an area seeing rapid growth in organizations’ hybrid IT plans today. Organizations are also on their own when it comes to how they secure endpoints across all the public, private, and community cloud platforms they rely on. There is currently no unified approach to solving these complex challenges, and every CIO and security team must figure it out on their own. But there needs to be a single, unified security model that scales across on-premises, public, private, and community clouds without sacrificing security, speed, and scale. Averting the spread of a SolarWinds-level attack starts with a single security model across all on-premises and cloud-based systems, with IAM and PAM at the platform level. Amid hybrid cloud and tool sprawl, security suffers The SolarWinds attack came just as multicloud methods had started to gain traction. Cloud sprawl is defined as the unplanned and often uncontrolled growth of cloud instances across public, private, and community cloud platforms. The leading cause of cloud sprawl is a lack of control, governance, and visibility into how cloud computing instances and resources are acquired and used. Still, according to Flexera’s 2021 State of the Cloud Report , 92% of enterprises have a multicloud strategy and 82% have a hybrid cloud strategy. Above: Cloud sprawl will become an increasing challenge, given organizations’ tendency to prioritize multicloud strategies. 
Cloud sprawl happens when an organization lacks visibility into or control over its cloud computing resources. Organizations are reducing the potential of cloud sprawl by having a well-defined, adaptive, and well-understood governance framework defining how cloud resources will be acquired and used. Without this, IT faces the challenge of keeping cloud sprawl in check while achieving business goals. Overbuying security tools and overloading endpoints with multiple, often conflicting software clients weakens any network. Buying more tools could actually make a SolarWinds-level attack worse. Security teams need to consider how tool and endpoint agent sprawl is weakening their networks. According to IBM’s Cyber Resilient Organization Report , enterprises deploy an average of 45 cybersecurity-related tools on their networks today. The IBM study also found enterprises that deploy over 50 tools ranked themselves 8% lower in their ability to detect threats and 7% lower in their defensive capabilities than companies employing fewer toolsets. Rebuilding on a zero trust foundation The SolarWinds breach is particularly damaging from a PAM perspective. An integral component of the breach was compromising SAML signing certificates the bad actors gained by using their escalated Active Directory privileges. It was all undetectable to SolarWinds Orion, the hybrid cloud-monitoring platform hundreds of organizations use today. Apparently, a combination of hybrid cloud security gaps, lack of authentication on SolarWinds accounts, and lack of least privileged access made the breach undetectable for months, according to a Cybersecurity & Infrastructure Security Agency (CISA) alert. One of the most valuable lessons learned from the breach is the need to enforce least privileged access across every user and administrator account, endpoint, system access account, and cloud administrator account. The bottom line is that the SolarWinds breach serves as a reminder to plan for and begin implementing zero trust frameworks that enable any organization to take a “never trust, always verify, enforce least privilege” strategy when it comes to their hybrid and multicloud strategies. Giving users just enough privileges and resources to get their work done and providing least privileged access for a specific time is essential. Getting micro-segmentation right across IT infrastructures will eliminate bad actors’ ability to move laterally throughout a network. And logging and monitoring all activity on a network across all cloud platforms is critical. Every public cloud platform provider has tools available for doing this. On AWS, for example, there’s AWS CloudTrail and Amazon CloudWatch , which monitors all API activity. Vaulting root accounts and applying multi-factor authentication across all accounts is a given. Organizations need to move beyond the idea that the baseline levels of IAM and PAM delivered by cloud providers are enough. Then these organizations need to think about how they can use security to accelerate their business goals by providing the users they serve with least privileged access. Adopting a zero trust mindset and framework is a given today, as every endpoint, system access point, administrative login, and cloud administrator console is at risk if nothing changes. The long-held assumptions of interdomain trust were proven wrong with SolarWinds. Now it’s time for a new, more intensely focused era of security that centers on enforcing least privilege and zero-trust methods across an entire organization. 
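As a small, concrete starting point for the logging and root-account hygiene described above, the sketch below uses the AWS CloudTrail LookupEvents API via boto3 to surface recent API activity attributed to the root user, which should be rare if the root account is properly vaulted. It assumes boto3 is installed and credentials with cloudtrail:LookupEvents permission are configured; it is a starting point for investigation, not a complete detection pipeline.

```python
# Surface recent API activity by the root account via CloudTrail. A vaulted root
# account should be effectively silent, so any hits here deserve review.
# Assumes boto3 is installed and AWS credentials are configured; pagination and
# multi-region trails are left out of this sketch.
from datetime import datetime, timedelta, timezone

import boto3

def recent_root_activity(hours: int = 24):
    cloudtrail = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "root"}],
        StartTime=start,
        EndTime=end,
        MaxResults=50,
    )
    return response.get("Events", [])

if __name__ == "__main__":
    events = recent_root_activity()
    if not events:
        print("No root-account API activity in the last 24 hours.")
    for event in events:
        print(event["EventTime"], event["EventName"], event.get("EventSource", ""))
```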
"
15,415
2,021
"45% of execs admit initiatives to secure software supply chains are incomplete | VentureBeat"
"https://venturebeat.com/2021/10/03/45-of-execs-admit-initiatives-to-secure-software-supply-chains-are-incomplete"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 45% of execs admit initiatives to secure software supply chains are incomplete Share on Facebook Share on X Share on LinkedIn Computer server access. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Forty-five percent of executives admit that initiatives to secure their software supply chains are halfway complete or less, and 64% say they are not sure who they would turn to first if their supply chain was attacked. DevOps platform maker CloudBees surveyed 500 C-suite executives across the U.S., U.K., France, and Germany to understand how they felt about their supply chain security and compliance. The survey uncovered that while many executives said they were prepared, when asked about their specific response plans, the responses didn’t match up. Overwhelmingly, 95% claim their software supply chains are secure (95%), or very secure (55%); however, when asked further about the security of their supply chain, the responses revealed they may not be as prepared as originally thought. On the surface, leaders are confident in their software supply chains, but more than two in five (45%) admit that initiatives to secure their software supply chains are halfway complete or less, and 64% say they are not sure who they would turn to first if their supply chain was attacked. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Security is now top of mind for members of the boardroom, with the past year seeing an unprecedented number of attacks, specifically on the supply chain. According to the survey results, almost all C-suite executives (95%) say they think more about securing their supply chain now than just two years ago. Despite the attention and discussion around potential supply chain attacks, 64% of those surveyed say it would take more than four days to fix the problem if they did experience an issue. And, while 93% of executives say they routinely practice dealing with a supply chain production vulnerability, 58% say that if they experienced one they have no idea what their company would do. Ultimately, while security is top of mind for C-suite executives, many are still unsure of how to actually secure the supply chain and ensure preparedness if that day does come. Read the full report from CloudBees. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,416
2,019
"Google's Cloud Services Platform is now Anthos, and it works with AWS and Azure | VentureBeat"
"https://venturebeat.com/2019/04/09/googles-cloud-services-platform-is-now-anthos-and-it-works-with-aws-and-azure"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s Cloud Services Platform is now Anthos, and it works with AWS and Azure Share on Facebook Share on X Share on LinkedIn Google CEO Sundar Pichai announces the debut of Anthos during the keynote address at Moscone Center on April 9, 2019 in San Francisco Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google today announced that Anthos, a service for hybrid cloud and workload management run on the Google Kubernetes Engine , is now generally available. Anthos will work with Google Cloud Platform of course, but also plays well with multiple other cloud providers, including some of Google’s biggest competitors: Amazon’s AWS and Microsoft’s Azure. Anthos is the new name for the Cloud Services Platform , which Google introduced in beta for hybrid cloud management last year. Google also introduced Anthos Migrate in beta for portability across clouds without the need to modify apps or virtual machines. “It gives you the flexibility to move on-prem apps to the cloud when you’re ready,” Google CEO Sundar Pichai said today onstage at the Cloud Next developer conference in San Francisco. Also announced today: The new Cloud Run for serverless and portable compute , partnerships with some of the leading open source projects, new regions in Salt Lake City and Seoul , and instances for Nvidia’s Quadro Virtual Workstation powered by T4 GPUs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Anthos will launch with support from 30 major tech and developer services and will be supported by infrastructure service providers like Cisco, MongoDB, HPE, and VMware. A number of Kubernetes apps were also introduced for the Google Cloud Platform last summer. In 2014, Google open-sourced the Kubernetes project for containers that can run applications on cloud-based or on-premises servers. Kubernetes is now overseen by Linux Foundation’s Cloud Native Computing Foundation. Containers like Docker and Kubernetes are an increasingly popular way to deploy apps and AI. A number of solutions have been made available in the past year that focus on deploying AI from Kubernetes containers, particularly for organizations interested in portability for on-premises or cloud-based inference such as Intel’s Nauta project for distributed deep learning and Linux Foundation’s Acumos AI platform. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
"
15,417
2,021
"Microsoft launches Azure Arc machine learning and container services | VentureBeat"
"https://venturebeat.com/2021/03/02/microsoft-launches-azure-arc-machine-learning-and-container-services"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft launches Azure Arc machine learning and container services Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At its Ignite 2021 virtual conference this week, Microsoft announced updates across Azure Arc , its service that brings Azure products and management to multiple clouds, edge devices, and datacenters with auditing, compliance, and role-based access. Azure Arc-enabled Kubernetes, which launched in preview last May, is now generally available. And a new offering, Azure Arc-enabled Machine Learning, is entering preview starting today. The benefits of AI and machine learning can feel intangible at times, but surveys show this hasn’t deterred enterprises from adopting the technology. Business use of AI grew 270% from 2015 to 2019, according to Gartner, while Deloitte says 62% of respondents to its corporate October 2018 report deployed some form of AI , up from 53% in 2017. But adoption doesn’t always meet with success, as the roughly 25% of companies that have seen half their AI projects fail will tell you. Microsoft says Azure Arc-enabled Machine Learning and Azure Arc-enabled Kubernetes are designed to help companies strike a balance between enjoying the benefits of the cloud and maintaining apps and workloads on-premises for regulatory and operational reasons. With the new services, companies can deploy Kubernetes clusters and build machine learning models where data lives, as well as managing applications and models from a single dashboard. “By extending machine learning capabilities to hybrid and multi-cloud environments, customers can run training models where the data lives while leveraging existing infrastructure investments,” Azure general manager Arpan Shah said in a press release. “This reduces data movement and network latency while meeting security and compliance requirements … In one click, data scientists can now use familiar tools to build machine learning models consistently and reliably across on-premises, multi-cloud, and edge.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Microsoft first announced Arc, which competes with Google’s Anthos and Amazon’s AWS Outpust service, at Ignite 2019. Beyond allowing containerized workloads from anywhere, Arc supports hardware running Linux and Windows Server and features the ability to bring services like Azure SQL Database and Azure Database for PostgreSQL to datacenter platforms. 
Developers can use Arc’s controls to build containerized apps that take advantage of the Azure tools of their choice, like Azure Resource Manager, Azure Shell, Azure Portal, API, and Azure Policy, while IT teams can launch and configure the apps using GitOps-based configuration management. "
15,418
2,021
"The cost of cloud, a trillion dollar paradox | VentureBeat"
"https://venturebeat.com/2021/06/04/the-cost-of-cloud-a-trillion-dollar-paradox"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The cost of cloud, a trillion dollar paradox Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There is no doubt that the cloud is one of the most significant platform shifts in the history of computing. Not only has cloud already impacted hundreds of billions of dollars of IT spend, it’s still in early innings and growing rapidly on a base of over $ 100B of annual public cloud spend. This shift is driven by an incredibly powerful value proposition — infrastructure available immediately, at exactly the scale needed by the business — driving efficiencies both in operations and economics. The cloud also helps cultivate innovation as company resources are freed up to focus on new products and growth. source: Synergy Research Group However, as industry experience with the cloud matures — and we see a more complete picture of cloud lifecycle on a company’s economics — it’s becoming evident that while cloud clearly delivers on its promise early on in a company’s journey, the pressure it puts on margins can start to outweigh the benefits, as a company scales and growth slows. Because this shift happens later in a company’s life, it is difficult to reverse as it’s a result of years of development focused on new features, and not infrastructure optimization. Hence a rewrite or the significant restructuring needed to dramatically improve efficiency can take years, and is often considered a non-starter. Now, there is a growing awareness of the long-term cost implications of cloud. As the cost of cloud starts to contribute significantly to the total cost of revenue (COR) or cost of goods sold (COGS), some companies have taken the dramatic step of “repatriating” the majority of workloads (as in the example of Dropbox) or in other cases adopting a hybrid approach (as with CrowdStrike and Zscaler). Those who have done this have reported significant cost savings: In 2017, Dropbox detailed in its S-1 a whopping $75M in cumulative savings over the two years prior to IPO due to their infrastructure optimization overhaul, the majority of which entailed repatriating workloads from public cloud. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Yet most companies find it hard to justify moving workloads off the cloud given the sheer magnitude of such efforts, and quite frankly the dominant, somewhat singular, industry narrative that “cloud is great”. 
(It is, but we need to consider the broader impact, too.) Because when evaluated relative to the scale of potentially lost market capitalization — which we present in this post — the calculus changes. As growth (often) slows with scale, near term efficiency becomes an increasingly key determinant of value in public markets. The excess cost of cloud weighs heavily on market cap by driving lower profit margins. The point of this post isn’t to argue for repatriation, though; that’s an incredibly complex decision with broad implications that vary company by company. Rather, we take an initial step in understanding just how much market cap is being suppressed by the cloud, so we can help inform the decision-making framework on managing infrastructure as companies scale. To frame the discussion: We estimate the recaptured savings in the extreme case of full repatriation, and use public data to pencil out the impact on share price. We show (using relatively conservative assumptions!) that across 50 of the top public software companies currently utilizing cloud infrastructure, an estimated $100B of market value is being lost among them due to cloud impact on margins — relative to running the infrastructure themselves. And while we focus on software companies in our analysis, the impact of the cloud is by no means limited to software. Extending this analysis to the broader universe of scale public companies that stands to benefit from related savings, we estimate that the total impact is potentially greater than $500B. Our analysis highlights how much value can be gained through cloud optimization — whether through system design and implementation, re-architecture, third-party cloud efficiency solutions, or moving workloads to special purpose hardware. This is a very counterintuitive assumption in the industry given prevailing narratives around cloud vs. on-prem. However, it’s clear that when you factor in the impact to market cap in addition to near term savings, scaling companies can justify nearly any level of work that will help keep cloud costs low. Unit economics of cloud repatriation: The case of Dropbox, and beyond To dimensionalize the cost of cloud, and understand the magnitude of potential savings from optimization, let’s start with a more extreme case of large scale cloud repatriation: Dropbox. When the company embarked on its infrastructure optimization initiative in 2016, they saved nearly $75M over two years by shifting the majority of their workloads from public cloud to “lower cost, custom-built infrastructure in co-location facilities” directly leased and operated by Dropbox. Dropbox gross margins increased from 33% to 67% from 2015 to 2017, which they noted was “primarily due to our Infrastructure Optimization and an… increase in our revenue during the period.” But that’s just Dropbox. So to help generalize the potential savings from cloud repatriation to a broader set of companies, Thomas Dullien, former Google engineer and co-founder of cloud computing optimization company Optimyze, estimates that repatriating $100M of annual public cloud spend can translate to roughly less than half that amount in all-in annual total cost of ownership (TCO) — from server racks, real estate, and cooling to network and engineering costs. The exact savings obviously varies company, but several experts we spoke to converged on this “formula”: Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud. 
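That rule of thumb is easy to sanity-check. The sketch below restates it as arithmetic using the article's own $100M example; the fractions are the experts' rough estimates, not a costing model.

```python
# Back-of-the-envelope version of the repatriation "formula" described above:
# running equivalent workloads yourself costs roughly one-third to one-half of
# the public cloud bill. Figures are the article's rough estimates.
def repatriation_savings(annual_cloud_spend: float,
                         tco_fraction_low: float = 1 / 3,
                         tco_fraction_high: float = 1 / 2) -> tuple[float, float]:
    """Return (low, high) estimated annual savings after repatriation."""
    best_case = annual_cloud_spend * (1 - tco_fraction_low)    # cheap end of self-hosting
    worst_case = annual_cloud_spend * (1 - tco_fraction_high)  # expensive end
    return worst_case, best_case

if __name__ == "__main__":
    low, high = repatriation_savings(100_000_000)  # the $100M example from the text
    print(f"Estimated annual savings: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
```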
Furthermore, a director of engineering at a large consumer internet company found that public cloud list prices can be 10 to 12x the cost of running one’s own data centers. Discounts driven by use-commitments and volume are common in the industry, and can bring this multiple down to single digits, since cloud compute typically drops by ~30-50% with committed use. But AWS still operates at a roughly 30% blended operating margin net of these discounts and an aggressive R&D budget — implying that potential company savings due to repatriation are larger. The performance lift from managing one’s own hardware may drive even further gains. Across all our conversations with diverse practitioners, the pattern has been remarkably consistent: If you’re operating at scale, the cost of cloud can at least double your infrastructure bill. The true cost of cloud When you consider the sheer magnitude of cloud spend as a percentage of the total cost of revenue (COR), 50% savings from cloud repatriation is particularly meaningful. Based on benchmarking public software companies (those that disclose their committed cloud infrastructure spend), we found that contractually committed spend averaged 50% of COR. Actual spend as a percentage of COR is typically even higher than committed spend: A billion dollar private software company told us that their public cloud spend amounted to 81% of COR, and that “cloud spend ranging from 75 to 80% of cost of revenue was common among software companies”. Dullien observed (from his time at both industry leader Google and now Optimyze) that companies are often conservative when sizing cloud commit size, due to fears of being overcommitted on spend, so they commit to only their baseline loads. So, as a rule of thumb, committed spend is often typically ~20% lower than actual spend… elasticity cuts both ways. Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X. If we extrapolate these benchmarks across the broader universe of software companies that utilize some public cloud for infrastructure, our back-of-the-envelope estimate is that the cloud bill reaches $8B in aggregate for 50 of the top publicly traded software companies (that reveal some degree of cloud spend in their annual filings). While some of these companies take a hybrid approach — public cloud and on-premise (which means cloud spend may be a lower percentage of COR relative to our benchmarks) — our analysis balances this, by assuming that committed spend equals actual spend across the board. Drawing from our conversations with experts, we assume that cloud repatriation drives a 50% reduction in cloud spend, resulting in total savings of $4B in recovered profit. For the broader universe of scale public software and consumer internet companies utilizing cloud infrastructure, this number is likely much higher. source: company S-1 and 10K filings; a16z analysis While $4B of estimated net savings is staggering on its own, this number becomes even more eye-opening when translated to unlocked market capitalization. Since all companies are conceptually valued as the present value of their future cash flows, realizing these aggregate annual net savings results in market capitalization creation well over that $4B. How much more? 
One rough proxy is to look at how the public markets value additional gross profit dollars: High-growth software companies that are still burning cash are often valued on gross profit multiples, which reflects assumptions about the company’s long term growth and profitable margin structure. (Commonly referenced revenue multiples also reflect a company’s long term profit margin, which is why they tend to increase for higher gross margin businesses even on a growth rate-adjusted basis). Both capitalization multiples, however, serve as a heuristic for estimating the market discounting of a company’s future cash flows. Among the set of 50 public software companies we analyzed, the average total enterprise value to 2021E gross profit multiple (based on CapIQ at time of publishing) is 24-25X. In other words: For every dollar of gross profit saved, market caps rise on average 24-25X times the net cost savings from cloud repatriation. (Assumes savings are expressed net of depreciation costs incurred from incremental CapEx if relevant). This means an additional $4B of gross profit can be estimated to yield an additional $100B of market capitalization among these 50 companies alone. Moreover, since using a gross profit multiple (vs. a free cash flow multiple) assumes that incremental gross profit dollars are also associated with certain incremental operating expenditures, this approach may underestimate the impact to market capitalization from the $4B of annual net savings. For a given company, the impact may be even higher depending on its specific valuation. To illustrate this phenomenon [please note this is not investment advice, see full disclosures below and at https://a16z.com/disclosures/ ], take the example of infrastructure monitoring as a service company Datadog. The company traded at close to 40X 2021 estimated gross profit at time of publishing, and disclosed an aggregate $225M 3-year commitment to AWS in their S-1. If we annualize committed spend to $75M of annual AWS costs — and assume 50% or $37.5M of this may be recovered via cloud repatriation — this translates to roughly $1.5B of market capitalization for the company on committed spend reductions alone! While back-of-the-envelope analyses like these are never perfect, the directional findings are clear: market capitalizations of scale public software companies are weighed down by cloud costs, and by hundreds of billions of dollars. If we expand to the broader universe of enterprise software and consumer internet companies, this number is likely over $500B — assuming 50% of overall cloud spend is consumed by scale technology companies that stand to benefit from cloud repatriation. For business leaders, industry analysts, and builders, it’s simply too expensive to ignore the impact on market cap when making both long-term and even near-term infrastructure decisions. The paradox of cloud Where do we go from here? On one hand, it is a major decision to start moving workloads off of the cloud. For those who have not planned in advance, the necessary rewriting seems SO impractical as to be impossible; any such undertaking requires a strong infrastructure team that may not be in place. And all of this requires building expertise beyond one’s core, which is not only distracting, but can itself detract from growth. Even at scale, the cloud retains many of its benefits — such as on-demand capacity, and hordes of existing services to support new projects and new geographies. 
But on the other hand, we have the phenomenon we’ve outlined in this post, where the cost of cloud “takes over” at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it. So what can companies do to free themselves from this paradox? As mentioned, we’re not making a case for repatriation one way or the other; rather, we’re pointing out that infrastructure spend should be a first-class metric. What do we mean by this? That companies need to optimize early, often, and, sometimes, also outside the cloud. When you’re building a company at scale, there’s little room for religious dogma. While there’s much more to say on the mindset shifts and best practices here — especially as the full picture has only more recently emerged — here are a few considerations that may help companies grapple with the ballooning cost of cloud. Cloud spend as a KPI. Part of making infrastructure a first-class metric is making sure it is a key performance indicator for the business. Take, for example, Spotify’s Cost Insights, a homegrown tool that tracks cloud spend. By tracking cloud spend, the company enables engineers, and not just finance teams, to take ownership of cloud spend. Ben Schaechter, formerly at Digital Ocean, now co-founder and CEO of Vantage, observed that not only have they been seeing companies across the industry look at cloud cost metrics alongside core performance and reliability metrics earlier in the lifecycle of their business, but also that “Developers who have been burned by surprise cloud bills are becoming more savvy and expect more rigor with their team’s approach to cloud spend.” Incentivize the right behaviors. Empowering engineers with data from first-class KPIs for infrastructure takes care of awareness, but doesn’t take care of incentives to change the way things are done. A prominent industry CTO told us that at one of his companies, they put in short-term incentives like those used in sales (SPIFFs), so that any engineer who saved a certain amount of cloud spend by optimizing or shutting down workloads received a spot bonus (which still had a high company ROI since the savings were recurring). He added that this approach — basically, “tie the pain directly to the folks who can fix the problem” — actually cost them less, because it paid off 10% of the entire organization, and brought down overall spend by $3M in just six months. Notably, the company CFO was key to endorsing this non-traditional model. Optimization, optimization, optimization. When evaluating the value of any business, one of the most important factors is the cost of goods sold or COGS — and for every dollar that a business makes, how many dollars does it cost to deliver? Customer data platform company Segment recently shared how they reduced infrastructure costs by 30% (while simultaneously increasing traffic volume by 25% over the same period) through incremental optimization of their infrastructure decisions. There are a number of third-party optimization tools that can provide quick gains to existing systems, ranging anywhere from 10-40% in our experience observing this space. Think about repatriation up front. Just because the cloud paradox — where cloud is cheaper and better early on and more costly later in a company’s evolution — exists, doesn’t mean a company has to passively accept it without planning for it. 
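One way to plan for it, at the code level, is to keep workloads behind thin abstractions so they can be moved later. The sketch below is a hypothetical illustration of that idea rather than anything prescribed in this piece: the interface, class names, bucket, and paths are invented for the example, while the boto3 calls shown are the standard S3 client operations.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Minimal storage seam so application code isn't coupled to one provider."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class S3Store(BlobStore):
    """Cloud-backed implementation (assumes AWS credentials are configured)."""
    def __init__(self, bucket: str):
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket
    def read(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()
    def write(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

class LocalStore(BlobStore):
    """Stand-in for colocated or on-prem storage reachable over a mounted path."""
    def __init__(self, root: str):
        self._root = root
    def read(self, key: str) -> bytes:
        with open(f"{self._root}/{key}", "rb") as f:
            return f.read()
    def write(self, key: str, data: bytes) -> None:
        with open(f"{self._root}/{key}", "wb") as f:
            f.write(data)

def save_report(store: BlobStore, report: bytes) -> None:
    # Application code depends only on the interface, so moving the workload
    # from cloud object storage to colocation is a configuration change.
    store.write("reports/latest.bin", report)
```

The same pattern applies to queues, caches, and batch compute; the point is the seam, not this particular interface.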
Make sure your system architects are aware of the potential for repatriation early on, because by the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or more modular architectural investment early on — including architecting to be able to move workloads to the optimal location and not get locked in — reduces the work needed to repatriate workloads in the future. The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud. Incrementally repatriate. There’s also no reason that repatriation (if that’s indeed the right move for your business) can’t be done incrementally, and in a hybrid fashion. We need more nuance here beyond either/or discussions: for example, repatriation likely only makes sense for a subset of the most resource-intensive workloads. It doesn’t have to be all or nothing! In fact, of the many companies we spoke with, even the most aggressive take-back-their-workloads ones still retained 10 to 30% or more in the cloud. While these recommendations are focused on SaaS companies, there are also other things one can do; for instance, if you’re an infrastructure vendor, you may want to consider options for passing through costs — like using the customer’s cloud credits — so that the cost stays off your books. The entire ecosystem needs to be thinking about the cost of cloud. * * * How the industry got here is easy to understand: The cloud is the perfect platform to optimize for innovation, agility, and growth. And in an industry fueled by private capital, margins are often a secondary concern. That’s why new projects tend to start in the cloud, as companies prioritize velocity of feature development over efficiency. But now, we know. The long term implications have been less well understood — which is ironic given that over 60% of companies cite cost savings as the very reason to move to the cloud in the first place! For a new startup or a new project, the cloud is the obvious choice. And it is certainly worth paying even a moderate “flexibility tax” for the nimbleness the cloud provides. The problem is, for large companies — including startups as they reach scale — that tax equates to hundreds of billions of dollars of equity value in many cases… and is levied well after the companies have already deeply committed themselves to the cloud (and are often too entrenched to extricate themselves). Interestingly, one of the most commonly cited reasons to move to the cloud early on — a large up-front capital outlay (CapEx) — is no longer required for repatriation. Over the last few years, alternatives to public cloud infrastructures have evolved significantly and can be built, deployed, and managed entirely via operating expenses (OpEx) instead of capital expenditures. Note too that as large as some of the numbers we shared here seem, we were actually conservative in our assumptions. Actual spend is often higher than committed, and we didn’t account for overages-based elastic pricing. The actual drag on industry-wide market caps is likely far higher than we’ve penciled here. Will the 30% margins currently enjoyed by cloud providers eventually winnow through competition and change the magnitude of the problem? Unlikely, given that the majority of cloud spend is currently directed toward an oligopoly of three companies. 
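Since the argument rests on that arithmetic, here is the translation from savings to market value written out as a short script, so readers can vary the assumptions themselves. The inputs (the roughly $8B aggregate cloud bill, the assumed 50% savings, the 24-25x average gross profit multiple, and Datadog's $225M three-year AWS commitment valued at roughly 40x) are the figures cited earlier in the piece; this is a rough proxy, not a valuation model.

```python
def market_cap_impact(annual_cloud_spend, savings_rate, gross_profit_multiple):
    """Gross profit recovered from cloud savings, valued at a gross profit multiple."""
    recovered_profit = annual_cloud_spend * savings_rate
    return recovered_profit, recovered_profit * gross_profit_multiple

# Aggregate estimate across the 50 public software companies analyzed.
savings, cap = market_cap_impact(8_000_000_000, savings_rate=0.5, gross_profit_multiple=24.5)
print(f"Aggregate: ~${savings / 1e9:.0f}B recovered profit -> ~${cap / 1e9:.0f}B of market cap")

# The Datadog illustration: $225M committed to AWS over three years, ~40x multiple.
annualized_commit = 225_000_000 / 3
savings, cap = market_cap_impact(annualized_commit, savings_rate=0.5, gross_profit_multiple=40)
print(f"Datadog:   ~${savings / 1e6:.1f}M recovered profit -> ~${cap / 1e9:.1f}B of market cap")
```

At the midpoint multiple the aggregate lands just under the $100B figure quoted above, and the Datadog case reproduces the roughly $1.5B estimate.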
And here’s a bit of dramatic irony: Part of the reason Amazon, Google, and Microsoft — representing a combined ~5 trillion dollar market cap — are all buffered from the competition is that they have high profit margins driven in part by running their own infrastructure, enabling ever greater reinvestment into product and talent while buoying their own share prices. And so, with hundreds of billions of dollars in the balance, this paradox will likely resolve one way or the other: either the public clouds will start to give up margin, or they’ll start to give up workloads. Whatever the scenario, perhaps the largest opportunity in infrastructure right now is sitting somewhere between cloud hardware and the unoptimized code running on it. Acknowledgements: We’d like to thank everyone who spoke with us for this article (including those named above), sharing their insights from the frontlines. Companies selected denoted some degree of public cloud infrastructure utilization in 10Ks. Sarah Wang is a partner at Andreessen Horowitz focused on late stage venture investments across enterprise, consumer, fintech, and bio. Martin Casado is a general partner at Andreessen Horowitz, where he focuses on enterprise investing. This story originally appeared on A16z.com. Copyright 2021 "
15,419
2,021
"Opinion: Andreessen Horowitz is dead wrong about cloud  | VentureBeat"
"https://venturebeat.com/2021/06/10/opinion-andreessen-horowitz-is-dead-wrong-about-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion Opinion: Andreessen Horowitz is dead wrong about cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In The Cost of Cloud, a Trillion-Dollar Paradox , Andreessen Horowitz Capital Management’s Sarah Wang and Martin Casado highlighted the case of Dropbox closing down its public cloud deployments and returning to the datacenter. Wang and Casado extrapolated the fact that Dropbox and other enterprises realized savings of 50% or more by bailing on some or all of their cloud deployments in the wider cloud-consuming ecosphere. Wang and Casado’s conclusion? Public cloud is more than doubling infrastructure costs for most enterprises relative to legacy data center environments. Unfortunately, the article contains a number of common misconceptions. As practitioners supporting over 800 cloud environments, we see deployments at every stage of life — from as early as the architecture (planning) phase all the way through to long-duration deployments that have already been subjected to multiple rounds of carefully targeted optimization. In our view, a generalized debate over whether on-prem environments are cheaper to operate than cloud is incredibly simplistic. Well-architected and well-operated cloud deployments will be highly successful compared to datacenter deployments in most cases. However, “highly successful” may or may not mean less expensive. A singular comparison between the cost of cloud versus the cost of a datacenter shouldn’t be made as an isolated analysis. Instead, it’s important to analyze the differential ROI of one set of costs versus the alternative. While this is true for any expenditure, it’s doubly true for public cloud, since migration can have profound impacts on revenue. Indeed, the major benefits of the cloud are often related to revenue, not cost. Two common examples of the cloud’s ability to enhance revenue: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Acceleration of time-to-market cycles The possibility of rapid expansions in infrastructure (within or even across geographies) to capture revenue blooms The revenue enhancements associated with both can exceed any theoretical cost premiums for cloud by significant amounts, resulting in very attractive returns on investment when these technologies are applied well. Short-term thinking brings short-term results An oversimplified counterexample to Wang and Casado’s assertions will make our logic clear. 
Suppose a private equity firm approaches a manufacturing concern and advises them that they can cut their cost of revenue metric in half by shuttering half of their factory. What happens to production volumes if they follow this advice? What happens to revenue? If the plant was running at or near capacity, their production capacity — and therefore their revenue — would also be cut in half. Now imagine the half of the factory they closed actually had the most productive assembly lines. Their costs have dropped by half, but their revenue will drop by more. This approach may result in some favorable near-term financial results, but investors with longer-term goals are going to take it on the chin down the road when revenue collapses. If an enterprise bails on the cloud to save costs, how might their time-to-market or revenue elasticity be impacted? What opportunities would be foregone? These dynamics must be considered, and that means analyzing ROI, not isolated metrics like cost of sales or cost of goods sold. The Dropbox repatriation: statistical cherry-picking What’s more, by extrapolating the results of successful repatriations to the wider ecosphere of cloud consumers, the authors take entirely too many liberties with the notion that one cloud deployment can be easily compared with another from a cost perspective. The true “cost” of a public cloud is a function of: The appropriateness of cloud for specific workloads The architecture Efficient operation By definition, the cloud deployments that were successfully repatriated failed along some or all of these dimensions, as directly evidenced by their successful repatriations. But even in cases where the repatriations were deemed successful, it is hardly certain that repatriation was the best option. For example, if a cloud deployment was poorly architected and/or based almost entirely on lift-and-shift workloads, could those workloads have been refactored to cloud-native instead of returned to a datacenter? We have seen savings of 90% and more in such cases. To extrapolate the “savings realized” in “successful” repatriations cases to the wider universe of cloud consumers and thereby conclude that most or all cloud deployments are equivalent failures represents a wholesale backfire of logic. The fact that these deployments were poorly architected or were better-suited to run on-prem hardly means that all cloud workloads are. If the majority of cloud deployments resulted in outcomes this unfavorable, the stampede to the cloud would not have begun and would not be continuing today. Don’t worry, you’re not wasting more than half of your infrastructure spend For modern enterprises, the question is not “cloud versus datacenter” but “which workloads for cloud, which workloads for datacenter?” The process steps for analyzing this decision involve asking the following questions: Which workloads benefit from the elasticity, geo-flexibility, or technological innovation cloud offers? Which workloads can really “take off” if migrated or currently rely on innovative new services only offered in the cloud? These are the best candidates to be run on a public cloud. Are current or planned workloads architectured to use cloud-native technologies where possible, or are they lifted-and-shifted clones of datacenter infrastructure? If they can be cloned 1:1 in a datacenter, then companies should always consider re-architecting the workload to take advantage of cloud-native technologies. 
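To illustrate the differential-ROI framing the piece advocates, comparing alternatives on net value rather than on cost alone, here is a deliberately oversimplified sketch. Every number in it is hypothetical and invented for the example; the point is only that the cheaper infrastructure option can still lose once forgone revenue is counted. The article's own concrete examples follow.

```python
def net_value(annual_revenue, annual_infra_cost):
    """Net annual value of an infrastructure option: revenue it enables minus its cost."""
    return annual_revenue - annual_infra_cost

# Hypothetical workload: the cloud option costs more, but elasticity and faster
# time to market let it capture more revenue than the on-prem alternative.
cloud = net_value(annual_revenue=12_000_000, annual_infra_cost=2_000_000)
on_prem = net_value(annual_revenue=10_500_000, annual_infra_cost=1_200_000)

print(f"Cloud net value:   ${cloud:,}")
print(f"On-prem net value: ${on_prem:,}")
print("Cloud wins despite the higher bill" if cloud > on_prem else "On-prem wins")
```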
For example, you can move your Hadoop to cloud as is, but we’ve seen identical queries run in BigQuery 73x faster. You could keep running on VMs, but you could save 60% by refactoring into containers. You could stay with your teraflops on CPU, but you can get an exaflop (yes, that’s 1,000,000x faster) on TPUv4. Is the ROI of infrastructure spend in the cloud being measured and compared to a model of the same infrastructure costs on-prem? And vice versa? Regular validation should be carried out to verify that the correct mix of on-prem and public cloud workloads is being maintained. Critically, the ROI analysis must factor revenue opportunity costs of one alternative over the other. For example, if a workload is being considered for repatriation, the model must factor the revenue degradation that would be imposed by eliminating the cloud’s elasticity and thereby slowing time to market, causing stock-outs instead of capitalizing on revenue blooms, etc. Are best-in-class practices for operating public cloud infrastructure being followed? Has a well-trained and equipped FinOps team been established? If you’re running large workloads in the public cloud, it’s not time to panic. It’s highly unlikely you are wasting half or two-thirds of your infrastructure costs by running in the cloud without any incremental benefits to show for it. By following the guidelines above, you can ensure that both your cloud and on-prem deployments are successful, without bailing out of one or the other as a result of tunnel vision on cost alone. As the director of the FinOps group at SADA, Rich Hoyer develops and delivers services designed to help clients monitor, measure and improve the value of their Google Cloud services. "
15,420
2,021
"Verizon-AWS private mobile edge computing available to U.S. enterprises | VentureBeat"
"https://venturebeat.com/2021/10/23/verizon-aws-private-mobile-edge-computing-available-to-u-s-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Verizon-AWS private mobile edge computing available to U.S. enterprises Share on Facebook Share on X Share on LinkedIn Verizon is setting up 5G partnerships. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Verizon Communications — ranked 20 in the Fortune 500 list with the latest reported revenue of more than $128 billion — has made its private mobile edge computing solution with AWS Outposts available for enterprise clients in the U.S. The Verizon 5G Edge with AWS Outposts is a cloud computing platform at its core. Enterprise customers find it useful for the massive bandwidth and low latency it offers. These features are highly beneficial for enterprises in more ways than one. On the one hand, it enhances the efficiency of real-time applications such as intelligent logistics, factory automation, and robotics. On the other hand, it ensures increased levels of security, reliability, and productivity. The collaboration between Verizon and AWS started in August 2020 when it launched Verizon 5G Edge with AWS Wavelength. Structurally, the solution served as a public edge computing platform. It combined the capabilities of Verizon’s public wireless networks with AWS compute, storage, and database services. The collaboration has so far resulted in 13 Wavelength Zones across the U.S. It provides clients with a mobile edge computing infrastructure that helps develop, deploy, and scale ultra-low-latency applications. More Wavelength Zones will come this year. According to Tami Erwin, Verizon Business CEO, combined Verizon and AWS helps customers “unlock the true potential of 5G and edge computing, which together will enable innovative applications involving computer vision, augmented and virtual reality, and machine learning.” The low lag and high bandwidth ensure near-real-time information processing with actionable data-backed insights that help optimize operations. The on-ground use of mobile edge computing One of the companies using Verizon 5G Edge with AWS Outposts and On-Site 5G to enhance their innovation strength is Corning, Inc. Corning is one of the leading materials science and advanced manufacturing services in the U.S. It has deployed the Verizon-AWS solution on the factory floor of the world’s largest fiber optic cable plant. The goal is to conduct high-speed, high-volume data collection with assured quality and on-site reasoning using machine learning. Michael A. 
Bell, senior vice president and general manager of Corning Optical Communications, said Verizon 5G Edge with AWS Outposts offers safe, precise, and efficient use of 5G and private mobile edge computing. Companies using a dedicated private network need to connect and manage multiple devices at scale and speed to benefit from the edge computing infrastructure of on-site 5G and 5G Edge with AWS Outposts. Security and almost real-time connectivity will help create customized customer experiences without compromising the low latency and data residency requirements. According to Erwin, more benefits are yet to come, as she believes that we are still “scraping the surface of the new experiences that will be enabled by having 5G and edge compute on site.” "
15,421
2,020
"AI lifecycle management startup Cnvrg.io launches free community tier | VentureBeat"
"https://venturebeat.com/2020/03/31/ai-lifecycle-management-startup-cnvrg-io-launches-free-community-tier"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI lifecycle management startup Cnvrg.io launches free community tier Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cnvrg.io , a data science startup headquartered in Jerusalem and New York, today released a community version of its machine learning automation platform designed to help enterprises manage and scale AI. CEO Yochay Ettun says the release was motivated in part by the influx of social distancing and remote work stemming from the COVID-19 pandemic. “The release of cnvrg.io CORE is our contribution to the strong data science community responsible for advancing AI innovation,” said Ettun. “CORE’s release marks a new vision for the data science field. As data scientists, we built CORE to fill the need that so many data scientists require, to focus less on infrastructure and more on what they do best — algorithms.” CORE facilitates machine learning workflow management with end-to-end AI model tracking and monitoring. Its built-in cluster orchestration supports hybrid cloud and multi-cloud configurations, and its compute querying and autoscaling — which can be fine-tuned from a dashboard — ensure that every available resource is fully utilized. CORE can be installed on-premises or in a cloud environment directly from Cnvrg.io’s website. Developers can connect data sources to it to build and automatically retrain machine learning models; run machine learning experiments at scale to ensure reproducibility; and deploy to production with any framework or programming language. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! There’s no shortage of orchestration platforms in the over $1.5 billion global machine learning market. Amazon recently rolled out SageMaker Studio, an extension of its SageMaker platform that automatically collects all code and project folders for machine learning in one place. Google offers its own solution in Cloud AutoML , which supports tasks like classification, sentiment analysis, and entity extraction, as well as a range of file formats, including native and scanned PDFs. Not to be outdone, Microsoft recently introduced enhancements to Azure Machine Learning , its service that enables users to architect predictive models, classifiers, and recommender systems for cloud-hosted and on-premises apps, and IBM has a comparable product in Watson Studio AutoAI. 
But two-year-old Cnvrg.io, which is backed by Jerusalem Venture Partners and private investors Kevin Bermeister and Prashant Malik, has managed to raise $8 million in venture capital to date and attract customers that include Nvidia, Sisense, NetApp, Lightricks, and Wargaming.net. "
15,422
2,021
"3 traps companies should avoid in their AI journeys | VentureBeat"
"https://venturebeat.com/2021/09/09/three-traps-companies-should-avoid-in-their-ai-journeys"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 3 traps companies should avoid in their AI journeys Share on Facebook Share on X Share on LinkedIn AI adoption Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was written by Bob Friday, Vice President and Chief Technology Officer of Juniper’s AI-Driven Enterprise Business. In a recent survey of 700 IT pros around the world, 95% said they believe their companies would benefit from embedding artificial intelligence (AI) into daily operations, products, and services, and 88% want to use AI as much as possible. When was the last time you heard that many people agree on anything? Yes, AI is all the rage because it is the next step in the evolution of automation in doing tasks on par with human domain experts whether it is driving a car or helping doctors diagnose disease. But make no mistake while we are starting to see the fruits of AI here and there: By and large, the industry and most organizations are still in the early days of AI adoption. And as with any new momentous technology, organizations need to develop an adoption strategy specific to their organization to get the full benefits of AI automation and deep learning technology. The complication as Gartner put it : “How to make AI a core IT competency still eludes most organizations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But failing to learn how to leverage the benefits AI/ML will leave an organization at a competitive disadvantage in terms of customer experience and operational efficiency. So, what’s the way to get there? Here are three common traps that companies should steer clear of as they get themselves AI-ready. 1. Data and Mission vagueness Great wine requires good grapes and great AI starts with good data, but great AI also needs a clear business ROI. The business benefit ROI and the data needed to automate the domain expert task must be clearly defined at the outset of the project if the AI solution is to deliver real value and continue receiving the resources to grow from pilot to production. AI ingredients, like algorithms and machine learning, sound very science-y, but business AI projects should never resemble science experiments. The “Shiny New Toy Syndrome” is a real pitfall for AI. To avoid succumbing to it, organizations should tie every AI project to specific business outcomes and know the business outcome question and what task you are trying to do on par with a domain expert. 
For example, is the objective of using intelligent automation to relieve IT team members of mundane, routine tasks so they can focus on higher-value activities? Beyond the IT department, is it to help the marketing department gain competitive advantage by delivering more personalized experiences to customers? Is it automating more of the sales process to boost lead volume and close rate? C-suite leaders would have to be living under a rock at this point not to recognize AI’s potential and the fact that investment is required for AI-ready technology stacks, but they’re going to want to understand how it’s good for the business. Everyone in a company needs to recognize this reality, and ward off any squishiness in an AI project’s reason for being. 2. Lack of AI/ML skills in the company The AI talent shortage is often cited as one of the tech industry’s toughest challenges. It has even been called a national security threat amid China’s ambitions to become the world leader in AI. According to O’Reilly’s 2021 AI Adoption in the Enterprise report, which surveyed more than 3,500 business leaders, a lack of skilled people and difficulty hiring top the list of AI challenges. To make sure their companies have the talent to fully leverage the benefits of AI/ML, leaders should start both hiring and training programs. On the hiring side, companies should look for talent beyond the typical data science degree and look at adjacent degrees such as physics, math and self-taught computer science. But hiring alone is not enough of a strategy for companies to build their AI workforces, especially when they’re competing with behemoths like Amazon and Facebook. Another good solution to consider: If you can’t hire them, train them. While it’s unreasonable to expect someone to become a data scientist after taking a couple of online Coursera classes, engineers with physics, math, and computer science backgrounds have the foundation to master data science and deep learning. Sources of talent may exist inside the organization in unexpected places. Take, for example, the large business intelligence (BI) ecosystems that many companies have. These ecosystems have talent already familiar with the Bayesian statistical analysis that is common to most machine learning algorithms. In making sure they have the right skills to support AI initiatives, it makes sense for companies to re-train existing employees as much as possible in addition to having an AI/ML hiring strategy. Companies need to get creative in pinpointing those employees and AI/ML talent. 3. Building rather than buying I’ve seen companies get bogged down by trying to build their own AI tools and solutions from scratch rather than buying them or leveraging open source. The algorithms being used to develop AI solutions are fast evolving and companies should look to partner with vendors in their industry who are leading the AI wave. Unless it happens to be one of the company’s core competencies, building AI solutions is usually an overreach. Why reinvent the wheel when you can buy one of the many commercial AI tools on the market? Deloitte’s most recent State of AI in the Enterprise report, which surveyed 2,737 IT and line-of-business executives worldwide, found that “seasoned” and “skilled” AI adopters are more likely than “starters” to buy the AI systems they need. “This suggests that many organizations may go through a period of internal learning and experimentation before they know what’s necessary and then seek it from the market,” the report said. 
Companies that avoid these three traps will have a much easier time accelerating their AI adoption and enjoying the benefits of revenue growth, lower operating costs, and improved customer experience. Bob Friday is Vice President and Chief Technology Officer of Juniper’s AI-Driven Enterprise Business. "
15,423
2,019
"H2O raises $72.5 million to simplify enterprise AI deployment | VentureBeat"
"https://venturebeat.com/2019/08/20/h2o-raises-72-5-million-to-simplify-enterprise-ai-deployment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages H2O raises $72.5 million to simplify enterprise AI deployment Share on Facebook Share on X Share on LinkedIn H2O cofounder and CEO Sri Ambati speaks at the startup's 2015 H2O World conference in Mountain View, California. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI and machine learning can be complicated for a layperson to fully grasp, yet surveys show this hasn’t deterred enterprises from adopting the technology in droves. Business use of AI grew a whopping 270% over the past four years, according to Gartner, while Deloitte says 62% of respondents to its corporate October 2018 report deployed some form of AI , up from 53% the year before. But adoption doesn’t always meet with success, as the roughly 25% of companies who’ve seen half of their AI projects fail will tell you. That’s why managed machine learning solutions firms like H2O.ai are growing at a healthy clip. Indeed, more than 200,000 data scientists and over 18,000 organizations, including Aetna, Booking.com, Comcast, Hitachi, Nationwide Insurance, PwC, and Walgreens, actively use H2O’s data science tools. In perhaps another sign of the industry’s upward momentum, H2O today announced that it has raised $72.5 million in a round led by Goldman Sachs and the Ping An Global Voyager fund, with continued investments from Wells Fargo and Nexus Venture Partners. Jade Mandel from Goldman Sachs will join H2O’s board of directors as part of the round, which brings the Mountain View, California-based company’s total raised to $147 million. This follows a $20 million series B raised in November 2015 and a $40 million series C in November 2017. It also comes after H2O tripled its customer base and increased its data scientist headcount by 10%. H2O founder and CEO Sri Ambati said the capital will accelerate the company’s global sales, R&D, and marketing efforts. He also expects it to bolster H2O’s ongoing AI for good initiatives (with a focus on wildlife and water conservation) along with its academic programs and AI centers of excellent that afford students, researchers, and universities free access to its product portfolio. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “H2O.ai is democratizing AI and powering the imagination of every entrepreneur and business globally — we are making them the true AI superpowers,” said Ambati. “Our customers are unlocking discovery in every sphere and walk of life and challenging the dominance of technology giants. 
This will be fun.” Above: H2O’s driverless AI platform. For the uninitiated, H2O was founded in 2012 by Ambati, who previously served as a research assistant at the Indian Space Research Organization. H2O’s suite of solutions is designed to simplify machine learning deployment at scale across verticals like financial services, insurance, health care, telecommunications, retail, pharmaceutical, and marketing, with applications in customer churn prediction, credit risk scoring, and more. H2O’s eponymous flagship product is an AI platform that runs on bare metal or atop existing clusters and supports a range of statistical models and algorithms. Its AutoML functionality automatically runs through models and their hyperparameters to produce a leaderboard of the best models, taking advantage of the computing power of distributed systems and in-memory computing to accelerate data processing and model training. According to H2O, it’s able to ingest data directly from HDFS, Spark, S3, Azure Data Lake, or virtually any other local or cloud data source. H2O’s Sparkling Water marries H2O with Spark, Apache’s distributed cluster computing framework, by initializing H2O services on Spark and providing a way to access data stored in Spark and H2O data structures. As for H2O’s H2OGPU, it’s a GPU-accelerated AI package with APIs in Python and R that enables users to tap graphics cards for accelerated machine learning model development. Then there’s H2O Driverless AI, an “automatic” AI solution that guides customers through the process of creating their own AI-imbued apps and services. The latest version, which was released today, adds the ability to create recipes that extend and customize the platform, in addition to administration and collaboration features for model management and implementation and new explainable AI capabilities for fairness and bias checks. Among the Driverless AI features debuting this week are health checks and data science metrics around drift detection, model degradation, A/B testing, and alerts for recalibration and retraining. In addition, the platform can now perform disparate impact analysis to test for sociological biases in models, allowing users to analyze whether a model produces adverse outcomes for different demographic groups even if those features were not included in the original model. The first set of vertical-specific Driverless AI solutions making their debut target anti-money laundering, customer monitoring, and malicious domain detection. Over 100 open source recipes curated by top achievers on Google’s Kaggle community are also available, all of which feed into an interactive dashboard that explains their outputs in plain English. For customers with highly specific deployment requirements, H2O offers enterprise support with training, dedicated account managers, accelerated issue resolution, and direct enhancement requests. Plans also include access to Enterprise Steam or H2O Sparkling Water for the orchestration of machine learning models in Hadoop or Spark clusters. “We have been a big believer in H2O.ai since day one. We are ecstatic to see their success across the world with so many companies, in so many industries,” said Nexus Venture Partners managing director Jishnu Bhattacharjee in a statement. “AI in the enterprise is a reality that H2O.ai is driving. 
We are thrilled to continue backing Sri and team as they accelerate their growth trajectory.” The lengthy list of H2O’s existing and previous investors includes Barclays, Capital One, Crane Ventures, CreditEase, New York Life, Nvidia, Paxion Ventures, SST Holdings, TransAmerica, and Walden River Wood. H2O has offices in New York and Prague, in addition to its Mountain View headquarters. "
15,424
2,021
"Feature store repositories emerge as an MLOps linchpin for advancing AI | VentureBeat"
"https://venturebeat.com/2021/01/15/feature-store-repositories-emerge-as-an-mlops-linchpin-for-advancing-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Feature store repositories emerge as an MLOps linchpin for advancing AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A battle for control over machine learning operations (MLOps) is beginning in earnest as organizations embrace feature store repositories to build AI models more efficiently. A feature store is at its core a data warehouse through which developers of AI models can share and reuse the artifacts that make up an AI model as well as an entire AI model that might need to be modified or further extended. In concept, feature store repositories play a similar role as a Git repository does in enabling developers to build applications more efficiently by sharing and reusing code. Early pioneers of feature store repositories include Uber, which built a platform dubbed Michaelangelo, and Airbnb, which created a feature store dubbed Zipline. But neither of those platforms are available as open source code. Leading providers of feature store repositories trying to fill that void include Tecton, Molecula, Hopsworks, Splice Machine, and, most recently, Amazon Web Services (AWS). There is also an open source feature store project, dubbed Feast, that counts among its contributors Google and Tecton. It can take a data science team six months or longer to construct a single AI model, so pressure to accelerate those processes is building. Organizations that employ AI models not only want to build more of them faster, but AI models deployed in production environments also need to be either regularly updated or replaced as business conditions change. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Less clear right now, however, is to what degree feature store repositories represent a standalone category versus being a foundational element of a larger MLOps platform. As investment capital starts to pour into the category, providers of feature store platforms are trying to have it both ways. Splice Machine, for example, offers a SQL-based feature store platform that organizations can deploy apart from its platform for managing data science processes. “It’s important to modularize the feature store so it can be used in other environments,” said Splice Machine CEO Monte Zweben. 
Fresh off raising an additional $17.6 million in funding, Molecula is also positioning its feature store as a standalone offering in addition to being a foundation around which MLOps processes will revolve. In fact, Molecula is betting that feature stores, in addition to enabling AI models to be constructed more efficiently, will also become critical to building any type of advanced analytics application, said Molecula CEO H.O. Maycotte. To achieve that goal, Molecula built its own storage architecture to eliminate all the manual copy-and-paste processes that make building AI models and other types of advanced analytics applications so cumbersome today, he noted. “It’s not just for MLOps,” said Maycotte. “Our buyer is the data engineer.” Tecton, meanwhile, appears to be more focused on enabling the creation of a best-of-breed MLOps ecosystem around its core feature store platform. “Feature stores will be at the center of an MLOps toolchain,” said Tecton CEO Mike Del Balso. Casting a shadow over each of these vendors, however, are cloud service providers that will make feature store repositories available as a service. Most AI models are trained on a public cloud because of the massive amounts of data required and the cost of the graphics processing units (GPUs) involved. Adding a feature store repository to a cloud service that is already being employed to build an AI model is simply a logical extension. However, providers of feature store platforms contend it’s only a matter of time before MLOps processes span multiple clouds. Many enterprise IT organizations are going to standardize on a feature store repository that makes it simpler to share AI models and their components across multiple clouds. Regardless of how MLOps evolves, the need for a centralized repository for building AI models has become apparent. The issue enterprise IT organizations need to address now is determining which approach makes the most sense today, because whatever feature store platform they select now will have a major impact on their AI strategy for years to come. "
15,425
2,021
"AT&T launches Connected Climate Initiative to reduce carbon emissions | VentureBeat"
"https://venturebeat.com/2021/08/31/att-launches-connected-climate-initiative-to-reduce-carbon-emissions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AT&T launches Connected Climate Initiative to reduce carbon emissions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AT&T today launched a Connected Climate Initiative (CCI) that promises to bring together partners and researchers in academia to further reduce carbon emissions. The overall goal is to enable AT&T to help businesses reduce greenhouse emissions by 1 billion metric tons (aka 1 gigaton) by 2035. A gigaton is equal to approximately 15% of U.S. greenhouse gas emissions and nearly 3% of global energy-related emissions generated in 2020. Organizations that are lending their support to CCI include Microsoft, Equinix, Duke Energy, Texas A&M University System, The University of Missouri, SunPower, Badger Meter, IndustLabs, Traxen, BSR, RMI, Third Derivative, and the Carbon Trust. At the core of that effort is a drive to encourage enterprise IT organizations to rely more on renewable energy sources , coupled with the shifting of more application workloads to the cloud. Equinix, for example, is committing to enabling customers that connect to its datacenters over AT&T networks to reduce carbon emissions by using renewable energy sources such as fuel cells, which create energy via electrochemical reactions. The hosting services company already makes the energy source available to 10,000 customers spanning 220 datacenters, said Jennifer Ruch, director for sustainability and environmental, social, and governance (ESG) at Equinix. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Initiatives already underway Many of those organizations have carbon offset goals of their own that, beyond social responsibility concerns, are being driven by incentives and penalties that will be imposed by governments around the world. “It’s become an economic issue,” Rauch said. As part of its Future First initiative , Equinix has already committed to achieving global climate neutrality by 2030 and has issued over $3.7 billion in Green Bonds to improve power usage effectiveness (PUE) rating across all its datacenters. Microsoft, meanwhile, is working to deploy AT&T Guardian devices on its Azure Sphere internet of things security platform to enable businesses to securely collect and analyze data to identify efficiencies and reduce sources of carbon emissions. Azure Sphere is built on the Azure cloud, which Microsoft claims is 98% more carbon efficient than an on-premises IT environment. 
Duke Energy at the same time is committing to working with AT&T to explore how broadband technologies may help accelerate the transition to renewable energy. The provider of electricity has previously committed to achieving a net-zero carbon emissions goal by 2050. Texas A&M University System’s RELLIS Campus will research how wireless 5G technology might speed emissions reduction in industries with high emissions such as transportation, while the University of Missouri is exploring how 5G might reduce carbon emissions generated by buildings. Seeking carbon credits In general, carbon emissions generated by IT are being scrutinized more as organizations look for ways to earn carbon credits. IT is only one source of carbon emissions for an organization, but given the amount of energy datacenters consume, finding more efficient ways to consume IT infrastructure resources is becoming a higher priority. Many larger organizations are already finding themselves buying carbon credits from other organizations to offset carbon emissions. The more energy efficient they become, the less of a need there might be to acquire those carbon credits. In fact, cloud service providers are adding carbon emissions as a reason to shift workloads to cloud computing environments that make use of renewable energy sources to reduce the total amount of carbon generated by organizations that deploy applications on shared infrastructure. Many organizations that continue to operate their own datacenters don’t always have the resources required to reengineer a datacenter in a way that relies more on renewable energy sources. Of course, AT&T is only the latest in a series of providers of IT infrastructure that have launched climate control initiatives. It’s not clear to what degree those efforts are driving organizations to select one provider of IT infrastructure over another. However, all things being relatively equal from a cost and performance perspective, carbon emissions might very well tip the balance in favor of one vendor over another. One way or another, however, carbon emissions are about to become a lot bigger factor in IT decision-making processes than anyone would have thought a few short years ago. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,426
2,014
"What medtech can learn from digital health | VentureBeat"
"https://venturebeat.com/2014/10/27/what-medtech-can-learn-from-digital-health"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What medtech can learn from digital health Share on Facebook Share on X Share on LinkedIn Brian Russell of Zephyr Anywhere, left, speaks with Stacy Enxing Seng of Covidien Vascular Therapies at VentureBeat's 2014 HealthBeat conference in San Francisco on Oct. 27. SAN FRANCISCO — Big medical technology companies have the trust of hospitals. But digital-health startups are the ones developing the most exciting new technologies. Given partnerships and acquisitions lately, it looks like these two kinds of businesses can help each other. Take medical-device maker Covidien, which bought wearable-device startup Zephyr Anywhere earlier this year. Covidien is publicly traded, and it’s only natural that it looks out for technology that can improve performance, lower costs, and enhance patient experiences, while also being predictable from a top-line and bottom-line perspective. “Zephyr was able to do that beautifully,” Stacy Enxing Seng, president of Covidien’s vascular therapies division, said onstage today at VentureBeat’s 2014 HealthBeat conference. Look, too, at Intel’s Basis acquisition and Medtronic’s Corventis purchase as more proof that big companies are looking for health innovations. (Meanwhile, even Medtronic moved to buy Covidien itself. And in addition to outright acquisitions, medical-technology companies’ venture arms — like the one Covidien has, for example — can help these old-school companies “gain insights on emerging therapies or possibilities that can be incorporated into the company,” Enxing Seng said. That might be because startups can get technology out quickly, while legacy companies have deep experience with the regulatory approval process. “It takes a lot of dollars typically to move it through the health care system,” Enxing Seng said. The young startups, though, don’t always match up culturally with the big companies they team up with. “They don’t swear at meetings,” said Brian Russell, Zephyr Anywhere’s chief executive, triggering audience laughter. But more generally, startups do “need to learn their language” in order to explain to them the ways in which their hip technologies can align with the needs of the med-tech providers. In any case, the collaboration is happening on a greater scale as of late, and it seems these new digital-health startups have an approach the med-tech vendors like. “I think consumerification of health care is very, very important,” Russell said. HealthBeat is a two-day conference covering how new ways of tracking our personal data can improve our health and health care system. 
"
15,427
2,020
"AI, 5G, and IoT can help deliver the promise of precision medicine | VentureBeat"
"https://venturebeat.com/2020/02/06/ai-5g-and-medical-iot-can-help-deliver-the-promise-of-precision-medicine"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI, 5G, and IoT can help deliver the promise of precision medicine Share on Facebook Share on X Share on LinkedIn This article is part of the Technology Insight series, made possible with funding from Intel. When my son was a toddler, he went to his pediatrician for a routine CAT scan. Easy stuff. Just a little shot to subdue him for a few minutes. He’d be awake and finished in a jiffy. Except my son didn’t wake up. He lay there on the clinic table, unresponsive, his vitals slowly falling. The clinic had no ability to diagnose his condition. Five minutes later, he was in the back of an ambulance. My wife and I were powerless to do anything but look on, frantic with worry for our boy’s life. It turned out that he’d had a bad reaction to a common hydrochloride sedative. Once that was figured out, doctors quickly brought him back around, and he was fine. What if… But what if, through groundbreaking mixtures of compute, database, and AI technologies, a quick round of analyses on his blood and genome could have revealed his potential for such a reaction before it became a critical issue? What if it were possible to devise a course of treatment specific to him and his body’s unique conditions, rather than accepting a cookie-cutter approach and dealing with the ill effects immediately after? And what if that could be done with small, even portable medical devices equipped with high-bandwidth connectivity to larger resources? In short, what if, through the power of superior computing and next-generation wireless connectivity, millions of people like my son could be quickly, accurately treated on-site rather than endure the cost and trauma of legacy medical methods? Above: Pinpointing diagnosis and treatment that’s right for you. These questions I asked about my son are at the heart of today’s efforts in precision medicine. It’s the practice of crafting treatments tailored to individuals based on their characteristics. Precision medicine spans an increasing number of fields, including oncology, immunology, psychiatry, and respiratory disorders, and its back end is filled with big data analytics. Key Points: Precision medicine uses a patient’s individual characteristics, including genetics, to identify highly specific, optimized healthcare steps. 5G and new generations of wireless and processors are needed to provide the speed and accessibility required. Optimizing workloads for parallelized processing makes precision medicine more practical. Visions like Intel’s “All in One Day” use AI, 5G, and medical IoT to take a patient from examination to precision treatment in 24 hours. 
Data drives individual-centric care Pairing drugs to gene characteristics only covers a fraction of the types of data that can be pooled to target specific patient care. Consider the Montefiore Health System in the Bronx. It has deployed a semantic data lake, an architecture for collecting large, disparate volumes of data and collating them into usable forms with the help of AI. Besides the wide range of data specific to patients collected onsite (including from a host of medical sensors and devices), Montefiore healthcare professionals also collate data from sources as needed, including PharmGKB databank (genetic variations and drug responses), the National Institute of Health’s Unified Medical Language System (UMLS), and the Online Mendelian Inheritance in Man (human genomic data). Long story short, the Intel/Cloudera/Franz-based solution proved able to accurately create risk scores for patients, predict whether they would have a critical respiratory event, and advise doctors on what actions to take. Above: The semantic data lake architecture implemented by Montefiore Health System pulls from multiple databases to address open-ended queries and provide a range of actionable healthcare results. “We are using information for the most critically ill patients in the institution to try and identify those at risk of developing respiratory failure (so) we can change the trajectory,” noted Dr. Andrew Racine, Montefiore’s system SVP and chief medical officer. Now that institutions like Montefiore can perform AI-driven analytics across many databases, the next step may be to integrate off-site communications via 5G networking. Doing so will enable physicians to contribute data from the field, from emergency sites to routine in-home visits, and receive real-time advice on how to proceed. Not only can this enable healthcare professionals to deliver faster, more accurate diagnoses, it may permit general physicians to offer specialized advice tailored to a specific patient’s individual needs. Enabling caregivers like this with guidance from afar is critical in a world that, researchers say , faces a shortage 15 million healthcare workers by 2030. What it will take. (AI, for starters) Enabling services like these is not trivial — in any way. Consider the millions of people who might need to be genetically sequenced in order to arrive at a broad enough sample population for such diagnostics. That’s only the beginning. Different databases must be combined, often over immense distances via the cloud, without sacrificing patients’ rights or privacy. Despite the clear need for this, according to the Wall Street Journal , only 4% of U.S. cancer patients in clinical trials have their genomic data made available for research, leaving most treatment outcomes unknown to the research and diagnostic communities. New methods of preserving patient anonymity and data security across systems and databases should go a long way toward remedying this. One promising example : using the processing efficiencies of Intel Xeon platforms in handling the transparent data encryption (TDE) of Epic EHR patient information with Oracle Database. Advocates say the more encryption and trusted execution technologies, such as SGX , can be integrated from medical edge devices to core data centers, the more the public will learn to allow its data to be collected and used. Beyond security, precision medicine demands exceptional compute power. 
Molecular modeling and simulations must be run to assess how a drug interacts with particular patient groups, and then perhaps run again to see how that drug performs the same actions in the presence of other drugs. Such testing is why it can take billions of dollars and over a decade to bring a single drug to market. Fortunately, many groups are employing new technologies to radically accelerate this process. Artificial intelligence plays a key role in accelerating and improving the repetitive, rote operations involved in many healthcare and life sciences tasks. Pharmaceuticals titan Novartis, for example, uses deep neural network (DNN) technology to accelerate high-content screening, which is the analysis of cellular-level images to determine how cells would react when exposed to varying genetic or chemical interactions. By updating the processing platform to the latest Xeon generation, parallelizing the workload, and using tools like the Intel Data Analytics Acceleration Library (DAAL) and Intel Caffe, Novartis realized nearly a 22x performance improvement compared to the prior configuration. These are the sorts of benefits healthcare organizations can expect from updating legacy processes with platforms optimized for acceleration through AI and high levels of parallelization. Faster than trained radiologists Interestingly, such order-of-magnitude leaps in capability, while essential for taming the torrents of data flowing into medical databases, can also be applied to medical IoT devices. Think about X-ray machines. They’re basically cameras that require human specialists (radiologists) to review images and look for patterns of health or malady before passing findings to doctors. According to GE Healthcare, hospitals now generate 50 petabytes of data annually. A “staggering” 90% comes from medical imaging, GE says, with more than 97% unanalyzed or unused. Beyond working to use AI to help reduce the massive volume of “reject” images, and thus cut costs on multiple fronts, GE Healthcare teamed with Intel to create an X-ray system able to capture images and detect a collapsed lung (pneumothorax) within seconds. Simply being able to detect pneumothorax incidents with AI represents a huge leap. However, part of the project’s objective was to deliver accurate results more quickly and so help to automate part of the diagnostic workload jamming up so many radiology departments. Intel helped to integrate its OpenVINO toolkit, which enables development of applications that emulate human vision and visual pattern recognition. Those workloads can then be adapted for processing across CPUs, GPUs, AI-specific accelerators, and other processors. With the optimization, the GE X-ray system performed inferences (image assessments) 3.3x faster than without. Completion time was less than one second per image — dramatically faster than highly trained radiologists. And, as shown in the image above, GE’s Optima XR240amx X-ray system is portable. So this IoT device can deliver results from a wide range of places and send results directly to doctors’ devices in real time over fast connections, such as 5G. A future version could feed analyzed X-rays straight into patient records. There, they become another factor in the multivariate pool that constitutes the patient’s dataset, which, in turn, enables personalized recommendations by doctors. What we’re dealing with By now, you see the problem/solution pattern: Traditional medical practices are having trouble scaling across a growing, aging global population. 
Part of the problem stems from the medical industry generating far more data than its infrastructure can presently handle. AI can help to automate many of the tasks performed by health specialists. By applying AI to a range of medical data types and sources, care recommendations can be tailored to individual patients based on their specific characteristics for greater accuracy and efficacy, rather than suggesting blanket practices more likely to yield unwanted outcomes. AI can be accelerated through the use of hardware/software platforms designed specifically for those workloads. AI-enabled platforms can be embedded within and connected to medical IoT devices, providing new levels of functionality and value. IoT devices and their attached ecosystem can be equipped with connectivity such as 5G to extend their utility and value to those growing populations. The U.S. provides a solid illustration of the impact of population in this progression. According to the U.S. Centers for Disease Control (CDC), even though the rate of new cancer incidence has flattened in the last several years, the country’s rising population pushed the number of new cases diagnosed from 1.5 million in 2010 to 1.9 million in 2020, driven in part by rising rates of overweight, obesity, and infections. The white paper “Accelerating Clinical Genomics to Transform Cancer Care” paints a stark picture of the durations involved in traditional approaches to handling new cancer cases from initial patient visit to data-driven treatment. At each step, delays plague the process — extending patient anxiety, increasing pain, even leading to unnecessary death. All in one day Intel created an initiative called “All in One Day” to set a goal for the medical industry: take a patient from initial scan(s) to precision medicine-based actions for remediation in only 24 hours. This includes genetic sequencing, analysis that yields insights into the cellular- and molecular-level pathways involved in the cancer, and identification of gene-targeted drugs able to deliver the safest, most effective remedy possible. To make All in One Day possible, the industry will require secure, broadly trusted methods for regularly exchanging petabytes of data. (Intel notes that a typical genetic sequence creates roughly 1TB of data. Now, multiply that across the thousands of genome sequences involved in many genomic analysis operations.) The crunching of these giant data sets calls for AI and computational horsepower beyond what today’s massively parallel accelerators can do. But the performance is coming. Because doctors will have to serve ever-larger patient populations, expect them to need data results and visualizations delivered to wherever they may be, including in forms animated or rendered in virtual reality. This will require 5G-type wireless connectivity to facilitate sufficient data bandwidth to whatever medical IoT devices are being used. If successful, more people will get more personalized help and relief than ever before. The medical IoT and 5G dovetail with other trends now reshaping modern medicine and making these visions everyday reality. A 2018 Intel survey showed that 37% of healthcare industry respondents already use AI; the number should rise to 54% by 2023. Promising new products and approaches appear daily. A few recent examples are here, here, and here. 
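Returning to the X-ray example above, the inference step in such a pipeline can be sketched generically. The snippet below is not the OpenVINO-based system GE and Intel built; it is a hypothetical stand-in that runs a pretrained pneumothorax classifier exported to ONNX through onnxruntime, and the model file, input shape, single-output assumption, and threshold are all invented for illustration.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical pretrained pneumothorax classifier exported to ONNX.
session = ort.InferenceSession("pneumothorax_classifier.onnx")
input_name = session.get_inputs()[0].name

def flag_pneumothorax(xray: np.ndarray, threshold: float = 0.5) -> bool:
    """Run one preprocessed chest X-ray through the model and flag likely pneumothorax.

    Assumes the exported model takes a (1, 1, 224, 224) float32 tensor and
    returns a single probability shaped (1, 1); both are assumptions for this sketch.
    """
    (scores,) = session.run(None, {input_name: xray.astype(np.float32)})
    probability = float(scores[0][0])
    return probability >= threshold

# A radiology worklist could then route flagged studies to a doctor first.
fake_scan = np.zeros((1, 1, 224, 224), dtype=np.float32)
print(flag_pneumothorax(fake_scan))
```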
As AI adoption continues and pairs with faster hardware, more diverse medical devices, and faster connectivity, perhaps we will soon reach a time when no parent ever has to watch an unresponsive child whisked away by ambulance because of adverse reactions that might have been avoided through precision medicine and next-gen technology. "
15,428
2,021
"How AI can enable better health care outcomes | VentureBeat"
"https://venturebeat.com/2021/07/12/how-ai-can-enable-better-health-care-outcomes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI can enable better health care outcomes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence isn’t just a tool for pure tech — health care providers can use it too. Clinical practice and AI go together, three top health care leaders at national enterprises agreed during a panel at Transform 2021 hosted by VentureBeat general manager Shuchi Rana. Using data to reduce medical waste and over-testing can help hospital systems save money, said Dr. Doug Melton, head of clinical and customer analytics at Evernorth, a subsidiary of insurance giant Cigna. “Before, we had unsupervised learning, and it was harder to do. You had to be prescriptive in your hypotheses,” Melton said. AI has the potential to help clinicians improve patient outcomes, said Dr. Taha Kass-Hout, director and chief medical officer at Amazon Web Services. Medical records can be a great source of data to develop algorithms , speech recognition, and decision-making tools that could help doctors and nurses identify risk factors for serious illnesses such as congestive heart failure. Early breast and lung cancer detection is another outcome that not only helps patients, but also benefits enterprise leaders. At Evernorth, Melton’s team used machine learning to analyze pre-certifications for radiology and past claims data, identifying who was at higher risk of developing more serious health issues down the line. ML improves prevention and holistic management, Melton said, and improves cost savings for both the patient and provider by as much as 3 times. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data analytics are also key to reducing other hospital costs, said Dr. Joe Colorafi, system VP of clinical data science and analytics at Commonspirit Health. By crunching the numbers, researchers can find which hospital stays last too long and when clinicians are over-assigned to a patient. Collecting additional data from users can also help providers determine a holistic health care plan, Melton said. For instance, information on stressors in patients’ lives and other social determinants of health, such as access to fresh food and stable housing, can anchor plans to improve health outcomes. “When we do that, I think we can have whole-person medicine instead of acute care management,” Melton said. Think of AI as a toolbox to understand the information presented to health care providers, Kass-Hout said. 
Using machine learning to narrow down symptoms and diagnoses also means building a repository of information to improve health systems. For instance, the accuracy of Amazon Web Services’ model to predict congestive heart failure increased by 4% as the algorithms took in notes about how physicians were treating the condition and monitoring patients for symptoms. "
15,429
2,021
"Transform 2021 puts the spotlight on women in AI | VentureBeat"
"https://venturebeat.com/2021/06/11/transform-2021-puts-the-spotlight-on-women-in-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Transform 2021 puts the spotlight on women in AI Share on Facebook Share on X Share on LinkedIn Submit your nominations for the women in AI awards by July 9th at 5pm. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. VentureBeat is proud to bring back the Women in AI Breakfast and Awards online for Transform 2021. In the male-dominated tech industry, women are constantly faced with the gender equity gap. There is so much work in the tech industry to become more inclusive of bridging the gender gap while at the same time creating a diverse community. VentureBeat is committed year after year to emphasize the importance of women leaders by giving them the platform to share their stories and obstacles they face in their male-dominated industries. As part of Transform 2021 , we are excited to host our annual Women in AI Breakfast , presented by Capital One, and recognize women leaders’ accomplishments with our Women in AI Awards. Women in AI Breakfast: VentureBeat’s third annual Women in AI Breakfast , presented by Capital One, will commemorate women leading the AI industry. Join the digital networking session and panel on July 12 at 7:35 a.m. Pacific. This digital breakfast includes networking and a discussion on the topic surrounding “Women in AI: a seat at the table.” Our panelists will explore how we can get more women into the AI workforce, plus the roles and responsibilities of corporates, academia, governments, & society as a whole in achieving this goal. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Featured speakers include Kay Firth Butterfield, Head of AI and Machine Learning and Member of the Executive Committee, World Economic Forum; Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce; Tiffany Deng, Program Management Lead- ML Fairness and Responsible AI, Google; and Teuta Mercado, Responsible AI Program Director, Capital One. Registration for Transform 2021 is required for attendance. Women in AI Awards: Once again, VentureBeat will be honoring extraordinary women leaders at the Women in AI Awards. The five categories this year include Responsibility & Ethics of AI, AI Entrepreneur, AI Research, AI Mentorship, and Rising Star. Submit your nominations by July 9th at 5 p.m. Pacific. Learn more about the nomination process here. 
The winners of the 2021 Women in AI Awards will be presented at VB Transform on July 16th, alongside the AI Innovation Awards. Register for Transform 2021 to join online. "
15,430
2,021
"What is a graph database? | VentureBeat"
"https://venturebeat.com/2021/02/08/what-is-a-graph-database"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is a graph database? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The problem: The app must store a collection of people and who they know. Sometimes it must find out everyone who knows someone who knows Bob. Sometimes it must look further for everyone who is three hops away. Sometimes it must find the friends of the friends of Bob who like fishing or listening to symphonies. The graph database was originally designed to store networks — that is, the connections between several elements such as people, places they might visit, or the things they might use. Relational databases can find single connections between them, but they get bogged down when asked to negotiate and analyze complex relationships between multiple parties. This can be so computationally challenging that it can threaten a business that grows too quickly. Doubling the number of users will quadruple the possible relationships, for instance. One of the early social networks, Friendster, struggled to manage the growing complexity of its social graph when it grew popular. Ultimately it lost its first-mover advantage to those with better database technology. The graph database grew in popularity with the rise of social networks, but there’s no reason to limit it to tracking people and their friendships. It can be used to find all of the books that cite a book that was in Isaac Newton’s personal library. Or all of the chemicals that react with table salt. Or every building that can be reached by waiting at no more than two traffic lights. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The word “graph” is often confusing because mathematicians use the word for several different constructs, and most people have experience with only the best known version: the line that plots the relationship between two variables like time and money. By contrast, graph databases specialize in storing more arbitrary relationships that may not be continuous. The graph database shines when asked to search through the networks defined by these connections. They have specialized algorithms for compiling the layers of relationships that radiate out from one entry. Some of the common use cases Recommendation engines — People often search for similar products as others do. That is, if one person buys shoelaces after buying sneakers, there’s a good chance others will do the same. 
Recommendation engines look for people who are connected by their purchases and then look for other closely connected products in the graph. Fraud detection — Good customers rarely commit fraud, and fraudsters often follow the same pattern again and again. Building out a graph of transactions can identify fraud by flagging suspicious patterns that often have no connection to legitimate transactions. Knowledge networks — Some artificial intelligence researchers have been creating graphs of facts and the connections between them so that computers can approximate human reasoning by following paths. Routing — Finding a path in the world is much like finding a path in an abstract graph that’s modeling the roads. If the intersections are nodes, and the streets between them the links or edges, then the graph is a good abstract representation of the world. Choosing a path for an autonomous car just requires a sequence of nodes and links between them. In general, standard databases can easily find connections that are only one hop away. Graph databases are optimized to handle queries that can follow multiple hops and collect all nodes within a radius. A standard database, by contrast, can’t search through multiple links or hops without issuing multiple queries. How the legacy players are approaching it Microsoft added special node and edge tables to SQL Server to make it simpler to execute more complicated searches. These tables can be searched with traditional SELECT commands, but they shine when the special MATCH command can look for particular patterns of items and the connections between them. A MATCH written by a dog breeder, for instance, might look for two potential parents that aren’t close cousins. Oracle offers a separate Graph Server that integrates with its main products, and the combination will store data in both graph and traditional tables. The tool can run more than 60 different graph algorithms, like finding the shortest connection between people or looking for particularly tight groups. Both extensions can work with standard SQL, but both are also integrating GraphQL engines for users who might want to use that query language. GraphQL, incidentally, was designed to simplify some queries. While it does a good job with graph databases, it also shines with basic relational tables. Many users are deploying GraphQL for tasks that aren’t strictly graph applications. IBM, meanwhile, integrated the Apache TinkerPop analytics framework with Db2. Queries are written in a language called Gremlin that is translated into more standard SQL requests. The upstarts A number of new startups are building graph databases from scratch. Some are purely commercial, and many offer hybrid models. Neo4j is a full-featured, open source database that can be run locally or purchased as a service from Neo4j’s own Aura cloud. The company also offers tools for browsing through the networks (Bloom) and implementing more sophisticated network search algorithms for analyzing the most important nodes in the network and predicting performance. The centrality algorithms, for instance, find the most influential nodes using the number and structure of the connections. The community detection algorithms search for tightly connected groups of nodes. ArangoDB’s eponymous product is available as either a community license, an enterprise product, or an instance that can be started in any of the major clouds. The company says its product is “multi-modal,” which means that the nodes can either act like NoSQL key/value stores, parts of a graph, or both. 
The Enterprise version adds extra features for spreading larger graphs across multiple machines for faster performance. The tool works to keep connected records or nodes on the same machine to speed algorithms that require local searching or traversing. Amazon’s Neptune is a distributed graph database that’s optimized for very large datasets and fast response times. It works with two popular query languages (TinkerPop Gremlin and SPARQL). It is fully managed and priced as a service that’s integrated with the other AWS services. Dgraph is a distributed graph database with a core that’s available under the Apache license, wrapped by a collection of enterprise routines that support larger data sets. The main query language is GraphQL, developed by Facebook for more general data retrieval. Dgraph has also extended the core language with a set of routines focused on searching and extracting the graph’s connections. This extension, called DQL, can execute more sophisticated tasks like finding the node with the greatest number of incoming edges matching a particular predicate. JanusGraph is a project of the Linux Foundation that is designed to store and analyze very large graphs. The work is supported by a number of companies, including Target. The source code is released under the Apache license, and it works alongside some of the big NoSQL databases like Apache HBase, Google’s Bigtable, or Oracle’s Berkeley DB. The code is tightly integrated with many of the other Apache projects, like Spark for distributed analysis of the graph, Lucene for storing and searching raw text, and TinkerPop for querying and visualizing the results. TigerGraph is built for large enterprises with big datasets that may want to run the tool locally or subscribe to a service in TigerGraph Cloud. The analytics are aimed at industries with well-understood use cases, like the regulations that ask banks to track money flows among accounts to stop money laundering. Is there anything a graph database as a service can’t do? Graph databases are largely supersets of regular databases, and many of them were created by adding new table structures to existing databases. They usually can do everything a regular database can accomplish and also search through networks defined in the data, too. Some simpler graph search algorithms may not need the extra features of a graph database. A skilled programmer can duplicate them with a bit of code. The results, though, can be much slower, especially when the analysis requires multiple queries. That means more hardware and more licenses to handle the same workload. The real question is whether the use case needs the extra features and the sacrifices associated with the graph tables. Will your algorithm need to make use of a larger collection of loosely connected objects? Is there some idea of locality or proximity that must be part of the algorithm? Is the rule about closeness or proximity strong enough that the algorithms will be able to ignore nodes that aren’t particularly close? For instance, a restaurant recommendation algorithm might only suggest places that are nearby. Will users be happy with the recommendations if they don’t include a particularly notable place that’s a perfect fit but happens to lie just outside the search radius? This article is part of a series on enterprise database technology trends. 
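As a closing illustration of the multi-hop searches discussed in this article, the sketch below asks a Neo4j instance for everyone within three hops of Bob, using the official Python driver and a variable-length Cypher pattern. The connection details and the Person/KNOWS schema are placeholders invented for the example.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

# Connection details are placeholders for the example.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# The variable-length pattern [:KNOWS*1..3] walks one to three hops out from Bob,
# something a relational schema would typically express with chained self-joins.
QUERY = """
MATCH (bob:Person {name: $name})-[:KNOWS*1..3]-(other:Person)
WHERE other <> bob
RETURN DISTINCT other.name AS name
"""

with driver.session() as session:
    names = [record["name"] for record in session.run(QUERY, name="Bob")]

print(names)
driver.close()
```

Expressing the same three-hop search in plain SQL usually means several self-joins or a recursive query, which is exactly the kind of workload the graph-native engines above are built to handle directly.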
"
15,431
2,021
"How NASA is using knowledge graphs to find talent | VentureBeat"
"https://venturebeat.com/2021/07/24/how-nasa-is-using-knowledge-graphs-to-find-talent"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How NASA is using knowledge graphs to find talent Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of NASA’s biggest challenges is identifying where data science skills reside within the organization. Not only is data science a new discipline – it’s also a fast-evolving one. Knowledge for each role is constantly shifting due to technological and business demands. That’s where David Meza, acting branch chief of people analytics and senior data scientist at NASA, believes graph technology can help. His team is building a talent mapping database using Neo4j technology to build a knowledge graph to show the relationships between people, skills, and projects. Meza and his team are currently working on the implementation phase of the project. They eventually plan to formalize the end user application and create an interface to help people in NASA search for talent and job opportunities. Meza told VentureBeat more about the project. VentureBeat: What’s the broad aim of this data led project? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! David Meza: It’s about taking a look at how we can identify the skills, knowledge and abilities, tasks, and technology within an occupation or a work role. How do we translate that to an employee? How do we connect it to their training? And how do we connect that back to projects and programs? All of that work is a relationship issue that can be connected via certain elements that associate all of them together – and that’s where the graph comes in. VentureBeat: Why did you decide to go with Neo4j rather than develop internally? Meza: I think there was really nothing out there that provided what we were looking for, so that’s part of it. The other part of the process is that we have specific information that we’re looking for. It’s not very general. And so we needed to build something that was more geared towards our concepts, our thoughts, and our needs for very specific things that we do at NASA around spaceflights, operations, and things like that. VentureBeat: What’s the timeline for the introduction of Neo4j? Meza: We’re still in the implementation phase. The first six to eight months was about research and development and making sure we had the right access to the data. Like any other project, that’s probably our most difficult task – making sure we have the right access, the right information and thinking about how everything is related. 
While we were looking at that, we also worked in parallel on other issues: what’s the model going to look like, what algorithms are we going to use, and how are we going to train these models? We’ve got the data in the graph system now and we’re starting to produce a beta phase of an application. This summer through the end of the year, we’re looking towards formalizing that application to make it more of an interface that an end user can use. VentureBeat: What’s been the technical process behind the implementation of Neo4j? Meza: The first part was trying to think about what’s going to be our occupational taxonomy. We looked at: “How do we identify an occupation? What is the DNA of an occupation?” And similarly, we looked at that from an employee perspective, from a training perspective, and from a program or project perspective. So simply put, we broke everything down into three different categories for each occupation: a piece of knowledge, a skill, and a task. VentureBeat: How are you using those categories to build a data model? Meza: If you can start identifying people that have great knowledge in natural language processing, for example, and the skills they need to do a task, then from an occupation standpoint you can say that specific workers need particular skills and abilities. Fortunately, there’s a database from the Department of Labor called O*NET , which has details on hundreds of occupations and their elements. Those elements consist of knowledge, skills, abilities, tasks, workforce characteristics, licensing, and education. So that was the basis for our Neo4j graph database. We then did the same thing with training. Within training, you’re going to learn a piece of knowledge; to learn that piece of knowledge, you’re going to get a skill; and to get that skill, you’re going to do exercises or tasks to get proficient in those skills. And it’s similar for programs: we can connect back to what knowledge, skills, and tasks a person needs for each project. VentureBeat: How will you train the model over time? Meza: We’ve started looking at NASA-specific competencies and work roles to assign those to employees. Our next phase is to have employees validate and verify that the associated case — around knowledge, skills, abilities, tasks, and technologies — that what we infer based on the model is either correct or incorrect. Then, we’ll use that feedback to train the model so it can do a little bit better. That’s what we’re hoping to do over the next few months. VentureBeat: What will this approach mean for identifying talent at NASA? Meza: I think it will give the employees an opportunity to see what’s out there that may interest them to further their career. If they want to do a career change, for example, they can see where they are in that process. But I also think it will help us align our people better across our organization, and we will help track and maybe predict where we might be losing skills, where we maybe need to modify skills based on the shifting of our programs and the shifting of our mission due to administration changes. So I think it’ll make us a little bit more agile and it will be easier to move our workforce. VentureBeat: Do you have any other best practice lessons for implementing Neo4j? Meza: I guess the biggest lesson that I’ve learned over this time is to identify as many data sources that can help you provide some of the information. Start small – you don’t need to know everything right away. 
When I look at knowledge graphs and graph databases, the beauty is that you can add and remove information fairly easily compared to a relational database system, where you have to know the schema upfront. Within a graph database or knowledge graph, you can easily add information as you get it without messing up your schema or your data model. Adding more information just enhances your model. So start small, but think big in terms of what you’re trying to do. Look at how you can develop relationships, and try to identify even latent relationships across your graphs based on the information you have about those data sources. "
15,432
2,021
"What are graph neural networks (GNN)? | VentureBeat"
"https://venturebeat.com/2021/10/13/what-are-graph-neural-networks-gnn"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What are graph neural networks (GNN)? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Graphs are everywhere around us. Your social network is a graph of people and relations. So is your family. The roads you take to go from point A to point B constitute a graph. The links that connect this webpage to others form a graph. When your employer pays you, your payment goes through a graph of financial institutions. Basically, anything that is composed of linked entities can be represented as a graph. Graphs are excellent tools to visualize relations between people, objects, and concepts. Beyond visualizing information, however, graphs can also be good sources of data to train machine learning models for complicated tasks. Graph neural networks (GNN) are a type of machine learning algorithm that can extract important information from graphs and make useful predictions. With graphs becoming more pervasive and richer with information, and artificial neural networks becoming more popular and capable , GNNs have become a powerful tool for many important applications. Transforming graphs for neural network processing VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Every graph is composed of nodes and edges. For example, in a social network, nodes can represent users and their characteristics (e.g., name, gender, age, city), while edges can represent the relations between the users. A more complex social graph can include other types of nodes, such as cities, sports teams, news outlets, as well as edges that describe the relations between the users and those nodes. Unfortunately, the graph structure is not well suited for machine learning. Neural networks expect to receive their data in a uniform format. Multi-layer perceptrons expect a fixed number of input features. Convolutional neural networks expect a grid that represents the different dimensions of the data they process (e.g., width, height, and color channels of images). Graphs can come in different structures and sizes, which does not conform to the rectangular arrays that neural networks expect. Graphs also have other characteristics that make them different from the type of information that classic neural networks are designed for. For instance, graphs are “permutation invariant,” which means changing the order and position of nodes doesn’t make a difference as long as their relations remain the same. 
In contrast, changing the order of pixels results in a different image and will cause the neural network that processes them to behave differently. To make graphs useful to deep learning algorithms, their data must be transformed into a format that can be processed by a neural network. The type of formatting used to represent graph data can vary depending on the type of graph and the intended application, but in general, the key is to represent the information as a series of matrices. For example, consider a social network graph. The nodes can be represented as a table of user characteristics. The node table, where each row contains information about one entity (e.g., user, customer, bank transaction), is the type of information that you would provide a normal neural network. But graph neural networks can also learn from other information that the graph contains. The edges, the lines that connect the nodes, can be represented in the same way, with each row containing the IDs of the users and additional information such as date of friendship, type of relationship, etc. Finally, the general connectivity of the graph can be represented as an adjacency matrix that shows which nodes are connected to each other. When all of this information is provided to the neural network, it can extract patterns and insights that go beyond the simple information contained in the individual components of the graph. Graph embeddings Graph neural networks can be created like any other neural network, using fully connected layers, convolutional layers, pooling layers, etc. The type and number of layers depend on the type and complexity of the graph data and the desired output. The GNN receives the formatted graph data as input and produces a vector of numerical values that represent relevant information about nodes and their relations. This vector representation is called “graph embedding.” Embeddings are often used in machine learning to transform complicated information into a structure that can be differentiated and learned. For example, natural language processing systems use word embeddings to create numerical representations of words and their relations together. How does the GNN create the graph embedding? When the graph data is passed to the GNN, the features of each node are combined with those of its neighboring nodes. This is called “message passing.” If the GNN is composed of more than one layer, then subsequent layers repeat the message-passing operation, gathering data from neighbors of neighbors and aggregating them with the values obtained from the previous layer. For example, in a social network, the first layer of the GNN would combine the data of the user with those of their friends, and the next layer would add data from the friends of friends and so on. Finally, the output layer of the GNN produces the embedding, which is a vector representation of the node’s data and its knowledge of other nodes in the graph. Interestingly, this process is very similar to how convolutional neural networks extract features from pixel data. Accordingly, one very popular GNN architecture is the graph convolutional neural network (GCN), which uses convolution layers to create graph embeddings. Applications of graph neural networks Once you have a neural network that can learn the embeddings of a graph, you can use it to accomplish different tasks. 
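Before getting to those applications, the message-passing step described above is compact enough to sketch directly. The following is a minimal illustration of a single GCN-style layer in plain NumPy; the toy graph, the feature values, and the weight matrix are all invented for the example rather than taken from any particular library.

```python
# Minimal sketch of one GCN-style message-passing step, using plain NumPy.
# The toy graph, feature values, and weights below are invented for illustration.
import numpy as np

# Adjacency matrix of a 4-node graph: a 1 marks an edge between two nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Node feature matrix: one row of (made-up) features per node.
X = np.array([[0.1, 1.0],
              [0.9, 0.2],
              [0.4, 0.7],
              [0.8, 0.5]])

# Add self-loops so each node keeps its own features during aggregation.
A_hat = A + np.eye(A.shape[0])

# Symmetric degree normalization, as in the graph convolutional network (GCN) layer.
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One (randomly initialized) weight matrix and one propagation step:
# every node's new embedding mixes its own features with its neighbors' features.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))            # 2 input features -> 3 embedding dimensions
H = np.maximum(A_norm @ X @ W, 0)      # ReLU(A_norm @ X @ W)

print(H.shape)  # (4, 3): one 3-dimensional embedding per node
```

Stacking a second layer on top of H would mix in information from neighbors of neighbors, which is exactly the layer-by-layer aggregation described above.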
Here are a few applications for graph neural networks: Node classification: One of the powerful applications of GNNs is adding new information to nodes or filling gaps where information is missing. For example, say you are running a social network and you have spotted a few bot accounts. Now you want to find out if there are other bot accounts in your network. You can train a GNN to classify other users in the social network as “bot” or “not bot” based on how close their graph embeddings are to those of the known bots. Edge prediction: Another way to put GNNs to use is to find new edges that can add value to the graph. Going back to our social network, a GNN can find users (nodes) who are close to you in embedding space but who aren’t your friends yet (i.e., there isn’t an edge connecting you to each other). These users can then be introduced to you as friend suggestions. Clustering: GNNs can glean new structural information from graphs. For example, in a social network where everyone is in one way or another related to others (through friends, or friends of friends, etc.), the GNN can find nodes that form clusters in the embedding space. These clusters can point to groups of users who share similar interests, activities, or other inconspicuous characteristics, regardless of how close their relations are. Clustering is one of the main tools used in machine learning–based marketing. Graph neural networks are very powerful tools. They have already found powerful applications in domains such as route planning, fraud detection, network optimization, and drug research. Wherever there is a graph of related entities, GNNs can help get the most value from the existing data. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,433
2,021
"DeepMind says reinforcement learning is 'enough' to reach general AI | VentureBeat"
"https://venturebeat.com/2021/06/09/deepmind-says-reinforcement-learning-is-enough-to-reach-general-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind says reinforcement learning is ‘enough’ to reach general AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals. In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at U.K.-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but by sticking to a simple but powerful principle: reward maximization. Titled “ Reward is Enough ,” the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as drawing lessons from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. And from this, they conclude that reinforcement learning, a branch of AI that is based on reward maximization, can lead to the development of artificial general intelligence. Two paths for AI One common method for creating AI is to try to replicate elements of intelligent behavior in computers. For instance, our understanding of the mammal vision system has given rise to all kinds of AI systems that can categorize images, locate objects in photos, define the boundaries between objects, and more. Likewise, our understanding of language has helped in the development of various natural language processing systems , such as question answering, text generation, and machine translation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These are all instances of narrow artificial intelligence , systems that have been designed to perform specific tasks instead of having general problem-solving abilities. Some scientists believe that assembling multiple narrow AI modules will produce higher intelligent systems. 
For example, you can have a software system that coordinates between separate computer vision , voice processing, NLP, and motor control modules to solve complicated problems that require a multitude of skills. A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated. This simple yet efficient mechanism has led to the evolution of living beings with all kinds of skills and abilities to perceive, navigate, modify their environments, and communicate among themselves. “The natural world faced by animals and humans, and presumably also the environments faced in the future by artificial agents, are inherently so complex that they require sophisticated abilities in order to succeed (for example, to survive) within those environments,” the researchers write. “Thus, success, as measured by maximising reward, demands a variety of abilities associated with intelligence. In such environments, any behaviour that maximises reward must necessarily exhibit those abilities. In this sense, the generic objective of reward maximization contains within it many or possibly even all the goals of intelligence.” For example, consider a squirrel that seeks the reward of minimizing hunger. On the one hand, its sensory and motor skills help it locate and collect nuts when food is available. But a squirrel that can only find food is bound to die of hunger when food becomes scarce. This is why it also has planning skills and memory to cache the nuts and restore them in winter. And the squirrel has social skills and knowledge to ensure other animals don’t steal its nuts. If you zoom out, hunger minimization can be a subgoal of “staying alive,” which also requires skills such as detecting and hiding from dangerous animals, protecting oneself from environmental threats, and seeking better habitats with seasonal changes. “When abilities associated with intelligence arise as solutions to a singular goal of reward maximisation, this may in fact provide a deeper understanding since it explains why such an ability arises,” the researchers write. “In contrast, when each ability is understood as the solution to its own specialised goal, the why question is side-stepped in order to focus upon what that ability does.” Finally, the researchers argue that the “most general and scalable” way to maximize reward is through agents that learn through interaction with the environment. 
Developing abilities through reward maximization In the paper, the AI researchers provide some high-level examples of how “intelligence and associated abilities will implicitly arise in the service of maximising one of many possible reward signals, corresponding to the many pragmatic goals towards which natural or artificial intelligence may be directed.” For example, sensory skills serve the need to survive in complicated environments. Object recognition enables animals to detect food, prey, friends, and threats, or find paths, shelters, and perches. Image segmentation enables them to tell the difference between different objects and avoid fatal mistakes such as running off a cliff or falling off a branch. Meanwhile, hearing helps detect threats where the animal can’t see or find prey when they’re camouflaged. Touch, taste, and smell also give the animal the advantage of having a richer sensory experience of the habitat and a greater chance of survival in dangerous environments. Rewards and environments also shape innate and learned knowledge in animals. For instance, hostile habitats ruled by predator animals such as lions and cheetahs reward ruminant species that have the innate knowledge to run away from threats since birth. Meanwhile, animals are also rewarded for their power to learn specific knowledge of their habitats, such as where to find food and shelter. The researchers also discuss the reward-powered basis of language, social intelligence, imitation, and finally, general intelligence, which they describe as “maximising a singular reward in a single, complex environment.” Here, they draw an analogy between natural intelligence and AGI: “An animal’s stream of experience is sufficiently rich and varied that it may demand a flexible ability to achieve a vast variety of subgoals (such as foraging, fighting, or fleeing), in order to succeed in maximising its overall reward (such as hunger or reproduction). Similarly, if an artificial agent’s stream of experience is sufficiently rich, then many goals (such as battery-life or survival) may implicitly require the ability to achieve an equally wide variety of subgoals, and the maximisation of reward should therefore be enough to yield an artificial general intelligence.” Reinforcement learning for reward maximization Reinforcement learning is a special branch of AI algorithms that is composed of three key elements: an environment, agents, and rewards. By performing actions, the agent changes its own state and that of the environment. Based on how much those actions affect the goal the agent must achieve, it is rewarded or penalized. In many reinforcement learning problems, the agent has no initial knowledge of the environment and starts by taking random actions. Based on the feedback it receives, the agent learns to tune its actions and develop policies that maximize its reward. In their paper, the researchers at DeepMind suggest reinforcement learning as the main algorithm that can replicate reward maximization as seen in nature and can eventually lead to artificial general intelligence. “If an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour,” the researchers write, adding that, in the course of maximizing for its reward, a good reinforcement learning agent could eventually learn perception, language, social intelligence and so forth. 
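The loop described above, in which an agent acts, receives rewards, and adjusts its policy, can be illustrated with textbook tabular Q-learning. The snippet below is a toy sketch rather than the setup used in the DeepMind paper; the five-state corridor, the single reward at the goal, and the hyperparameters are all invented for the example.

```python
# Toy reward-maximization loop: tabular Q-learning on a 5-state corridor where
# only the rightmost state pays a reward. The environment, reward, and
# hyperparameters are invented for illustration.
import random

N_STATES = 5            # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = [-1, +1]      # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action_index]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection; explore when estimates are tied.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the estimate of long-term reward for this (state, action) pair.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned greedy policy moves right (action index 1) in every state.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

Even in this tiny setting, the policy that emerges is the one that maximizes cumulative reward, which is the property the paper's argument leans on.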
In the paper, the researchers provide several examples that show how reinforcement learning agents were able to learn general skills in games and robotic environments. However, the researchers stress that some fundamental challenges remain unsolved. For instance, they say, “We do not offer any theoretical guarantee on the sample efficiency of reinforcement learning agents.” Reinforcement learning is notorious for requiring huge amounts of data. For instance, a reinforcement learning agent might need centuries’ worth of gameplay to master a computer game. And AI researchers still haven’t figured out how to create reinforcement learning systems that can generalize their learnings across several domains. Therefore, slight changes to the environment often require the full retraining of the model. The researchers also acknowledge that learning mechanisms for reward maximization remain an unsolved problem and a central question to be further studied in reinforcement learning. Strengths and weaknesses of reward maximization Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego, described the ideas in the paper as “very carefully and insightfully worked out.” However, Churchland pointed out possible flaws in the paper’s discussion about social decision-making. The DeepMind researchers focus on personal gains in social interactions. Churchland, who has recently written a book on the biological origins of moral intuitions, argues that attachment and bonding is a powerful factor in social decision-making of mammals and birds, which is why animals put themselves in great danger to protect their children. “I have tended to see bonding, and hence other-care, as an extension of the ambit of what counts as oneself—‘me-and-mine,’” Churchland said. “In that case, a small modification to the [paper’s] hypothesis to allow for reward maximization to me-and-mine would work quite nicely, I think. Of course, we social animals have degrees of attachment—super strong to offspring, very strong to mates and kin, strong to friends and acquaintances etc., and the strength of types of attachments can vary depending on environment, and also on developmental stage.” This is not a major criticism, Churchland said, and could likely be worked into the hypothesis quite gracefully. “I am very impressed with the degree of detail in the paper, and how carefully they consider possible weaknesses,” Churchland said. “I may be wrong, but I tend to see this as a milestone.” Data scientist Herbert Roitblat challenged the paper’s position that simple learning mechanisms and trial-and-error experience are enough to develop the abilities associated with intelligence. Roitblat argued that the theories presented in the paper face several challenges when it comes to implementing them in real life. “If there are no time constraints, then trial and error learning might be enough, but otherwise we have the problem of an infinite number of monkeys typing for an infinite amount of time,” Roitblat said. The infinite monkey theorem states that a monkey hitting random keys on a typewriter for an infinite amount of time may eventually type any given text. Roitblat is the author of Algorithms Are Not Enough, in which he explains why all current AI algorithms, including reinforcement learning, require careful formulation of the problem and representations created by humans.
“Once the model and its intrinsic representation are set up, optimization or reinforcement could guide its evolution, but that does not mean that reinforcement is enough,” Roitblat said. In the same vein, Roitblat added that the paper does not make any suggestions on how the reward, actions, and other elements of reinforcement learning are defined. “Reinforcement learning assumes that the agent has a finite set of potential actions. A reward signal and value function have been specified. In other words, the problem of general intelligence is precisely to contribute those things that reinforcement learning requires as a pre-requisite,” Roitblat said. “So, if machine learning can all be reduced to some form of optimization to maximize some evaluative measure, then it must be true that reinforcement learning is relevant, but it is not very explanatory.” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,434
2,021
"Evolution, rewards, and artificial intelligence | VentureBeat"
"https://venturebeat.com/2021/06/20/evolution-rewards-and-artificial-intelligence"
"Evolution, rewards, and artificial intelligence Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language. This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences. The researchers go as far as suggesting that with well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals. The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements. In this post, I’ll try to disambiguate in simple terms where the line between theory and practice stands. Natural selection In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.” Scientific evidence supports this claim. Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet. In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don’t get eliminated.
According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.” But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike the digital world, copying in organic life is not an exact process. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes), or shedding old ones (e.g., tail, gills). If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye. The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms that we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals. The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origins of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms. Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated. Reinforcement learning and artificial general intelligence In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by taking random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment. According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities.
In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.” In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.” DeepMind has a lot of experience to prove this claim. They have already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. They have also developed reinforcement learning models to make progress in some of the most complex problems of science. The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].” This is where the hypothesis separates from practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, they still had to dumb down the environments to speed up the training of their reinforcement learning models and cut down the costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum. (It is worth noting that the scientists do acknowledge in their paper that they can’t offer “theoretical guarantee on the sample efficiency of reinforcement learning agents.”) Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First, you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world. Let’s say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the state of Earth at the time, that is, the initial state of the environment, and we still don’t have a definite theory on that. An alternative would be to create a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the time of training, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation. Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation. On the other hand, the further you move forward, the more complex your initial state will be. And evolution has created all sorts of intelligent and non-intelligent lifeforms, and making sure that we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.
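The variation-and-selection mechanism described in the previous sections is itself easy to sketch in code, even if running it at the scale of a planet is not. The toy loop below is purely illustrative: the scalar "trait", the fitness function standing in for survival, and every parameter are invented for the example.

```python
# Toy variation-and-selection loop: random mutation plus nonrandom "death".
# The scalar trait, fitness function, and all parameters are invented.
import random

TARGET = 0.75  # an arbitrary environmental optimum the trait should approach

def fitness(trait: float) -> float:
    # Higher "reward" the closer the trait is to what the environment favors.
    return -abs(trait - TARGET)

population = [random.random() for _ in range(50)]

for generation in range(100):
    # Reproduction with small random mutations.
    offspring = [min(max(t + random.gauss(0, 0.05), 0.0), 1.0) for t in population]
    # Selection: keep only the fittest half of parents plus offspring.
    survivors = sorted(population + offspring, key=fitness, reverse=True)
    population = survivors[:len(population)]

print(round(sum(population) / len(population), 3))  # drifts toward roughly 0.75
```

Nothing in this loop is designed top-down, yet the population drifts toward whatever the fitness signal favors; that is the dynamic the reward-is-enough argument appeals to, at an incomparably smaller scale.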
Many will say that you don’t need an exact simulation of the world and you only need to approximate the problem space in which your reinforcement learning agent wants to operate. For instance, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.” This statement is true, but downplays the complexities of the environment. Kitchens were created by humans. For instance, the shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that would want to work in such an environment would need to develop sensorimotor skills that are similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then, there would be incongruities between the robot and the humans who will be using the kitchens. Many scenarios that would be easy to handle for a human (walking over an overturned chair) would become prohibitively difficult for the robot. Also, other skills, such as language, would require even more similar infrastructure between the robot and the humans who would share the environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exceptions, not the norm. And finally, developing a notion of “cleanliness” as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it? A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival. Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge. In theory, reward alone is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design. In the future, we might be able to achieve a level of computing power that will make it possible to reach general intelligence through pure reward and reinforcement learning.
But for the time being, what works is hybrid approaches that involve learning and complex engineering of rewards and AI agent architectures. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,435
2,021
"Reinforcement learning improves game testing, AI team finds | VentureBeat"
"https://venturebeat.com/2021/10/07/reinforcement-learning-improves-game-testing-ai-team-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Reinforcement learning improves game testing, AI team finds Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As game worlds grow more vast and complex, making sure they are playable and bug-free is becoming increasingly difficult for developers. And gaming companies are looking for new tools, including artificial intelligence, to help overcome the mounting challenge of testing their products. A new paper by a group of AI researchers at Electronic Arts shows that deep reinforcement learning agents can help test games and make sure they are balanced and solvable. “ Adversarial Reinforcement Learning for Procedural Content Generation ,” the technique presented by the EA researchers, is a novel approach that addresses some of the shortcomings of previous AI methods for testing games. Testing large game environments Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Today’s big titles can have more than 1,000 developers and often ship cross-platform on PlayStation, Xbox, mobile, etc.,” Linus Gisslén, senior machine learning research engineer at EA and lead author of the paper, told TechTalks. “Also, with the latest trend of open-world games and live service we see that a lot of content has to be procedurally generated at a scale that we previously have not seen in games. All this introduces a lot of ‘moving parts’ which all can create bugs in our games.” Developers have currently two main tools at their disposal to test their games: scripted bots and human play-testers. Human play-testers are very good at finding bugs. But they can be slowed down immensely when dealing with vast environments. They can also get bored and distracted, especially in a very big game world. Scripted bots, on the other hand, are fast and scalable. But they can’t match the complexity of human testers and they perform poorly in large environments such as open-world games, where mindless exploration isn’t necessarily a successful strategy. “Our goal is to use reinforcement learning (RL) as a method to merge the advantages of humans (self-learning, adaptive, and curious) with scripted bots (fast, cheap and scalable),” Gisslén said. Reinforcement learning is a branch of machine learning in which an AI agent tries to take actions that maximize its rewards in its environment. For example, in a game, the RL agent starts by taking random actions. 
Based on the rewards or punishments it receives from the environment (staying alive, losing lives or health, earning points, finishing a level, etc.), it develops an action policy that results in the best outcomes. Testing game content with adversarial reinforcement learning In the past decade, AI research labs have used reinforcement learning to master complicated games. More recently, gaming companies have also become interested in using reinforcement learning and other machine learning techniques in the game development lifecycle. For example, in game-testing, an RL agent can be trained to learn a game by letting it play on existing content ( maps , levels, etc.). Once the agent masters the game, it can help find bugs in new maps. The problem with this approach is that the RL system often ends up overfitting on the maps it has seen during training. This means that it will become very good at exploring those maps but terrible at testing new ones. The technique proposed by the EA researchers overcomes these limits with “adversarial reinforcement learning,” a technique inspired by generative adversarial networks (GAN), a type of deep learning architecture that pits two neural networks against each other to create and detect synthetic data. In adversarial reinforcement learning, two RL agents compete and collaborate to create and test game content. The first agent, the Generator, uses procedural content generation (PCG), a technique that automatically generates maps and other game elements. The second agent, the Solver, tries to finish the levels the Generator creates. There is a symbiosis between the two agents. The Solver is rewarded by taking actions that help it pass the generated levels. The Generator, on the other hand, is rewarded for creating levels that are challenging but not impossible to finish for the Solver. The feedback that the two agents provide each other enables them to become better at their respective tasks as the training progresses. The generation of levels takes place in a step-by-step fashion. For example, if the adversarial reinforcement learning system is being used for a platform game, the Generator creates one game block and moves on to the next one after the Solver manages to reach it. “Using an adversarial RL agent is a vetted method in other fields, and is often needed to enable the agent to reach its full potential,” Gisslén said. “For example, DeepMind used a version of this when they let their Go agent play against different versions of itself in order to achieve super-human results. We use it as a tool for challenging the RL agent in training to become more general, meaning that it will be more robust to changes that happen in the environment, which is often the case in game-play testing where an environment can change on a daily basis.” Gradually, the Generator will learn to create a variety of solvable environments, and the Solver will become more versatile in testing different environments. A robust game-testing reinforcement learning system can be very useful. For example, many games have tools that allow players to create their own levels and environments. A Solver agent that has been trained on a variety of PCG-generated levels will be much more efficient at testing the playability of user-generated content than traditional bots. 
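The EA paper does not publish its training code, but the interplay between the Generator and the Solver described above can be caricatured in a short, runnable loop. Everything in the sketch below, from the gap-based "platform" environment to the reward scheme and the update rules, is an invented stand-in rather than EA's implementation.

```python
# Schematic toy of the Generator/Solver loop: the Generator picks the next jump
# gap, the Solver tries to cross it, and both adapt from their rewards. The
# environment, reward scheme, and update rules are invented stand-ins.
import random

GAPS = [1, 2, 3, 4, 5]                 # candidate gap sizes the Generator can place
gen_value = {g: 0.0 for g in GAPS}     # Generator's running estimate of each gap's reward
solver_skill = 0.2                     # crude stand-in for the Solver's learned ability

def solver_clears(gap: int, skill: float) -> bool:
    # Toy dynamics: bigger gaps are harder to cross, higher skill helps.
    return random.random() < max(0.05, skill + 0.9 - 0.2 * gap)

for step in range(2000):
    # Generator: epsilon-greedy choice of which gap to place next.
    if random.random() < 0.1:
        gap = random.choice(GAPS)
    else:
        gap = max(GAPS, key=lambda g: gen_value[g])

    cleared = solver_clears(gap, solver_skill)

    # Solver is rewarded for clearing the segment and slowly improves when it does.
    if cleared:
        solver_skill = min(1.0, solver_skill + 0.001)

    # Generator is rewarded for segments that are challenging but still solvable:
    # larger cleared gaps pay more, impossible gaps pay nothing.
    gen_reward = gap / max(GAPS) if cleared else 0.0
    gen_value[gap] += 0.05 * (gen_reward - gen_value[gap])

# The Generator's preferred gap drifts upward as the Solver gets better.
print(max(GAPS, key=lambda g: gen_value[g]), round(solver_skill, 2))
```

The detail worth noticing is the coupling: the Generator's reward depends on what the Solver can currently clear, so harder segments only become worthwhile to generate as the Solver improves.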
One of the interesting details in the adversarial reinforcement learning paper is the introduction of “auxiliary inputs.” This is a side-channel that affects the rewards of the Generator and enables the game developers to control its learned behavior. In the paper, the researchers show how the auxiliary input can be used to control the difficulty of the levels generated by the AI system. EA’s AI research team applied the technique to a platform game and a racing game. In the platform game, the Generator gradually places blocks from the starting point to the goal. The Solver is the player and must jump from block to block until it reaches the goal. In the racing game, the Generator places the segments of the track, and the Solver drives the car to the finish line. The researchers show that by using the adversarial reinforcement learning system and tuning the auxiliary input, they were able to control and adjust the generated game environment at different levels. Their experiments also show that a Solver trained with adversarial machine learning is much more robust than traditional game-testing bots or RL agents that have been trained with fixed maps. Applying adversarial reinforcement learning to real games The paper does not provide a detailed explanation of the architecture the researchers used for the reinforcement learning system. The little information that is there shows that the Generator and Solver use simple, two-layer neural networks with 512 units, which should not be very costly to train. However, the example games that the paper includes are very simple, and the architecture of the reinforcement learning system should vary depending on the complexity of the environment and action-space of the target game. “We tend to take a pragmatic approach and try to keep the training cost at a minimum as this has to be a viable option when it comes to ROI for our QV (Quality Verification) teams,” Gisslén said. “We try to keep the skill range of each trained agent to just include one skill/objective (e.g., navigation or target selection) as having multiple skills/objectives scales very poorly, causing the models to be very expensive to train.” The work is still in the research stage, Konrad Tollmar, research director at EA and co-author of the paper, told TechTalks. “But we’re having collaborations with various game studios across EA to explore if this is a viable approach for their needs. Overall, I’m truly optimistic that ML is a technique that will be a standard tool in any QV team in the future in some shape or form,” he said. Adversarial reinforcement learning agents can help human testers focus on evaluating parts of the game that can’t be tested with automated systems, the researchers believe. “Our vision is that we can unlock the potential of human playtesters by moving from mundane and repetitive tasks, like finding bugs where the players can get stuck or fall through the ground, to more interesting use-cases like testing game-balance, meta-game, and ‘funness,'” Gisslén said. “These are things that we don’t see RL agents doing in the near future but are immensely important to games and game production, so we don’t want to spend human resources doing basic testing.” The RL system can become an important part of creating game content, as it will enable designers to evaluate the playability of their environments as they create them.
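For a sense of scale, a two-layer, 512-unit policy with an extra difficulty input, which is roughly all the architectural detail reported above, might look like the sketch below. The input and output sizes, the activation, and the way the auxiliary value is appended to the observation are assumptions made for illustration, not details taken from the paper.

```python
# Sketch of a two-layer, 512-unit policy network with an auxiliary "difficulty"
# input appended to the observation. Input/output sizes, the activation, and the
# wiring of the auxiliary value are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(42)

OBS_DIM, AUX_DIM, HIDDEN, N_ACTIONS = 32, 1, 512, 8

W1 = rng.normal(0, 0.05, size=(OBS_DIM + AUX_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.05, size=(HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def policy(observation: np.ndarray, difficulty: float) -> np.ndarray:
    """Return action probabilities for an observation plus a difficulty setting."""
    x = np.concatenate([observation, [difficulty]])   # auxiliary input appended
    h = np.tanh(x @ W1 + b1)                          # layer 1: 512 hidden units
    logits = h @ W2 + b2                              # layer 2: one score per action
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                            # softmax over actions

obs = rng.normal(size=OBS_DIM)
print(policy(obs, difficulty=0.3).round(3))  # same observation, low difficulty setting
print(policy(obs, difficulty=0.9).round(3))  # same observation, high difficulty setting
```

Changing the difficulty value shifts the action distribution without retraining, which is the kind of designer-facing knob the auxiliary input is meant to provide.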
In a video that accompanies their paper, the researchers show how a level designer can get help from the RL agent in real-time while placing blocks for a platform game. Eventually, this and other AI systems can become an important part of content and asset creation, Tollmar believes. “The tech is still new and we still have a lot of work to be done in production pipeline, game engine, in-house expertise, etc. before this can fully take off,” he said. “However, with the current research, EA will be ready when AI/ML becomes a mainstream technology that is used across the gaming industry.” As research in the field continues to advance, AI can eventually play a more important role in other parts of game development and gaming experience. “I think as the technology matures and acceptance and expertise grows within gaming companies this will be not only something that is used within testing but also as game-AI whether it is collaborative, opponent, or NPC game-AI,” Tollmar said. “A fully trained testing agent can of course also be imagined being a character in a shipped game that you can play against or collaborate with.” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,436
2,021
"Robotics-powered 'microfulfillment' startup Fabric raises $200M | VentureBeat"
"https://venturebeat.com/2021/10/26/robotics-powered-microrulfillment-startup-fabric-raises-200m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Robotics-powered ‘microfulfillment’ startup Fabric raises $200M Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Fabric, a startup developing a “microfulfillment” automation platform for retailers, today announced that it raised $200 million in series C funding led by Temasek with participation from Koch Disruptive Technologies, Union Tech Ventures, Harel Insurance & Finance, Pontifax Global Food and Agriculture Technology Fund, Canada Pension Plan Investment Board, KSH Capital, Princeville Capital, Wharton Equity, and others. With a valuation of over $1 billion and $336 million in capital raised to date, Fabric plans to expand its headcount and build a network of microfulfillment centers across major cities in the U.S. According to McKinsey, ecommerce sales penetration more than doubled to 35% in 2020, the equivalent of roughly 10 years of growth within a few months. The surge in online shopping has been compounded by a desire for faster shipping — a tough ask in the midst of a pandemic. While the same-day delivery market in the U.S. is poised to grow by $9.82 billion over the next four years, a worldwide labor shortage — not to mention backups at critical ports of call — make the prospect daunting for merchandisers without economies of scale. Above: An isometric view of a Fabric fulfillment center. Fabric claims to level the playing field with a modular, software-led robotics approach to fulfillment. AI orchestrates robots within its microfulfillment centers’ walls to break orders into tasks and delegate them autonomously. Some robots bring items awaiting shipment in totes to teams of employees who pack individual orders. Operating in rooms with ceilings as low as 11 feet, other robots move packaged orders from temperature-controlled zones for fresh, ambient, chilled, and frozen products to dispatch areas, where they’re loaded onto a scooter or van for delivery. Fabric’s customers choose either a platform model to run and operate independently on their real estate or a service model in which fulfillment is offered as a service with an investment. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Fabric’s solution was designed from the ground up for local, on-demand ecommerce, which means it was designed to achieve high throughputs in small urban footprints, with low operational costs and maximum flexibility,” Fabric CEO Elram Goren told VentureBeat via email. 
“By combining our software, automated robotics, and logistics expertise, Fabric helps brands and retailers to future-proof their businesses with profitable unit economics. Robotics and automation bring a range of efficiencies to the ecommerce fulfillment space, increasing throughput per square footprint and decreasing the reliance on costly manual labor. Keeping fulfillment local speeds up delivery times while reducing shipping costs.” Microfulfillment Microfulfillment centers — located inside existing stores or structures that hold a market’s worth of goods — are increasingly being hailed as the answer to speedy shipping in space-starved city centers. For example, Calgary, Alberta-based Attabotics’ solution condenses aisles of warehouse shelves into single vertical storage structures that roving shuttles traverse horizontally. As for Fabric, which was founded in 2015 and now employs over 300 people across its Tel Aviv, New York, and Atlanta offices, it’s among the most successful startups in the emerging segment. The company runs microfulfillment operations for grocers and retailers in New York City, Washington, D.C., and Tel Aviv and has partnerships with FreshDirect and Walmart as well as Instacart. For Instacart, Fabric plans to integrate its software and robotics solutions with Instacart’s technology and network of shoppers. And for Walmart, the company intends to add microfulfillment centers to dozens of store locations as part of a pilot involving other technology providers including Alert Innovation and Dematic. “[W]e’re building our robots to be as robust and simple as possible from a hardware perspective, shifting the heavy lifting as much as possible to our software stack, to allow for scalability, lower costs, and robustness. At the same time, our software leverages our robotics architecture and topology, which allows it functionality and performance optimization opportunities that are unparalleled in the market,” Goren said. In something of a proof of concept in December 2019, Fabric launched an 18,000-square-foot grocery site in Tel Aviv that’s now delivering orders to online customers. Fabric’s first sorting center, also in Tel Aviv, covers 6,000 square feet and services over 400 orders a day for drugstore chain Super-Pharm. “We’re utilizing AI and machine learning in many different ways,” Goren added. “We have task resource allocation and planning that uses supervised machine learning to predict the duration, resource, and demand of each possible resource assignment which then works with other optimization algorithms such as genetic algorithms and Bayesian optimization. We enable retailers forecasting and prediction capabilities over their stock, to make sure they always have the right items in the right place at the right time. Stock level optimization is composed of two stages: First, time series forecasting predicts future demand for each product, and expected replenishment time. Second, an optimization algorithm maximizes stock availability for orders while minimizing the total costs of replenishment shipments and not exceeding available storage. These are just some of the software components that we’re continuing to develop.” As logistics and fulfillment challenges continue to mount, companies are embracing automation across the entire supply chain. According to one estimate, 4 million commercial warehouse robots are to be installed in over 50,000 warehouses by 2025.
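To make the two-stage stock optimization Goren describes above more concrete, here is a deliberately tiny sketch: invented products, a naive moving-average forecast, and a greedy replenishment rule stand in for Fabric's actual forecasting and optimization stack, which is not public.

```python
# Toy two-stage sketch: (1) forecast demand per product, (2) decide replenishment
# under a storage limit. Products, numbers, and heuristics are all invented.
from statistics import mean

# Stage 1: naive moving-average forecast of next-week demand per product.
weekly_sales = {
    "oat_milk":  [40, 44, 47, 52],
    "detergent": [12, 11, 13, 12],
    "berries":   [80, 95, 70, 90],
}
forecast = {sku: mean(history[-3:]) for sku, history in weekly_sales.items()}

# Stage 2: greedy replenishment, ordering up to forecast plus safety stock
# without exceeding the shelf space that is still available.
on_hand = {"oat_milk": 20, "detergent": 30, "berries": 15}
SAFETY_FACTOR = 1.2
capacity_left = 120  # units of storage still free

orders = {}
# Handle the products with the largest expected shortfall first.
for sku in sorted(forecast, key=lambda s: forecast[s] - on_hand[s], reverse=True):
    target = int(forecast[sku] * SAFETY_FACTOR)
    qty = max(0, min(target - on_hand[sku], capacity_left))
    orders[sku] = qty
    capacity_left -= qty

print(forecast)
print(orders, "capacity left:", capacity_left)
```

A production system would replace the moving average with a proper time-series model and the greedy loop with a real optimizer, but the forecast-then-allocate split is the same.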
Amazon alone uses over 350,000 autonomous robots to automate order fulfillment, the company recently reported. The concept is catching on particularly quickly among grocers and convenience stores with small delivery radiuses. On-demand food and goods startup Gopuff employs hundreds of microfulfillment centers in its delivery network. And Kroger, Albertsons, and H-E-B are using — or actively exploring — microfulfillment for online customers. Fabric rival Attabotics raised $25 million in July 2020 for its robotics supply chain tech, and InVia Robotics last summer nabbed $20 million to bring its subscription-based robotics to ecommerce warehouses. Softbank recently invested $2.8 billion in robotics and microfulfillment company AutoStore. In the European Union, supermarket chain Ocado deployed a robot that can grasp fragile objects without breaking them. And startup Exotec has detailed a system called Skypod that taps robots capable of moving in three dimensions. “[The pandemic] has changed very little, really, and at the same time — it accelerated everything. People still like to get more, pay less, and get it faster. Retailers still like to sell more and make more. But there has been a leap of a decade in this past year, and this is what we’re seeing. COVID caught retailers and brands off guard and has forced them to move much faster than they had planned for,” Goren said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,437
2,019
"Why do 87% of data science projects never make it into production? | VentureBeat"
"https://venturebeat.com/ai/why-do-87-of-data-science-projects-never-make-it-into-production"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Why do 87% of data science projects never make it into production? Share on Facebook Share on X Share on LinkedIn “If your competitors are applying AI, and they’re finding insight that allow them to accelerate, they’re going to peel away really, really quickly,” Deborah Leff, CTO for data science and AI at IBM, said on stage at Transform 2019. On their panel, “What the heck does it even mean to ‘Do AI’? Leff and Chris Chapo, SVP of data and analytics at Gap, dug deep into the reason so many companies are still either kicking their heels or simply failing to get AI strategies off the ground, despite the fact that the inherent advantage large companies had over small companies is gone now, and the paradigm has changed completely. With AI, the fast companies are outperforming the slow companies, regardless of their size. And tiny, no-name companies are actually stealing market share from the giants. But if this is a universal understanding, that AI empirically provides a competitive edge, why do only 13% of data science projects, or just one out of every 10, actually make it into production? “One of the biggest [reasons] is sometimes people think, all I need to do is throw money at a problem or put a technology in, and success comes out the other end, and that just doesn’t happen,” Chapo said. “And we’re not doing it because we don’t have the right leadership support, to make sure we create the conditions for success.” The other key player in the whodunit is data, Leff adds, which is a double edged sword — it’s what makes all of these analytics and capabilities possible, but most organizations are highly siloed, with owners who are simply not collaborating and leaders who are not facilitating communication. “I’ve had data scientists look me in the face and say we could do that project, but we can’t get access to the data,” Leff says. “And I say, your management allows that to go on?” But the problem with data is always that it lives in different formats, structured and unstructured, video files, text, and images, kept in in different places with different security and privacy requirements, meaning that projects slow to a crawl right at the start, because the data needs to be collected and cleaned. And the third issue, intimately connected to those silos, is the lack of collaboration. Data scientists have been around since the 1950s — and they were individuals sitting in a basement working behind a terminal. 
But now that it’s a team sport, and the importance of that work is now being embedded into the fabric of the company, it’s essential that every person on the team is able to collaborate with everyone else: the data engineers, the data stewards, people that understand the data science, or analytics, or BI specialists, all the way up to DevOps and engineering. “This is a big place that holds companies back because they’re not used to collaborating in this way,” Leff says. “Because when they take those insights, and they flip them over the wall, now you’re asking an engineer to rewrite a data science model created by a data scientist, how’s that work out, usually?” “Well,” Chapo says, “It doesn’t.” For example, one of his company’s early data science projects created size profiles, which could determine the range of sizes and distribution necessary to meet demand. Four years ago the data science team handed the algorithm to an engineer, and it got recoded in Java and implemented. Two weeks ago, they realized that it had been broken for three and a half years. “It’s broken because nobody owned it, we didn’t have the data science team to be able to continually iterate on the models, think of it as an asset, and have data operations making sure it’s working well,” Chapo said. “We’re starting to bring those ways of working to life. But it’s hard, because we can’t just do it all overnight.” “One of the biggest opportunities for all of us today is to figure out how we educate the business leaders across the organization,” Leff said. “Before, a leader didn’t need to necessarily know what the data scientist was doing. Now, the data scientist has stepped into the forefront, and it’s actually really important that business leaders understand these concepts.” AI is not going to replace managers, she adds, but managers who use AI are going to replace those who don’t. We’re starting to see that awakening of business leaders wanting to understand how machine learning works, and what AI really means for them, and how to leverage it successfully. And those leaders are going to be the most in demand, Leff said. Another essential key to success, Chapo added, is keeping it simple. “Oftentimes people imagine a world where we’re doing this amazing, fancy, unicorn, sprinkling-pixie-dust sort of AI projects,” he said. “The reality is, start simple. And you can actually prove your way into the complexity. That’s where we’ve actually begun to not only show value quicker, but also help our businesses who aren’t really versed in data to feel comfortable with it.” It’s not necessarily the sophistication of the model at the beginning, it’s about creating a better experience for customers. Companies actually no longer compete against their closest competitor, they’re actually competing against the best customer experience someone else has provided, even if that’s in an entirely different sector. If you can call up a ride-sharing service on an app in just a few moments, you begin to want the same level of experience, when you call the bank, or file an insurance claim, or place an order online. There are three ways to get started, and avoid becoming one of the 87%, Chapo said. Pick a small project to get started, he says — don’t try to boil the ocean, but choose a pain point to solve, where you can show demonstrable progress. Ensure you have the right team, cross-functionally, to solve this. And third, leverage third parties and folks like IBM and others to help accelerate your journey at the beginning. 
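Chapo's size-profile example is easy to picture in miniature. The sketch below, with invented sales data and a plain frequency estimate standing in for whatever model Gap actually built, shows the basic idea of turning historical sales into a size distribution that then splits an order quantity across sizes.

```python
from collections import Counter

# Toy illustration of a "size profile": estimate the share of demand by size
# from historical sales, then use it to split an order across sizes. The data
# and the plain frequency estimate are invented stand-ins for Gap's actual
# (proprietary) model.

sales = ["S", "M", "M", "L", "M", "S", "XL", "L", "M", "L", "S", "M"]

def size_profile(history):
    counts = Counter(history)
    total = sum(counts.values())
    return {size: n / total for size, n in counts.items()}

def allocate(order_qty, profile):
    # Round each size's share of the order; a real system would also reconcile
    # rounding so the allocation sums exactly to order_qty.
    return {size: round(order_qty * share) for size, share in profile.items()}

profile = size_profile(sales)
print(profile)                 # fraction of historical sales by size
print(allocate(120, profile))  # units of each size to order
```

The point of Chapo's anecdote is less the model than the ownership: without a team monitoring and iterating on even a simple profile like this one, it can silently break for years.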
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,438
2,021
"How PepsiCo uses AI to create products consumers don't know they want | VentureBeat"
"https://venturebeat.com/2021/06/28/how-pepsico-uses-ai-to-create-products-consumers-dont-know-they-want"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How PepsiCo uses AI to create products consumers don’t know they want Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you imagine how a food and beverage company creates new offerings, your mind likely fills with images of white-coated researchers pipetting flavors and taste-testing like mad scientists. This isn’t wrong, but it’s only part of the picture today. More and more, companies in the space are tapping AI for product development and every subsequent step of the product journey. At PepsiCo, for example, multiple teams tap AI and data analytics in their own ways to bring each product to life. It starts with using AI to collect intel on potential flavors and product categories, allowing the R&D team to glean the types of insights consumers don’t report in focus groups. It ends with using AI to analyze how those data-driven decisions played out. “It’s that whole journey, from innovation to marketing campaign development to deciding where to put it on shelf,” Stephan Gans, chief consumer insights and analytics officer at PepsiCo, told VentureBeat. “And not just like, ‘Yeah, let’s launch this at the A&P.’ But what A&P. Where on the shelf in that particular neighborhood A&P.” A new era of consumer research When it comes to consumer research, Gans likes to say that “seeing is the new asking.” Historically, this stage of product development has always been based on asking people questions: Do you like this? Why don’t you like this? What would you like? But participants’ answers aren’t as telling as we’d like to think. They might not really care because they’re paid to be there, or they might just be trying to be nice. They might also be sincere in the moment, but it doesn’t mean they’ll still be excited about the product three years after launch. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “People will give you all sorts of answers,” Gans said. “It’s just not very close to what is ultimately driving their buying behavior.” To uncover more telling insights PepsiCo can channel into product roadmaps, the company uses a tool called Tastewise , which deploys algorithms to uncover what people are eating and why. Also used by Nestlé, General Mills, Dole, and other major consumer packaged goods companies (CPGs), the AI-driven tool analyzes massive quantities of food data online. 
Specifically, Tastewise says its tool has monitored more than 95 million menu items, 226 billion recipe interactions, and 22.5 billion social posts, among other consumer touchpoints. By collecting data from all these different sources — which represent what people are voluntarily talking about, searching for, and ordering in their daily lives — Gans says his team “can get a really good idea as to what people are more and more interested in.” For example, it was findings from the tool that gave PepsiCo the idea to incorporate seaweed into a flavored savory snack. The company brought it to market as Off The Eaten Path, and long story short, Gans said it’s been a top seller since. “If you would’ve asked consumers, ‘tell me what your favorite flavors are and let us know what you think would be a great flavor for this brand,’ nobody would have ever come up with seaweed. People don’t associate that typically with a specialty snack from a brand. But because of the kind of listening and the outside-in work that we did, we were able to figure that out through the AI that’s embedded in that tool,” he said. Data-driven social prediction Taking another angle to insights, PepsiCo also leans heavily on Trendscope, a tool it developed in conjunction with Black Swan Data. Rather than analyze menus and recipes, it focuses exclusively on social conversations around food on Twitter, Reddit, blogs, review boards, and more. The tool considers context and whether or not the conversation is relevant to the business; it measures not only the volume of specific conversations, but how they grow over time. Gans says this allows the team to do what they call “social prediction.” “Because we have done this over and over and over again now, we can actually predict which of the topics are going to stick and which are just going to kind of fizzle out,” he said. The pandemic, for example, caused a massive spike in interest around immunity. By using Trendscope, PepsiCo determined that specifically for beverages, the interest is here to stay. About six months ago, the company acted on that insight when it launched a new line of its Propel sports drinks infused with immunity ingredients. From idea to a shelf near you Once the products are developed, there’s still plenty for AI and machine learning to do. Jeff Swearingen, who heads up PepsiCo’s demand accelerator (DX) initiative, said the company uses the technology in agriculture and manufacturing, which has helped reduce water consumption. Sales and marketing, his domain, also leans heavily on AI. He said the company started “moving very quickly” in 2015 by building big internal datasets. One has 106 million U.S. households, and for about half of that, he says the company has first-party data at the individual level. There’s additionally a store dataset of 500,000 U.S. retail outlets, as well as a retail output dataset, he says. Both his and Gans’ teams use the data to engage core consumers in “uniquely personalized ways,” from customizing retail environments to online ads. For the launch of Mountain Dew Rise Energy, for example, PepsiCo determined which consumers would be more likely than average to enjoy the drink, and then narrowed in further to determine a core target. The store data then enabled the company to figure out exactly which retailers those core consumers were likely to shop at and reach them with highly targeted “everything.” This includes digital media campaigns and content, as well as assortment, merchandising, and presentation. 
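Returning to the Trendscope-style "social prediction" described above: the underlying idea, distinguishing topics whose conversation volume keeps growing from those that spike and fade, can be sketched in a few lines. Trendscope's actual models are proprietary; the log-linear growth fit, the threshold, and the sample volume series below are invented stand-ins.

```python
import math

# Toy sketch of "social prediction": given weekly conversation volumes for a
# topic, estimate a growth trend and call the topic "sticking" or "fizzling."

def weekly_growth_rate(volumes):
    """Least-squares slope of log(volume) vs. week, i.e. average growth rate."""
    n = len(volumes)
    xs = range(n)
    ys = [math.log(v) for v in volumes]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    var = sum((x - x_mean) ** 2 for x in xs)
    return cov / var

def classify_topic(volumes, threshold=0.05):
    return "sticking" if weekly_growth_rate(volumes) >= threshold else "fizzling"

immunity_mentions = [1200, 1500, 1900, 2600, 3400]   # sustained growth
fad_mentions      = [900, 2400, 1100, 700, 500]      # spike, then decay

print(classify_topic(immunity_mentions))  # sticking
print(classify_topic(fad_mentions))       # fizzling
```

In this toy version, a steadily growing topic like the immunity example clears the growth threshold, while a short-lived spike does not.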
“If you go back five years, if you were to walk into those 50,000 [targeted] stores, the assortment, presentation, merchandising, all of those things would probably look like the other 450,000,” Swearingen said, using sample numbers to make the point. “Now in those 50,000 stores, we’re able to truly celebrate this product in a way that recognizes the shopper that’s walking in that store.” In regards to marketing, PepsiCo also uses AI to do quality control on massive amounts of personalized digital ads. Specifically, the company partnered with CreativeX to build algorithms that check each piece of advertising to make sure it meets an evolving set of “golden rules,” like that the brand logo is visible or the message still comes across with sound off. Gans said using AI is the only way they can do proper quality control when “you may end up making 1,000 [ads] to reach 1,000 different segments of consumers.” The company has invested “a ton” of resources into AI, he said, and will be investing more in the years to come. Five years ago, the company was still relying on traditional broadcast advertising, according to Swearingen, who added that the new AI-enabled efforts are much more efficient. “There’s so much waste, number one, and you’re not customizing the message to those people that really love this proposition,” he said of the traditional route. “And now we’re able to do that.” Maintaining human connections When it comes to customer relations, PepsiCo, like many companies , is tapping natural language processing (NLP) to more efficiently help anyone who may call with a question, suggestion, or complaint. “Through a simple NLP-driven system, we can make sure that the person that you end up talking to already has the content that is relevant for you,” Gans said, noting that talking to a robot for 45 minutes would be “AI gone very wrong.” It’s a good example of how the company is working to keep humans in the AI loop, which Gans said is “literally [his] favorite topic.” He feels that in integrating these technologies, it’s easy to become overly reliant on the data, which can’t always speak to people’s actual motivations. As an example, he referenced a recent Pepsi ad, which focuses on the shared human emotions of the pandemic and doesn’t feature any products. “I’m always making sure there is both a data-driven and a human empathy perspective brought to commercial decision making,” Gans said. “That is a key role and the ongoing challenge for my team.” Correction: An earlier version of this post said PepsiCo partnered with Creative Action when the correct name was CreativeX. The name of the new drink was Mountain Dew Rise Energy, not Mountain Dew Rise. We regret the error. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,439
2,021
"LinkedIn says it reduced bias in its connection suggestion algorithm | VentureBeat"
"https://venturebeat.com/2021/08/05/linkedin-says-it-reduced-bias-in-its-connection-suggestion-algorithm"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn says it reduced bias in its connection suggestion algorithm Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a blog post today, LinkedIn revealed that it recently completed internal audits aimed at improving People You May Know (PYMK), an AI-powered feature on the platform that suggests other members for users to connect with. LinkedIn claims the changes “level the playing field” for those who have fewer connections and spend less time building their online networks, making PYMK ostensibly useful for more people. PYMK was the first AI-powered recommender feature at LinkedIn. Appearing on the My Network page, it provides connection suggestions based on commonalities between users and other LinkedIn members, as well as contacts users have imported from email and smartphone address books. Specifically, PYMK draws on shared connections and profile information and experiences, as well as things like employment at a company or in an industry and educational background. PYMK worked well enough for most users, according to LinkedIn, but it gave some members a “very large” number of connection requests, creating a feedback loop that decreased the likelihood other, less-well-connected members would be ranked highly in PYMK suggestions. Frequently active members on LinkedIn tended to have greater representation in the data used to train the algorithms powering PYMK, leading it to become increasingly biased toward optimizing for frequent users at the expense of infrequent users. “A common problem when optimizing an AI model for connections is that it often creates a strong ‘rich getting richer’ effect, where the most active members on the platform build a great network, but less active members lose out,” Albert Cui, senior product manager of AI and machine learning at LinkedIn, told VentureBeat via email. “It’s important for us to make PYMK as equitable as possible because we have seen that members’ networks, and their strength, can have a direct impact on professional opportunities. In order to positively impact members’ professional networks, we must acknowledge and remove any barriers to equity.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Biased algorithms This isn’t the first time LinkedIn has discovered bias in the recommendation algorithms powering its platform’s features. 
Years ago, the company found that the AI it used to match job candidates with opportunities was ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities. To counter this, LinkedIn built an adversarial algorithm designed to ensure that the recommendation system includes a representative distribution of users across gender before referring the matches curated by the original system. In 2016, a report in the Seattle Times suggested LinkedIn’s search algorithm might be giving biased results, too — along gender lines. According to the publication, searches for the 100 most common male names in the U.S. triggered no prompts asking if users meant predominantly female names, but similar searches of popular female first names paired with placeholder last names brought up LinkedIn’s suggestion to change “Andrea Jones” to “Andrew Jones,” “Danielle” to “Daniel,” “Michaela” to “Michael,” and “Alexa” to “Alex,” for example. LinkedIn denied at the time that its search algorithm was biased but later rolled out an update so that any user who searches for a full name wouldn’t be prompted with suggestions asking if they meant to look up a different name. Recent history has shown that social media recommendation algorithms are particularly prone to bias, intentional or not. A May 2020 Wall Street Journal article brought to light an internal Facebook study that found the majority of people who join extremist groups do so because of the company’s recommendation algorithms. In April 2019, Bloomberg reported that videos made by far-right creators were among YouTube’s most-watched content. And in a recent report by Media Matters for America, the media monitoring group presents evidence that TikTok’s recommendation algorithm is pushing users toward accounts with far-right views supposedly prohibited on the platform. Correcting for imbalance To address the problems with PYMK, LinkedIn researchers used a post-processing technique that reranked PYMK candidates to decrement the score of recipients who’d already had many unanswered invitations. These were mostly “ubiquitously popular” members or celebrities, who often received more invites than they could respond to due to their prominence or networks. LinkedIn thought that this would decrease the number of invitations sent to candidates suggested by PYMK and therefore overall activity. However, while connection requests sent by LinkedIn members indeed decreased 1%, sessions from the people receiving invitations increased by 1% because members with fewer invitations were now receiving more and invitations were less likely to be lost in influencers’ inboxes. As part of its ongoing Fairness Toolkit work, LinkedIn also developed and tested methods to rerank members according to theories of equality of opportunity and equalized odds. In PYMK, qualified infrequent members (IMs) and frequent members (FMs) are now given equal representation in recommendations, resulting in more invites sent (a 5.44% increase) and connections made (a 4.8% increase) to infrequent members without significantly impacting frequent members. “One thing that interested us about this work was that some of the results were counterintuitive to what we expected. We anticipated a decrease in some engagement metrics for PYMK as a result of these changes. However, we actually saw net engagement increases after making these adjustments,” Cui continued. 
“Interestingly, this was similar to what we saw a few years ago when we changed our Feed ranking system to also optimize for creators, and not just for viewers. In both of these instances, we found that prioritizing metrics other than those typically associated with ‘virality’ actually led to longer-term engagement wins and a better overall experience.” All told, LinkedIn says it reduced the number of overloaded recipients — i.e., members who received too many invitations in the past week — on the platform by 50%. The company also introduced other product changes, such as a Follow button to ensure members could still hear from popular accounts. “We’ve been encouraged by the positive results of the changes we’ve made to the PYMK algorithms so far and are looking forward to continuing to use [our internal tools] to measure fairness to groups along the lines of other attributes beyond frequency of platform visits, such as age, race, and gender,” Cui said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,440
2,021
"AI lab DeepMind becomes profitable and bolsters relationship with Google | VentureBeat"
"https://venturebeat.com/2021/10/10/ai-lab-deepmind-becomes-profitable-and-bolsters-relationship-with-google"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI lab DeepMind becomes profitable and bolsters relationship with Google Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. DeepMind, the U.K.-based AI lab that seeks to develop artificial general intelligence , has finally become profitable, according to the company’s latest financial report. Since being acquired by Google (now Alphabet Inc.) in 2014, DeepMind has struggled to break even with its growing expenses. And now, it is finally giving its parent company and shareholders hopeful signs that it has earned its place among Alphabet’s constellation of profitable businesses. This could be wonderful news for the AI lab, which has been hemorrhaging large sums throughout its entire life. But the financial report is also shrouded in vagueness that suggests if DeepMind has indeed found its way to profitability, it has done so in a way that makes it inextricably tied to the products and business model of Google. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Three-fold increase in revenue According to DeepMind’s filing, it has raked in £826 million ($1.13 billion USD) in revenue in 2020, more than three times the £265 million ($361 million USD) it filed in 2019. In the same period, its expenses increased modestly from £717 million ($976 million USD) to £780 million ($1.06 billion USD). The company finished the fiscal year with a £44 million ($60 million USD) profit, up from a £477 million ($650 million USD) loss in 2019. The filing does not provide much detail about DeepMind’s sources of income aside from a paragraph that says: “The Company generates revenue through a service agreement with another group undertaking for the provision of Research and Development services.” DeepMind does not directly sell products or services to consumers and companies. Its customers are Alphabet and its subsidiaries. It is not clear which one of DeepMind’s ventures caused the spike in its revenue. One source who spoke to CNBC said that the sudden increase in DeepMind’s revenue could be “creative accounting.” Basically, it means that since Alphabet and its subsidiaries are DeepMind’s only clients, it could arbitrarily alter the price of its services to create the impression that it is becoming profitable. DeepMind did not comment on the claim. 
Selling reinforcement learning DeepMind’s main area of focus is deep reinforcement learning, a branch of machine learning that is very useful in scientific research. DeepMind and other AI labs have used deep RL to master complicated games, train robotic hands, predict protein structures, and simulate autonomous driving. DeepMind’s scientists believe that advances in reinforcement learning will eventually lead to the development of AGI. But deep reinforcement learning research is also very expensive, and its commercial applications are limited. Unlike other deep learning systems, such as image classifiers and speech recognition systems, which can be directly ported and integrated into new applications, deep reinforcement learning models often have to be trained in the environment where they will be used. This imposes technical and financial costs that many organizations can’t afford. Another problem is that the kind of research that DeepMind is engaged in does not directly translate to profitable business models. Take, for instance, AlphaStar, the reinforcement learning system that mastered the real-time strategy game StarCraft 2. It is an impressive feat of science that cost millions of dollars (probably subsidized by Google, which owns vast cloud computation resources). But it has little use in applied AI without being repurposed (to the tune of extra millions). Alphabet has adapted DeepMind’s RL technology in some of its operations, such as reducing power consumption at Google data centers and developing the technology of Waymo, Alphabet’s self-driving company. But while we don’t know the details of how the technology is being applied, my own guess is that Alphabet outsources some of its applied AI tasks to DeepMind rather than directly integrating the AI lab’s technology into its products. In fact, a separate division of DeepMind is engaged in applied AI projects for Google and Alphabet, but that effort is not directly related to the AGI research being done by the main DeepMind lab. The costs of AI talent and research With large tech companies such as Facebook, Microsoft, and Apple becoming interested in deep learning, hiring AI talent has become an arms race that has driven up the salaries of researchers. Leading AI researchers can easily earn seven-digit salaries at large tech companies, which makes it difficult for academic institutions and non-profit research labs to retain their talent. In 2020, DeepMind paid £467 million in staff costs, roughly 60% of its total expenses. The company has around 1,000 employees, a small percentage of whom are highly paid scientists, researchers, and engineers. The growing costs of AI research and talent will pit DeepMind against mounting challenges as it moves forward. It will depend on Google to fund its operations and subsidize the costs of its research. Meanwhile, as the subsidiary of a publicly traded company, it will be scrutinized for how profitable its technology is. And for the moment, its only source of profit is Alphabet, so it will become increasingly dependent on Google purchasing its services. This can in turn push DeepMind to direct its research toward areas that can quickly turn into profitable ventures, which is not necessarily congruent with its scientific goals. For a company that is chasing the long-term dream of artificial general intelligence and whose professed mission is “to advance science and benefit humanity,” the distractions of short-term profits and incremental gains can prove to be detrimental. 
The closest example I can find for the work that companies like DeepMind and its quasi-rival OpenAI are doing is Bell Labs, the former research outfit of AT&T. Bell Labs was the subsidiary of a very large for-profit company, but its work wasn’t bound by the goals of the next quarter’s earnings or the incentives of shareholders. While rewarded handsomely for their work, its scientists were driven by scientific curiosity, not money. They sought fundamental ideas that pushed the boundaries of science, creating innovations that would not bear fruit for years and decades to come. And this is how Bell Labs became the birthplace of some of the ideas and technologies that changed the twentieth century, including transistors, satellites, lasers, optical fibers, cellular telephony, and information theory. Bell Labs had the freedom to discover and innovate. For the moment, Alphabet has proven to be a patient owner for DeepMind. It waived a £1.1 billion ($1.5 billion USD) debt in 2019 and helped DeepMind report positive earnings in 2020. Whether Alphabet will remain generous and faithful to DeepMind’s mission in the long run — and it is a long run — remains to be seen. But if Alphabet’s patience does run out, DeepMind will be left with no customers, no funding, and fierce competition from tech giants who will want to poach its talented scientists to achieve fundamentally different goals. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,441
2,020
"AI researchers made a sarcasm detection model and it's sooo impressive | VentureBeat"
"https://venturebeat.com/2020/11/18/ai-researchers-made-a-sarcasm-detection-model-and-its-soo-impressive"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI researchers made a sarcasm detection model and it’s sooo impressive Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Researchers in China say they’ve created sarcasm detection AI that achieved state-of-the-art performance on a dataset drawn from Twitter. The AI uses multimodal learning that combines text and imagery since both are often needed to understand whether a person is being sarcastic. The researchers argue that sarcasm detection can assist with sentiment analysis and crowdsourced understanding of public attitudes about a particular subject. In a challenge initiated earlier this year, Facebook is using multimodal AI to recognize whether memes violate its terms of service. The researchers’ AI focuses on differences between text and imagery and then combines those results to make predictions. It also compares hashtags to tweet text to help assess the sentiment a user is trying to convey. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Particularly, the input tokens will give high attention values to the image regions contradicting them, as incongruity is a key character of sarcasm,” the paper reads. “As the incongruity might only appear within the text (e.g., a sarcastic text associated with an unrelated image), it is necessary to consider the intra modality incongruity.” On a dataset drawn from Twitter, the model achieved a 2.74% improvement on a sarcasm detection F1 score compared to HFM, a multimodal detection model introduced last year. The new model also achieved an 86% accuracy rate, compared to 83% for HFM. The paper was published jointly by the Chinese Academy of Sciences and the Institute of Information Engineering, both in Beijing, China. The paper was presented this week at the virtual Empirical Methods in Natural Language Processing ( EMNLP ) conference. The AI is the latest example of multimodal sarcasm detection to emerge since AI researchers began studying sarcasm in multimodal content on Instagram, Tumblr, and Twitter in 2016. University of Michigan and University of Singapore researchers used language models and computer vision to detect sarcasm in television shows, a model detailed in a paper titled “Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper).” That work was highlighted as part of the Association for Computational Linguistics (ACL) last year. 
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,442
2,021
"'Detoxified' language models might marginalize minorities, says study | VentureBeat"
"https://venturebeat.com/2021/04/20/study-finds-that-detoxified-language-models-might-marginalize-minority-voices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ‘Detoxified’ language models might marginalize minorities, says study Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI language models like GPT-3 have an aptitude for generating humanlike text. A key factor is the large datasets, scraped from the web, on which they’re trained. But because the datasets are often too large to filter with precision, they contain expletives, slurs, and other offensive and threatening speech. Language models unavoidably learn to generate toxic text when trained on this data. To address this, research has pivoted toward “detoxifying” language models without affecting the quality of text that they generate. Existing strategies employ techniques like fine-tuning language models on nontoxic data and using “toxicity classifiers.” But while these are effective, a new study from researchers at the University of California, Berkeley, and the University of Washington finds issue with some of the most common detoxification approaches. According to the coauthors, language model detoxification strategies risk marginalizing minority voices. Natural language models are the building blocks of apps including machine translators, text summarizers, chatbots, and writing assistants. But there’s growing evidence showing that these models risk reinforcing undesirable stereotypes , mostly because a portion of the training data is commonly sourced from communities with gender, race, and religious prejudices. Detoxification has been proposed as a solution to this problem, but the coauthors of this latest research — as well as research from the Allen Institute — found that the technique can amplify rather than mitigate biases. In their study, the UC Berkeley and University of Washington researchers evaluated “detoxified” language models on text with “minority identity mentions” including words like “gay” and “Muslim,” as well as surface markers of African-American English (AAE). AAE, also known as Black English in American linguistics, refers to the speech distinctive to many Black people in the U.S. and Canada. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The researchers — who used GPT-2, the predecessor to GPT-3, as a test model — showed that three different kinds of detoxification methods caused a disproportionate increase in language model perplexity on text with African-American English and minority identity mentions. 
In machine learning, perplexity is a measurement of the quality of a model’s outputs — lower is generally better. Using a curated version of English Jigsaw Civil Comments for training, a dataset from Alphabet-owned anti-cyberbullying firm Jigsaw, the researchers found that perplexity increased by a factor of 2.1 on nontoxic “white-aligned English” data and a factor of 4.3 on minority identity mention data. Increasing the strength of the detoxification worsened the bias. Why might this happen? The coauthors speculate that toxicity datasets like English Jigsaw Civil Comments contain spurious correlations between the presence of AAE and minority identity mentions and “toxic” labels — the labels from which the language models learn. These correlations cause detoxification techniques to steer models away from AAE and minority identity mentions because the models wrongly learn to consider these aspects of language to be toxic. As the researchers note, the study’s results suggest that detoxified language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to white-aligned English to ensure that the models work better for them, which could discourage minority speakers from engaging with the models to begin with. Moreover, because detoxified models tend to avoid certain topics mentioning minority identity terms, like religions including Islam, they could lead to ostracization and a lack of informed, conscious discussion on topics of identity. For example, tailoring a language model for white-aligned English could stigmatize AAE as incorrect or “bad” English. In the absence of ways to train accurate models in the presence of biased data, the researchers propose improving toxicity datasets as a potential way forward. “Language models must be both safe and equitable to be responsibly deployed in practice. Unfortunately, state-of-the-art debiasing methods are still far from perfect,” they wrote in the paper. “We plan to explore new methods for debiasing both datasets and models in future work.” The increasing attention to language biases comes as some within the AI community call for greater consideration of the role of social hierarchies like racism. In a paper published last June, Microsoft researchers advocated for a closer examination and exploration of the relationships between language, power, and prejudice in their work. The paper also concluded that the research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,443
2,021
"Nreal unveils enterprise edition of mixed-reality glasses | VentureBeat"
"https://venturebeat.com/2021/02/22/nreal-unveils-enterprise-edition-of-mixed-reality-glasses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nreal unveils enterprise edition of mixed-reality glasses Share on Facebook Share on X Share on LinkedIn Nreal has a new enterprise edition of its mixed-reality glasses. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nreal is unveiling a new mixed-reality headset for enterprise users, and it is also making its Nreal Light model available in additional countries after launching it in Korea and Japan last year. Starting today, Nreal will expand Nreal Light to new key markets including the European Union and the U.S. The enterprise edition makes sense because the technology is still pretty expensive for consumers. Enterprise customers, on the other hand, don’t mind paying more for high-end mixed reality solutions, especially if it makes highly paid employees more productive. The enterprise headset is expected to launch in 2021. One of the sweet spots for mixed-reality technology, whether its virtual reality or augmented reality, has been in virtual training, where the costs of making mistakes with very expensive equipment aren’t as high as they would be in the real world. Asked about the timing of the enterprise edition, Nreal told VentureBeat that while Nreal Light offers a compact and lightweight form factor that resonates with consumers, enterprise clients have also been using these MR glasses in conjunction with Nreal’s proprietary computing pack (CPU) as an enterprise device. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: Nreal is launching an enterprise edition. It is also debuting new content and technology partners including Finch Technology’s FinchRing , a hands-free MR controller. The FinchRing is a new hands-free six degree of freedom (6DoF) mixed reality controller offering full freedom of movement and seamless interaction with 360-degree spatial tracking. FinchRing works in or out of Nreal Light’s Field of View (FoV) and can be used indoors, outdoors and in any weather condition. Last year, Nreal launched its Nreal Light mixed reality glasses, aiming to make augmented reality more accessible to consumers. In 2021, Nreal wants to bring mass adoption another step closer. Nreal is entering the fray with a new class of enterprise headsets. The company is launching a customizable and lightweight MR headset that the company will show off at Mobile World Congress Shanghai 2021. The Nreal Enterprise Edition tethers to a computing unit and battery pack for longger-lasting power. 
Sporting a wrap-around halo design, Nreal Enterprise Edition also offers a balanced fit from front-to-back. And for added convenience, users can also control the enterprise headset through eye-tracking and gesture recognition technologies. Above: Nreal’s headsets are tethered to a computing unit and battery pack. The company is targeting enterprise customers in manufacturing, retail, tourism, education, logistics, and automotive markets. Both the software and hardware can be made-to-order. Nreal said its Nreal Light launch in Korea went well, as it was launched in 20 LG Uplus retail stores in Korea, and has since expanded to 220 locations throughout the country. LG Uplus has also expanded its catalog of Nreal Light compatible apps from 700 to 3,000 as developers and carriers support the mobile platform. Users have spent an average of 49 minutes per day on Nreal Light. The top 20% of users spend an average of 120 minutes per day. Nreal also launched its headset in Japan in a partnership with KDDI. Nreal Light will launch in the European Union through a partnership with the region’s two leading carriers, while it will launch in the U.S. in Q2 2021. Nreal will show off new MR apps coming to Nreal Light during Mobile World Congress. These include Tagesschau 2025, a weather app that beams a holographic weather anchor before your eyes; Dunkaar, a basketball game where you can shoot and dunk virtual basketballs; Dragon Awakening, a heroic fantasy online game; Magenta Sport, an AR version of football; and an MR version of the rhythm game Space Channel 5. The company isn’t disclosing its sales yet. Nreal has raised $85 million to date, including a recent $55 million round. It has 250 employees. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,444
2,021
"Zero-trust security could reduce cyber trust gap | VentureBeat"
"https://venturebeat.com/2021/09/12/zero-trust-security-could-reduce-cyber-trust-gap"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zero-trust security could reduce cyber trust gap Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Customer trust in companies is increasingly rare, especially when it comes to data management and protection. The trend is accelerating as cyberattacks continue to grow and vendors look to utilize more customer data as part of strategic initiatives. Businesses need more customer data to improve online sales, and how well a business handles this cyber trust gap could mean the difference between driving new digital revenue or not. KPMG’s recent “ Corporate Data Responsibility: Bridging the consumer trust gap” report quantifies just how wide the trust gap is today and which factors are causing it to accelerate. With 86% of customers surveyed saying data privacy is a concern and 68% saying companies’ level of data collection is concerning, closing the growing trust gap isn’t going to be easy. The survey draws on interviews with 2,000 U.S.-based consumers and 250 director-level and higher security and data privacy professionals. While most security and data privacy leaders (62%) said their organizations should be doing more to strengthen existing data protection measures, one in three (33%) say customers should be concerned about how their company uses their data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In addition, security and data privacy leaders aren’t sure how trustworthy their own companies are when it comes to handling customer data. A third (29%) say their company sometimes uses unethical data collection methods. And 13% of employees don’t trust their employer to use their data ethically. In short, the cyber trust gap is wide, with enterprises’ future outlooks largely dependent on the soundness of their data security. Data governance alone isn’t working Top-down approaches to data governance and data management aren’t closing the gap fast enough. KPMG concludes 83% of customers are unwilling to share their data to help businesses make better products and services. And a third (30%) aren’t willing to share personal data for any reason at all. This cyber trust gap continues to accelerate despite many businesses implementing corporatewide data governance frameworks. The trend of customers pushing back against data requests comes as 70% of security and privacy leaders say their companies are increasing efforts to collect customer data, according to Orson Lucas, KPMG U.S. 
privacy services leader. “Failure to bridge this divide could present a real risk of losing access to the valuable data and insights that drive business growth,” Lucas said. Clearly, data governance and data management initiatives need to prioritize the customer from the start of a project if the major investments companies make in these areas are to pay off. This way to zero trust The goal is to protect privacy with cybersecurity that is adaptive enough to grant every customer access to their entire customer record. Three out of every four customers (76%) want greater transparency in terms of how their personal data is being managed and what it’s being used for, yet just 53% of companies are providing that today. To close the data trust gap, companies need to go for full disclosure, provide a complete view of customer data, and explain how they are using it. The best way to accomplish this is to implement zero-trust security at the individual customer account level to protect access endpoints, identities, and other threat vectors. By choosing to prioritize zero-trust security, companies can make progress in closing the trust gap with customers and achieve greater transparency at the same time. Choosing zero-trust security as the framework for securing data answers the concerns of customers who say companies are not doing enough to protect their data. Customers are not happy — 64% say companies are not doing enough to protect their data, 47% are very concerned their data will be compromised in a hack, and 51% are fearful their data will be sold. The following are a few of the many ways companies can use zero-trust security to provide secure, complete transparency while protecting every threat surface in their businesses at the same time: Define identity and access management (IAM) first to deliver accuracy, scale, and speed. Getting IAM right is the cornerstone of a successful zero-trust security framework that provides customers with secure transparency to their data. Defining an IAM strategy needs to take into account how privileged access management (PAM), customer identity and access management (CIAM), mobile multi-factor authentication (MFA), and machine identity management are going to be orchestrated to achieve the customer experience outcomes needed to improve trust. CIAM systems provide identity analytics and consent management audit data that is GDPR-compliant, something sales and marketing teams need to improve online selling programs. Companies are also adopting a more granular, dynamic approach to network access that can offer customers greater transparency. It’s based on zero-trust edge (ZTE), which links network activity and related traffic to authenticated authorized users that can include both human and machine identities. Ericom Software, with its ZTEdge platform, is one of several companies competing in this area. The ZTEdge platform is noteworthy for combining micro-segmentation, zero-trust network access (ZTNA), and secure web gateway (SWG) with remote browser isolation (RBI) and ML-enabled identity and access management for mid-tier enterprises and small businesses. Additional vendors include Akamai, Netskope, Zscaler, and others. Improve endpoint visibility, control, and resilience by reevaluating how many software clients are on each endpoint device and consolidating them down to a more manageable number. 
Absolute Software’s 2021 “Endpoint-Risk Report” found the more over-configured an endpoint device is, the greater the chance conflicting software clients will create security gaps bad actors can exploit. One of the report’s key findings is that conflicting layers of security on an endpoint are proving to be just as risky as none at all. There is an average of 11.7 software clients or security controls per endpoint device in 2021. Nearly two-thirds of endpoint devices (66%) also have two or more encryption apps installed. The goal with zero-trust security adoption is to achieve greater real-time visibility and control and enable greater endpoint resilience and persistence of each endpoint. Absolute Software’s approach to self-healing endpoints is based on a firmware-embedded connection that’s undeletable from every PC-based endpoint. Additional providers of self-healing endpoints include Ivanti and Microsoft. To learn more about self-healing endpoints, be sure to read: “Tackling the endpoint security hype: Can endpoints actually self-heal?” Enable multi-factor authentication (MFA) for all customer accounts so customers can view their data securely. Endpoints and user accounts get breached most often because of compromised passwords. Getting MFA configured across all customer accounts is a given. Long-term, the goal needs to be moving more toward passwordless authentication that will further protect all endpoints and customers from a breach. Define a roadmap for transitioning to passwordless authentication for customer record access as quickly as possible. Bad actors prefer to steal privileged access credentials to save time and move laterally throughout a network at will. Verizon’s annual look at data breach investigations consistently finds that privileged access abuse is a leading cause of breaches. What’s needed is a more intuitive, less obtrusive yet multi-factor-based approach to account access that overcomes passwords’ weaknesses. Leading providers of passwordless authentication solutions include Microsoft Azure Active Directory (Azure AD), Ivanti’s Zero Sign-On (ZSO), OneLogin Workforce Identity, and Thales SafeNet Trusted Access. Each of these has unique strengths, with Ivanti’s Zero Sign-On (ZSO) delivering results in production across multiple industries as part of the company’s unified endpoint management (UEM) platform. Ivanti uses biometrics, including Apple’s Face ID, as the secondary authentication factor for gaining access to personal and shared corporate accounts, data, and systems. KPMG’s research found that 88% of customers want companies to take the lead in establishing corporate data responsibility and share more details on how they protect data. Addressing cyber trust issues boils down to providing greater transparency, and companies need to focus on zero-trust security and its inherent advantages for customer data access. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
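The recommendations above (MFA on every account, least-privilege access, passwordless as the end goal) boil down to verifying each request rather than trusting a session. Below is a minimal sketch of that per-request decision in Python; the boolean fields and helper names are hypothetical placeholders for illustration, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str            # authenticated identity making the request
    record_owner_id: str    # customer record being requested
    mfa_verified: bool      # second factor completed for this session
    device_trusted: bool    # endpoint passed a posture check

def allow_access(req: AccessRequest) -> bool:
    """Illustrative zero-trust style check: no implicit trust, verify identity,
    device, and least-privilege scope on every request."""
    if not req.mfa_verified:
        return False                    # unverified sessions are denied outright
    if not req.device_trusted:
        return False                    # risky endpoints never reach customer data
    return req.user_id == req.record_owner_id   # customers see only their own record

# An MFA-verified customer on a trusted device reading their own record:
print(allow_access(AccessRequest("cust-42", "cust-42", True, True)))  # True
print(allow_access(AccessRequest("cust-42", "cust-99", True, True)))  # False
```

In a real deployment these booleans would come from an identity provider, a device-posture service, and a policy engine rather than being passed in directly; the point of the sketch is only the shape of the decision.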
15,445
2019
"BlueStacks Inside turns mobile games into 'native PC' games on Steam | VentureBeat"
"https://venturebeat.com/2019/06/04/bluestacks-inside-turns-mobile-games-into-native-pc-games-on-steam"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages BlueStacks Inside turns mobile games into ‘native PC’ games on Steam Share on Facebook Share on X Share on LinkedIn BlueStacks Insider makes it easy to take mobile games to Steam. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. PC gaming platform BlueStacks has launched BlueStacks Inside , which enables mobile game developers to publish their games on Steam with no porting to the PC required. BlueStacks inside has a one-step software development kit (SDK) that lets developers take existing mobile games to Steam and Discord. The initial launch will include several high-profile developers like KOG, Funplus, Fabled Game Studio, and many others whose games will be available directly on Steam. Mobile developers have started allocating large budgets to game development, and that means mobile games can be competitive on Steam without a ton of modification. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! With games like Lineage 2: Revolution and PlayerUnknown’s Battlegrounds, graphics and gameplay push the limits of what a mobile device can do. On the other hand, gamers are caught in a struggle to maintain devices that can keep up with demanding games. BlueStacks Inside gives developers an opportunity to reach a much wider and valuable PC-based audience without the need to hire a separate PC development team. Players can use their PCs to do the heavy lifting for games their phones would otherwise not be able to run well. “What we see is that the BlueStacks and Steam audiences overlap almost completely. So the partnership gives gamers access to the entire Android gaming library right on their PCs,” says Rosen Sharma, BlueStacks CEO, in a statement. “We eliminate the need for separate development teams just to bring mobile games to a PC audience. When published with BlueStacks, a player downloading the game through Steam gets the full game experience. It isn’t BlueStacks. It isn’t Steam. It’s a PC game.” Above: BlueStacks and Steam are a fit. BlueStacks Inside for Steam will give developers access to a spectrum of features from a simple and mandatory payments integration, replacing traditional app stores, to Steam’s Community Hub, promotions, curators, and collections. The Steam Wallet will process all in-game purchases in the same way as a traditional app store. 
Developers seeking higher engagement and average revenue per user (ARPU) will find PC gamers as some of the most loyal and hardcore of any gaming cohort. “We see a nearly 80% overlap between high-value mobile gamers and high-value PC gamers,” said Mike Peng, head of global operations at Funplus, in a statement. “Developers can not only reach more people, but the people they reach on Steam are much more likely to play the game for longer and spend more than the average mobile gamer. This is amazing news for our user acquisition team.” BlueStacks Inside is currently in a soft launch with a few select partners. Pirates Outlaws from Fabled Game Studio launched on Steam during the Game Developers Conference in March and is currently available in the Steam store. Nicolas Lavergne, Game Producer at Fabled Game Studio pointed to the scale of the Steam store, with over 40 million DAU, and ease of integration, a simple SDK, as the two main reasons to launch their new title Pirates Outlaws on PC as well as through Google and iOS. “One of the things we found exciting about publishing with BlueStacks is having the ability to connect easily their massive Steam audience without any additional development requirements,” he said. “Our players can now use the Steam platform to access player-generated guides and streamer videos which are fundamental to build a community.” Even large mobile developers like KOG are looking to BlueStacks instead of building out PC-porting teams. Rafael Noh, vice president at KOG, said, “We want players to play on their terms. If they want to play on PC, then we want them to have the best experience. BlueStacks Inside gives us the ability to distribute the best PC experience to players on platforms they already use.” BlueStacks investors include Ignition Partners, Radar Partners, Andreessen-Horowitz, Samsung, Redpoint, Qualcomm, Intel, Presidio Ventures (a Sumitomo Corporation Company), Citrix, AMD, and Helion Ventures. BlueStacks launched in May 2011, and the first version of BlueStacks was released in March 2012. BlueStacks crossed 370 million users in January of 2019. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,446
2020
"Labelbox raises $25 million to grow its data-labeling platform for AI model training | VentureBeat"
"https://venturebeat.com/2020/02/04/labelbox-raises-25-million-to-grow-its-data-labeling-platform-for-ai-model-training"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Labelbox raises $25 million to grow its data-labeling platform for AI model training Share on Facebook Share on X Share on LinkedIn Labelbox CTO Dan Rasmuson, CEO Manu Sharma, and COO Brian Rieger Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Labelbox today announced the close of a $25 million series B round to grow its platform that helps customers label the data needed to train AI systems. The round was led by Andreessen Horowitz, with participation from Google’s AI-focused Gradient Ventures fund, Kleiner Perkins, and First Round Capital. The funds will be used to develop and accelerate Labelbox’s roadmap for machine learning and computer vision models by doubling the size of its engineering and sales teams. Labelbox also enables users to automate some labeling so a company can manually label all data except any that falls below a particular prediction confidence threshold, COO Brian Rieger told VentureBeat in a phone interview. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The funding will also be used to codify best practices and standard metrics for model performance among data scientists, developers, and data engineers, in part by working with university and business partners. “What happens today very often is that folks come out of the academic institutions, and they’ve kind of got that academic side of machine learning, but they haven’t experienced the process of taking a production system from nothing into production. And there are some common technologies, common formulas that need to be developed and understood amongst the community,” he said. Among the standardization policies Labelbox seeks: Common data exchange file formats and the need for roles within organizations — like data-labeling operations manager — to accelerate AI deployment and advance company business goals. “Labeling operations manager is this role that’s never been defined before globally but exists within many of the companies we work with,” Rieger said. Data annotation startups that raised funding recently include CloudFactory and Alegion. Labelbox has now raised $39 million to date, including a $10 million series A in April 2019. Andreessen Horowitz general partner Peter Levine will join the board as part of the latest round. The company was founded in 2018 and is based in San Francisco, with 30 employees. 
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
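The confidence-threshold automation mentioned above (auto-accept high-confidence model labels, send everything else to human annotators) can be illustrated in a few lines of plain Python. The threshold and field names below are made up for illustration and are not Labelbox's actual API.

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff; tuned per project in practice

def route_predictions(predictions):
    """Split model output into auto-accepted labels and items needing human review."""
    auto_labeled, needs_review = [], []
    for item in predictions:
        # each item: {"id": ..., "label": ..., "confidence": float between 0 and 1}
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            auto_labeled.append(item)
        else:
            needs_review.append(item)
    return auto_labeled, needs_review

preds = [
    {"id": "img-001", "label": "cat", "confidence": 0.97},
    {"id": "img-002", "label": "dog", "confidence": 0.62},
]
auto, review = route_predictions(preds)
print(len(auto), "auto-labeled;", len(review), "sent to human annotators")
```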
15,447
2020
"SuperAnnotate uses AI techniques to speed up data labeling | VentureBeat"
"https://venturebeat.com/2020/06/11/superannotate-uses-ai-techniques-to-speed-up-data-labeling"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SuperAnnotate uses AI techniques to speed up data labeling Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SuperAnnotate , an AI-powered annotation platform for engineers and labeling teams, today announced it has raised $3 million in venture funding. The investment follows a four-month period during which the startup signed more than 3,000 data scientists and over 100 companies as customers, including Starsky Robotics, Percepto, Code42, Acme AI, and ClayAir. Data prep, processing, and engineering tasks consume over 80% of the time dedicated to most AI and machine learning projects, according to Cognilytica. Labeling is one of those tasks, with the vast majority of algorithms currently trained on human-annotated data. This may be why the market for annotation tools is expected to reach $2.57 billion by 2027. SuperAnnotate provides these tools, along with a simple communication system, recognition improvement, image status tracking, templates, dashboards, and more. Optimized for image annotation, its platform helps identify the right annotation partners — with expertise in everything from architecture to design, agriculture, and radiology — by sharing projects with multiple teams. Data annotators using SuperAnnotate gain access to automatic predictions and quality and data management systems. CTO Vahan Petrosyan created the algorithms underpinning the systems while completing his Ph.D. at the KTH Royal Institute of Technology in Sweden. The systems help create pixel-accurate training data for autonomous vehicles, drones, and medical computer vision, as well as medical applications from single images or several frames. In the case of videos, customers only need to label the first frame, and SuperAnnotate’s partners track the object in consecutive frames. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! SuperAnnotate is in a league adjacent to companies Scale AI, which recently raised $100 million for its extensive suite of data labeling services, and CloudFactory , which says it offers labelers growth opportunities and “metric-driven” bonuses. That’s not to mention Hive , which raked in $10.6 million in November 2019; Alegion , which nabbed $12 million in August 2019; Appen; or Cognizant. And SuperAnnotate CEO Tigran Petrosyan says he’s been encouraged by the high level of organic adoption. “This [growth] shows that we are building something unique. 
That makes us very proud and humbled and motivates us to work even harder to address the pains of our users,” he said in a statement. “Coming from academia and seeing the pains of labeling firsthand, we work overtime to make sure our users have the right tools and are connected with the right teams to successfully complete their computer vision projects with the highest detection accuracies.” Point Nine Capital led SuperAnnotate’s seed funding round, with participation from Runa Capital, Fathom Capital, Berkeley SkyDeck Fund, and Plug and Play Ventures. The company is headquartered in Sunnyvale, California, with satellite offices in Sweden and Armenia. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
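The video workflow described above, where a person labels only the first frame and the object is then tracked through the rest of the clip, can be sketched with a generic off-the-shelf tracker. The snippet below uses OpenCV's CSRT tracker purely as an illustration: it assumes opencv-contrib-python is installed, it is not SuperAnnotate's implementation, and tracker constructors vary slightly across OpenCV versions.

```python
import cv2  # pip install opencv-contrib-python

def propagate_box(video_path, first_frame_box):
    """Take a hand-drawn box on frame 0 and track it through the rest of the clip."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read first frame")

    tracker = cv2.TrackerCSRT_create()    # cv2.legacy.TrackerCSRT_create() on some builds
    tracker.init(frame, first_frame_box)  # box given as (x, y, width, height)

    boxes = [first_frame_box]
    while True:
        ok, frame = cap.read()
        if not ok:
            break                          # end of video
        tracked, box = tracker.update(frame)
        boxes.append(tuple(int(v) for v in box) if tracked else None)
    cap.release()
    return boxes                           # one box (or None) per frame

# Usage (hypothetical file and box): boxes = propagate_box("clip.mp4", (50, 40, 120, 80))
```

In production pipelines the propagated boxes would still be spot-checked by annotators, which is consistent with the quality-management emphasis in the article.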
15,448
2021
"Unbiased AI becomes mission-critical in 2021 | VentureBeat"
"https://venturebeat.com/2020/12/10/unbiased-ai-becomes-mission-critical-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Unbiased AI becomes mission-critical in 2021 Share on Facebook Share on X Share on LinkedIn Vector of robot candidates replacing humans hiding behind masks Presented by Appen This article is fourth in a 5-part series on predictions in AI in 2021 — catch up on the first, second, and third in the series. Perhaps the most succinct summary of the relationship between artificial intelligence (AI) and data can be described as follows: an AI model is only as good as the data it was trained on. Training data serves as the foundation of AI solutions everywhere and can make or break their success. Data management is a key focal point for companies building machine learning (ML) models, and this domain will only continue to grow in importance in 2021 and beyond. In the coming years, it will be more evident than ever how steep the price is of getting this area of AI wrong. In part four of our five part series on 2021 predictions, we focus on the shift in focus to diversification to avoid bias. Preparing training data is already a time-consuming process — most AI teams spend about 80% of their time just on this task. It requires a not-insignificant investment of money and people to annotate the data. Organizations have a choice in whether they annotate their training data in-house or turn to a third-party vendor to handle the massive effort. There are tradeoffs for each selection; using an in-house team to annotate datasets, for instance, can often result in less diverse perspectives and, therefore, more bias in the data. Using a third party vendor gives a company instant access to a large crowd of data annotators, but in some cases, less direct oversight into who these people are. It’s a vital question more companies are starting to consider: who’s annotating our data? Are we incorporating a diverse collection of voices, or are we unintentionally introducing bias? Regardless of which data annotation method a company chooses, recognizing how data annotators play a critical role in influencing model bias will be paramount to success. The role of data annotation in AI While companies have traditionally focused on the money aspect of training data, it’s the people behind it gaining increased attention — as they should. These people, the data annotators, provide ground-truth accuracy and a global perspective to AI. Data annotators undertake the most critical part of AI development, as the accuracy of their labels directly impacts the accuracy of the machine’s future predictions. 
A machine trained on poorly-labeled data will commit errors, make low-confidence predictions, and ultimately, not work effectively. The ramifications of poor data annotation can be enormous. Finance, retail, and other major industries rely on AI for various transactions, for example, and AI that’s not making accurate predictions will lead to poor customer experiences and impacts to business revenue. These problems are almost always created in the data collection and annotation stages. For instance, the data used may not cover all potential use cases, or the people used to annotate it may only reflect a small demographic of end-users. Even the largest companies with the most resources don’t always get it right, and the impact on brand and customer experience can be ultimately traumatic. As companies continue to struggle to remove unintended biases from their models, we expect to see more examples of these kinds of failures. If anything, these examples will serve as a stark reminder of how costly it can be to not have a bias mitigation plan from the start. How companies are reducing bias through a global AI economy How are some companies successfully reducing bias in their models? In part, by focusing on their data annotators. Annotators play an essential role in mitigating bias in AI, which is especially important for products and services that operate in diverse markets. Building responsible AI , where bias is minimized, is mission-critical: after all, AI that doesn’t work for everyone, ultimately doesn’t work. As the dialogue around responsible AI picks up steam in the next several years, expect organizations to zero in on reducing model bias further. Recall that AI training data prepared by humans can reflect their biases, which isn’t great for an algorithm’s objectivity. Solving for this bias requires including diverse perspectives from the beginning. Luckily, companies are starting to leverage the power of the AI economy by utilizing crowds of data annotators and sourcing these contributors on a global scale. Access to a worldwide crowd brings in diverse ideas, opinions, and values. These diverse perspectives become reflected in training data and the AI solution itself, leading to a final product that’s less biased and more functional for everyone. The global crowd also provides unique expertise and skills that may not be present on a company’s existing team, enabling broader project scope. The globalization of the AI economy offers the perfect platform for data annotators to contribute needed impact. As globalization continues, companies are becoming more cognizant of who they hire for annotation work and what type of diversity these individuals bring to the table. These factors are ideally covered in a comprehensive data management plan, one that should also include a protocol for data privacy and security. As data becomes more accessible, and more organizations jump into the AI space, there will be more significant opportunities for successes — and failures. But with each new story, knowledge is gained. Getting the data part right will continue to be viewed as instrumental for profitability, and concerted data management efforts should result in more effective, less biased models in 2021 and the years to come. At Appen, we have spent over 20 years annotating and collecting data using the best of breed technology platform and leveraging our diverse crowd to help ensure you can confidently deploy your AI models. 
To learn more about ethical considerations and our commitments when it comes to contractors annotating training data for AI, check out our Crowd Code of Ethics. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
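One concrete practice behind the diversity argument above is measuring whether annotators actually agree before their labels are treated as ground truth: persistently low agreement can flag ambiguous guidelines or systematically divergent perspectives. A small sketch using scikit-learn's Cohen's kappa, with made-up labels from two annotators:

```python
from sklearn.metrics import cohen_kappa_score

# Labels from two annotators on the same 10 items (illustrative data only)
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # roughly 0.58 here; values near 0 mean chance-level agreement

# Low agreement is a prompt to revisit guidelines or broaden the annotator pool,
# not proof of bias on its own.
```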
15,449
2021
"GAannotations automates contextual data for Google Analytics annotations | VentureBeat"
"https://venturebeat.com/2021/03/04/gaannotations-automates-contextual-data-for-google-analytics-annotations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GAannotations automates contextual data for Google Analytics annotations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite many pretenders to the Google Analytics throne , the service continues to dominate web and marketing analytics for small and enterprise-grade businesses alike. But popularity and omnipresence aside, there is always room for improvement. Over the years, many Google Analytics users have requested that Google produce an annotations API to enable web and marketing professionals to automate the process of adding contextual notes to Google Analytics, helping them figure out what events on a specific day spiked or decimated traffic. One of those users was Fernando Ideses, founder and CEO of a fledgling Israeli startup called GAannotations , which is emerging from stealth today with $1.2 million in funding to develop a tool he had wanted Google to offer natively. “After having no success with the requests, my team and I decided to take a stand,” Ideses said. While Google Analytics is fine for serving information on how end users engage with a website, what pages they spend the most time on, and so on, the data often lacks sufficient context. “How do you remember all the changes and improvements you made that affected your website, and what works?” Ideses said. “This question is why we created GAannotations — to add annotations in bulk.” Cause and effect It’s worth noting that Google Analytics already has a native annotations feature , but it’s a manual process that offers little in the way of automation via external APIs and integrations. Users can click on a date in a timeline and enter a description into a text box that explains what they did on that date — for example, rolling out a new software update or launching a marketing campaign. This helps create a narrative of sorts, with anyone in the company able to hover over a timeline’s peaks and troughs to see what was going on behind the scenes that day. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Google Analytics’ native annotations GAannotations builds on this by making it easier for marketing and analytics teams to create contextual notes by ingesting data in bulk from third-party sources through its API. The company has also created tutorials to help non-coders leverage Zapier to integrate with the likes of Google Ads, Mailchimp, Shopify, Slack, Asana, Trello, Jira, GitHub, and Bitbucket. 
This means it’s possible to correlate a Mailchimp marketing campaign or a new product that has been added to a company’s Shopify store with activity tracked by Google Analytics. Above: GAannotations: Integrations Additionally, GAannotations ships with a bunch of prebuilt integrations with external data sources around public holidays, retail events such as Black Friday, Google algorithm updates, Google Ads history, and even the weather. Above: GAannotation: Data source integrations out-the-box With the GAannotations Chrome extension installed, marketers can instantly see the impact of adding Black Friday-related keywords to a website landing page, for example, and whether a new display ad has had the desired effect, a software update has improved traffic, or that week-long snowstorm drove pageviews for a specific product. Above: GAannotations: The impact of Black Friday-related keywords to a website landing page GAannotations runs a freemium business model, starting at free for individual users — with restrictions on manual annotations and CSV uploads. The basic plan costs $19 per month for a single user, but it includes access to the annotations API, while $99 unlocks access for unlimited users and Google Analytics accounts, access to external data sources (e.g. holidays), and more. Above: GAannotations: Pricing There is at least one similar tool on the market, but GAannotations is hoping the breadth and flexibility of its data integrations will set it apart. Now, with funding from an under-the-radar Argentinian VC firm called Madero VC , the company has enough in the bank to grow its business beyond its early-stage customers and target everyone from small indie developers to big businesses and marketing teams. Ideses said the company plans to use a significant chunk of the money to add more metrics and analytics to the mix, including simplified table comparisons when using multiple data ranges, table heat maps to help visualize and compare data, and more. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
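The bulk-annotation workflow described above typically amounts to posting a dated note whenever a deploy, campaign, or other event happens. Because the article does not document the product's actual endpoint or payload schema, the sketch below uses a hypothetical URL, auth header, and field names purely to show the pattern.

```python
import datetime
import requests  # pip install requests

API_URL = "https://api.example-annotations.com/v1/annotations"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                        # hypothetical auth scheme

def send_annotation(note, category="deployment"):
    """Post one contextual note so later traffic changes can be explained in analytics."""
    payload = {
        "date": datetime.date.today().isoformat(),
        "note": note,              # e.g. "Shipped v2.3 of the checkout flow"
        "category": category,      # hypothetical grouping field
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    return resp.json()

# Typically called from a CI/CD pipeline or marketing workflow, for example:
# send_annotation("Launched Black Friday email campaign", category="marketing")
```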
15,450
2021
"Data labeling platform Snorkel AI nabs $85M | VentureBeat"
"https://venturebeat.com/2021/08/09/data-labeling-platform-snorkel-ai-nabs-85m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data labeling platform Snorkel AI nabs $85M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data labeling platform Snorkel AI today announced it has raised $85 million in a series C round co-led by Addition and funds and accounts managed by BlackRock, with participation from Greylock, GV, Lightspeed Venture Partners, Nepenthe Capital, and Walden. The round, which brings Snorkel’s total raised to $135 million and its valuation to $1 billion, will be used to continue scaling its engineering team, according to CEO Alex Ratner. Machine learning models have largely been commoditized for enterprise applications. The success of an a AI project hinges on the labeled data used to train these models, most of which are supervised or semi-supervised. In spite of spending time and resources on data labeling, organizations often end up with small and low-quality datasets. Data scientists are not uncommonly forced to focus on the model alone, an approach that can fail to deliver strong performance — and struggle to adapt to changing data. Scaling data labeling Founded in 2019 by a team spun out of the Stanford AI Lab and engineers hailing from Apple, Facebook, Google, Microsoft, and Nvidia, Snorkel enables companies to build AI-powered apps with “programmatic” data labeling, with the goal of minimizing the need to hand-label model training data. The company’s platform lets users annotate and manage data using software development kits and no-code interfaces, training models and identifying model error modes to iteratively improve on them. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Snorkel recently launched Application Studio, which hosts templates for common AI tasks, like contract intelligence, news analytics, customer interaction routing, text and document classification, named entity recognition, and information extraction. The service also provides packaged app-specific preprocessors, programmatic labeling templates, and high-performance open source models that can be trained with private data, in addition to workflows that decompose apps into modular pieces. Snorkel competes with Scale AI, Appen, Labelbox, and Cloudfactory in the data labeling space, as well as incumbents like Amazon. But among Snorkel’s customers and partners are “some of the world’s biggest brands,” including Apple, Intel, Stanford Medicine, and U.S. government agencies, Ratner says. 
“We’re incredibly excited by the value Snorkel has driven for our enterprise customers by enabling them to adopt a programmatic, data-centric approach to AI, taking projects previously blocked on the data to production value in days. This new series C funding will enable us to accelerate the pace of our product development even further and to bring Snorkel to even more domains and use cases,” he said in a statement. Palo Alto, California-based Snorkel previously raised $35 million in a series B round led by Lightspeed. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
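Programmatic labeling of the kind described above replaces hand labels with labeling functions: small heuristics whose noisy votes are combined into training labels. The stripped-down sketch below combines toy rules by majority vote in plain Python; the open-source Snorkel project uses a more principled label model, so treat this only as an illustration of the concept rather than the platform's API.

```python
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

# Labeling functions: cheap heuristics that vote on each example or abstain.
def lf_contains_refund(text):
    return POSITIVE if "refund" in text.lower() else ABSTAIN

def lf_contains_thanks(text):
    return NEGATIVE if "thanks" in text.lower() else ABSTAIN

def lf_has_exclamations(text):
    return POSITIVE if text.count("!") >= 2 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_refund, lf_contains_thanks, lf_has_exclamations]

def weak_label(text):
    """Combine the noisy votes by majority; abstain if no function fires."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

tickets = ["I want a refund now!!", "Thanks, all resolved", "Where is my order"]
print([weak_label(t) for t in tickets])  # [1, 0, -1]
```

The resulting weak labels (with abstentions filtered or down-weighted) would then be used to train a conventional supervised model.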
15,451
2017
"Synthetic DNA could be the next tech breakthrough | VentureBeat"
"https://venturebeat.com/2017/01/26/could-synthetic-dna-be-the-next-tech-breakthrough"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Synthetic DNA could be the next tech breakthrough Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Until recently, creating silk has been the exclusive domain of silkworms and some spiders, as well as the occasional superhero. Today, though, inside the laboratories of Bolt Threads in Emeryville, Calif., fermentation tanks use yeast, sugar — and some DNA code borrowed from spiders — to form a material that is then spun into fibers the way traditional silk, rayon, and polyester is made. The result, the company says, is fabric that is stronger than steel, stretchier than spandex, and softer than silk. “This is a new era of materials,” says Dan Widmaier, Bolt’s CEO. Most textiles today are made from petroleum-based polyester, which is harmful to the environment when disposed of. By contrast, Bolt’s fabric will be bio­degradable, the company says. As Widmaier puts it, the new material “has massive potential to change the world for the better.” This month Bolt will undertake a make-or-break challenge: expanding its lab-size process into a commercial-scale operation for three customers, including the apparel company Patagonia. (Eventually, Bolt hopes to produce its own branded clothing.) If the company succeeds, the development will be a key marker for the emerging field called synthetic biology. Bolt is only one startup using such technologies, which let scientists reengineer the genetics of living organisms to make products ranging from food sweeteners to “leather” to woodlike composites. Investors have taken note. Last year synthetic biology companies nabbed $1 billion from investors, including tech names like Peter Thiel, Eric Schmidt, Marc Andreessen, Max Levchin, and Jerry Yang. That’s double the amount from 2014, according to SynBioBeta, a consulting firm that tracks the industry. There’s a reason the Silicon Valley stars are drawn to synthetic biology. DNA, made up of four nucleotide molecules in a sequence, is a code that can be edited and written — not unlike software. The commercialization of DNA sequencing (the reading of an organism’s code) and synthesis (the writing of that code) has accelerated since the mapping of the human genome was completed in 2003. In the past few years new robotics, computational biology, and gene-editing and gene-synthesis technologies have emerged to make synthetic biology efficient and cost-effective. 
The highly touted Crispr tool, for instance, can snip DNA sequences and insert desired features, while technology from startup Twist Bioscience speeds up gene synthesis by miniaturizing the chemical reaction on silicon. Costs are also falling fast. “We’re decoding biology,” says Bryan Johnson, partner in the OS Fund and a vocal proponent of the field. “Life itself is becoming programmable.” Believers like Johnson offer audacious predictions. One day, they say, we’ll be able to grow tissue, cars, and houses using DNA, energy, and sunlight. Computers might be assembled out of brain cells. Of course, more than a dollop of caution is in order. One need look back only a few years for a sobering reminder. In 2008 some startups promised to use synthetic biology to produce biofuels from pond scum. But microorganisms behaved differently in factory settings, it turned out, than in labs. When oil prices fell, several of the startups failed. This time synthetic biology companies are focusing on materials — proponents assert they have higher margins and fewer market fluctuations than fuels — and specialty chemicals. Today the industry believes it has better tools for editing, measuring results, and automating the way chemicals and microorganisms are produced in large quantities. A flurry of innovation is underway. In Boston, Ginkgo Bioworks churns out organisms used for new perfume fragrances and food sweeteners, using DNA code from hard-to-grow plants and extinct flowers. Says CEO and cofounder Jason Kelly: “Things have really accelerated in just the past two years.” Fueled by $154 million from investors, Ginkgo recently opened its second “foundry,” an 18,000-square-foot factory stocked with fermentation tanks, mass spectrometers, software, robots, and traditional bench biology tools to design, build, and test DNA. Kelly says Ginkgo can cut the costs of production of these fragrances and flavors by 50% to 90%, offer customers entirely new scents for their products by mixing and matching DNA letters — and the company can do it without the environmental costs.Take rose oil, for instance, which is used for perfumes. The plant is hard to grow, produces very little oil per plant, and is increasingly in short supply. Ginkgo’s executives request the gene code for the oil from an outside provider. Within two to six weeks, they receive a vial with a liquid DNA sample by mail. They test it, then rearrange the DNA letters and request more samples until they come up with a unique-smelling oil that could be reproduced synthetically for half the cost of traditional oils. In the past nine months, Ginkgo says, it has landed 10 new customers who placed orders for dozens of new organisms. In Germany, a Bolt competitor called AMSilk is working to develop another spider-based fiber called biosteel for high-performance, biodegradable shoes. In Brooklyn, Modern Meadow, backed by $53 million from investors, creates “leather” using engineered cells rather than animal skins. A company called Ecovative, based in Green Island, N.Y., is “growing” living room tables, acoustical panels, and packaging. Ecovative takes a fiber made from wood or plants, chops it up, adds mycelium (the root system in mushrooms), and lets the mycelium grow through and around the fibers. Ecovative takes that composite and uses standard presses to shape it, creating a solid surface that looks laminated. 
Says Ecovative CEO Eben Bayer: “I like to think of it as a new kind of wood, and you can’t get a more sustainable piece of furniture on the planet.” The Department of Defense awarded Ecovative a preliminary contract to develop “programmable materials” to grow temporary living structures for the military that are sustainable and reduce waste. Now the synbio manufacturers have to achieve what many biofuel startups could not: transferring what works in a lab to large-scale commercial operations. “Production can be fickle and can be hard to control in a vat the size of a bus,” says Mark Bünger, who follows the sector at Lux Research. Widmaier says making that leap to commercial production has been far more difficult for his company than establishing the complex technology to make spider silk from DNA. This month, when Bolt flips the switch on its 11,000-square-foot factory, it will draw on the expertise of more than two dozen Ph.D. scientists, many of whom will bring lessons learned from the biofuel bust. “Now,” Widmaier says, “the real challenge begins.” A version of this article appears in the February 1, 2017 issue of Fortune with the headline “The Rise of Synthetic DNA.” This story originally appeared on Fortune.com. Copyright 2017 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,452
2020
"Density raises $51 million to promote social distancing with AI occupancy-tracking sensors | VentureBeat"
"https://venturebeat.com/2020/07/28/density-raises-51-million-to-promote-social-distancing-with-ai-occupancy-tracking-sensors"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Density raises $51 million to promote social distancing with AI occupancy-tracking sensors Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Density , a startup building AI-powered, people-counting infrared sensors, today closed a $51 million financing round. The infusion brings the San Francisco-based startup’s total raised to over $74 million, following $23 million in previous funding. Cofounder and CEO Andrew Farah says the $51 million will be put toward addressing “unprecedented demand” from offices, manufacturers, grocery stores, industrial plants, and governments trying to abide by capacity limits during the pandemic. In many ways, Density’s products were tailor-made for a global health crisis. Cities around the world have imposed limits on businesses — particularly restaurants — regarding the number of customers they allow in. Moreover, the shift to work from home and financial headwinds have companies questioning the need for physical office space. Even before the pandemic, U.S. Commercial Real Estate Services estimated unused commercial property in the U.S. is worth about $1 trillion. Density leverages depth-measuring hardware and an AI backend to perform crowd analytics that overcome the challenges posed by corners, hallways, doorways, and conference rooms. Clients like Pepsi, Delta, Verizon, Uber, Marriot, and ExxonMobil use its stack to figure out which parts of their offices get the most use and which the least and to deliver people-counting metrics to hundreds and even thousands of employees. Farah conceived of Density’s core technology while in graduate school at Syracuse University and working at a mobile software development firm. His modest goal — to measure how busy a popular coffee shop was — led him to explore a couple of solutions before settling on the one that formed the foundation for Density’s people-counting sensors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: The view from one of Density’s sensors. The sensor consists of over 800 components sourced from 137 supply chains Density itself manages and operates sort of like a small laptop. It’s a rectangular box that fits in the palm of an average-sized hand, belying its complexity. The sensor attaches above a doorway and tracks movement frame-by-frame, with two Class 1 infrared lasers that bounce off the floor. 
Algorithms filter out signal noise (for example, boxes, strollers, pushcarts, plates, and other items being carried or pushed) to measure the direction, collision, and speed of people walking into and out of view. The data is funneled via Wi-Fi to Density’s cloud-hosted backend, where it’s processed and analyzed. A web dashboard, SMS messages, signage, and mobile apps provide insights like the real-time capacity of a room and historical crowd sizes, while an API allows third-party apps, services, and websites to make use of the data in novel ways. One of Density’s clients — a large pharmaceutical company — uses the sensors to keep its restrooms spick and span by deploying cleaners every 70 uses. Ride-hailing giant Uber uses the sensors in one of its support centers to make sure the center is adequately staffed. Other applications include identifying which building entrances are most used during evacuation drills and estimating the number of people on the top floor of an office during a fire. According to Farah, Density’s infrared tracking method offers a major advantage over other approaches: privacy. Unlike a security camera, its sensors can’t determine the gender or ethnicity of the people it tracks, nor perform invasive facial recognition. “It’s far easier to do a camera,” he told VentureBeat in a previous interview. “But we believe the data pendulum has swung too far in one direction. It’s good to see people ask about data being collected … We knew that the right market was corporate clients with office space because our sensor can do occupancy detection inside of a room where a camera can’t go.” Density charges $895 per device, and its customers — which include a homeless shelter network, theme parks, Ivy League colleges, and others beyond the above-mentioned brands — pay a monthly or annual fee for access to the data. The pandemic initially hurt Density’s sales because many potential customers temporarily shut down. But since the close of its series B funding in June 2018, Density says its sensors have counted more than 150 million people in dozens of countries across hundreds of millions of square feet. Recently, it expanded manufacturing in Syracuse, New York by 90% to keep up with a pandemic-related uptick in orders. Farah says Density is currently focusing its marketing efforts on grocery stores and other businesses that are deemed “essential” but required to adhere to social-distancing guidelines. Following the round, the company plans to expand its sales team and further develop software and platform products. Kleiner Perkins led Density’s series C round, with contributions from 01 Advisors, Upfront Ventures, Founders Fund, Ludlow Ventures, Launch, LPC Ventures, and individual investors Alex Rodriguez, Alex Davis, Kevin and Julia Hartz, and Cyan and Scott Banister. Founded in 2014, the startup now employs more than 50 people across its Syracuse and San Francisco offices. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,453
2,021
"GoodData unveils analytics as a set of microservices in data-as-a-service platform | VentureBeat"
"https://venturebeat.com/2021/04/23/gooddata-unveils-analytics-as-a-set-of-microservices-in-data-as-service-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GoodData unveils analytics as a set of microservices in data-as-a-service platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. GoodData this week unfurled a data-as-a-service platform that employs Docker containers and microservices running on Kubernetes clusters to dynamically scale analytics up and down on demand. The GoodData.CloudNative (GoodData.CN) platform heralds a new cloud-native era that enables easier embedding of analytics within applications. Key to that is a well-defined set of application programming interfaces (APIs), said Roman Stanek, CEO of GoodData. “It makes analytics much more flexible,” he said. Initially available for free via a community edition of the platform that comes in the form of a single Docker container image, GoodData also plans to make GoodData.CN available in Freemium, Growth, and Enterprise editions that come with additional capabilities, along with support from GoodData. Deploy anywhere Most existing analytics applications are based on monolithic architectures originally created for desktop PCs. These are not designed to dynamically scale up and down on demand. GoodData.CN takes advantage of the orchestration capabilities of Kubernetes to provide application developers with as much compute and storage resources as they can afford to consume, either via a public cloud or in an on-premises IT environment. The ability to deploy GoodData.CN anywhere is crucial because multiple centers of data gravity will always exist in the enterprise, noted Stanek. It’s unlikely any major enterprise is ever going to be able to standardize on a single data warehouse or data lake, he said. The GoodData.CN platform provides all the metadata capabilities required to maintain a single source of truth across what are rapidly becoming highly federated environments, noted Stanek. A programmable API also makes it feasible to deploy a headless data-as-a-service platform for processing analytics that can be readily accessed and consumed as a service by multiple applications. Previously, individual developers had to take the time and effort to embed analytics capabilities directly within their application, noted Stanek. The GoodData.CN platform makes applications more efficient and, as a consequence, smaller. That is because more analytics processing is offloaded to the headless platform, added Stanek. 
Employ microservices Pressure to embed analytics in every application is mounting as end users seek to make faster and better fact-based decisions. Rather than having to move data into a separate application to analyze it, Stanek said the GoodData.CN platform makes it simpler to infuse real-time analytics within an application. The need to embed analytics within applications is becoming more pronounced with the acceleration of various digital business transformation initiatives. The expectation is that next-generation applications will all provide some type of embedded analytics capability that enables end users to make better decisions in the moment, rather than waiting for a report prepared by a business analyst, Stanek said. In many cases, the query launched by a business analyst is no longer especially relevant by the time a report can be delivered. GoodData is unlikely to be the last software provider to go cloud-native. A microservices-based application makes it easier to add new features and capabilities to software by ripping and replacing containers. It also makes applications more resilient: should any microservice become unavailable for any reason, calls are dynamically rerouted to other microservices to ensure redundancy. Most software developers are rapidly moving down the path to microservices as an alternative to monolithic applications, which may be easier to build but are increasingly viewed as inflexible. In the case of GoodData, it's not clear to what degree it is ahead of rivals making similar transitions. However, enterprise IT organizations should expect a wave of headless services based on microservices architectures in the months ahead that will change the way data is consumed and managed. "
15,454
2,021
"Nvidia announces BlueField-3 DPUs for AI and analytics workloads | VentureBeat"
"https://venturebeat.com/2021/04/12/nvidia-announces-bluefield-3-dpus-for-ai-and-analytics-workloads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia announces BlueField-3 DPUs for AI and analytics workloads Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At GTC 2021, Nvidia this morning took the wraps off of the BlueField-3 data processing unit (DPU), the latest in its lineup of datacenter machines built for AI and analytics workloads. BlueField-3 packs software-defined networking, storage, and cybersecurity acceleration capabilities, offering what Nvidia claims is the equivalent of up to 300 CPU cores of horsepower — or 1.5 TOPs. As of 2019, the adoption rate of big data analytics stood at 52.5% among organizations, with a further 38% intending to use the technology in the future, according to Statista. The advantages are obvious. A 2019 survey by Enterprenuer.com found that enterprises implementing big data analytics have seen a profit increase of 8% to 10%. Nvidia’s BlueField-3 DPUs features 300GbE/NDR interconnects and can deliver up to 10 times the compute of the previous-generation BlueField-2 DPUs, with 22 billion transistors, while isolating apps from the control and management plane. The 16 ARM A78 cores inside can manage 4 times the cryptography performance, and BlueField-3 is the first DPU to support fifth-generation PCIe and time-synchronized datacenter acceleration. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! BlueField-3 can additionally act as a monitoring agent for Morpheus, Nvidia’s AI-enabled cloud cybersecurity platform that was also announced today. Moreover, it takes advantage of DOCA, the company’s datacenter-on-a-chip architecture for building software-defined, hardware-accelerated networking, storage, security, and management apps running on BlueField DPUs. BlueField-3 is expected to sample in the first quarter of 2022. It’s fully backward-compatible with BlueField-2, Nvidia says. “Modern hyperscale clouds are driving a fundamental new architecture for data centers,” Nvidia founder and CEO Jensen Huang said in a press release. “A new type of processor, designed to process data center infrastructure software, is needed to offload and accelerate the tremendous compute load of virtualization, networking, storage, security and other cloud-native AI services. The time for BlueField DPU has come.” Above: Nvidia’s DPU roadmap. Nvidia’s datacenter business, which includes its DPU segment, is fast becoming a major revenue driver for the company. 
In February, it posted record quarterly revenue of $1.9 billion, up 97% from a year ago. Full-year datacenter revenue jumped 124%, to $6.7 billion. "
15,455
2,021
"AI startup funding remained strong in Q2, report finds | VentureBeat"
"https://venturebeat.com/2021/07/22/ai-startup-funding-remained-strong-in-q2-report-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI startup funding remained strong in Q2, report finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The pandemic spurred investments in AI across nearly every industry. That’s according to CB Insights’ AI in the Numbers Q2 2021 report , which found that AI startups attracted record funding — more than $20 billion — despite a drop in deal volume. While the adoption rate varies between businesses, a majority of them — 95% in a recent S&P Global report — consider AI to be important in their digital transformation efforts. Organizations were expected to invest more than $50 billion in AI systems globally in 2020, according to IDC, up from $37.5 billion in 2019. And by 2024, investment is expected to reach $110 billion. The U.S. led as an AI hub in Q2, according to CB Insights, attracting 41% of AI startup venture equity deals. U.S.-based companies accounted for 41% of deals in the previous quarter, up 39% year-over-year. Meanwhile, China remained second to the U.S., with an uptick of 17% quarter-over-quarter. AI startup funding in Q2 was driven mostly by “mega-rounds,” or deals worth $100 million or more. A total of 24 companies reached $1 billion “unicorn” valuations for the first time, and AI exits increased 125% from the previous quarter, while AI initial public offerings (IPO) reached an all-time quarterly high of 11. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unicorn valuations Cybersecurity and processor companies led the wave of newly minted unicorns, with finance and insurance and retail and consumer packaged goods following close behind. On the other hand, health care AI continued to have the largest deal share, accounting for 17% of all AI deals in Q2. Overall mid-stage deal share — i.e., series B and series C — reached an all-time high of 26% during Q2, while late-stage deal share — series D and beyond — remained tied with its Q1 2021 record of 9%. But the news wasn’t all positive. CB Insights found that seed, angel, and series A deals took a downward trend, making up only 55% of Q2 deals, with corporate venture backing leveling out. Just 39% of all deals for AI startups included participation from a corporate or corporate venture capital investor, up slightly from 31% in Q1 2021. But CB Insights says that the rise in AI startup exits in Q2 reflects the strength of the sector. 
“The decline of early-stage deals and increase of mid- and late-stage deals hint at a maturing market — however, early-stage rounds still represent the majority of AI deals,” analysts at the firm wrote. “Plateauing [corporate] participation in AI deals may reflect a stronger focus on internal R&D or corporations choosing to develop relationships with AI portfolio companies instead of sourcing new deals.” Experts predict that the AI and machine learning technologies market will reach $191 billion by the year 2025, a jump from the approximately $40 billion it’s valued at currently. In a recent survey, Appen found that companies increased investments by 4.6% on average in 2020, with a plan to invest 8.3% per year over the next three years. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,456
2,020
"Recognizing data points that signal trends for the future of business post-pandemic | VentureBeat"
"https://venturebeat.com/2020/10/17/recognizing-data-points-that-signal-trends-for-the-future-of-business-post-pandemic"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Recognizing data points that signal trends for the future of business post-pandemic Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Planning for a post-COVID-19 future and creating a robust enterprise strategy require both strategic scenario planning and the ability to recognize what scenario planners call “news from the future” — data points that tell you whether the world is trending in the direction of one or another of your imagined scenarios. As with any scatter plot, data points are all over the map, but when you gather enough of them, you can start to see the trend line emerge. Because there are often many factors pushing or pulling in different directions, it’s useful to think of trends as vectors — quantities that are described by both a magnitude and a direction, which may cancel, amplify, or redirect each other. New data points can also show whether vectors are accelerating or decelerating. As you see how trend vectors affect each other, or that new ones need to be added, you can continually update your scenarios. Sometimes a trend itself is obvious. Twitter , Facebook , Google , and Microsoft each announced a commitment to new work-from-home policies even after the pandemic. But how widespread will this be? To see if other companies are following in their footsteps, look for job listings from companies in your industry that target new metro areas or ignore location entirely. Drops in the price or occupancy rate of commercial real estate, and how that spills over into residential real estate, might add or subtract from the vector. Think through possible follow-on effects to whatever trend you’re watching. What are the second-order consequences of a broader embrace of the work-from-home experience? Your scenarios might include the possible emptying out of dense cities that are dependent on public transportation and movement from megacities to suburbs or to smaller cities. Depending on who your workers and your customers are, these changes could have an enormous impact on your business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What are some vectors you might want to watch? And what are examples of news from the future along those trend lines? The progress of the pandemic itself. Are cases and deaths increasing or declining? If you’re in the U.S., Covid Act Now is a great site for tracking the pandemic. 
This suggests that pandemic response won’t be a “one and done” strategy, but more like what Tomas Pueyo described in his essay “ The Hammer and the Dance ,” in which countries drop the hammer to reduce cases, reopen their economies, see recurrences, and drop the hammer again, with the response increasingly fine-grained and local as better data becomes available. As states and countries reopen, there is a lot of new data that will shape all of our estimates of the future, albeit with new uncertainty about a possible resurgence (even if the results are positive). Is there progress toward treatment or a vaccine? Several vaccine candidates are in trials, and new treatments seem to improve the prognosis for the disease. A vector pushing in the other direction is the discovery of previously missed symptoms or transmission factors. Another is the politicization of public health, which began with masks but may also extend to vaccine denial. We may be living with uncertainty for a long time to come; any strategy involving a “return to normal” needs to be held very loosely. How do people respond if and when the pandemic abates? Whatever comes back is likely to be irretrievably changed. As Ben Evans said, sometimes the writing is on the wall, but we don’t read it. It was the end of the road for BlackBerry the moment the iPhone was introduced; it just took four years for the story to play out. Sometimes a seemingly unrelated shock accelerates a long overdue collapse. For example, ecommerce has been growing its share for years, but this may be the moment when the balance tips and much in-person retail never comes back. As Evans put it, a bunch of industries look like candidates to endure a decade of inevitability in a week’s time. Will people continue to walk and ride bikes, bake bread at home, and grow their own vegetables? (This may vary from country to country. People in Europe still treasure their garden allotments 70 years after the end of World War II, but U.S. victory gardens were a passing thing.) Will businesses have the confidence to hire again? Will consumers have the confidence to spend again? What percentage of businesses that shut down will reopen? Are people being rehired and unemployment rates going down? The so-called Y-shaped recovery , in which upper-income jobs have recovered while lower-income jobs are still stagnant, has been so unprecedented that it hasn’t yet made Wikipedia’s list of recession shapes. Are there meaningful policy innovations that are catching on? Researchers in Israel have proposed a model for business reopening in which people work four-day shifts followed by ten days off in lockdown. Their calculations suggest that this would lower transmissibility of the virus almost as well as full lockdown policies, but allow people in many more occupations to get back to work, and many more businesses to reopen. Might experiments like this lead to permanent changes in work or schooling schedules? What about other long-discussed changes like universal basic income or a shorter work week? How will governments pay for the cost of the crisis, and what will the economic consequences be? There are those, like Ray Dalio, who think that printing money to pay for the crisis actually solves a long-standing debt crisis that was about to crash down on us in any case. Others disagree. Are business models sustainable under new conditions? Many businesses, such as airlines, hotels, on-demand transportation, and restaurants, are geared very tightly to full occupancy. 
If airlines have to run planes with half as many passengers, will flights ever be cheap enough to attract the level of passengers we had before the pandemic? Could “on demand” transportation go away forever? Uber and Lyft were already unprofitable because they were subsidizing low prices for passengers. Or might these companies be replaced as the model evolves, much as AOL yielded online leadership to Yahoo!, which lost it in turn to Google? (My bet is that algorithmic, on-demand business models are still in their infancy.) These topics are all over the news. You can’t escape them, but you can form your own assessment of the deeper story behind them and its relevance to your strategy. Remember to think of the stories as clustering along lines with magnitude and direction. Do they start to show patterns? More importantly, find vectors specific to your business. These may call for deep changes to your strategy. Also remember that contrarian investments can bring outsized returns. It may be that there are markets that you believe in, where you think you can make a positive difference for your customers despite their struggles, and go long. For O’Reilly, this has been true of many technologies where we placed early bets against what seemed overwhelming odds of success. Chasing what’s “hot” puts you in the midst of ferocious competition. Thinking deeply about who needs you and your products and how you can truly help your customers is the basis for a far more robust strategy. Tim O’Reilly is founder and CEO of O’Reilly Media. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,457
2,021
"Google sponsors OSTIF security reviews of critical open source software | VentureBeat"
"https://venturebeat.com/2021/09/16/google-sponsors-ostif-security-reviews-of-critical-open-source-software"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google sponsors OSTIF security reviews of critical open source software Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Google is giving its financial backing to the Open Source Technology Improvement Fund ( OSTIF ), with plans to sponsor security reviews in a handful of critical open source software projects. Open source software plays an integral role in the software supply chain, and it is incorporated into many critical infrastructure and national security systems. However, data suggests “upstream” attacks on open source software have increased significantly in the past year. Moreover, after countless organizations — from government agencies to hospitals and corporations — were hit by targeted software supply chain attacks , President Biden issued an executive order in May outlining measures to combat it. Open-sourced Today’s announcement comes less than a month after Google unveiled a $10 billion cybersecurity commitment to support President Biden’s plans to bolster U.S. cyber defenses. As part of its five-year investment, Google said it would help fund zero-trust program expansions, secure the software supply chain, improve open source security, and more. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Specifically, Google pledged $100 million to third-party foundations that support open source security. The first fruits of this commitment will see Google fund OSTIF’s new managed audit program (MAP), with a view toward expanding its existing security reviews to more projects. OSTIF, a nonprofit organization founded back in 2015 to support security audits in open source technologies, initially identified 25 projects for MAP, which it says identifies “the most critical digital infrastructure.” From there, it prioritized eight libraries, frameworks, and apps “that would benefit the most from security improvements and make the largest impact on the open source ecosystem that relies on them.” These eight projects are: Git , Lodash , Laravel , Slf4j , Jackson-core, Jackson-databind, Httpcomponents-core , and Httpcomponents-client. It’s worth noting that Google’s investment isn’t entirely altruistic, as its own software and infrastructure relies heavily on robust open source components — the internet giant has announced a slew of similar open source-related security initiatives this year. 
Among those initiatives: back in February, Google revealed it was sponsoring Linux kernel developers, while a few months ago it introduced Supply Chain Levels for Software Artifacts (SLSA), which it touts as an end-to-end framework for "ensuring the integrity of software artifacts throughout the software supply chain." The company also recently extended its open source vulnerabilities database to cover Python, Rust, Go, and DWF. Although OSTIF is focusing MAP on just eight projects for now, it hopes to "significantly grow operations to support hundreds of projects in the coming few years." "
15,458
2,021
"Open source security scanning platform Snyk raises $300M | VentureBeat"
"https://venturebeat.com/2021/09/09/open-source-security-scanning-platform-snyk-raises-530m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open source security scanning platform Snyk raises $300M Share on Facebook Share on X Share on LinkedIn Snyk Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. Snyk , a security scanning platform used by developers at companies like Google, Salesforce, Intuit, and Atlassian, today announced a $530 million series F investment round that values the company at $8.5 billion. The transaction included primary and secondary investments, meaning Snyk only raised around $300 million in fresh capital, with investors buying existing shares for the rest. Snyk’s SaaS platform helps developers identify vulnerabilities and license violations in their open source codebases, containers, and Kubernetes applications. By connecting their code repository, be it GitHub, GitLab, or Bitbucket, Snyk customers gain access to a giant vulnerability database, which enables Snyk to describe the problem, point to where the flaw in the code lies, and even suggest a fix. That Snyk targets its security smarts at developers rather than security teams is notable, as it means it’s looking to catch issues not only before they go into the live codebase, but in real time as the developer codes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Simply shifting left [testing early in the software development process] is no longer enough, and security now needs to be fully owned by developers so that they are equipped to address security issues in real time as they emerge,” Snyk cofounder and president Guy Podjarny said. “Our approach makes security easy so that modern developers are set up for true success, securing what they build without having to become a security expert or slow down.” Above: Snyk in action The problem Most modern software relies to some degree on open source components, saving businesses the considerable resources involved in building and maintaining everything in-house. But reports suggest 84% of the commercial codebases contain at least one open source vulnerability, leaving the software supply chain vulnerable to myriad external threats. Thus, the business of securing open source software is growing. Earlier this year, Snyk rival WhiteSource raised $75 million to bolster its open source security management and compliance platform, which is used by companies like Microsoft and IBM. Snyk has had a busy 12 months too. 
The Boston-headquartered company, which was founded out of London and Tel Aviv back in 2015, has now raised $775 million since its inception. This includes a $150 million tranche last year, followed by a $300 million cash injection in March that valued the firm at $4.7 billion. This means Snyk’s perceived worth has almost doubled in the space of six months. On top of that, Snyk has been on something of an acquisition spree, snapping up AI-powered semantic code analysis platform Deepcode; Manifold ; and, more recently , FossID, a software composition analysis tool for open source code. And back in May, Snyk found a powerful ally in the form of cybersecurity giant Trend Micro , which launched a new product in conjunction with Snyk to offer security teams “continuous insight” into open source vulnerabilities and compliance risks. Snyk’s latest funding round was co-led by Tiger Global and Sands Capital, with participation from a slew of high-profile investors, including BlackRock, Accel, Salesforce Ventures, Atlassian Ventures, and Coatue. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,459
2,020
"Intel trains neuromorphic chip to detect 10 different odors | VentureBeat"
"https://venturebeat.com/2020/03/16/intel-trains-neuromorphic-chip-to-detect-10-different-odors"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel trains neuromorphic chip to detect 10 different odors Share on Facebook Share on X Share on LinkedIn Loihi, Intel’s first-generation neuromorphic research chip. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Intel and Cornell University today published a joint paper demonstrating the ability of Intel’s neuromorphic chip, Loihi , to learn and recognize 10 hazardous materials from smell — even in the presence of “significant” data noise and occlusion. The coauthors say it shows how neuromorphic computing could be used to detect the precursor smells to explosives, narcotics, polymers, and more. In the study, which was published this week in the journal Nature Machine Intelligence , the Intel- and Cornell-affiliated researchers describe “teaching” Loihi odors by configuring the circuit diagram of biological olfaction, drawing from a data set consisting of the activity of 72 chemical sensors in response to various smells. They say that their technique didn’t disrupt the chip’s memory of the scents and that it achieved “superior” recognition accuracy compared with conventional state-of-the-art methods, including a machine learning solution that required 3,000 times more training samples per class to reach the same level of classification accuracy. Nabil Imam, a neuromorphic computing lab senior research scientist at Intel, believes the research will pave the way for neuromorphic systems that can diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide. “We are developing neural algorithms on Loihi that mimic what happens in your brain when you smell something,” he said in a statement. “This work is a prime example of contemporary research at the crossroads of neuroscience and artificial intelligence and demonstrates Loihi’s potential to provide important sensing capabilities that could benefit various industries.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Neuromorphic engineering, also known as neuromorphic computing, describes the use of circuits that mimic the nervous system’s neuro-biological architectures. Researchers at Intel, IBM, HP, MIT, Purdue, Stanford, and others hope to leverage it to develop a supercomputer a thousand times more powerful than any today. 
Intel's 14-nanometer Loihi chip has a 60-square-millimeter die and contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses, as well as three managing Lakemont cores for orchestration. Uniquely, Loihi features a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs), or AI models that incorporate time into their operating model such that the components of the model don't process input data simultaneously. Intel claims this will be used for the implementation of adaptive, self-modifying, event-driven, and fine-grained parallel computations "with high efficiency." According to Intel, Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with more than three orders of magnitude gains in speed and energy efficiency. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times (whereas traditional hardware uses 500% more power), and it consumes roughly 100 times less energy than widely used CPU-run simultaneous localization and mapping methods. Beyond the neuromorphic computing realm, researchers at Google, the Canadian Institute for Advanced Research, the Vector Institute for Artificial Intelligence, the University of Toronto, Arizona State University, and others have investigated AI approaches to the problems of molecule identification and odor prediction. Google recently demonstrated a model that outperforms state-of-the-art approaches and the top-performing model from the DREAM Olfaction Prediction Challenge, a competition for mapping the chemical properties of odors. Separately, IBM has developed Hypertaste, an "artificial tongue" designed to fingerprint beverages and other liquids "less fit for ingestion." "
15,460
2,020
"National University of Singapore used Intel neuromorphic chip to develop touch-sensing robotic 'skin' | VentureBeat"
"https://venturebeat.com/2020/07/15/national-university-of-singapore-used-intel-neuromorphic-chip-to-develop-touch-sensing-robotic-skin"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages National University of Singapore used Intel neuromorphic chip to develop touch-sensing robotic ‘skin’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During the virtually held Robotics: Science and Systems 2020 conference this week, scientists affiliated with the National University of Singapore (NUS) presented research that combines robotic vision and touch sensing with Intel-designed neuromorphic processors. The researchers claim the “electronic skin” — dubbed Asynchronous Coded Electronic Skin (ACES) — can detect touches more than 1,000 times faster than the human nervous system and identify the shape, texture, and hardness of objects within 10 milliseconds. At the same time, ACES is designed to be modular and highly robust to damage, ensuring it can continue functioning as long as at least one sensor remains. The human sense of touch is fine-grained enough to distinguish between surfaces that differ by only a single layer of molecules, yet the majority of today’s autonomous robots operate solely via visual, spatial, and inertial processing techniques. Bringing humanlike touch to machines could significantly improve their utility and even lead to new use cases. For example, robotic arms with artificial “skin” could employ tactile sensing to detect and grip unfamiliar objects with just the right amount of pressure. Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing their system. ACES comprises an electrical conductor connected to a network of sensors, which collect signals to enable the system to differentiate contact between sensors. ACES takes less than 60 nanoseconds to detect touch — reportedly the fastest rate to date for “electronic skin.” An Intel Loihi neuromorphic chip processes the data collected by the ACES sensors. (Neuromorphic engineering, also known as neuromorphic computing, describes the use of circuits that mimic the nervous system’s neurobiological architectures.) The 14-nanometer processor, which has a 60-millimeter die size and contains over 2 billion transistors, features a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). SNNs incorporate time into their operating model so the components of the model don’t process input data simultaneously, supporting workloads like touch perception that involve self-modifying and event-driven parallel computations. 
According to Intel, Loihi processes information up to 1,000 times faster and 10,000 times more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times (whereas traditional hardware uses 500% more power). It also consumes roughly 100 times less energy than widely used CPU-run simultaneous localization and mapping methods. Above: A visualization showing the ACES sensor feedback. In their initial experiment, the NUS researchers used a robotic hand fitted with ACES to read Braille, passing the tactile data to Loihi via the cloud. Loihi achieved over 92% accuracy in classifying the Braille letters while using 20 times less power than a standard classical processor, according to the research. Building on this work, the NUS team further improved ACES' perception capabilities by combining vision and touch data in an SNN. To do so, they tasked a robot with classifying various opaque containers holding differing amounts of liquid, using sensory inputs from ACES and recordings from an RGB video camera. Leveraging the same tactile and vision sensors, they also tested the ability of the perception system to identify rotational slip, an important metric for object grasping. Once this sensory data had been captured, the team sent it to both a graphics card and a Loihi chip to compare processing capabilities. The results show that combining vision and touch with an SNN led to 10% greater object classification accuracy versus a vision-only system. They also demonstrate Loihi's prowess for sensory data processing: The chip was 21% faster than the best-performing graphics card while using 45 times less power. ACES can be paired with other synthetic "layers" of skin, like the transparent self-healing sensor skin layer developed by NUS assistant professor Benjamin Tee (a coauthor of the ACES research). Potential applications include disaster recovery robots and prosthetic limbs that help disabled people restore their sense of touch. Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing to develop supercomputers a thousand times more powerful than any today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They've also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as mathematically optimizing specific objectives over time in real-world optimization problems. ACES is among the first practical demonstrations of the technology's capabilities, following Intel research showing neuromorphic chips can be used to "teach" an AI model to distinguish between 10 different scents.
"
15,461
2,020
"Intel details robotic assistive arm for wheelchair users | VentureBeat"
"https://venturebeat.com/2020/08/19/intel-details-robotic-assistive-arm-for-wheelchair-users"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel details robotic assistive arm for wheelchair users Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Intel today detailed a collaboration with Accenture and the ALYN Woldenberg Family Hospital (the Neuro-Biomorphic Engineering Lab at the Open University of Israel), to develop a wheelchair-mounted robotic arm that helps people with spinal injuries perform daily tasks. In the coming months, researchers affiliated with the organizations plan to clinically evaluate and test the arm with children at ALYN. While similar assistive technologies exist today, they’re prohibitively expensive for most of the estimated 75 million wheelchair users around the world. (Kinova’s Jaco, for example, costs $35,000.) The price reflects the cost of parts that enable the arms to adapt to new environments, which is why the team behind this new solution sourced more affordable, modular hardware. The arm leverages Intel’s Loihi neuromorphic research chip for real-time learning, which the team anticipates will enable them to implement adaptive control to enhance the arm’s functionality. Loihi will also support the use of parts that could make the arm 10 times cheaper while reducing power usage. Above: An early prototype of the robotic arm. According to Intel, Loihi processes information up to 1,000 times faster and 10,000 more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power to do the same. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Intel hopes to leverage neuromorphic computing to develop supercomputers 1,000 times more powerful than any today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. The chips have also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as mathematically optimizing specific objectives over time in real-world optimization problems (e.g., identifying scents and interpreting tactile touches ). 
Researchers at the Open University of Israel and ALYN have assembled the robotic arm hardware and now plan to build the machine learning model that will control it. They will build atop an algorithm called recurrent error-driven adaptive control hierarchy (REACH), which was developed by ABR. Intel says that, when paired with neuromorphic chips, REACH has been demonstrated to move a simple arm through complex paths — like handwritten words and numbers — with fewer errors and an improvement in energy efficiency over traditional control methods. Once the algorithmic work is complete, the research team will deploy the model on Loihi and refine the arm's capabilities. The arm will then undergo testing with patients at ALYN with motor impairments in their upper extremities, who will control it using a joystick that allows the researchers to collect information on the arm's performance. Intel notes that for those with neuromuscular or spinal cord injuries, even the most basic tasks — like drinking from a cup or eating with a spoon — can become a major challenge. Assistive robotics can address this gap. A 2017 study in the journal NeuroRehabilitation suggests wheelchair-mounted robotic arms offer users an increased sense of independence and can reduce caregiver time by up to 41%. Scientists at Accenture and Intel will assist with the development of the algorithm, as well as provide support for the design of the study. If the project is successful, the researchers plan to explore how to mass-produce the arm and investigate applications of adaptive control technology in flexible manufacturing and industrial automation. "
15,462
2,020
"Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing | VentureBeat"
"https://venturebeat.com/2020/10/02/intel-inks-agreement-with-sandia-national-laboratories-to-explore-neuromorphic-computing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel inks agreement with Sandia National Laboratories to explore neuromorphic computing Share on Facebook Share on X Share on LinkedIn Loihi, Intel’s first-generation neuromorphic research chip. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As a part of the U.S. Department of Energy’s Advanced Scientific Computing Research program, Intel today inked a three-year agreement with Sandia National Laboratories to explore the value of neuromorphic computing for scaled-up AI problems. Sandia will kick off its work using the 50-million-neuron Loihi-based system recently delivered to its facility in Albuquerque, New Mexico. As the collaboration progresses, Intel says the labs will receive systems built on the company’s next-generation neuromorphic architecture. Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing — circuits that mimic the nervous system’s biology — to develop supercomputers 1,000 times more powerful than any today. Chips like Loihi excel at constraint satisfaction problems, which require evaluating a large number of potential solutions to identify the one or few that satisfy specific constraints. They’ve also been shown to rapidly identify the shortest paths in graphs and perform approximate image searches, as well as mathematically optimizing specific objectives over time in real-world optimization problems. Intel’s 14-nanometer Loihi chip contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses. Uniquely, the chip features a programmable microcode engine for on-die training of asynchronous spiking neural networks (SNNs), or AI models that incorporate time into their operating model such that the components of the model don’t process input data simultaneously. Loihi processes information up to 1,000 times faster and 10,000 more efficiently than traditional processors, and it can solve certain types of optimization problems with gains in speed and energy efficiency greater than three orders of magnitude, according to Intel. Moreover, Loihi maintains real-time performance results and uses only 30% more power when scaled up 50 times, whereas traditional hardware uses 500% more power to do the same. Intel and Sandia hope to apply neuromorphic computing to workloads in scientific computing, counterproliferation, counterterrorism, energy, and national security. 
Using neuromorphic research systems in-house, Sandia plans to evaluate the scaling of a range of spiking neural network workloads, including physics modeling, graph analytics, and large-scale deep networks. The labs will run tasks on the 50-million-neuron Loihi-based system and evaluate the initial results. This will lay the groundwork for a later phase of the collaboration that is expected to include the delivery of Intel's largest neuromorphic research system to date, which the company claims could exceed 1 billion neurons in computational capacity. Earlier this year, Intel announced the general readiness of Pohoiki Springs, a powerful self-contained neuromorphic system that's about the size of five standard servers. The company made the system available to members of the Intel Neuromorphic Research Community via the cloud using Intel's Nx SDK and community-contributed software components, providing a tool to scale up research and explore ways to accelerate workloads that run slowly on today's conventional architectures. Intel claims Pohoiki Springs, which was announced in July 2019, is similar in neural capacity to the brain of a small mammal, with 768 Loihi chips and 100 million neurons spread across 24 Arria10 FPGA Nahuku expansion boards (containing 32 chips each) that operate at under 500 watts. This is ostensibly a step on the path to supporting larger and more sophisticated neuromorphic workloads. Intel recently demonstrated that the chips can be used to "teach" an AI model to distinguish between 10 different scents, control a robotic assistive arm for wheelchairs, and power touch-sensing robotic "skin." In somewhat related news, Intel today announced it has entered into an agreement with the U.S. Department of Energy to develop novel semiconductor technologies and manufacturing processes. In collaboration with Argonne National Laboratory, the company will focus on the development and design of next-generation microelectronics technologies such as exascale, neuromorphic, and quantum computing. "
15,463
2,021
"Gartner: 75% of VCs will use AI to make investment decisions by 2025 | VentureBeat"
"https://venturebeat.com/2021/03/10/gartner-75-of-vcs-will-use-ai-to-make-investment-decisions-by-2025"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gartner: 75% of VCs will use AI to make investment decisions by 2025 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. By 2025, more than 75% of venture capital and early-stage investor executive reviews will be informed by AI and data analytics. In other words, AI might determine whether a company makes it to a human evaluation at all, de-emphasizing the importance of pitch decks and financials. That’s according to a new whitepaper by Gartner, which predicts that in the next four years, the AI- and data-science-equipped investor will become commonplace. Increased advanced analytics capabilities are shifting the early-stage venture investing strategy away from “gut feel” and qualitative decision-making to a “platform-based” quantitative process, according to Gartner senior research director Patrick Stakenas. Stakenas says data gathered from sources like LinkedIn, PitchBook, Crunchbase, and Owler, along with third-party data marketplaces, will be leveraged alongside diverse past and current investments. “This data is increasingly being used to build sophisticated models that can better determine the viability, strategy, and potential outcome of an investment in a short amount of time. Questions such as when to invest, where to invest, and how much to invest are becoming almost automated,” Stakenas said. “The personality traits and work patterns required for success will be quantified in the same manner that the product and its use in the market, market size, and financial details are currently measured. AI tools will be used to determine how likely a leadership team is to succeed based on employment history, field expertise, and previous business success.” As the Gartner report points out, current technology is capable of providing insights into customer desires and predicting future behavior. Unique profiles can be built with little to no human input and further developed via natural language processing AI that can determine qualities about a person from real-time or audio recordings. While this technology is currently used primarily for marketing and sales purposes, by 2025 investment organizations will be leveraging it to determine which leadership teams are most likely to succeed. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
One venture capital firm — San Francisco-based SignalFire — is already using a proprietary platform called Beacon to track the performance of more than 6 million companies. At a cost of over $10 million per year, the platform draws on 10 million data sources, including academic publications, patent registries, open source contributions, regulatory filings, company webpages, sales data, social networks, and even raw credit card data. Companies that are outperforming are flagged on a dashboard, allowing SignalFire to see deals ostensibly earlier than traditional venture firms. This isn't to suggest that AI and machine learning are — or will be — a silver bullet when it comes to investment decisions. In an experiment last November, Harvard Business Review built an investment algorithm and compared its performance with the returns of 255 angel investors. Leveraging state-of-the-art techniques, a team trained the system to select the most promising investment opportunities among 623 deals from one of the largest European angel networks. The model, whose decisions were based on the same data available to investors, outperformed novice investors but fared worse than experienced investors. Part of the problem with Harvard Business Review's model was that it exhibited biases the experienced investors did not. For example, the algorithm tended to pick white entrepreneurs rather than entrepreneurs of color and preferred investing in startups with male founders. That's potentially because women and founders from other underrepresented groups tend to be disadvantaged in the funding process and ultimately raise less venture capital, a pattern the model learned from its historical training data and reproduced. Because it might not be possible to completely eliminate these forms of bias, it's crucial that investors take a "hybrid approach" to AI-informed decision-making, with humans in the loop, according to Harvard Business Review. While it's true that algorithms can have an easier time picking out better portfolios because they analyze data at scale, potentially avoiding bad investments, there's always a tradeoff between fairness and efficiency. "Managers and investors should consider that algorithms produce predictions about potential future outcomes rather than decisions. Depending on how predictions are intended to be used, they are based on human judgement that may (or may not) result in improved decision-making and action," Harvard Business Review wrote in its analysis. "In complex and uncertain decision environments, the central question is, thus, not whether human decision-making should be replaced, but rather how it should be augmented by combining the strengths of human and artificial intelligence." "
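Neither Beacon's internals nor the Harvard Business Review model are published in code, but the kind of "platform-based" deal scoring the report describes can be sketched with a simple classifier. Everything below, from the feature names to the numbers, is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical deals: [founder_prior_exits, team_size,
# months_to_first_revenue, monthly_growth_rate]. Labels mark whether the deal
# was later judged a success. All values are invented for illustration.
X_hist = np.array([
    [1, 4, 6, 0.12],
    [0, 2, 18, 0.02],
    [2, 7, 3, 0.25],
    [0, 3, 12, 0.05],
    [1, 5, 9, 0.08],
    [0, 1, 24, 0.01],
])
y_hist = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_hist, y_hist)

# Score incoming deals and rank them for human review rather than auto-deciding.
# Note: a model like this can only reproduce patterns in its historical labels,
# including any bias baked into past funding decisions.
new_deals = np.array([
    [1, 6, 5, 0.15],
    [0, 2, 20, 0.03],
])
scores = model.predict_proba(new_deals)[:, 1]
for deal, score in sorted(zip(new_deals.tolist(), scores), key=lambda p: -p[1]):
    print(f"deal {deal} -> predicted success probability {score:.2f}")
```

The human-in-the-loop caveat is visible even in a toy like this: the scores are inputs to a decision, not the decision itself.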
15,464
2,021
"A first step to automating your business processes | VentureBeat"
"https://venturebeat.com/2021/05/22/a-first-step-to-automating-your-business-processes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest A first step to automating your business processes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Where to start and what scope to address has been one of the persistent challenges of process improvement. This challenge has continued to plague digital automation as well. If the scope is too large, then the team and the effort may be accused of “boiling the ocean.” If the scope is too small, then the result may be doomed to small incremental improvements with little impact on overall performance. In speaking with Brent Harder, who is currently Head of Enterprise Automation at Fiserv, I learned of an interesting case that may provide organizations with an understanding on the importance of transparency in digital automation. Prior to joining Fiserv, Brent worked at a major, international bank based in New York City. This bank, like many other large financial institutions, struggled with how and where to start its enterprise automation program. After much thought and discussion, the client onboarding process was carefully chosen as the place to start. If client onboarding could be done faster, then not only would customer satisfaction improve, but the bank would also benefit by being able to recognize revenue sooner. That would be a big win-win. However, when the team began looking at the end-to-end client onboarding process, they encountered several significant obstacles. A siloed organization, difficult-to-access data that was compartmentalized by department, and a fragmented technology landscape were just a few of the challenges. As Brent said, “we stood in our way.” It was not at all unusual that the first time the bank met with a new client to discuss onboarding and discuss how to implement the bank’s services was also the very first time that the bank’s back-office team came together. That was due to decades of managing the organization in vertical departments and units. Access to data was problematic as each unit kept data in different formats. Similarly, since units had impressed upon IT their individual needs for decades, without consideration of other units’ needs, the resulting fragmented technology environment made it difficult to communicate across applications. Consequently, each unit or product group had its own implementation requirements, and the types of information, types of transaction, sources of data, and interfaces depend on the policies and practices of each product group. 
The only commonalities were Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations. Given these challenges, the team decided to turn to design thinking instead of looking at the problem through a Six Sigma lens. This resulted in a decision to build an application that came to be called the "pizza tracker." It's well known that leading pizza companies such as Domino's and Papa John's may not have the very best pizza, but they offer transparency and reliability. Indeed, Domino's has even implemented a GPS tracker that allows customers to watch the progress of their orders on the way to their house. Now, that's transparency! However, building a "pizza tracker" for client onboarding was easier said than done. Early on, the team recognized that the bank needed an intuitive interface that could be accessed in many modalities — presenting the same face of the onboarding process to the client as well as to the internal product groups. That would finally get everyone on the same page. Accordingly, the team began by simply identifying the many and various pieces of information that would be needed from the client. They then created something called the Client Passport, which held all the data the bank would need from the client for onboarding. The bank then digitized many of the client flows through the Client Passport and was able to start using machine learning to try to anticipate what would be needed for various implementations. The implementation manager became a traffic cop: she could see the roadblocks and who was responsible. Clients stopped calling the bank, because they could see where they were in the onboarding process. It's important to note that this did not solve anything underneath — the bank simply provided visibility into the journey. Fixing things further would require a step down into the various elements that make up onboarding. But visibility and transparency provided more insight into where the problems were. At that point the bank had more data on the end-to-end process and could go on to use more traditional BPM approaches. Once everyone understood the process — and knew what data was important — the bank could bring in BPM tools such as Appian to improve workflow across its various systems. The results were exciting. Being able to track the progress of a new client onboarding on an iPhone, iPad, or Android device resulted in significantly faster cycle times — up to 50% faster on the first few projects that were tracked. Part of that might be due to the "Hawthorne effect," but the results are impressive nevertheless. The moral of the story, according to Harder, is this: "Success with digital automation requires both innovation automation and an innovative methodology. It's not just about the science — it's also about the art." The power of transparency does not apply only to new client onboarding in banking, of course. When you have a transparent view into a business process that spans several different systems, you're opening up new possibilities for automation, regardless of what industry you're in or what process you're looking at. Organizations that are able to apply this kind of transparency to their processes end to end will identify pain points that weren't previously obvious. This will give them a better grasp of where to start on their digital automation journey and of how extensive that journey will be.
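The article does not describe the Client Passport or the tracker in code. As a minimal sketch of the underlying idea, a single shared status record that the client, the implementation manager, and every product group read from, something like the following could work; the stage names, fields, and team names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Stage(Enum):
    # Hypothetical onboarding stages; a real bank would have many more.
    KYC = "kyc"
    AML = "aml"
    ACCOUNT_SETUP = "account_setup"
    PRODUCT_CONFIG = "product_config"
    LIVE = "live"

@dataclass
class StageStatus:
    owner: str                        # which team is responsible for this stage
    done: bool = False
    blocked_on: Optional[str] = None  # visible reason when the stage is stuck
    updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ClientPassport:
    """Single shared record of everything needed to onboard one client."""
    client_id: str
    stages: dict[Stage, StageStatus]

    def progress(self) -> str:
        done = sum(s.done for s in self.stages.values())
        return f"{done}/{len(self.stages)} stages complete"

    def blockers(self) -> list[str]:
        return [f"{stage.value}: waiting on {s.blocked_on} ({s.owner})"
                for stage, s in self.stages.items() if s.blocked_on and not s.done]

passport = ClientPassport(
    client_id="acme-123",
    stages={
        Stage.KYC: StageStatus(owner="compliance", done=True),
        Stage.AML: StageStatus(owner="compliance", done=True),
        Stage.ACCOUNT_SETUP: StageStatus(owner="operations", blocked_on="signed tax form"),
        Stage.PRODUCT_CONFIG: StageStatus(owner="cash-management"),
        Stage.LIVE: StageStatus(owner="relationship-manager"),
    },
)
print(passport.progress())   # "2/5 stages complete"
print(passport.blockers())   # client and implementation manager see the same list
```

Nothing in this sketch fixes the underlying silos, which mirrors the bank's experience: the value comes from making progress and blockers visible in one place, after which traditional BPM tooling can target the slow steps.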
Andrew Spanyi is President of Spanyi International. He is a member of the Board of Advisors at the Association of Business Process Professionals and has been an instructor at the BPM Institute. He is also a member of the Cognitive World Think Tank on enterprise AI. "
15,465
2,021
"GitHub brings cloud-based Codespaces development environment to the enterprise | VentureBeat"
"https://venturebeat.com/2021/08/11/github-brings-cloud-based-codespaces-development-environment-to-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub brings cloud-based Codespaces development environment to the enterprise Share on Facebook Share on X Share on LinkedIn Codespaces Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here GitHub is kicking off a broader rollout of its browser-based Codespaces coding environment by extending it to GitHub Team and Enterprise (cloud) plans. The Microsoft- owned company also announced it has internally transitioned from a “MacOS model” to Codespaces, which is now the default development environment for GitHub.com. GitHub debuted Codespaces last May as a cloud-hosted development environment with all the usual GitHub features. It’s basically powered by Microsoft’s Visual Studio Code, which has been available as a web-based editor since 2019 and which was rebranded as Visual Studio Codespaces last year. In September, Microsoft also confirmed it was consolidating Visual Studio Codespaces into GitHub Codespaces. Local to cloud Codespaces is part of a larger trend in the coding world, with a growing number of platforms ditching local development environments for the speedier, more collaboration-friendly cloud. Gitpod , for example, is a browser-based open source development environment that recently raised $13 million , while Replit recently secured $20 million for what has been touted as Google Docs for code. Elsewhere, CodeSandbox , which enables developers to create a web app development sandbox in the browser, also secured venture capital backing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Taking coding environments to the cloud makes it easier for developers to join and collaborate on a project and begin coding with minimal configuration. GitHub’s Codespaces was initially launched in “limited public beta” for individual users, and the company has confirmed that this restricted beta will continue for now alongside the broader expansion into the enterprise. Today’s news means all businesses on the Team or Enterprise (not including self-hosted) plans can proactively enable Codespaces in their GitHub settings , and they can now use Codespaces in all their private repositories. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,466
2,021
"Omdia: 58% of employees say hybrid and remote options are here to stay | VentureBeat"
"https://venturebeat.com/2021/09/18/58-of-employees-say-hybrid-and-remote-options-are-here-to-stay"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 58% of employees say hybrid and remote options are here to stay Share on Facebook Share on X Share on LinkedIn Xanadu headquarters in Toronto, Canada Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Going forward, only 24% of employees will be permanently based in an office and working at a single desk. 58% will either be pure WFH or hybrid workers. For many, today’s digital transformation priorities look very different when compared to priorities pre-pandemic. The COVID-19 pandemic has brought about huge workplace disruption for organizations of all sizes, and across all industries and geographies. Work-style disruptions have brought about great change, not only in where people are working, but also in the way that enterprises operate and in the way that work gets done. Organizations have rapidly accelerated efforts to deliver against digital imperatives, such as modernizing enterprise communications, improving collaboration, and managing and securing a more modern, mobile, and digitally enabled workforce. While debate still rages on around where employees will work as restrictions ease, our data shows that most businesses are planning for more work to take place away from the traditional office environment over the long term. The conversation and focus must now switch away from the locations employees work from; businesses should plan to create an infrastructure that supports more modern work styles where employees can work from wherever they need to and with no compromises to security or productivity. While organizations “reacted to survive” during the initial stages of the pandemic, many organizations are entering a period of “reinvention to thrive.” Finally, a focus on people, process, and technology has never been more important. In overcoming the challenges brought about by the pandemic, organizations must strategize and make investments that focus on optimizing the value from people , processes, and technologies, as well as reinventing their former business model. This is by no means a new mandate, but it has become an imperative due to the scale and speed of technological, process, and people-centric changes and opportunities businesses now face. Read the full report by Omdia. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"